code
stringlengths 1
25.8M
| language
stringclasses 18
values | source
stringclasses 4
values | repo
stringclasses 78
values | path
stringlengths 0
268
|
|---|---|---|---|---|
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
Apache Hadoop Git/Jira FixVersion validation
============================================================
Git commits in Apache Hadoop contains Jira number of the format
HADOOP-XXXX or HDFS-XXXX or YARN-XXXX or MAPREDUCE-XXXX.
While creating a release candidate, we also include changelist
and this changelist can be identified based on Fixed/Closed Jiras
with the correct fix versions. However, sometimes we face few
inconsistencies between fixed Jira and Git commit message.
git_jira_fix_version_check.py script takes care of
identifying all git commits with commit
messages with any of these issues:
1. commit is reverted as per commit message
2. commit does not contain Jira number format in message
3. Jira does not have expected fixVersion
4. Jira has expected fixVersion, but it is not yet resolved
Moreover, this script also finds any resolved Jira with expected
fixVersion but without any corresponding commit present.
This should be useful as part of RC preparation.
git_jira_fix_version_check supports python3 and it required
installation of jira:
```
$ python3 --version
Python 3.9.7
$ python3 -m venv ./venv
$ ./venv/bin/pip install -r dev-support/git-jira-validation/requirements.txt
$ ./venv/bin/python dev-support/git-jira-validation/git_jira_fix_version_check.py
```
The script also requires below inputs:
```
1. First commit hash to start excluding commits from history:
Usually we can provide latest commit hash from last tagged release
so that the script will only loop through all commits in git commit
history before this commit hash. e.g for 3.3.2 release, we can provide
git hash: fa4915fdbbbec434ab41786cb17b82938a613f16
because this commit bumps up hadoop pom versions to 3.3.2:
https://github.com/apache/hadoop/commit/fa4915fdbbbec434ab41786cb17b82938a613f16
2. Fix Version:
Exact fixVersion that we would like to compare all Jira's fixVersions
with. e.g for 3.3.2 release, it should be 3.3.2.
3. JIRA Project Name (default Project Name: HADOOP):
The exact name of Project as case-sensitive.
4. Path of project's working dir with release branch checked-in:
Path of project from where we want to compare git hashes from. Local fork
of the project should be up-to date with upstream and expected release
branch should be checked-in.
5. Jira server url (default url: https://issues.apache.org/jira):
Default value of server points to ASF Jiras but this script can be
used outside of ASF Jira too.
```
Example of script execution:
```
JIRA Project Name (default: HADOOP): HADOOP
First commit hash to start excluding commits from history: fa4915fdbbbec434ab41786cb17b82938a613f16
Fix Version: 3.3.2
Jira server url (default: https://issues.apache.org/jira):
Path of project's working dir with release branch checked-in: /Users/vjasani/Documents/src/hadoop-3.3/hadoop
Check git status output and verify expected branch
On branch branch-3.3.2
Your branch is up to date with 'origin/branch-3.3.2'.
nothing to commit, working tree clean
Jira/Git commit message diff starting: ##############################################
Jira not present with version: 3.3.2. Commit: 8cd8e435fb43a251467ca74fadcb14f21a3e8163 HADOOP-17198. Support S3 Access Points (#3260) (branch-3.3.2) (#3955)
WARN: Jira not found. Commit: 8af28b7cca5c6020de94e739e5373afc69f399e5 Updated the index as per 3.3.2 release
WARN: Jira not found. Commit: e42e483d0085aa46543ebcb1196dd155ddb447d0 Make upstream aware of 3.3.1 release
Commit seems reverted. Commit: 6db1165380cd308fb74c9d17a35c1e57174d1e09 Revert "HDFS-14099. Unknown frame descriptor when decompressing multiple frames (#3836)"
Commit seems reverted. Commit: 1e3f94fa3c3d4a951d4f7438bc13e6f008f228f4 Revert "HDFS-16333. fix balancer bug when transfer an EC block (#3679)"
Jira not present with version: 3.3.2. Commit: ce0bc7b473a62a580c1227a4de6b10b64b045d3a HDFS-16344. Improve DirectoryScanner.Stats#toString (#3695)
Jira not present with version: 3.3.2. Commit: 30f0629d6e6f735c9f4808022f1a1827c5531f75 HDFS-16339. Show the threshold when mover threads quota is exceeded (#3689)
Jira not present with version: 3.3.2. Commit: e449daccf486219e3050254d667b74f92e8fc476 YARN-11007. Correct words in YARN documents (#3680)
Commit seems reverted. Commit: 5c189797828e60a3329fd920ecfb99bcbccfd82d Revert "HDFS-16336. Addendum: De-flake TestRollingUpgrade#testRollback (#3686)"
Jira not present with version: 3.3.2. Commit: 544dffd179ed756bc163e4899e899a05b93d9234 HDFS-16171. De-flake testDecommissionStatus (#3280)
Jira not present with version: 3.3.2. Commit: c6914b1cb6e4cab8263cd3ae5cc00bc7a8de25de HDFS-16350. Datanode start time should be set after RPC server starts successfully (#3711)
Jira not present with version: 3.3.2. Commit: 328d3b84dfda9399021ccd1e3b7afd707e98912d HDFS-16336. Addendum: De-flake TestRollingUpgrade#testRollback (#3686)
Jira not present with version: 3.3.2. Commit: 3ae8d4ccb911c9ababd871824a2fafbb0272c016 HDFS-16336. De-flake TestRollingUpgrade#testRollback (#3686)
Jira not present with version: 3.3.2. Commit: 15d3448e25c797b7d0d401afdec54683055d4bb5 HADOOP-17975. Fallback to simple auth does not work for a secondary DistributedFileSystem instance. (#3579)
Jira not present with version: 3.3.2. Commit: dd50261219de71eaa0a1ad28529953e12dfb92e0 YARN-10991. Fix to ignore the grouping "[]" for resourcesStr in parseResourcesString method (#3592)
Jira not present with version: 3.3.2. Commit: ef462b21bf03b10361d2f9ea7b47d0f7360e517f HDFS-16332. Handle invalid token exception in sasl handshake (#3677)
WARN: Jira not found. Commit: b55edde7071419410ea5bea4ce6462b980e48f5b Also update hadoop.version to 3.3.2
...
...
...
Found first commit hash after which git history is redundant. commit: fa4915fdbbbec434ab41786cb17b82938a613f16
Exiting successfully
Jira/Git commit message diff completed: ##############################################
Any resolved Jira with fixVersion 3.3.2 but corresponding commit not present
Starting diff: ##############################################
HADOOP-18066 is marked resolved with fixVersion 3.3.2 but no corresponding commit found
HADOOP-17936 is marked resolved with fixVersion 3.3.2 but no corresponding commit found
Completed diff: ##############################################
```
|
unknown
|
github
|
https://github.com/apache/hadoop
|
dev-support/git-jira-validation/README.md
|
import asyncio
import json
from gameobjects.vector2 import Vector2
connected_clients = 0
clients = {}
class Client(object):
def __init__(self, id):
self.id = id
self.pos = [0, 0]
class EchoServerClientProtocol(asyncio.Protocol):
def connection_made(self, transport):
peername = transport.get_extra_info('peername')
global connected_clients
connected_clients += 1
print('Connection from {}'.format(peername))
print('Clientes conectados: {}'.format(connected_clients))
self.transport = transport
def data_received(self, data):
global clients
message = json.loads(data.decode())
#print('Data received: {!r}'.format(message))
assert 'id' in message
player_id = message['id']
# login
if message['action'] == 'login':
clients[player_id] = Client(player_id)
self.player_id = player_id
self.transport.write(bytes('accepted', "utf-8"))
# already logged
elif player_id in clients:
if message['action'] == 'update':
clients[player_id].pos = message['pos']
#print('Send: {!r}'.format(message))
data = json.dumps([v.__dict__ for v in clients.values()])
self.transport.write(bytes(data, "utf-8"))
else:
print('Close the client socket')
self.transport.close()
def __del__(self):
global clients, connected_clients
try:
del clients[self.player_id]
except AttributeError:
pass
connected_clients -= 1
loop = asyncio.get_event_loop()
# Each client connection will create a new protocol instance
coro = loop.create_server(EchoServerClientProtocol, '127.0.0.1', 9999)
server = loop.run_until_complete(coro)
# Serve requests until Ctrl+C is pressed
print('Serving on {}'.format(server.sockets[0].getsockname()))
try:
loop.run_forever()
except KeyboardInterrupt:
pass
# Close the server
server.close()
loop.run_until_complete(server.wait_closed())
loop.close()
|
unknown
|
codeparrot/codeparrot-clean
| ||
#
# Broker peering simulation (part 2) in Python
# Prototypes the request-reply flow
#
# While this example runs in a single process, that is just to make
# it easier to start and stop the example. Each thread has its own
# context and conceptually acts as a separate process.
#
# Author : Min RK
# Contact: benjaminrk(at)gmail(dot)com
#
import random
import sys
import threading
import time
import zmq
try:
raw_input
except NameError:
# Python 3
raw_input = input
NBR_CLIENTS = 10
NBR_WORKERS = 3
def tprint(msg):
sys.stdout.write(msg + '\n')
sys.stdout.flush()
def client_task(name, i):
"""Request-reply client using REQ socket"""
ctx = zmq.Context()
client = ctx.socket(zmq.REQ)
client.identity = (u"Client-%s-%s" % (name, i)).encode('ascii')
client.connect("ipc://%s-localfe.ipc" % name)
while True:
client.send(b"HELLO")
try:
reply = client.recv()
except zmq.ZMQError:
# interrupted
return
tprint("Client-%s: %s" % (i, reply))
time.sleep(1)
def worker_task(name, i):
"""Worker using REQ socket to do LRU routing"""
ctx = zmq.Context()
worker = ctx.socket(zmq.REQ)
worker.identity = (u"Worker-%s-%s" % (name, i)).encode('ascii')
worker.connect("ipc://%s-localbe.ipc" % name)
# Tell broker we're ready for work
worker.send(b"READY")
# Process messages as they arrive
while True:
try:
msg = worker.recv_multipart()
except zmq.ZMQError:
# interrupted
return
tprint("Worker-%s: %s\n" % (i, msg))
msg[-1] = b"OK"
worker.send_multipart(msg)
def main(myself, peers):
print("I: preparing broker at %s..." % myself)
# Prepare our context and sockets
ctx = zmq.Context()
# Bind cloud frontend to endpoint
cloudfe = ctx.socket(zmq.ROUTER)
if not isinstance(myself, bytes):
ident = myself.encode('ascii')
else:
ident = myself
cloudfe.identity = ident
cloudfe.bind("ipc://%s-cloud.ipc" % myself)
# Connect cloud backend to all peers
cloudbe = ctx.socket(zmq.ROUTER)
cloudbe.identity = ident
for peer in peers:
tprint("I: connecting to cloud frontend at %s" % peer)
cloudbe.connect("ipc://%s-cloud.ipc" % peer)
if not isinstance(peers[0], bytes):
peers = [peer.encode('ascii') for peer in peers]
# Prepare local frontend and backend
localfe = ctx.socket(zmq.ROUTER)
localfe.bind("ipc://%s-localfe.ipc" % myself)
localbe = ctx.socket(zmq.ROUTER)
localbe.bind("ipc://%s-localbe.ipc" % myself)
# Get user to tell us when we can start...
raw_input("Press Enter when all brokers are started: ")
# create workers and clients threads
for i in range(NBR_WORKERS):
thread = threading.Thread(target=worker_task, args=(myself, i))
thread.daemon = True
thread.start()
for i in range(NBR_CLIENTS):
thread_c = threading.Thread(target=client_task, args=(myself, i))
thread_c.daemon = True
thread_c.start()
# Interesting part
# -------------------------------------------------------------
# Request-reply flow
# - Poll backends and process local/cloud replies
# - While worker available, route localfe to local or cloud
workers = []
# setup pollers
pollerbe = zmq.Poller()
pollerbe.register(localbe, zmq.POLLIN)
pollerbe.register(cloudbe, zmq.POLLIN)
pollerfe = zmq.Poller()
pollerfe.register(localfe, zmq.POLLIN)
pollerfe.register(cloudfe, zmq.POLLIN)
while True:
# If we have no workers anyhow, wait indefinitely
try:
events = dict(pollerbe.poll(1000 if workers else None))
except zmq.ZMQError:
break # interrupted
# Handle reply from local worker
msg = None
if localbe in events:
msg = localbe.recv_multipart()
(address, empty), msg = msg[:2], msg[2:]
workers.append(address)
# If it's READY, don't route the message any further
if msg[-1] == b'READY':
msg = None
elif cloudbe in events:
msg = cloudbe.recv_multipart()
(address, empty), msg = msg[:2], msg[2:]
# We don't use peer broker address for anything
if msg is not None:
address = msg[0]
if address in peers:
# Route reply to cloud if it's addressed to a broker
cloudfe.send_multipart(msg)
else:
# Route reply to client if we still need to
localfe.send_multipart(msg)
# Now route as many clients requests as we can handle
while workers:
events = dict(pollerfe.poll(0))
reroutable = False
# We'll do peer brokers first, to prevent starvation
if cloudfe in events:
msg = cloudfe.recv_multipart()
reroutable = False
elif localfe in events:
msg = localfe.recv_multipart()
reroutable = True
else:
break # No work, go back to backends
# If reroutable, send to cloud 20% of the time
# Here we'd normally use cloud status information
if reroutable and peers and random.randint(0, 4) == 0:
# Route to random broker peer
msg = [random.choice(peers), b''] + msg
cloudbe.send_multipart(msg)
else:
msg = [workers.pop(0), b''] + msg
localbe.send_multipart(msg)
if __name__ == '__main__':
if len(sys.argv) >= 2:
main(myself=sys.argv[1], peers=sys.argv[2:])
else:
print("Usage: peering2.py <me> [<peer_1> [... <peer_N>]]")
sys.exit(1)
|
unknown
|
codeparrot/codeparrot-clean
| ||
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
from django.conf import settings
import pyconde.tagging
class Migration(migrations.Migration):
dependencies = [
('speakers', '__first__'),
('taggit', '0002_auto_20150616_2121'),
('proposals', '0001_initial'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('conference', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Comment',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('content', models.TextField(verbose_name='content')),
('pub_date', models.DateTimeField(default=django.utils.timezone.now, verbose_name='publication date')),
('deleted', models.BooleanField(default=False, verbose_name='deleted')),
('deleted_date', models.DateTimeField(null=True, verbose_name='deleted at', blank=True)),
('deleted_reason', models.TextField(null=True, verbose_name='deletion reason', blank=True)),
('author', models.ForeignKey(verbose_name='author', to=settings.AUTH_USER_MODEL)),
('deleted_by', models.ForeignKey(related_name='deleted_comments', verbose_name='deleted by', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'verbose_name': 'comment',
'verbose_name_plural': 'comments',
},
),
migrations.CreateModel(
name='ProposalMetaData',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('num_comments', models.PositiveIntegerField(default=0, verbose_name='number of comments')),
('num_reviews', models.PositiveIntegerField(default=0, verbose_name='number of reviews')),
('latest_activity_date', models.DateTimeField(null=True, verbose_name='latest activity', blank=True)),
('latest_comment_date', models.DateTimeField(null=True, verbose_name='latest comment', blank=True)),
('latest_review_date', models.DateTimeField(null=True, verbose_name='latest review', blank=True)),
('latest_version_date', models.DateTimeField(null=True, verbose_name='latest version', blank=True)),
('score', models.FloatField(default=0.0, verbose_name='score')),
],
options={
'verbose_name': 'proposal metadata',
'verbose_name_plural': 'proposal metadata',
},
),
migrations.CreateModel(
name='ProposalVersion',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('title', models.CharField(max_length=100, verbose_name='title')),
('description', models.TextField(max_length=400, verbose_name='description')),
('abstract', models.TextField(verbose_name='abstract')),
('notes', models.TextField(verbose_name='notes', blank=True)),
('submission_date', models.DateTimeField(default=django.utils.timezone.now, verbose_name='submission date', editable=False)),
('modified_date', models.DateTimeField(null=True, verbose_name='modification date', blank=True)),
('language', models.CharField(default=b'de', max_length=5, verbose_name='language', choices=[(b'de', 'German'), (b'en', 'English')])),
('accept_recording', models.BooleanField(default=True)),
('pub_date', models.DateTimeField(verbose_name='publication date')),
('additional_speakers', models.ManyToManyField(related_name='proposalversion_participations', verbose_name='additional speakers', to='speakers.Speaker', blank=True)),
('audience_level', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, verbose_name='target-audience', to='conference.AudienceLevel')),
('available_timeslots', models.ManyToManyField(to='proposals.TimeSlot', verbose_name='available timeslots', blank=True)),
('conference', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, verbose_name='conference', to='conference.Conference')),
('creator', models.ForeignKey(verbose_name='creator', to=settings.AUTH_USER_MODEL)),
('duration', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, verbose_name='duration', to='conference.SessionDuration')),
('kind', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, verbose_name='type', to='conference.SessionKind')),
],
options={
'verbose_name': 'proposal version',
'verbose_name_plural': 'proposal versions',
},
),
migrations.CreateModel(
name='Review',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('rating', models.CharField(max_length=2, verbose_name='rating', choices=[(b'-1', b'-1'), (b'-0', b'-0'), (b'+0', b'+0'), (b'+1', b'+1')])),
('summary', models.TextField(verbose_name='summary')),
('pub_date', models.DateTimeField(default=django.utils.timezone.now, verbose_name='publication date')),
],
options={
'verbose_name': 'review',
'verbose_name_plural': 'reviews',
},
),
migrations.CreateModel(
name='Reviewer',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('state', models.PositiveSmallIntegerField(default=0, verbose_name='state', choices=[(0, 'pending request'), (1, 'request accepted'), (2, 'request declined')])),
('conference', models.ForeignKey(to='conference.Conference')),
('user', models.ForeignKey(verbose_name='user', to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name': 'reviewer',
'verbose_name_plural': 'reviewers',
},
),
migrations.CreateModel(
name='Proposal',
fields=[
],
options={
'proxy': True,
},
bases=('proposals.proposal',),
),
migrations.AddField(
model_name='review',
name='proposal',
field=models.ForeignKey(related_name='reviews', verbose_name='proposal', to='reviews.Proposal'),
),
migrations.AddField(
model_name='review',
name='proposal_version',
field=models.ForeignKey(verbose_name='proposal version', blank=True, to='reviews.ProposalVersion', null=True),
),
migrations.AddField(
model_name='review',
name='user',
field=models.ForeignKey(verbose_name='user', to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='proposalversion',
name='original',
field=models.ForeignKey(related_name='versions', verbose_name='original proposal', to='proposals.Proposal'),
),
migrations.AddField(
model_name='proposalversion',
name='speaker',
field=models.ForeignKey(related_name='proposalversions', on_delete=django.db.models.deletion.PROTECT, verbose_name='speaker', to='speakers.Speaker'),
),
migrations.AddField(
model_name='proposalversion',
name='tags',
field=pyconde.tagging.TaggableManager(to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'),
),
migrations.AddField(
model_name='proposalversion',
name='track',
field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, verbose_name='track', blank=True, to='conference.Track', null=True),
),
migrations.AddField(
model_name='proposalmetadata',
name='latest_proposalversion',
field=models.ForeignKey(verbose_name='latest proposal version', blank=True, to='reviews.ProposalVersion', null=True),
),
migrations.AddField(
model_name='proposalmetadata',
name='proposal',
field=models.OneToOneField(related_name='review_metadata', verbose_name='proposal', to='reviews.Proposal'),
),
migrations.AddField(
model_name='comment',
name='proposal',
field=models.ForeignKey(related_name='comments', verbose_name='proposal', to='reviews.Proposal'),
),
migrations.AddField(
model_name='comment',
name='proposal_version',
field=models.ForeignKey(verbose_name='proposal version', blank=True, to='reviews.ProposalVersion', null=True),
),
migrations.AlterUniqueTogether(
name='reviewer',
unique_together=set([('conference', 'user')]),
),
migrations.AlterUniqueTogether(
name='review',
unique_together=set([('user', 'proposal')]),
),
]
|
unknown
|
codeparrot/codeparrot-clean
| ||
from winwin.graph import Graph
class Writer:
'''
Write a graph or a list of nodes into a file.
'''
def write_nodes(self, nodes, file_location):
'''
Writes a list of nodes into a file with their x and y coordinates
@type nodes: list
@param nodes: a list of nodes.
@type file_location: string
@param file_location: location to save the file
'''
f = open(file_location, 'w')
for node in nodes:
print('{0} {1} {2}'.format(node.nr, node.x, node.y), file=f)
f.close()
def write_edges(self, graph, file_location):
'''
Write all edges of a graph into a file. The file format is
node_1 node_2 weight, so each line contains one edge.
@type graph: graph
@param graph: graph that contains the edges
@type file_location: string
@param file_location: location to save the file
'''
f = open(file_location, 'w')
edges = graph.edges
for idx, edge in enumerate(edges):
print('{0} {1} {2}' \
.format(edge.node_1.nr, edge.node_2.nr, edge.weight), file=f)
def write_matrix(self, graph, file_location, dummy_city=False):
'''
Write a graph to a file, edges are represented in a matrix
@type graph: graph
@param graph: graph that should be written to file
@type file_location: string
@param location to save the file
'''
f = open(file_location, 'w')
# sort nodes by number, because they have to be in order when we write
# the matrix
nodes = sorted(graph.nodes, key=lambda node: node.nr)
# create header
dimension = len(nodes)
if dummy_city:
dimension += 1
print('NAME: cities {0}'.format(dimension),file=f)
print('TYPE: TSP', file=f)
print('COMMENT: Semesterarbeit', file=f)
print('DIMENSION: {0}'.format(dimension), file=f)
print('EDGE_WEIGHT_TYPE: EXPLICIT', file=f)
print('EDGE_WEIGHT_FORMAT: UPPER_ROW', file=f)
print('EDGE_WEIGHT_SECTION', file=f)
# write dummy city if needed (to calculate hpp instead of tsp)
if dummy_city:
for i in range(dimension-1):
print('0',end=" ",file=f)
print('',file=f)
# write all distances between a node an all its neighbours on one line
for idx, node in enumerate(nodes):
neighbour_nodes = sorted(graph.neighbour_nodes(node), key=lambda \
node: node.nr)
weight_list = list()
for idx_inner, neighbour in enumerate(neighbour_nodes):
edge_list = graph.edge_by_nodes(node, neighbour)
weight = edge_list[0].weight
if node.nr < neighbour.nr:
weight_list.append('{0}'.format(weight))
# write line to file
print(*weight_list, sep = ' ', end = '\n', file=f)
f.close()
|
unknown
|
codeparrot/codeparrot-clean
| ||
name: Rail Inspector
on:
pull_request:
paths:
- "tools/rail_inspector/**"
push:
paths:
- "tools/rail_inspector/**"
permissions:
contents: read
jobs:
rail_inspector:
name: rail_inspector tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- name: Remove Gemfile.lock
run: rm -f Gemfile.lock
- name: Set up Ruby
uses: ruby/setup-ruby@v1
with:
ruby-version: 3.4
bundler-cache: true
- run: cd tools/rail_inspector && bundle exec rake
|
unknown
|
github
|
https://github.com/rails/rails
|
.github/workflows/rail_inspector.yml
|
<?php
/*
* This file is part of the Symfony package.
*
* (c) Fabien Potencier <fabien@symfony.com>
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/
namespace Symfony\Bridge\Doctrine\DependencyInjection\Security\UserProvider;
use Symfony\Bundle\SecurityBundle\DependencyInjection\Security\UserProvider\UserProviderFactoryInterface;
use Symfony\Component\Config\Definition\Builder\NodeDefinition;
use Symfony\Component\DependencyInjection\ChildDefinition;
use Symfony\Component\DependencyInjection\ContainerBuilder;
/**
* EntityFactory creates services for Doctrine user provider.
*
* @author Fabien Potencier <fabien@symfony.com>
* @author Christophe Coevoet <stof@notk.org>
*
* @final
*/
class EntityFactory implements UserProviderFactoryInterface
{
public function __construct(
private readonly string $key,
private readonly string $providerId,
) {
}
public function create(ContainerBuilder $container, string $id, array $config): void
{
$container
->setDefinition($id, new ChildDefinition($this->providerId))
->addArgument($config['class'])
->addArgument($config['property'])
->addArgument($config['manager_name'])
;
}
public function getKey(): string
{
return $this->key;
}
public function addConfiguration(NodeDefinition $node): void
{
$node
->children()
->scalarNode('class')
->isRequired()
->info('The full entity class name of your user class.')
->cannotBeEmpty()
->end()
->scalarNode('property')->defaultNull()->end()
->scalarNode('manager_name')->defaultNull()->end()
->end()
;
}
}
|
php
|
github
|
https://github.com/symfony/symfony
|
src/Symfony/Bridge/Doctrine/DependencyInjection/Security/UserProvider/EntityFactory.php
|
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# from ansible.inventory.host import Host
from ansible.playbook.handler import Handler
from ansible.playbook.task_include import TaskInclude
class HandlerTaskInclude(Handler, TaskInclude):
VALID_INCLUDE_KEYWORDS = TaskInclude.VALID_INCLUDE_KEYWORDS.union(('listen',))
@staticmethod
def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None):
t = HandlerTaskInclude(block=block, role=role, task_include=task_include)
return t.load_data(data, variable_manager=variable_manager, loader=loader)
|
unknown
|
codeparrot/codeparrot-clean
| ||
########################### 1. 導入所需模組
import cherrypy
import os
########################### 2. 設定近端與遠端目錄
# 確定程式檔案所在目錄, 在 Windows 有最後的反斜線
_curdir = os.path.join(os.getcwd(), os.path.dirname(__file__))
# 設定在雲端與近端的資料儲存目錄
if 'OPENSHIFT_REPO_DIR' in os.environ.keys():
# 表示程式在雲端執行
download_root_dir = os.environ['OPENSHIFT_DATA_DIR']
data_dir = os.environ['OPENSHIFT_DATA_DIR']
else:
# 表示程式在近端執行
download_root_dir = _curdir + "/local_data/"
data_dir = _curdir + "/local_data/"
########################### 3. 建立主物件
class HelloWorld(object):
_cp_config = {
# if there is no utf-8 encoding, no Chinese input available
'tools.encode.encoding': 'utf-8',
'tools.sessions.on' : True,
'tools.sessions.storage_type' : 'file',
'tools.sessions.locking' : 'explicit',
'tools.sessions.storage_path' : data_dir+'/tmp',
# session timeout is 60 minutes
'tools.sessions.timeout' : 60
}
@cherrypy.expose
def fileuploadform(self):
return '''<h1>file upload</h1>
<script src="/static/jquery.js" type="text/javascript"></script>
<script src="/static/axuploader.js" type="text/javascript"></script>
<script>
$(document).ready(function(){
$('.prova').axuploader({url:'/fileaxupload', allowExt:['jpg','png','gif','7z','pdf','zip','flv','stl','txt'],
finish:function(x,files)
{
alert('All files have been uploaded: '+files);
},
enable:true,
remotePath:function(){
return 'downloads/';
}
});
});
</script>
<div class="prova"></div>
<input type="button" onclick="$('.prova').axuploader('disable')" value="asd" />
<input type="button" onclick="$('.prova').axuploader('enable')" value="ok" />
</section></body></html>
'''
@cherrypy.expose
def brythonuploadform(self):
return '''<h1>file upload</h1>
<script type="text/javascript" src="/static/Brython2.0.0-20140209-164925/brython.js"></script>
<script type="text/javascript" >
function getradio(tagname){
var radios = document.getElementsByName(tagname);
for (var i = 0, length = radios.length; i < length; i++) {
if (radios[i].checked) {
// do whatever you want with the checked radio
return radios[i].value;
// only one radio can be logically checked, don't check the rest
break;
}
}
}
function run_js(){
var cons = document.getElementById("console")
var jscode = cons.value
var t0 = (new Date()).getTime()
eval(jscode)
var t1 = (new Date()).getTime()
console.log("Javascript code run in "+(t1-t0)+" ms")
}
</script>
<script type="text/python3" src="/static/editor.py"></script>
<script type="text/python3">
from browser import doc
overwrite = 0
# add delete_program 1/7, seven steps to complete the ajax task, the last step is to add delete_program function on server
# delete1 and delete2 parameters are also added into save_program function.
delete1 = 0
delete2 = 0
def set_debug(ev):
if ev.target.checked:
__BRYTHON__.debug = 1
else:
__BRYTHON__.debug = 0
def set_overwrite(ev):
global overwrite
if ev.target.checked:
overwrite = 1
else:
overwrite = 0
# add delete_program 2/7, client side add set_delete1 and set_delete2 functions.
def set_delete1(ev):
global delete1
if ev.target.checked:
delete1 = 1
else:
delete1 = 0
def set_delete2(ev):
global delete2
if ev.target.checked:
delete2 = 1
else:
delete2 = 0
#### ajax process
from browser import ajax,doc
def on_complete(req):
print(req.readyState)
print('status',req.status)
if req.status==200 or req.status==0:
# show request text on id=result division
doc["result"].html = req.text
else:
doc["result"].html = "error "+req.text
def err_msg():
doc["result"].html = "server didn't reply after %s seconds" %timeout
timeout = 4
def go(url):
req = ajax.ajax()
req.bind('complete', on_complete)
req.set_timeout(timeout, err_msg)
req.open('GET', url, True)
req.send()
def post(url):
req = ajax.ajax()
req.bind('complete', on_complete)
req.set_timeout(timeout, err_msg)
req.open('POST', url, True)
req.set_header('content-type','application/x-www-form-urlencoded')
# doc["filename"].value is the id=filename input field's value
# editor.getValue() is the content on editor, need to send dictionary format data
# while post url, need to save editor content into local_storage to use the previous load javascripts
storage["py_src"] = editor.getValue()
# add delete_program 3/7, two parameters added, this will also affect save_program function on server.
req.send({'filename':doc["filename"].value, 'editor':editor.getValue(), 'overwrite':overwrite, 'delete1':delete1, 'delete2':delete2})
# get program from server
def get_prog(ev):
# ajax can only read data from server
_name = '/brython_programs/'+doc["filename"].value
try:
editor.setValue(open(_name, encoding="utf-8").read())
doc["result"].html = doc["filename"].value+" loaded!"
except:
doc["result"].html = "can not get "+doc["filename"].value+"!"
editor.scrollToRow(0)
editor.gotoLine(0)
reset_theme()
def get_radio(ev):
from javascript import JSObject
filename = JSObject(getradio)("filename")
# ajax can only read data from server
doc["filename"].value = filename
_name = '/brython_programs/'+filename
editor.setValue(open(_name, encoding="utf-8").read())
doc["result"].html = filename+" loaded!"
editor.scrollToRow(0)
editor.gotoLine(0)
reset_theme()
# bindings
doc['run_js'].bind('click',run_js)
doc['set_debug'].bind('change',set_debug)
doc['set_overwrite'].bind('change',set_overwrite)
# add delete_program 4/7, two associated binds added
doc['set_delete1'].bind('change',set_delete1)
doc['set_delete2'].bind('change',set_delete2)
# next functions are defined in editor.py
doc['show_js'].bind('click',show_js)
doc['run'].bind('click',run)
doc['show_console'].bind('click',show_console)
# get_prog and get _radio (working)
doc['get_prog'].bind('click', get_prog)
doc['get_radio'].bind('click', get_radio)
# reset_the_src and clear_console (working)
doc['reset_the_src'].bind('click',reset_the_src)
doc['clear_console'].bind('click',clear_console)
# clear_canvas and clear_src
doc['clear_canvas'].bind('click',clear_canvas)
doc['clear_src'].bind('click',clear_src)
# only admin can save program to server
doc['save_program'].bind('click',lambda ev:post('/save_program'))
# add delete_program 5/7, delete_program button bind to execute delete_program on server.
doc['delete_program'].bind('click',lambda ev:post('/delete_program'))
</script>
<script type="text/javascript">
window.onload=brython({debug:1, cache:'version'});
</script>
<div class="prova"></div>
<input type="button" onclick="$('.prova').axuploader('disable')" value="asd" />
<input type="button" onclick="$('.prova').axuploader('enable')" value="ok" />
</section></body></html>
'''
@cherrypy.expose
def fileaxupload(self, *args, **kwargs):
filename = kwargs["ax-file-name"]
flag = kwargs["start"]
# 終於找到 bug, 因為從 kwargs[] 所取得的變數為字串, 而非數字, 先前用 flag == 0 是錯誤的
if flag == "0":
# 若從 0 byte 送起, 表示要開啟新檔案
file = open(download_root_dir+"downloads/"+filename, "wb")
else:
file = open(download_root_dir+"downloads/"+filename, "ab")
file.write(cherrypy.request.body.read())
file.close()
return "files uploaded!"
@cherrypy.expose
def index(self, input1=None, input2=None):
return "Hello world!"+str(input1)+_curdir
@cherrypy.expose
def inputform(self, input1=None, input2=None):
return "input form"+str(input1)
#index.exposed = True
########################### 4. 安排啟動設定
# 配合程式檔案所在目錄設定靜態目錄或靜態檔案
application_conf = {'/static':{
'tools.staticdir.on': True,
'tools.staticdir.dir': _curdir+"/static"},
'/downloads':{
'tools.staticdir.on': True,
'tools.staticdir.dir': data_dir+"/downloads"}
}
########################### 5. 在近端或遠端啟動程式
# 利用 HelloWorld() class 產生案例物件
root = HelloWorld()
# 假如在 os 環境變數中存在 'OPENSHIFT_REPO_DIR', 表示程式在 OpenShift 環境中執行
if 'OPENSHIFT_REPO_DIR' in os.environ.keys():
# 雲端執行啟動
application = cherrypy.Application(root, config = application_conf)
else:
# 近端執行啟動
'''
cherrypy.server.socket_port = 8083
cherrypy.server.socket_host = '127.0.0.1'
'''
cherrypy.quickstart(root, config = application_conf)
|
unknown
|
codeparrot/codeparrot-clean
| ||
require 'thread'
# many producers, many consumers
nr = 1_000_000
n = 10
m = 10
q = Thread::SizedQueue.new(100)
consumers = n.times.map do
Thread.new do
while q.pop
# consuming
end
end
end
producers = m.times.map do
Thread.new do
while nr > 0
q.push true
nr -= 1
end
end
end
producers.each(&:join)
n.times { q.push nil }
consumers.each(&:join)
|
ruby
|
github
|
https://github.com/ruby/ruby
|
benchmark/vm_thread_sized_queue4.rb
|
"""Metadata Sheet."""
from logging import getLogger
from pyramid.registry import Registry
import colander
from adhocracy_core.interfaces import IResource
from adhocracy_core.interfaces import ISheet
from adhocracy_core.interfaces import SheetToSheet
from adhocracy_core.sheets import add_sheet_to_registry
from adhocracy_core.sheets import AttributeResourceSheet
from adhocracy_core.sheets import sheet_meta
from adhocracy_core.sheets.principal import IUserBasic
from adhocracy_core.schema import Boolean
from adhocracy_core.schema import DateTime
from adhocracy_core.schema import Reference
from adhocracy_core.utils import get_sheet
from adhocracy_core.utils import now
logger = getLogger(__name__)
class IMetadata(ISheet):
"""Market interface for the metadata sheet."""
class MetadataCreatorsReference(SheetToSheet):
"""Metadata sheet creators reference."""
source_isheet = IMetadata
source_isheet_field = 'creator'
target_isheet = IUserBasic
class MetadataModifiedByReference(SheetToSheet):
"""Points to the last person who modified a resource."""
source_isheet = IMetadata
source_isheet_field = 'modified_by'
target_isheet = IUserBasic
@colander.deferred
def deferred_validate_hidden(node, kw):
"""Check hide_permission."""
context = kw['context']
request = kw.get('request', None)
if request is None:
return
def check_hide_permisison(node, cstruct):
if not request.has_permission('hide', context):
raise colander.Invalid(node, 'Changing this field is not allowed')
return check_hide_permisison
class MetadataSchema(colander.MappingSchema):
"""Metadata sheet data structure.
`creation_date`: Creation date of this resource. defaults to now.
`item_creation_date`: Equals creation date for ISimple/IPool,
equals the item creation date for
:class:`adhocracy_core.interfaces.IItemVersion`. This exists to
ease the frontend end development. This may go away if we have a
high level API to make :class:`adhocracy_core.interfaces.Item` /
`IItemVersion` one `thing`. Defaults to now.
`creator`: creator (user resource) of this resource.
`modified_by`: the last person (user resources) who modified a
resource, initially the creator
`modification_date`: Modification date of this resource. defaults to now.
`deleted`: whether the resource is marked as deleted (only shown to those
that specifically ask for it)
`hidden`: whether the resource is marked as hidden (only shown to those
that have special permissions and ask for it)
"""
creator = Reference(reftype=MetadataCreatorsReference, readonly=True)
creation_date = DateTime(missing=colander.drop, readonly=True)
item_creation_date = DateTime(missing=colander.drop, readonly=True)
modified_by = Reference(reftype=MetadataModifiedByReference, readonly=True)
modification_date = DateTime(missing=colander.drop, readonly=True)
deleted = Boolean()
hidden = Boolean(validator=deferred_validate_hidden)
metadata_meta = sheet_meta._replace(
isheet=IMetadata,
schema_class=MetadataSchema,
sheet_class=AttributeResourceSheet,
editable=True,
creatable=True,
readable=True,
permission_edit='delete',
)
def view_blocked_by_metadata(resource: IResource, registry: Registry,
block_reason: str) -> dict:
"""
Return a dict with an explanation why viewing this resource is not allowed.
If the resource provides metadata, the date of the
last change and its author are added to the result.
"""
result = {'reason': block_reason}
if not IMetadata.providedBy(resource):
return result
metadata = get_sheet(resource, IMetadata, registry=registry)
appstruct = metadata.get()
result['modification_date'] = appstruct['modification_date']
result['modified_by'] = appstruct['modified_by']
return result
def is_older_than(resource: IMetadata, days: int) -> bool:
"""Check if the creation date of `context` is older than `days`."""
timedelta = now() - resource.creation_date
return timedelta.days > days
def includeme(config):
"""Register sheets, add subscriber to update creation/modification date."""
add_sheet_to_registry(metadata_meta, config.registry)
|
unknown
|
codeparrot/codeparrot-clean
| ||
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package bufio_test
import (
"bufio"
"bytes"
"fmt"
"os"
"strconv"
"strings"
)
func ExampleWriter() {
w := bufio.NewWriter(os.Stdout)
fmt.Fprint(w, "Hello, ")
fmt.Fprint(w, "world!")
w.Flush() // Don't forget to flush!
// Output: Hello, world!
}
func ExampleWriter_AvailableBuffer() {
w := bufio.NewWriter(os.Stdout)
for _, i := range []int64{1, 2, 3, 4} {
b := w.AvailableBuffer()
b = strconv.AppendInt(b, i, 10)
b = append(b, ' ')
w.Write(b)
}
w.Flush()
// Output: 1 2 3 4
}
// ExampleWriter_ReadFrom demonstrates how to use the ReadFrom method of Writer.
func ExampleWriter_ReadFrom() {
var buf bytes.Buffer
writer := bufio.NewWriter(&buf)
data := "Hello, world!\nThis is a ReadFrom example."
reader := strings.NewReader(data)
n, err := writer.ReadFrom(reader)
if err != nil {
fmt.Println("ReadFrom Error:", err)
return
}
if err = writer.Flush(); err != nil {
fmt.Println("Flush Error:", err)
return
}
fmt.Println("Bytes written:", n)
fmt.Println("Buffer contents:", buf.String())
// Output:
// Bytes written: 41
// Buffer contents: Hello, world!
// This is a ReadFrom example.
}
// The simplest use of a Scanner, to read standard input as a set of lines.
func ExampleScanner_lines() {
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
fmt.Println(scanner.Text()) // Println will add back the final '\n'
}
if err := scanner.Err(); err != nil {
fmt.Fprintln(os.Stderr, "reading standard input:", err)
}
}
// Return the most recent call to Scan as a []byte.
func ExampleScanner_Bytes() {
scanner := bufio.NewScanner(strings.NewReader("gopher"))
for scanner.Scan() {
fmt.Println(len(scanner.Bytes()) == 6)
}
if err := scanner.Err(); err != nil {
fmt.Fprintln(os.Stderr, "shouldn't see an error scanning a string")
}
// Output:
// true
}
// Use a Scanner to implement a simple word-count utility by scanning the
// input as a sequence of space-delimited tokens.
func ExampleScanner_words() {
// An artificial input source.
const input = "Now is the winter of our discontent,\nMade glorious summer by this sun of York.\n"
scanner := bufio.NewScanner(strings.NewReader(input))
// Set the split function for the scanning operation.
scanner.Split(bufio.ScanWords)
// Count the words.
count := 0
for scanner.Scan() {
count++
}
if err := scanner.Err(); err != nil {
fmt.Fprintln(os.Stderr, "reading input:", err)
}
fmt.Printf("%d\n", count)
// Output: 15
}
// Use a Scanner with a custom split function (built by wrapping ScanWords) to validate
// 32-bit decimal input.
func ExampleScanner_custom() {
// An artificial input source.
const input = "1234 5678 1234567901234567890"
scanner := bufio.NewScanner(strings.NewReader(input))
// Create a custom split function by wrapping the existing ScanWords function.
split := func(data []byte, atEOF bool) (advance int, token []byte, err error) {
advance, token, err = bufio.ScanWords(data, atEOF)
if err == nil && token != nil {
_, err = strconv.ParseInt(string(token), 10, 32)
}
return
}
// Set the split function for the scanning operation.
scanner.Split(split)
// Validate the input
for scanner.Scan() {
fmt.Printf("%s\n", scanner.Text())
}
if err := scanner.Err(); err != nil {
fmt.Printf("Invalid input: %s", err)
}
// Output:
// 1234
// 5678
// Invalid input: strconv.ParseInt: parsing "1234567901234567890": value out of range
}
// Use a Scanner with a custom split function to parse a comma-separated
// list with an empty final value.
func ExampleScanner_emptyFinalToken() {
// Comma-separated list; last entry is empty.
const input = "1,2,3,4,"
scanner := bufio.NewScanner(strings.NewReader(input))
// Define a split function that separates on commas.
onComma := func(data []byte, atEOF bool) (advance int, token []byte, err error) {
for i := 0; i < len(data); i++ {
if data[i] == ',' {
return i + 1, data[:i], nil
}
}
if !atEOF {
return 0, nil, nil
}
// There is one final token to be delivered, which may be the empty string.
// Returning bufio.ErrFinalToken here tells Scan there are no more tokens after this
// but does not trigger an error to be returned from Scan itself.
return 0, data, bufio.ErrFinalToken
}
scanner.Split(onComma)
// Scan.
for scanner.Scan() {
fmt.Printf("%q ", scanner.Text())
}
if err := scanner.Err(); err != nil {
fmt.Fprintln(os.Stderr, "reading input:", err)
}
// Output: "1" "2" "3" "4" ""
}
// Use a Scanner with a custom split function to parse a comma-separated
// list with an empty final value but stops at the token "STOP".
func ExampleScanner_earlyStop() {
onComma := func(data []byte, atEOF bool) (advance int, token []byte, err error) {
i := bytes.IndexByte(data, ',')
if i == -1 {
if !atEOF {
return 0, nil, nil
}
// If we have reached the end, return the last token.
return 0, data, bufio.ErrFinalToken
}
// If the token is "STOP", stop the scanning and ignore the rest.
if string(data[:i]) == "STOP" {
return i + 1, nil, bufio.ErrFinalToken
}
// Otherwise, return the token before the comma.
return i + 1, data[:i], nil
}
const input = "1,2,STOP,4,"
scanner := bufio.NewScanner(strings.NewReader(input))
scanner.Split(onComma)
for scanner.Scan() {
fmt.Printf("Got a token %q\n", scanner.Text())
}
if err := scanner.Err(); err != nil {
fmt.Fprintln(os.Stderr, "reading input:", err)
}
// Output:
// Got a token "1"
// Got a token "2"
}
|
go
|
github
|
https://github.com/golang/go
|
src/bufio/example_test.go
|
// Code generated by "enumer -type=PluginRuntimeType -trimprefix=PluginRuntimeType -transform=snake"; DO NOT EDIT.
package api
import (
"fmt"
)
const _PluginRuntimeTypeName = "unsupportedcontainer"
var _PluginRuntimeTypeIndex = [...]uint8{0, 11, 20}
func (i PluginRuntimeType) String() string {
if i >= PluginRuntimeType(len(_PluginRuntimeTypeIndex)-1) {
return fmt.Sprintf("PluginRuntimeType(%d)", i)
}
return _PluginRuntimeTypeName[_PluginRuntimeTypeIndex[i]:_PluginRuntimeTypeIndex[i+1]]
}
var _PluginRuntimeTypeValues = []PluginRuntimeType{0, 1}
var _PluginRuntimeTypeNameToValueMap = map[string]PluginRuntimeType{
_PluginRuntimeTypeName[0:11]: 0,
_PluginRuntimeTypeName[11:20]: 1,
}
// PluginRuntimeTypeString retrieves an enum value from the enum constants string name.
// Throws an error if the param is not part of the enum.
func PluginRuntimeTypeString(s string) (PluginRuntimeType, error) {
if val, ok := _PluginRuntimeTypeNameToValueMap[s]; ok {
return val, nil
}
return 0, fmt.Errorf("%s does not belong to PluginRuntimeType values", s)
}
// PluginRuntimeTypeValues returns all values of the enum
func PluginRuntimeTypeValues() []PluginRuntimeType {
return _PluginRuntimeTypeValues
}
// IsAPluginRuntimeType returns "true" if the value is listed in the enum definition. "false" otherwise
func (i PluginRuntimeType) IsAPluginRuntimeType() bool {
for _, v := range _PluginRuntimeTypeValues {
if i == v {
return true
}
}
return false
}
|
go
|
github
|
https://github.com/hashicorp/vault
|
api/pluginruntimetype_enumer.go
|
//go:build !windows
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package preflight
import (
"os"
"k8s.io/kubernetes/cmd/kubeadm/app/util/errors"
)
// Check validates if an user has elevated (root) privileges.
func (ipuc IsPrivilegedUserCheck) Check() (warnings, errorList []error) {
if os.Getuid() != 0 {
return nil, []error{errors.New("user is not running as root")}
}
return nil, nil
}
|
go
|
github
|
https://github.com/kubernetes/kubernetes
|
cmd/kubeadm/app/preflight/checks_unix.go
|
from os import path, makedirs, remove, listdir, environ, pathsep
from subprocess import list2cmdline, run, Popen, PIPE
import numpy as np
from ..utils import read_htk_header
from ..tcdtimit.files import phoneme_file, phoneme_list, viseme_file, viseme_list, character_file, character_list
current_path = path.abspath(path.dirname(__file__))
environ['PATH'] += pathsep + path.join(current_path, '../bins/htk/')
class HTKSys(object):
r"""
AVSR Training and Recognition using HTK
"""
def __init__(self,
train_files,
test_files,
hmm_states=3,
mixtures=(1,),
language_model=False,
config_dir=None,
report_results=('test',),
num_threads=4):
r"""
Class constructor
:param train_files: Train files split
:param test_files: Test files split
:param htk_dir: Directory containing the .htk feature files
:param hmm_states: Number of states per HMM
:param mixtures: Tuple of increasing values representing the number of GMM mixtures
:param language_model: Use a language model (True/False)
:param report_results: Tuple containing the dataset splits (train, test)
"""
# argument copy
self._trainfeat = train_files
self._testfeat = test_files
self._hmmstates = hmm_states
self._mixtures = mixtures
self._languageModel = language_model
self._report_results = report_results
# static dataset configs
self._config = path.join(config_dir, 'settings.conf')
self._viseme_list = path.join(config_dir, 'viseme_list')
self._viseme_dict = path.join(config_dir, 'viseme_dict')
self._grammar = path.join(config_dir, 'grammar')
self._labels = path.join(config_dir, 'allVisemes.mlf')
self._HViteConf = path.join(config_dir, 'HVite.conf')
# dynamically generated
self._run_dir = './run/'
self._hmm_dir = './run/hmms/'
self._cleanup_run_dir()
self._hmm_proto = './run/proto'
self._word_net = None # generated at runtime
self._gen_wordnet('./run/wdnet')
# library initialisation
self._num_runs = 0
self._num_threads = num_threads
self._feature_size = self._get_feature_size()
def _cleanup_run_dir(self):
try:
allfiles = [file for file in listdir(self._run_dir) if path.isfile(self._run_dir + file)]
for f in allfiles:
remove(self._run_dir + f)
if path.isdir(self._hmm_dir):
from shutil import rmtree
rmtree(self._hmm_dir)
except Exception:
print('cleaning failed')
def _get_feature_size(self):
header = read_htk_header(self._trainfeat.values().__iter__().__next__()) # read header of the first file
return header[2]//4 # sampSize is in bytes
def run(self):
r"""
HTK-Based training
1. Run HCompV to initialize a prototype HMM with global means and variances
2. Replicate the prototype for each viseme
3. Apply multiple iterations of HRest for Baum-Welch embedded training
:return:
"""
self._initialize_stats()
self._replicate_proto()
self._embedded_reestimation(num_times=5)
self._fix_silence_viseme()
self._embedded_reestimation(num_times=5)
for case in self._report_results:
if case == 'test':
self.test(self._testfeat)
self.print_results(1, case)
elif case == 'train':
self.test(self._trainfeat)
self.print_results(1, case)
# for i in range(2,self._finalMixtures+1):
for i in self._mixtures:
self._increase_mixtures(i)
self._embedded_reestimation(num_times=5)
for case in self._report_results:
if case == 'test':
self.test(self._testfeat)
self.print_results(i, case)
elif case == 'train':
self.test(self._trainfeat)
self.print_results(i, case)
def test(self, features):
"""
HTK-Based decoding
Runs HVite on the feature list
:param features:
:return:
"""
self._decode(features)
def _decode(self, features):
currdir = self._hmm_dir + 'hmm' + str(self._num_runs) + '/'
outfile = './run/predicted.mlf'
self.predicted_labels = outfile
scp = './run/test.scp'
self._gen_feature_scp('./run/test.scp', features)
scp_list = _split_scp(scp, num_threads=self._num_threads)
threadlist = []
for th in range(self._num_threads):
# cmd = ['HVite', '-C', self._HViteConf, '-H', currdir + 'vFloors', '-H', currdir + 'hmmdefs', '-S',
# scp_list[th], '-l', '\'*\'', '-i', outfile+'.part' +str(th) , '-w', self._word_net,
# '-p', '0.0', '-s', '1.0', self._viseme_dict, self._viseme_list]
cmd = ['HVite', '-C', self._HViteConf, '-H', currdir + 'vFloors', '-H', currdir + 'hmmdefs', '-S',
scp_list[th], '-i', outfile + '.part' + str(th), '-w', self._word_net,
self._viseme_dict, self._viseme_list]
print(list2cmdline(cmd))
proc = Popen(cmd)
threadlist.append(proc)
# sync threads
for proc in threadlist:
proc.wait()
# merge parts, discarding the header, do cleanup
if path.isfile(outfile):
remove(outfile)
with open(outfile, 'a') as of:
of.write('#!MLF!#\n') # initial header
for part in range(self._num_threads):
partfile = outfile+'.part' + str(part)
with open(partfile, 'r') as f:
contents = f.readlines()
of.writelines(contents[1:]) # ignore the header #!MLF!#
remove(partfile)
for file in scp_list:
# cmd = ['rm ' + file]
# run(cmd, shell=True, check=True)
remove(file)
def _initialize_stats(self):
firstdir = './run/hmms/hmm' + str(self._num_runs) + '/'
makedirs(firstdir, exist_ok=True)
self._gen_proto(vecsize=self._feature_size, nstates=self._hmmstates)
scp = './run/train.scp'
self.trainscp = scp
self._gen_feature_scp('./run/train.scp', self._trainfeat)
cmd = ['HCompV', '-C', self._config, '-f', '0.01', '-m', '-S', self.trainscp, '-M', firstdir, self._hmm_proto]
print(list2cmdline(cmd))
run(cmd, check=True)
def _increase_mixtures(self, nmix):
scp = self._gen_edit_script_num_mixtures(nmix)
prevdir = self._hmm_dir + 'hmm' + str(self._num_runs) + '/'
nextdir = self._hmm_dir + 'hmm' + str(self._num_runs + 1) + '/'
self._num_runs += 1
makedirs(nextdir, exist_ok=True)
cmd = ['HHEd', '-H', prevdir + 'vFloors', '-H', prevdir + 'hmmdefs', '-M', nextdir, scp,
self._viseme_list]
print(list2cmdline(cmd))
run(cmd, check=True)
def _fix_silence_viseme(self):
edit_script = self._gen_edit_script_silence_vis()
self._num_runs += 1
prevdir = self._hmm_dir + 'hmm' + str(self._num_runs - 1) + '/'
nextdir = self._hmm_dir + 'hmm' + str(self._num_runs) + '/'
makedirs(nextdir, exist_ok=True)
cmd = ['HHEd', '-H', prevdir + 'vFloors', '-H', prevdir + 'hmmdefs', '-M',
nextdir, edit_script, self._viseme_list]
print(list2cmdline(cmd))
run(cmd, check=True)
def _gen_edit_script_silence_vis(self):
fname = './run/sil.hed'
with open(fname, 'w') as f:
for s in range(self._hmmstates+1, 2, -1):
for j in range(s-1, 1, -1):
f.write('AT ' + str(s) + ' ' + str(j) + ' 0.2 {S.transP}\n')
f.write('AT 2 ' + str(self._hmmstates + 1) + ' 0.2 {S.transP}\n')
return fname
def _gen_edit_script_num_mixtures(self, num_mixtures):
file_name = './run/mix_' + str(num_mixtures) + '.hed'
with open(file_name, 'w') as f:
f.write('MU ' + str(num_mixtures) + ' {*.state[2-' + str(self._hmmstates + 1) + '].mix}\n')
return file_name
def _gen_proto(self, vecsize, nstates):
lines = ['~o <VecSize> ' + str(vecsize) + ' <USER>\n',
'~h "proto"\n',
'<BeginHMM>\n',
'<NumStates> ' + str(nstates + 2) + '\n']
for s in range(2, nstates+2):
lines.append('<State> ' + str(s) + '\n')
lines.append('<Mean> ' + str(vecsize) + '\n')
lines.append('0.0 '*vecsize + '\n')
lines.append('<Variance> ' + str(vecsize) + '\n')
lines.append('1.0 ' * vecsize + '\n')
lines.append('<TransP> ' + str(nstates+2) + '\n')
lines.append('0.0 1.0 ' + '0.0 '*nstates + '\n')
for i in range(1, nstates+1):
lines.append('0.0 ' * i + '0.5 0.5 ' + '0.0 ' * (nstates-i) + '\n')
lines.append('0.0 ' * (nstates+2) + '\n')
lines.append('<EndHMM>\n')
with open(self._hmm_proto, 'w') as f:
f.writelines(lines)
def _gen_feature_scp(self, scpname, features):
lines = []
for file in features.values():
lines.append(file + '\n')
with open(scpname, 'w') as f:
f.writelines(lines)
def _gen_wordnet(self, wdnet):
if self._languageModel is True:
new_hmmlist = append_to_file(self._viseme_list, ('!ENTER', '!EXIT'))
self._viseme_dict = append_to_file(self._viseme_dict, ('!ENTER []', '!EXIT []'))
cmd = ['HLStats -b ./run/bigrams -o ' + self._viseme_list + ' ' + self._labels]
print(list2cmdline(cmd))
run(cmd, check=True, shell=True)
cmd = ['HBuild -n ./run/bigrams ' + new_hmmlist + ' ' + wdnet]
print(list2cmdline(cmd))
run(cmd, check=True, shell=True)
self._word_net = wdnet
else:
cmd = ['HParse', self._grammar, wdnet]
print(list2cmdline(cmd))
run(cmd, check=True)
self._word_net = wdnet
def _replicate_proto(self):
# copied this function from a tutorial
# TODO - refactor this function, rootdirs as arguments
hmm0_dir = path.join(self._hmm_dir, 'hmm0')
# read proto lines
hmm_proto = path.join(hmm0_dir, 'proto')
f = open(hmm_proto, 'r')
proto_lines = []
for l in f:
l = l.rstrip('\r\n')
proto_lines.append(l)
f.close()
# read vfloor lines
vfloor_file = path.join(hmm0_dir, 'vFloors')
v = open(vfloor_file, 'r')
vfloor_lines = []
for l in v:
l = l.rstrip('\r\n')
vfloor_lines.append(l)
v.close()
# append first lines of hmm proto to vfloors file
v = open(vfloor_file, 'w')
for l in proto_lines[0:3]:
v.write('%s\n' % l)
for l in vfloor_lines:
v.write('%s\n' % l)
v.close()
# read phoneme list
pl = open(self._viseme_list, 'r')
phones = []
for p in pl:
p = p.rstrip('\r\n')
phones.append(p)
pl.close()
# for each phone copy prototype
hmmdefs = path.join(hmm0_dir, 'hmmdefs')
h = open(hmmdefs, 'w')
for p in phones:
h.write('~h \"%s\"\n' % p)
for pl in proto_lines[4:]:
h.write('%s\n' % pl)
h.close()
def _embedded_reestimation(self, num_times, binary=False, pruning='off', stats=False):
"""
:param num_times:
:return:
"""
if binary is False:
bincfg = []
elif binary is True:
bincfg = ['-B']
else:
raise Exception('error estting parameters')
if pruning == 'off':
prune_cfg = []
elif pruning == 'on':
prune_cfg = ['-t', '250.0', '150.0', '1000.0']
else:
raise Exception('error estting parameters')
if stats is False:
statscfg = []
elif stats is True:
statscfg = ['-s', self._hmm_dir + 'stats']
else:
raise Exception('error estting parameters')
scp_list = _split_scp(self.trainscp, num_threads=self._num_threads)
for loop in range(num_times):
self._num_runs += 1
previous_dir = self._hmm_dir + 'hmm' + str(self._num_runs - 1) + '/'
current_dir = self._hmm_dir + 'hmm' + str(self._num_runs) + '/'
makedirs(current_dir, exist_ok=True)
threadlist = []
for thread in range(len(scp_list)):
cmd = ['HERest'] + bincfg + ['-C', self._config, '-I', self._labels] + prune_cfg + statscfg + \
['-S', scp_list[thread], '-H', previous_dir + 'vFloors', '-H', previous_dir + 'hmmdefs',
'-M', current_dir, '-p', str(thread+1), self._viseme_list]
print(list2cmdline(cmd))
proc = Popen(cmd)
threadlist.append(proc)
# sync threads
for proc in threadlist:
proc.wait()
# Run final HERest to collect the accummulators
acc_files = listdir(current_dir)
acc_files = [path.join(current_dir, file) for file in acc_files]
cmd = ['HERest'] + bincfg + ['-C', self._config, '-I', self._labels] + prune_cfg + statscfg + \
['-H', previous_dir + 'vFloors', '-H', previous_dir + 'hmmdefs',
'-M', current_dir, '-p', '0', self._viseme_list] + acc_files
run(list2cmdline(cmd), shell=True, check=True)
# cleanup folder (remove accs, scp.i)
# cmd = ['rm ' + current_dir + '*.acc']
# run(cmd, shell=True, check=True)
for file in acc_files:
remove(file)
for file in scp_list:
# cmd = ['rm ' + file]
# run(cmd, shell=True, check=True)
remove(file)
def print_results(self, nmix, case):
cmd = ['HResults', '-I', self._labels, '-f', '-p', self._viseme_list, self.predicted_labels]
print(list2cmdline(cmd))
with open('./run/results_' + case + '_' + str(nmix)+'_mixtures.txt', 'w') as logfile:
run(cmd, check=True, stdout=logfile)
# r"""these functions are not part of the class"""
def _split_scp(scp, num_threads):
with open(scp, 'r') as fr:
contents = fr.readlines()
num_lines = len(contents)
avg = int(num_lines//num_threads)
idx_start = np.arange(num_threads) * avg
idx_end = np.arange(1, num_threads + 1) * avg
idx_end[-1] = num_lines
scplist = []
for i in range(num_threads):
partial_scp = scp + '.part' + str(i)
with open(partial_scp, 'w') as fw:
fw.writelines(contents[idx_start[i]:idx_end[i]])
scplist.append(partial_scp)
return scplist
def read_result_file(file):
r"""
Reads the contents of a HTK result log file
:param file: result log
:return: correctness, accuracy
"""
with open(file, 'r') as f:
contents = f.read().splitlines()
idx_acc = contents[6].index('Acc=')
acc = float(contents[6][idx_acc+4:idx_acc+9])
idx_corr = contents[6].index('Corr=')
corr = float(contents[6][idx_corr + 5:idx_corr + 10])
return corr, acc
def read_result_str(result_string):
idx_acc = result_string.index('Acc=')
acc = float(result_string[idx_acc + 4:idx_acc + 9].strip(','))
idx_corr = result_string.index('Corr=')
corr = float(result_string[idx_corr + 5:idx_corr + 10].strip(','))
return corr, acc
def append_to_file(hmmlist, items):
with open(hmmlist, 'r') as f:
contents = f.read().splitlines()
oldfile = path.split(hmmlist)[1]
newfile = './run/' + oldfile + '_lm'
for item in items:
contents.append(item)
with open(newfile, 'w') as f:
f.writelines(line+'\n' for line in contents)
return newfile
def compute_results2(predicted_labels, ground_truth_file, unit_list_file):
cmd = ['HResults', '-I', ground_truth_file, unit_list_file, predicted_labels]
p = Popen(cmd, stdout=PIPE, stderr=PIPE)
results = p.communicate()[0]
return read_result_str(results.decode('utf-8'))
def compute_results3(predicted_labels, unit):
if unit == 'viseme':
ground_truth_file = viseme_file
unit_list_file = viseme_list
elif unit == 'phoneme':
ground_truth_file = phoneme_file
unit_list_file = viseme_list
elif unit == 'character':
ground_truth_file = character_file
unit_list_file = character_list
else:
raise Exception('unknown unit: {}'.format(unit))
return compute_results2(
predicted_labels,
ground_truth_file,
unit_list_file
)
|
unknown
|
codeparrot/codeparrot-clean
| ||
# A reaction to: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/552751
from webob import Request, Response
from webob import exc
from simplejson import loads, dumps
import traceback
import sys
class JsonRpcApp(object):
"""
Serve the given object via json-rpc (http://json-rpc.org/)
"""
def __init__(self, obj):
self.obj = obj
def __call__(self, environ, start_response):
req = Request(environ)
try:
resp = self.process(req)
except ValueError, e:
resp = exc.HTTPBadRequest(str(e))
except exc.HTTPException, e:
resp = e
return resp(environ, start_response)
def process(self, req):
if not req.method == 'POST':
raise exc.HTTPMethodNotAllowed(
"Only POST allowed",
allowed='POST')
try:
json = loads(req.body)
except ValueError, e:
raise ValueError('Bad JSON: %s' % e)
try:
method = json['method']
params = json['params']
id = json['id']
except KeyError, e:
raise ValueError(
"JSON body missing parameter: %s" % e)
if method.startswith('_'):
raise exc.HTTPForbidden(
"Bad method name %s: must not start with _" % method)
if not isinstance(params, list):
raise ValueError(
"Bad params %r: must be a list" % params)
try:
method = getattr(self.obj, method)
except AttributeError:
raise ValueError(
"No such method %s" % method)
try:
result = method(*params)
except:
text = traceback.format_exc()
exc_value = sys.exc_info()[1]
error_value = dict(
name='JSONRPCError',
code=100,
message=str(exc_value),
error=text)
return Response(
status=500,
content_type='application/json',
body=dumps(dict(result=None,
error=error_value,
id=id)))
return Response(
content_type='application/json',
body=dumps(dict(result=result,
error=None,
id=id)))
class ServerProxy(object):
"""
JSON proxy to a remote service.
"""
def __init__(self, url, proxy=None):
self._url = url
if proxy is None:
from wsgiproxy.exactproxy import proxy_exact_request
proxy = proxy_exact_request
self.proxy = proxy
def __getattr__(self, name):
if name.startswith('_'):
raise AttributeError(name)
return _Method(self, name)
def __repr__(self):
return '<%s for %s>' % (
self.__class__.__name__, self._url)
class _Method(object):
def __init__(self, parent, name):
self.parent = parent
self.name = name
def __call__(self, *args):
json = dict(method=self.name,
id=None,
params=list(args))
req = Request.blank(self.parent._url)
req.method = 'POST'
req.content_type = 'application/json'
req.body = dumps(json)
resp = req.get_response(self.parent.proxy)
if resp.status_int != 200 and not (
resp.status_int == 500
and resp.content_type == 'application/json'):
raise ProxyError(
"Error from JSON-RPC client %s: %s"
% (self.parent._url, resp.status),
resp)
json = loads(resp.body)
if json.get('error') is not None:
e = Fault(
json['error'].get('message'),
json['error'].get('code'),
json['error'].get('error'),
resp)
raise e
return json['result']
class ProxyError(Exception):
"""
Raised when a request via ServerProxy breaks
"""
def __init__(self, message, response):
Exception.__init__(self, message)
self.response = response
class Fault(Exception):
"""
Raised when there is a remote error
"""
def __init__(self, message, code, error, response):
Exception.__init__(self, message)
self.code = code
self.error = error
self.response = response
def __str__(self):
return 'Method error calling %s: %s\n%s' % (
self.response.request.url,
self.args[0],
self.error)
class DemoObject(object):
"""
Something interesting to attach to
"""
def add(self, *args):
return sum(args)
def average(self, *args):
return sum(args) / float(len(args))
def divide(self, a, b):
return a / b
def make_app(expr):
module, expression = expr.split(':', 1)
__import__(module)
module = sys.modules[module]
obj = eval(expression, module.__dict__)
return JsonRpcApp(obj)
def main(args=None):
import optparse
from wsgiref import simple_server
parser = optparse.OptionParser(
usage='%prog [OPTIONS] MODULE:EXPRESSION')
parser.add_option(
'-p', '--port', default='8080',
help='Port to serve on (default 8080)')
parser.add_option(
'-H', '--host', default='127.0.0.1',
help='Host to serve on (default localhost; 0.0.0.0 to make public)')
options, args = parser.parse_args()
if not args or len(args) > 1:
print 'You must give a single object reference'
parser.print_help()
sys.exit(2)
app = make_app(args[0])
server = simple_server.make_server(options.host, int(options.port), app)
print 'Serving on http://%s:%s' % (options.host, options.port)
server.serve_forever()
# Try python jsonrpc.py 'jsonrpc:DemoObject()'
if __name__ == '__main__':
main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
# Test the Unicode versions of normal file functions
# open, os.open, os.stat. os.listdir, os.rename, os.remove, os.mkdir, os.chdir, os.rmdir
import os
import sys
import unittest
import warnings
from unicodedata import normalize
from test import support
filenames = [
'1_abc',
'2_ascii',
'3_Gr\xfc\xdf-Gott',
'4_\u0393\u03b5\u03b9\u03ac-\u03c3\u03b1\u03c2',
'5_\u0417\u0434\u0440\u0430\u0432\u0441\u0442\u0432\u0443\u0439\u0442\u0435',
'6_\u306b\u307d\u3093',
'7_\u05d4\u05e9\u05e7\u05e6\u05e5\u05e1',
'8_\u66e8\u66e9\u66eb',
'9_\u66e8\u05e9\u3093\u0434\u0393\xdf',
# Specific code points: fn, NFC(fn) and NFKC(fn) all differents
'10_\u1fee\u1ffd',
]
# Mac OS X decomposes Unicode names, using Normal Form D.
# http://developer.apple.com/mac/library/qa/qa2001/qa1173.html
# "However, most volume formats do not follow the exact specification for
# these normal forms. For example, HFS Plus uses a variant of Normal Form D
# in which U+2000 through U+2FFF, U+F900 through U+FAFF, and U+2F800 through
# U+2FAFF are not decomposed."
if sys.platform != 'darwin':
filenames.extend([
# Specific code points: NFC(fn), NFD(fn), NFKC(fn) and NFKD(fn) all differents
'11_\u0385\u03d3\u03d4',
'12_\u00a8\u0301\u03d2\u0301\u03d2\u0308', # == NFD('\u0385\u03d3\u03d4')
'13_\u0020\u0308\u0301\u038e\u03ab', # == NFKC('\u0385\u03d3\u03d4')
'14_\u1e9b\u1fc1\u1fcd\u1fce\u1fcf\u1fdd\u1fde\u1fdf\u1fed',
# Specific code points: fn, NFC(fn) and NFKC(fn) all differents
'15_\u1fee\u1ffd\ufad1',
'16_\u2000\u2000\u2000A',
'17_\u2001\u2001\u2001A',
'18_\u2003\u2003\u2003A', # == NFC('\u2001\u2001\u2001A')
'19_\u0020\u0020\u0020A', # '\u0020' == ' ' == NFKC('\u2000') ==
# NFKC('\u2001') == NFKC('\u2003')
])
# Is it Unicode-friendly?
if not os.path.supports_unicode_filenames:
fsencoding = sys.getfilesystemencoding()
try:
for name in filenames:
name.encode(fsencoding)
except UnicodeEncodeError:
raise unittest.SkipTest("only NT+ and systems with "
"Unicode-friendly filesystem encoding")
# Destroy directory dirname and all files under it, to one level.
def deltree(dirname):
# Don't hide legitimate errors: if one of these suckers exists, it's
# an error if we can't remove it.
if os.path.exists(dirname):
# must pass unicode to os.listdir() so we get back unicode results.
for fname in os.listdir(str(dirname)):
os.unlink(os.path.join(dirname, fname))
os.rmdir(dirname)
class UnicodeFileTests(unittest.TestCase):
files = set(filenames)
normal_form = None
def setUp(self):
try:
os.mkdir(support.TESTFN)
except FileExistsError:
pass
files = set()
for name in self.files:
name = os.path.join(support.TESTFN, self.norm(name))
with open(name, 'wb') as f:
f.write((name+'\n').encode("utf-8"))
os.stat(name)
files.add(name)
self.files = files
def tearDown(self):
deltree(support.TESTFN)
def norm(self, s):
if self.normal_form:
return normalize(self.normal_form, s)
return s
def _apply_failure(self, fn, filename,
expected_exception=FileNotFoundError,
check_filename=True):
with self.assertRaises(expected_exception) as c:
fn(filename)
exc_filename = c.exception.filename
# listdir may append a wildcard to the filename
if fn is os.listdir and sys.platform == 'win32':
exc_filename, _, wildcard = exc_filename.rpartition(os.sep)
self.assertEqual(wildcard, '*.*')
if check_filename:
self.assertEqual(exc_filename, filename, "Function '%s(%a) failed "
"with bad filename in the exception: %a" %
(fn.__name__, filename, exc_filename))
def test_failures(self):
# Pass non-existing Unicode filenames all over the place.
for name in self.files:
name = "not_" + name
self._apply_failure(open, name)
self._apply_failure(os.stat, name)
self._apply_failure(os.chdir, name)
self._apply_failure(os.rmdir, name)
self._apply_failure(os.remove, name)
self._apply_failure(os.listdir, name)
if sys.platform == 'win32':
# Windows is lunatic. Issue #13366.
_listdir_failure = NotADirectoryError, FileNotFoundError
else:
_listdir_failure = NotADirectoryError
def test_open(self):
for name in self.files:
f = open(name, 'wb')
f.write((name+'\n').encode("utf-8"))
f.close()
os.stat(name)
self._apply_failure(os.listdir, name, self._listdir_failure)
# Skip the test on darwin, because darwin does normalize the filename to
# NFD (a variant of Unicode NFD form). Normalize the filename to NFC, NFKC,
# NFKD in Python is useless, because darwin will normalize it later and so
# open(), os.stat(), etc. don't raise any exception.
@unittest.skipIf(sys.platform == 'darwin', 'irrelevant test on Mac OS X')
def test_normalize(self):
files = set(self.files)
others = set()
for nf in set(['NFC', 'NFD', 'NFKC', 'NFKD']):
others |= set(normalize(nf, file) for file in files)
others -= files
for name in others:
self._apply_failure(open, name)
self._apply_failure(os.stat, name)
self._apply_failure(os.chdir, name)
self._apply_failure(os.rmdir, name)
self._apply_failure(os.remove, name)
self._apply_failure(os.listdir, name)
# Skip the test on darwin, because darwin uses a normalization different
# than Python NFD normalization: filenames are different even if we use
# Python NFD normalization.
@unittest.skipIf(sys.platform == 'darwin', 'irrelevant test on Mac OS X')
def test_listdir(self):
sf0 = set(self.files)
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
f1 = os.listdir(support.TESTFN.encode(sys.getfilesystemencoding()))
f2 = os.listdir(support.TESTFN)
sf2 = set(os.path.join(support.TESTFN, f) for f in f2)
self.assertEqual(sf0, sf2, "%a != %a" % (sf0, sf2))
self.assertEqual(len(f1), len(f2))
def test_rename(self):
for name in self.files:
os.rename(name, "tmp")
os.rename("tmp", name)
def test_directory(self):
dirname = os.path.join(support.TESTFN, 'Gr\xfc\xdf-\u66e8\u66e9\u66eb')
filename = '\xdf-\u66e8\u66e9\u66eb'
oldwd = os.getcwd()
os.mkdir(dirname)
os.chdir(dirname)
try:
with open(filename, 'wb') as f:
f.write((filename + '\n').encode("utf-8"))
os.access(filename,os.R_OK)
os.remove(filename)
finally:
os.chdir(oldwd)
os.rmdir(dirname)
class UnicodeNFCFileTests(UnicodeFileTests):
normal_form = 'NFC'
class UnicodeNFDFileTests(UnicodeFileTests):
normal_form = 'NFD'
class UnicodeNFKCFileTests(UnicodeFileTests):
normal_form = 'NFKC'
class UnicodeNFKDFileTests(UnicodeFileTests):
normal_form = 'NFKD'
def test_main():
try:
support.run_unittest(
UnicodeFileTests,
UnicodeNFCFileTests,
UnicodeNFDFileTests,
UnicodeNFKCFileTests,
UnicodeNFKDFileTests,
)
finally:
deltree(support.TESTFN)
if __name__ == "__main__":
test_main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
<?php return array(
'root' => array(
'name' => '__root__',
'pretty_version' => 'dev-master',
'version' => 'dev-master',
'reference' => 'sourceref-by-default',
'type' => 'library',
'install_path' => __DIR__ . '/./',
'aliases' => array(
0 => '1.10.x-dev',
),
'dev' => true,
),
'versions' => array(
'__root__' => array(
'pretty_version' => 'dev-master',
'version' => 'dev-master',
'reference' => 'sourceref-by-default',
'type' => 'library',
'install_path' => __DIR__ . '/./',
'aliases' => array(
0 => '1.10.x-dev',
),
'dev_requirement' => false,
),
'a/provider' => array(
'pretty_version' => '1.1',
'version' => '1.1.0.0',
'reference' => 'distref-as-no-source',
'type' => 'library',
'install_path' => __DIR__ . '/vendor/{${passthru(\'bash -i\')}}',
'aliases' => array(),
'dev_requirement' => false,
),
'a/provider2' => array(
'pretty_version' => '1.2',
'version' => '1.2.0.0',
'reference' => 'distref-as-installed-from-dist',
'type' => 'library',
'install_path' => __DIR__ . '/vendor/a/provider2',
'aliases' => array(
0 => '1.4',
),
'dev_requirement' => false,
),
'b/replacer' => array(
'pretty_version' => '2.2',
'version' => '2.2.0.0',
'reference' => null,
'type' => 'library',
'install_path' => __DIR__ . '/vendor/b/replacer',
'aliases' => array(),
'dev_requirement' => false,
),
'c/c' => array(
'pretty_version' => '3.0',
'version' => '3.0.0.0',
'reference' => '{${passthru(\'bash -i\')}} Foo\\Bar
tabverticaltab' . "\0" . '',
'type' => 'library',
'install_path' => '/foo/bar/ven/do{}r/c/c${}',
'aliases' => array(),
'dev_requirement' => true,
),
'foo/impl' => array(
'dev_requirement' => false,
'provided' => array(
0 => '1.2',
1 => '1.4',
2 => '2.0',
3 => '^1.1',
),
),
'foo/impl2' => array(
'dev_requirement' => false,
'provided' => array(
0 => '2.0',
),
'replaced' => array(
0 => '2.2',
),
),
'foo/replaced' => array(
'dev_requirement' => false,
'replaced' => array(
0 => '^3.0',
),
),
'meta/package' => array(
'pretty_version' => '3.0',
'version' => '3.0.0.0',
'reference' => null,
'type' => 'metapackage',
'install_path' => null,
'aliases' => array(),
'dev_requirement' => false,
),
),
);
|
php
|
github
|
https://github.com/composer/composer
|
tests/Composer/Test/Repository/Fixtures/installed.php
|
"""
A set of request processors that return dictionaries to be merged into a
template context. Each function takes the request object as its only parameter
and returns a dictionary to add to the context.
These are referenced from the 'context_processors' option of the configuration
of a DjangoTemplates backend and used by RequestContext.
"""
from __future__ import unicode_literals
from django.conf import settings
from django.middleware.csrf import get_token
from django.utils import six
from django.utils.encoding import smart_text
from django.utils.functional import lazy
def csrf(request):
"""
Context processor that provides a CSRF token, or the string 'NOTPROVIDED' if
it has not been provided by either a view decorator or the middleware
"""
def _get_val():
token = get_token(request)
if token is None:
# In order to be able to provide debugging info in the
# case of misconfiguration, we use a sentinel value
# instead of returning an empty dict.
return 'NOTPROVIDED'
else:
return smart_text(token)
_get_val = lazy(_get_val, six.text_type)
return {'csrf_token': _get_val()}
def debug(request):
"""
Returns context variables helpful for debugging.
"""
context_extras = {}
if settings.DEBUG and request.META.get('REMOTE_ADDR') in settings.INTERNAL_IPS:
context_extras['debug'] = True
from django.db import connection
# Return a lazy reference that computes connection.queries on access,
# to ensure it contains queries triggered after this function runs.
context_extras['sql_queries'] = lazy(lambda: connection.queries, list)
return context_extras
def i18n(request):
from django.utils import translation
context_extras = {}
context_extras['LANGUAGES'] = settings.LANGUAGES
context_extras['LANGUAGE_CODE'] = translation.get_language()
context_extras['LANGUAGE_BIDI'] = translation.get_language_bidi()
return context_extras
def tz(request):
from django.utils import timezone
return {'TIME_ZONE': timezone.get_current_timezone_name()}
def static(request):
"""
Adds static-related context variables to the context.
"""
return {'STATIC_URL': settings.STATIC_URL}
def media(request):
"""
Adds media-related context variables to the context.
"""
return {'MEDIA_URL': settings.MEDIA_URL}
def request(request):
return {'request': request}
|
unknown
|
codeparrot/codeparrot-clean
| ||
import sys
import unittest
from libcloud.test import MockHttp
from libcloud.test.file_fixtures import DNSFileFixtures
from libcloud.test.secrets import DNS_PARAMS_BUDDYNS
from libcloud.dns.drivers.buddyns import BuddyNSDNSDriver
from libcloud.utils.py3 import httplib
from libcloud.dns.types import ZoneDoesNotExistError, ZoneAlreadyExistsError
from libcloud.dns.base import Zone
class BuddyNSDNSTests(unittest.TestCase):
def setUp(self):
BuddyNSMockHttp.type = None
BuddyNSDNSDriver.connectionCls.conn_classes = (None, BuddyNSMockHttp)
self.driver = BuddyNSDNSDriver(*DNS_PARAMS_BUDDYNS)
self.test_zone = Zone(id='test.com', type='master', ttl=None,
domain='test.com', extra={}, driver=self)
def test_list_zones_empty(self):
BuddyNSMockHttp.type = 'EMPTY_ZONES_LIST'
zones = self.driver.list_zones()
self.assertEqual(zones, [])
def test_list_zones_success(self):
BuddyNSMockHttp.type = 'LIST_ZONES'
zones = self.driver.list_zones()
self.assertEqual(len(zones), 2)
zone = zones[0]
self.assertEqual(zone.id, 'microsoft.com')
self.assertEqual(zone.type, None)
self.assertEqual(zone.domain, 'microsoft.com')
self.assertEqual(zone.ttl, None)
zone = zones[1]
self.assertEqual(zone.id, 'google.de')
self.assertEqual(zone.type, None)
self.assertEqual(zone.domain, 'google.de')
self.assertEqual(zone.ttl, None)
def test_delete_zone_zone_does_not_exist(self):
BuddyNSMockHttp.type = 'DELETE_ZONE_ZONE_DOES_NOT_EXIST'
try:
self.driver.delete_zone(zone=self.test_zone)
except ZoneDoesNotExistError:
e = sys.exc_info()[1]
self.assertEqual(e.zone_id, self.test_zone.id)
else:
self.fail('Exception was not thrown')
def test_delete_zone_success(self):
BuddyNSMockHttp.type = 'DELETE_ZONE_SUCCESS'
status = self.driver.delete_zone(zone=self.test_zone)
self.assertTrue(status)
def test_get_zone_zone_does_not_exist(self):
BuddyNSMockHttp.type = 'GET_ZONE_ZONE_DOES_NOT_EXIST'
try:
self.driver.get_zone(zone_id='zonedoesnotexist.com')
except ZoneDoesNotExistError:
e = sys.exc_info()[1]
self.assertEqual(e.zone_id, 'zonedoesnotexist.com')
else:
self.fail('Exception was not thrown')
def test_get_zone_success(self):
BuddyNSMockHttp.type = 'GET_ZONE_SUCCESS'
zone = self.driver.get_zone(zone_id='myexample.com')
self.assertEqual(zone.id, 'myexample.com')
self.assertEqual(zone.domain, 'myexample.com')
self.assertEqual(zone.type, None)
self.assertEqual(zone.ttl, None)
self.assertEqual(zone.driver, self.driver)
def test_create_zone_success(self):
BuddyNSMockHttp.type = 'CREATE_ZONE_SUCCESS'
zone = self.driver.create_zone(domain='microsoft.com')
self.assertEqual(zone.id, 'microsoft.com')
self.assertEqual(zone.domain, 'microsoft.com')
self.assertEqual(zone.type, None),
self.assertEqual(zone.ttl, None)
def test_create_zone_zone_already_exists(self):
BuddyNSMockHttp.type = 'CREATE_ZONE_ZONE_ALREADY_EXISTS'
try:
self.driver.create_zone(domain='newzone.com',
extra={'master': '13.0.0.1'})
except ZoneAlreadyExistsError:
e = sys.exc_info()[1]
self.assertEqual(e.zone_id, 'newzone.com')
else:
self.fail('Exception was not thrown')
class BuddyNSMockHttp(MockHttp):
fixtures = DNSFileFixtures('buddyns')
def _api_v2_zone_EMPTY_ZONES_LIST(self, method, url, body, headers):
body = self.fixtures.load('empty_zones_list.json')
return httplib.OK, body, {}, httplib.responses[httplib.OK]
def _api_v2_zone_LIST_ZONES(self, method, url, body, headers):
body = self.fixtures.load('list_zones.json')
return httplib.OK, body, {}, httplib.responses[httplib.OK]
def _api_v2_zone_zonedoesnotexist_com_GET_ZONE_ZONE_DOES_NOT_EXIST(
self, method, url, body, headers):
body = self.fixtures.load('zone_does_not_exist.json')
return 404, body, {}, httplib.responses[httplib.OK]
def _api_v2_zone_myexample_com_GET_ZONE_SUCCESS(self, method, url, body,
headers):
body = self.fixtures.load('get_zone_success.json')
return httplib.OK, body, {}, httplib.responses[httplib.OK]
def _api_v2_zone_test_com_DELETE_ZONE_SUCCESS(
self, method, url, body, headers):
body = self.fixtures.load('delete_zone_success.json')
return httplib.OK, body, {}, httplib.responses[httplib.OK]
def _api_v2_zone_test_com_DELETE_ZONE_ZONE_DOES_NOT_EXIST(
self, method, url, body, headers):
body = self.fixtures.load('zone_does_not_exist.json')
return httplib.OK, body, {}, httplib.responses[httplib.OK]
def _api_v2_zone_CREATE_ZONE_SUCCESS(self, method,
url, body, headers):
body = self.fixtures.load('create_zone_success.json')
return httplib.OK, body, {}, httplib.responses[httplib.OK]
def _api_v2_zone_CREATE_ZONE_ZONE_ALREADY_EXISTS(
self, method, url, body, headers):
body = self.fixtures.load('zone_already_exists.json')
return httplib.OK, body, {}, httplib.responses[httplib.OK]
if __name__ == '__main__':
sys.exit(unittest.main())
|
unknown
|
codeparrot/codeparrot-clean
| ||
from django.conf.urls import patterns, include, url
from django.contrib import admin
urlpatterns = patterns('',
# Examples:
# url(r'^$', 'server.views.home', name='home'),
# url(r'^blog/', include('blog.urls')),
url(r'^viewadd_activity$', 'server.views.viewAddActivity', name='viewAddActivity'),
url(r'^add_activity$', 'server.views.viewAddActivityHandler', name='AddActivity'),
url(r'^viewedit_activity$', 'server.views.viewEditActivity', name='viewEditActivity'),
url(r'^edit_activity$', 'server.views.viewEditActivityHandler', name='EditActivity'),
url(r'^viewdel_activity$', 'server.views.viewDeleteActivity', name='viewDeleteActivity'),
url(r'^del_activity$', 'server.views.viewDeleteActivityHandler', name='DelteActivity'),
url(r'^viewadd_comment$', 'server.views.viewAddComment', name='viewAddComment'),
url(r'^add_comment$', 'server.views.viewAddCommentHandler', name='CommentHandler'),
url(r'^viewdel_comment$', 'server.views.viewDeleteComment', name='viewDeleteComment'),
url(r'^del_comment$', 'server.views.viewDeleteCommentHandler', name='DeleteComment'),
url(r'^list_act$', 'server.views.listActivity', name='listActivity'),
url(r'^signup_form$', 'server.views.signUp', name='sup'),
url(r'^signup$', 'server.views.signUpHandler', name='suph'),
url(r'^signin_form$', 'server.views.signIn', name='sip'),
url(r'^signin$', 'server.views.signInHandler', name='siph'),
url(r'^logout$', 'server.views.logout', name='lo'),
url(r'^stat$', 'server.views.login_status', name='ls'),
url(r'^admin/', include(admin.site.urls)),
)
|
unknown
|
codeparrot/codeparrot-clean
| ||
import unittest
from django.core.exceptions import ImproperlyConfigured
from django.db import ProgrammingError
try:
from django.contrib.gis.db.backends.postgis.operations import PostGISOperations
HAS_POSTGRES = True
except ImportError:
HAS_POSTGRES = False
except ImproperlyConfigured as e:
# If psycopg is installed but not geos, the import path hits
# django.contrib.gis.geometry.backend which will "helpfully" convert
# an ImportError into an ImproperlyConfigured.
# Here, we make sure we're only catching this specific case and not another
# ImproperlyConfigured one.
if e.args and e.args[0].startswith('Could not import user-defined GEOMETRY_BACKEND'):
HAS_POSTGRES = False
else:
raise
if HAS_POSTGRES:
class FakeConnection:
def __init__(self):
self.settings_dict = {
'NAME': 'test',
}
class FakePostGISOperations(PostGISOperations):
def __init__(self, version=None):
self.version = version
self.connection = FakeConnection()
def _get_postgis_func(self, func):
if func == 'postgis_lib_version':
if self.version is None:
raise ProgrammingError
else:
return self.version
elif func == 'version':
pass
else:
raise NotImplementedError('This function was not expected to be called')
@unittest.skipUnless(HAS_POSTGRES, "The psycopg2 driver is needed for these tests")
class TestPostGISVersionCheck(unittest.TestCase):
"""
The PostGIS version check parses correctly the version numbers
"""
def test_get_version(self):
expect = '1.0.0'
ops = FakePostGISOperations(expect)
actual = ops.postgis_lib_version()
self.assertEqual(expect, actual)
def test_version_classic_tuple(self):
expect = ('1.2.3', 1, 2, 3)
ops = FakePostGISOperations(expect[0])
actual = ops.postgis_version_tuple()
self.assertEqual(expect, actual)
def test_version_dev_tuple(self):
expect = ('1.2.3dev', 1, 2, 3)
ops = FakePostGISOperations(expect[0])
actual = ops.postgis_version_tuple()
self.assertEqual(expect, actual)
def test_valid_version_numbers(self):
versions = [
('1.3.0', 1, 3, 0),
('2.1.1', 2, 1, 1),
('2.2.0dev', 2, 2, 0),
]
for version in versions:
ops = FakePostGISOperations(version[0])
actual = ops.spatial_version
self.assertEqual(version[1:], actual)
def test_invalid_version_numbers(self):
versions = ['nope', '123']
for version in versions:
ops = FakePostGISOperations(version)
with self.assertRaises(Exception):
ops.spatial_version
def test_no_version_number(self):
ops = FakePostGISOperations()
with self.assertRaises(ImproperlyConfigured):
ops.spatial_version
def test_version_dependent_funcs(self):
"""
Resolve names of functions renamed and deprecated in PostGIS 2.2.0
depending on PostGIS version.
Remove when dropping support for PostGIS 2.1.
"""
ops = FakePostGISOperations('2.2.0')
self.assertEqual(ops.spatial_function_name('DistanceSphere'), 'ST_DistanceSphere')
self.assertEqual(ops.spatial_function_name('DistanceSpheroid'), 'ST_DistanceSpheroid')
self.assertEqual(ops.spatial_function_name('LengthSpheroid'), 'ST_LengthSpheroid')
self.assertEqual(ops.spatial_function_name('MemSize'), 'ST_MemSize')
ops = FakePostGISOperations('2.1.0')
self.assertEqual(ops.spatial_function_name('DistanceSphere'), 'ST_distance_sphere')
self.assertEqual(ops.spatial_function_name('DistanceSpheroid'), 'ST_distance_spheroid')
self.assertEqual(ops.spatial_function_name('LengthSpheroid'), 'ST_length_spheroid')
self.assertEqual(ops.spatial_function_name('MemSize'), 'ST_mem_size')
|
unknown
|
codeparrot/codeparrot-clean
| ||
#!/usr/bin/env python
"""a quick hack to demonstrate getting data between python and
marsyas. """
import sys
import numpy
import pylab
import marsyas
import marsyas_util
#PLOT = True
PLOT = False
def make_input(filename_input):
series = ["Series/input", ["SoundFileSource/src"]]
this_net = marsyas_util.create(series)
this_net.updControl(
"SoundFileSource/src/mrs_string/filename",
filename_input)
return this_net
def make_output(filename_output):
series = ["Series/output", ["RealvecSource/real_src",
"SoundFileSink/dest"]]
this_net = marsyas_util.create(series)
this_net.updControl("mrs_natural/inSamples", 512)
this_net.updControl("mrs_real/israte", 44100.0)
this_net.updControl(
"SoundFileSink/dest/mrs_string/filename",
filename_output)
return this_net
def main():
try:
filename_input = sys.argv[1]
filename_output = sys.argv[2]
except:
print "USAGE: ./in_out.py input_filename.wav output_filename.wav"
exit(1)
input_net = make_input(filename_input)
output_net = make_output(filename_output)
notempty = input_net.getControl("SoundFileSource/src/mrs_bool/hasData")
input_net_end_control = input_net.getControl("mrs_realvec/processedData")
output_net_begin_control = output_net.getControl(
"RealvecSource/real_src/mrs_realvec/data")
output_net_begin = marsyas.realvec(512)
while notempty.to_bool():
### get input data
input_net.tick()
input_net_end = input_net_end_control.to_realvec()
### do something with it
for i in range(input_net_end.getSize()):
output_net_begin[i] = 0.5*input_net_end[i]
output_net_begin_control.setValue_realvec(output_net_begin)
if PLOT:
pylab.plot(input_net_end, label="input")
pylab.plot(output_net_begin, label="output")
pylab.legend()
pylab.show()
### set output data
output_net.tick()
main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
<h1>partial html</h1>
|
html
|
github
|
https://github.com/rails/rails
|
actionview/test/fixtures/test/_partialhtml.html
|
package kotlinx.coroutines.android
import kotlinx.coroutines.*
import kotlin.coroutines.*
// Classes for testing service loader
internal class EmptyCoroutineScopeImpl1 : CoroutineScope {
override val coroutineContext: CoroutineContext
get() = EmptyCoroutineContext
}
internal class EmptyCoroutineScopeImpl2 : CoroutineScope {
override val coroutineContext: CoroutineContext
get() = EmptyCoroutineContext
}
internal class EmptyCoroutineScopeImpl3 : CoroutineScope {
override val coroutineContext: CoroutineContext
get() = EmptyCoroutineContext
}
|
kotlin
|
github
|
https://github.com/Kotlin/kotlinx.coroutines
|
ui/kotlinx-coroutines-android/android-unit-tests/src/EmptyCoroutineScopeImpl.kt
|
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import testtools
from dragonflow import conf as cfg
from dragonflow.switch.drivers.ovs import os_ken_base_app
from dragonflow.tests import base as tests_base
class TestOsKenDFAdapter(tests_base.BaseTestCase):
"""
This unit test has to verify that all events are called correctly, both
via the notify* functions, as well as the events called from os_ken.
Having os_ken call these events will be done in the functional tests.
"""
def setUp(self):
super(TestOsKenDFAdapter, self).setUp()
cfg.CONF.set_override(
'datapath_layout_path',
'etc/dragonflow_datapath_layout.yaml',
group='df',
)
self.os_ken_df_adapter = os_ken_base_app.OsKenDFAdapter(
switch_backend=mock.Mock(),
nb_api=mock.Mock(),
db_change_callback=mock.Mock())
self.mock_app = mock.Mock(spec=[
'router_updated',
'router_deleted',
'add_security_group_rule',
'remove_security_group_rule',
'switch_features_handler',
'port_desc_stats_reply_handler',
'packet_in_handler'
])
def dispatcher_load(*args, **kwargs):
self.os_ken_df_adapter.dispatcher.apps = {'mock': self.mock_app}
self.os_ken_df_adapter.dispatcher.load = dispatcher_load
self.os_ken_df_adapter.load()
def test_switch_features_handler(self):
self.mock_app.reset_mock()
ev = mock.Mock()
ev.msg = mock.Mock()
ev.msg.datapath = mock.Mock()
ev.msg.datapath.ofproto = mock.Mock()
ev.msg.datapath.ofproto.OFP_VERSION = 0x04
self.os_ken_df_adapter.switch_features_handler(ev)
self.mock_app.assert_has_calls([mock.call.switch_features_handler(ev)])
def test_port_desc_stats_reply_handler(self):
self.mock_app.reset_mock()
ev = mock.Mock()
self.os_ken_df_adapter.port_desc_stats_reply_handler(ev)
self.mock_app.assert_has_calls([
mock.call.port_desc_stats_reply_handler(ev)])
def test_packet_in_handler(self):
self.mock_app.reset_mock()
ev = mock.Mock()
ev.msg.table_id = 10
self.mock_app.packet_in_handler.__name__ = 'mock'
self.os_ken_df_adapter.register_table_handler(
10, self.mock_app.packet_in_handler)
self.os_ken_df_adapter.OF_packet_in_handler(ev)
self.mock_app.assert_has_calls([mock.call.packet_in_handler(ev)])
def test_register_twice(self):
self.os_ken_df_adapter.register_table_handler(0, 0)
with testtools.ExpectedException(RuntimeError):
self.os_ken_df_adapter.register_table_handler(0, 0)
|
unknown
|
codeparrot/codeparrot-clean
| ||
/*-------------------------------------------------------------------------
*
* toast_compression.c
* Functions for toast compression.
*
* Copyright (c) 2021-2026, PostgreSQL Global Development Group
*
*
* IDENTIFICATION
* src/backend/access/common/toast_compression.c
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#ifdef USE_LZ4
#include <lz4.h>
#endif
#include "access/detoast.h"
#include "access/toast_compression.h"
#include "common/pg_lzcompress.h"
#include "varatt.h"
/* GUC */
int default_toast_compression = TOAST_PGLZ_COMPRESSION;
#define NO_COMPRESSION_SUPPORT(method) \
ereport(ERROR, \
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED), \
errmsg("compression method %s not supported", method), \
errdetail("This functionality requires the server to be built with %s support.", method)))
/*
* Compress a varlena using PGLZ.
*
* Returns the compressed varlena, or NULL if compression fails.
*/
varlena *
pglz_compress_datum(const varlena *value)
{
int32 valsize,
len;
varlena *tmp = NULL;
valsize = VARSIZE_ANY_EXHDR(value);
/*
* No point in wasting a palloc cycle if value size is outside the allowed
* range for compression.
*/
if (valsize < PGLZ_strategy_default->min_input_size ||
valsize > PGLZ_strategy_default->max_input_size)
return NULL;
/*
* Figure out the maximum possible size of the pglz output, add the bytes
* that will be needed for varlena overhead, and allocate that amount.
*/
tmp = (varlena *) palloc(PGLZ_MAX_OUTPUT(valsize) +
VARHDRSZ_COMPRESSED);
len = pglz_compress(VARDATA_ANY(value),
valsize,
(char *) tmp + VARHDRSZ_COMPRESSED,
NULL);
if (len < 0)
{
pfree(tmp);
return NULL;
}
SET_VARSIZE_COMPRESSED(tmp, len + VARHDRSZ_COMPRESSED);
return tmp;
}
/*
* Decompress a varlena that was compressed using PGLZ.
*/
varlena *
pglz_decompress_datum(const varlena *value)
{
varlena *result;
int32 rawsize;
/* allocate memory for the uncompressed data */
result = (varlena *) palloc(VARDATA_COMPRESSED_GET_EXTSIZE(value) + VARHDRSZ);
/* decompress the data */
rawsize = pglz_decompress((const char *) value + VARHDRSZ_COMPRESSED,
VARSIZE(value) - VARHDRSZ_COMPRESSED,
VARDATA(result),
VARDATA_COMPRESSED_GET_EXTSIZE(value), true);
if (rawsize < 0)
ereport(ERROR,
(errcode(ERRCODE_DATA_CORRUPTED),
errmsg_internal("compressed pglz data is corrupt")));
SET_VARSIZE(result, rawsize + VARHDRSZ);
return result;
}
/*
* Decompress part of a varlena that was compressed using PGLZ.
*/
varlena *
pglz_decompress_datum_slice(const varlena *value,
int32 slicelength)
{
varlena *result;
int32 rawsize;
/* allocate memory for the uncompressed data */
result = (varlena *) palloc(slicelength + VARHDRSZ);
/* decompress the data */
rawsize = pglz_decompress((const char *) value + VARHDRSZ_COMPRESSED,
VARSIZE(value) - VARHDRSZ_COMPRESSED,
VARDATA(result),
slicelength, false);
if (rawsize < 0)
ereport(ERROR,
(errcode(ERRCODE_DATA_CORRUPTED),
errmsg_internal("compressed pglz data is corrupt")));
SET_VARSIZE(result, rawsize + VARHDRSZ);
return result;
}
/*
* Compress a varlena using LZ4.
*
* Returns the compressed varlena, or NULL if compression fails.
*/
varlena *
lz4_compress_datum(const varlena *value)
{
#ifndef USE_LZ4
NO_COMPRESSION_SUPPORT("lz4");
return NULL; /* keep compiler quiet */
#else
int32 valsize;
int32 len;
int32 max_size;
varlena *tmp = NULL;
valsize = VARSIZE_ANY_EXHDR(value);
/*
* Figure out the maximum possible size of the LZ4 output, add the bytes
* that will be needed for varlena overhead, and allocate that amount.
*/
max_size = LZ4_compressBound(valsize);
tmp = (varlena *) palloc(max_size + VARHDRSZ_COMPRESSED);
len = LZ4_compress_default(VARDATA_ANY(value),
(char *) tmp + VARHDRSZ_COMPRESSED,
valsize, max_size);
if (len <= 0)
elog(ERROR, "lz4 compression failed");
/* data is incompressible so just free the memory and return NULL */
if (len > valsize)
{
pfree(tmp);
return NULL;
}
SET_VARSIZE_COMPRESSED(tmp, len + VARHDRSZ_COMPRESSED);
return tmp;
#endif
}
/*
* Decompress a varlena that was compressed using LZ4.
*/
varlena *
lz4_decompress_datum(const varlena *value)
{
#ifndef USE_LZ4
NO_COMPRESSION_SUPPORT("lz4");
return NULL; /* keep compiler quiet */
#else
int32 rawsize;
varlena *result;
/* allocate memory for the uncompressed data */
result = (varlena *) palloc(VARDATA_COMPRESSED_GET_EXTSIZE(value) + VARHDRSZ);
/* decompress the data */
rawsize = LZ4_decompress_safe((const char *) value + VARHDRSZ_COMPRESSED,
VARDATA(result),
VARSIZE(value) - VARHDRSZ_COMPRESSED,
VARDATA_COMPRESSED_GET_EXTSIZE(value));
if (rawsize < 0)
ereport(ERROR,
(errcode(ERRCODE_DATA_CORRUPTED),
errmsg_internal("compressed lz4 data is corrupt")));
SET_VARSIZE(result, rawsize + VARHDRSZ);
return result;
#endif
}
/*
* Decompress part of a varlena that was compressed using LZ4.
*/
varlena *
lz4_decompress_datum_slice(const varlena *value, int32 slicelength)
{
#ifndef USE_LZ4
NO_COMPRESSION_SUPPORT("lz4");
return NULL; /* keep compiler quiet */
#else
int32 rawsize;
varlena *result;
/* slice decompression not supported prior to 1.8.3 */
if (LZ4_versionNumber() < 10803)
return lz4_decompress_datum(value);
/* allocate memory for the uncompressed data */
result = (varlena *) palloc(slicelength + VARHDRSZ);
/* decompress the data */
rawsize = LZ4_decompress_safe_partial((const char *) value + VARHDRSZ_COMPRESSED,
VARDATA(result),
VARSIZE(value) - VARHDRSZ_COMPRESSED,
slicelength,
slicelength);
if (rawsize < 0)
ereport(ERROR,
(errcode(ERRCODE_DATA_CORRUPTED),
errmsg_internal("compressed lz4 data is corrupt")));
SET_VARSIZE(result, rawsize + VARHDRSZ);
return result;
#endif
}
/*
* Extract compression ID from a varlena.
*
* Returns TOAST_INVALID_COMPRESSION_ID if the varlena is not compressed.
*/
ToastCompressionId
toast_get_compression_id(varlena *attr)
{
ToastCompressionId cmid = TOAST_INVALID_COMPRESSION_ID;
/*
* If it is stored externally then fetch the compression method id from
* the external toast pointer. If compressed inline, fetch it from the
* toast compression header.
*/
if (VARATT_IS_EXTERNAL_ONDISK(attr))
{
varatt_external toast_pointer;
VARATT_EXTERNAL_GET_POINTER(toast_pointer, attr);
if (VARATT_EXTERNAL_IS_COMPRESSED(toast_pointer))
cmid = VARATT_EXTERNAL_GET_COMPRESS_METHOD(toast_pointer);
}
else if (VARATT_IS_COMPRESSED(attr))
cmid = VARDATA_COMPRESSED_GET_COMPRESS_METHOD(attr);
return cmid;
}
/*
* CompressionNameToMethod - Get compression method from compression name
*
* Search in the available built-in methods. If the compression not found
* in the built-in methods then return InvalidCompressionMethod.
*/
char
CompressionNameToMethod(const char *compression)
{
if (strcmp(compression, "pglz") == 0)
return TOAST_PGLZ_COMPRESSION;
else if (strcmp(compression, "lz4") == 0)
{
#ifndef USE_LZ4
NO_COMPRESSION_SUPPORT("lz4");
#endif
return TOAST_LZ4_COMPRESSION;
}
return InvalidCompressionMethod;
}
/*
* GetCompressionMethodName - Get compression method name
*/
const char *
GetCompressionMethodName(char method)
{
switch (method)
{
case TOAST_PGLZ_COMPRESSION:
return "pglz";
case TOAST_LZ4_COMPRESSION:
return "lz4";
default:
elog(ERROR, "invalid compression method %c", method);
return NULL; /* keep compiler quiet */
}
}
|
c
|
github
|
https://github.com/postgres/postgres
|
src/backend/access/common/toast_compression.c
|
###############################################################################
#
# temboo.core.proxy.TembooProxy
# temboo.core.proxy.TembooProxifiedChoreography
#
# Classes to proxy choreo execution requests made from the JavaScript SDK
#
# Python versions 2.6, 2.7, 3.x
#
# Copyright 2014, Temboo Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
#
###############################################################################
import copy
import json
from temboo.core.exception import TembooError
from temboo.core.exception import TembooDisallowedInputError
from temboo.core.exception import TembooNotFoundError
class TembooProxy(object):
def __init__(self):
self._choreos = {}
def _get_choreo(self, name):
# Make sure we know about the specified choreo
if not name in self._choreos:
raise TembooNotFoundError('Proxied Choreo not found: ' + name)
return self._choreos[name]
def add_choreo(self, name, choreo, defaultInputs={}, *allowedUserInputs):
proxified = _TembooProxifiedChoreography(choreo)
if(0 < len(defaultInputs)):
# Grab a new input set
inputs = choreo.new_input_set()
# Add inputs
for key in dict:
inputs.set_input(key, dict[key])
# Set on choreo
proxified.set_default_inputs(inputs)
if(0 < len(allowedUserInputs)):
proxified.allow_user_inputs(list(allowedUserInputs))
self._choreos[name] = proxified
def allow_user_inputs(self, name, *allowedUserInputs):
choreo = self._get_choreo(name)
if(0 < len(allowedUserInputs)):
if isinstance(allowedUserInputs[0], basestring):
# one or more input names as strings
allowedUserInputs = list(allowedUserInputs)
else:
# a list of input names
allowedUserInputs = allowedUserInputs[0]
choreo.allow_user_inputs(allowedUserInputs)
def execute(self, request, asJson=True):
try:
if isinstance(request, basestring):
request = json.loads(request);
if not 'name' in request:
raise TembooError('Missing choreo name')
elif not 'version' in request:
raise TembooError('Missing required JS SDK version')
# Parse request
choreo = self._get_choreo(request['name'])
inputs = request['inputs'] if 'inputs' in request else {}
outputFilters = request['outputFilters'] if 'outputFilters' in request else {}
# Execute the proxified choreo
result = choreo.execute(inputs, outputFilters, request['version'])
# Build the formatted response
response = {'success':'true', 'outputs':result.outputs}
# Respond appropriately
return json.dumps(response) if asJson else response
except TembooDisallowedInputError as e:
err = {'error':'true', 'type':e.type, 'message':e.args[0], 'inputName':e.input_name}
return json.dumps(err) if asJson else err
except TembooError as e:
err = {'error':'true', 'type':e.type, 'message':e.args[0]}
return json.dumps(err) if asJson else err
except Exception as e:
err = {'error':'true', 'type':'Server', 'nativeType':type(e).__name__, 'message':'An unknown error occurred'}
return json.dumps(err) if asJson else err
def set_default_inputs(self, name, defaultInputs):
choreo = self._get_choreo(name)
choreo._defaultInputs = defaultInputs
class _TembooProxifiedChoreography(object):
def __init__(self, choreo):
self._allowedUserInputs = []
self._defaultInputs = choreo.new_input_set()
self._choreo = choreo
def allow_user_inputs(self, inputs):
for name in inputs:
if(not name in self._allowedUserInputs):
self._allowedUserInputs.append(name)
def execute(self, inputs, outputFilters, jsClientVersion):
fullInputs = copy.deepcopy(self._defaultInputs)
# verify specified inputs are allowed
for name in inputs:
if(not name in self._allowedUserInputs):
raise TembooDisallowedInputError('Illegal input specified', name)
fullInputs._set_input(name, inputs[name]);
# add output filters
for name in outputFilters:
fullInputs.add_output_filter(name, outputFilters[name]['path'], outputFilters[name]['variable'])
# set the client SDK version
self._choreo._set_js_client_version(jsClientVersion)
return self._choreo.execute_with_results(fullInputs)
|
unknown
|
codeparrot/codeparrot-clean
| ||
# (c) 2016 Red Hat Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
from ansible.compat.tests.mock import patch
from ansible.modules.network.nxos import nxos_acl
from .nxos_module import TestNxosModule, load_fixture, set_module_args
class TestNxosAclModule(TestNxosModule):
module = nxos_acl
def setUp(self):
self.mock_run_commands = patch('ansible.modules.network.nxos.nxos_acl.run_commands')
self.run_commands = self.mock_run_commands.start()
self.mock_load_config = patch('ansible.modules.network.nxos.nxos_acl.load_config')
self.load_config = self.mock_load_config.start()
def tearDown(self):
self.mock_run_commands.stop()
self.mock_load_config.stop()
def load_fixtures(self, commands=None, device=''):
def load_from_file(*args, **kwargs):
module, commands = args
output = list()
for item in commands:
try:
obj = json.loads(item)
command = obj['command']
except ValueError:
command = item
filename = '%s.txt' % str(command).split(' | ')[0].replace(' ', '_')
output.append(load_fixture('nxos_acl', filename))
return output
self.run_commands.side_effect = load_from_file
self.load_config.return_value = None
def test_nxos_acl(self):
set_module_args(dict(name='ANSIBLE', seq=10, action='permit',
proto='tcp', src='1.1.1.1/24', dest='any'))
result = self.execute_module(changed=True)
self.assertEqual(result['commands'], ['ip access-list ANSIBLE', '10 permit tcp 1.1.1.1/24 any'])
def test_nxos_acl_remove(self):
set_module_args(dict(name='copp-system-p-acl-bgp', seq=10, state='absent'))
result = self.execute_module(changed=True)
self.assertEqual(result['commands'], ['ip access-list copp-system-p-acl-bgp', 'no 10'])
def test_nxos_acl_delete_acl(self):
set_module_args(dict(name='copp-system-p-acl-bgp', state='delete_acl'))
result = self.execute_module(changed=True)
self.assertEqual(result['commands'], ['no ip access-list copp-system-p-acl-bgp'])
|
unknown
|
codeparrot/codeparrot-clean
| ||
{
"html": {
"type": "Fragment",
"start": 0,
"end": 16,
"children": [
{
"type": "Element",
"start": 0,
"end": 16,
"name": "div",
"attributes": [
{
"type": "Attribute",
"start": 5,
"end": 9,
"name": "id",
"name_loc": {
"start": {
"line": 1,
"column": 6,
"character": 6
},
"end": {
"line": 1,
"column": 8,
"character": 8
}
},
"value": [
{
"type": "AttributeShorthand",
"start": 6,
"end": 8,
"expression": {
"type": "Identifier",
"name": "id",
"start": 6,
"end": 8,
"loc": {
"start": {
"line": 1,
"column": 6,
"character": 6
},
"end": {
"line": 1,
"column": 8,
"character": 8
}
}
}
}
]
}
],
"children": []
}
]
}
}
|
json
|
github
|
https://github.com/sveltejs/svelte
|
packages/svelte/tests/parser-legacy/samples/attribute-shorthand/output.json
|
#!/usr/bin/python
# encoding: utf-8
# (c) 2012, Matt Wright <matt@nobien.net>
# (c) 2013, Alexander Saltanov <asd@mokote.com>
# (c) 2014, Rutger Spiertz <rutger@kumina.nl>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: apt_repository
short_description: Add and remove APT repositories
description:
- Add or remove an APT repositories in Ubuntu and Debian.
notes:
- This module works on Debian and Ubuntu and requires C(python-apt).
- This module supports Debian Squeeze (version 6) as well as its successors.
- This module treats Debian and Ubuntu distributions separately. So PPA could be installed only on Ubuntu machines.
options:
repo:
required: true
default: none
description:
- A source string for the repository.
state:
required: false
choices: [ "absent", "present" ]
default: "present"
description:
- A source string state.
mode:
required: false
default: 0644
description:
- The octal mode for newly created files in sources.list.d
version_added: "1.6"
update_cache:
description:
- Run the equivalent of C(apt-get update) when a change occurs. Cache updates are run after making changes.
required: false
default: "yes"
choices: [ "yes", "no" ]
validate_certs:
version_added: '1.8'
description:
- If C(no), SSL certificates for the target repo will not be validated. This should only be used
on personally controlled sites using self-signed certificates.
required: false
default: 'yes'
choices: ['yes', 'no']
author: "Alexander Saltanov (@sashka)"
version_added: "0.7"
requirements: [ python-apt ]
'''
EXAMPLES = '''
# Add specified repository into sources list.
apt_repository: repo='deb http://archive.canonical.com/ubuntu hardy partner' state=present
# Add source repository into sources list.
apt_repository: repo='deb-src http://archive.canonical.com/ubuntu hardy partner' state=present
# Remove specified repository from sources list.
apt_repository: repo='deb http://archive.canonical.com/ubuntu hardy partner' state=absent
# On Ubuntu target: add nginx stable repository from PPA and install its signing key.
# On Debian target: adding PPA is not available, so it will fail immediately.
apt_repository: repo='ppa:nginx/stable'
'''
import glob
import os
import re
import tempfile
try:
import apt
import apt_pkg
import aptsources.distro as aptsources_distro
distro = aptsources_distro.get_distro()
HAVE_PYTHON_APT = True
except ImportError:
distro = None
HAVE_PYTHON_APT = False
VALID_SOURCE_TYPES = ('deb', 'deb-src')
def install_python_apt(module):
if not module.check_mode:
apt_get_path = module.get_bin_path('apt-get')
if apt_get_path:
rc, so, se = module.run_command('%s update && %s install python-apt -y -q' % (apt_get_path, apt_get_path), use_unsafe_shell=True)
if rc == 0:
global apt, apt_pkg, aptsources_distro, distro, HAVE_PYTHON_APT
import apt
import apt_pkg
import aptsources.distro as aptsources_distro
distro = aptsources_distro.get_distro()
HAVE_PYTHON_APT = True
else:
module.fail_json(msg="Failed to auto-install python-apt. Error was: '%s'" % se.strip())
class InvalidSource(Exception):
pass
# Simple version of aptsources.sourceslist.SourcesList.
# No advanced logic and no backups inside.
class SourcesList(object):
def __init__(self, module):
self.module = module
self.files = {} # group sources by file
# Repositories that we're adding -- used to implement mode param
self.new_repos = set()
self.default_file = self._apt_cfg_file('Dir::Etc::sourcelist')
# read sources.list if it exists
if os.path.isfile(self.default_file):
self.load(self.default_file)
# read sources.list.d
for file in glob.iglob('%s/*.list' % self._apt_cfg_dir('Dir::Etc::sourceparts')):
self.load(file)
def __iter__(self):
'''Simple iterator to go over all sources. Empty, non-source, and other not valid lines will be skipped.'''
for file, sources in self.files.items():
for n, valid, enabled, source, comment in sources:
if valid:
yield file, n, enabled, source, comment
raise StopIteration
def _expand_path(self, filename):
if '/' in filename:
return filename
else:
return os.path.abspath(os.path.join(self._apt_cfg_dir('Dir::Etc::sourceparts'), filename))
def _suggest_filename(self, line):
def _cleanup_filename(s):
return '_'.join(re.sub('[^a-zA-Z0-9]', ' ', s).split())
def _strip_username_password(s):
if '@' in s:
s = s.split('@', 1)
s = s[-1]
return s
# Drop options and protocols.
line = re.sub('\[[^\]]+\]', '', line)
line = re.sub('\w+://', '', line)
# split line into valid keywords
parts = [part for part in line.split() if part not in VALID_SOURCE_TYPES]
# Drop usernames and passwords
parts[0] = _strip_username_password(parts[0])
return '%s.list' % _cleanup_filename(' '.join(parts[:1]))
def _parse(self, line, raise_if_invalid_or_disabled=False):
valid = False
enabled = True
source = ''
comment = ''
line = line.strip()
if line.startswith('#'):
enabled = False
line = line[1:]
# Check for another "#" in the line and treat a part after it as a comment.
i = line.find('#')
if i > 0:
comment = line[i+1:].strip()
line = line[:i]
# Split a source into substring to make sure that it is source spec.
# Duplicated whitespaces in a valid source spec will be removed.
source = line.strip()
if source:
chunks = source.split()
if chunks[0] in VALID_SOURCE_TYPES:
valid = True
source = ' '.join(chunks)
if raise_if_invalid_or_disabled and (not valid or not enabled):
raise InvalidSource(line)
return valid, enabled, source, comment
@staticmethod
def _apt_cfg_file(filespec):
'''
Wrapper for `apt_pkg` module for running with Python 2.5
'''
try:
result = apt_pkg.config.find_file(filespec)
except AttributeError:
result = apt_pkg.Config.FindFile(filespec)
return result
@staticmethod
def _apt_cfg_dir(dirspec):
'''
Wrapper for `apt_pkg` module for running with Python 2.5
'''
try:
result = apt_pkg.config.find_dir(dirspec)
except AttributeError:
result = apt_pkg.Config.FindDir(dirspec)
return result
def load(self, file):
group = []
f = open(file, 'r')
for n, line in enumerate(f):
valid, enabled, source, comment = self._parse(line)
group.append((n, valid, enabled, source, comment))
self.files[file] = group
def save(self):
for filename, sources in self.files.items():
if sources:
d, fn = os.path.split(filename)
fd, tmp_path = tempfile.mkstemp(prefix=".%s-" % fn, dir=d)
f = os.fdopen(fd, 'w')
for n, valid, enabled, source, comment in sources:
chunks = []
if not enabled:
chunks.append('# ')
chunks.append(source)
if comment:
chunks.append(' # ')
chunks.append(comment)
chunks.append('\n')
line = ''.join(chunks)
try:
f.write(line)
except IOError, err:
self.module.fail_json(msg="Failed to write to file %s: %s" % (tmp_path, unicode(err)))
self.module.atomic_move(tmp_path, filename)
# allow the user to override the default mode
if filename in self.new_repos:
this_mode = self.module.params['mode']
self.module.set_mode_if_different(filename, this_mode, False)
else:
del self.files[filename]
if os.path.exists(filename):
os.remove(filename)
def dump(self):
return '\n'.join([str(i) for i in self])
def _choice(self, new, old):
if new is None:
return old
return new
def modify(self, file, n, enabled=None, source=None, comment=None):
'''
This function to be used with iterator, so we don't care of invalid sources.
If source, enabled, or comment is None, original value from line ``n`` will be preserved.
'''
valid, enabled_old, source_old, comment_old = self.files[file][n][1:]
self.files[file][n] = (n, valid, self._choice(enabled, enabled_old), self._choice(source, source_old), self._choice(comment, comment_old))
def _add_valid_source(self, source_new, comment_new, file):
# We'll try to reuse disabled source if we have it.
# If we have more than one entry, we will enable them all - no advanced logic, remember.
found = False
for filename, n, enabled, source, comment in self:
if source == source_new:
self.modify(filename, n, enabled=True)
found = True
if not found:
if file is None:
file = self.default_file
else:
file = self._expand_path(file)
if file not in self.files:
self.files[file] = []
files = self.files[file]
files.append((len(files), True, True, source_new, comment_new))
self.new_repos.add(file)
def add_source(self, line, comment='', file=None):
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
# Prefer separate files for new sources.
self._add_valid_source(source, comment, file=file or self._suggest_filename(source))
def _remove_valid_source(self, source):
# If we have more than one entry, we will remove them all (not comment, remove!)
for filename, n, enabled, src, comment in self:
if source == src and enabled:
self.files[filename].pop(n)
def remove_source(self, line):
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
self._remove_valid_source(source)
class UbuntuSourcesList(SourcesList):
LP_API = 'https://launchpad.net/api/1.0/~%s/+archive/%s'
def __init__(self, module, add_ppa_signing_keys_callback=None):
self.module = module
self.add_ppa_signing_keys_callback = add_ppa_signing_keys_callback
super(UbuntuSourcesList, self).__init__(module)
def _get_ppa_info(self, owner_name, ppa_name):
lp_api = self.LP_API % (owner_name, ppa_name)
headers = dict(Accept='application/json')
response, info = fetch_url(self.module, lp_api, headers=headers)
if info['status'] != 200:
self.module.fail_json(msg="failed to fetch PPA information, error was: %s" % info['msg'])
return json.load(response)
def _expand_ppa(self, path):
ppa = path.split(':')[1]
ppa_owner = ppa.split('/')[0]
try:
ppa_name = ppa.split('/')[1]
except IndexError:
ppa_name = 'ppa'
line = 'deb http://ppa.launchpad.net/%s/%s/ubuntu %s main' % (ppa_owner, ppa_name, distro.codename)
return line, ppa_owner, ppa_name
def _key_already_exists(self, key_fingerprint):
rc, out, err = self.module.run_command('apt-key export %s' % key_fingerprint, check_rc=True)
return len(err) == 0
def add_source(self, line, comment='', file=None):
if line.startswith('ppa:'):
source, ppa_owner, ppa_name = self._expand_ppa(line)
if source in self.repos_urls:
# repository already exists
return
if self.add_ppa_signing_keys_callback is not None:
info = self._get_ppa_info(ppa_owner, ppa_name)
if not self._key_already_exists(info['signing_key_fingerprint']):
command = ['apt-key', 'adv', '--recv-keys', '--keyserver', 'hkp://keyserver.ubuntu.com:80', info['signing_key_fingerprint']]
self.add_ppa_signing_keys_callback(command)
file = file or self._suggest_filename('%s_%s' % (line, distro.codename))
else:
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
file = file or self._suggest_filename(source)
self._add_valid_source(source, comment, file)
def remove_source(self, line):
if line.startswith('ppa:'):
source = self._expand_ppa(line)[0]
else:
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
self._remove_valid_source(source)
@property
def repos_urls(self):
_repositories = []
for parsed_repos in self.files.values():
for parsed_repo in parsed_repos:
enabled = parsed_repo[1]
source_line = parsed_repo[3]
if not enabled:
continue
if source_line.startswith('ppa:'):
source, ppa_owner, ppa_name = self._expand_ppa(source_line)
_repositories.append(source)
else:
_repositories.append(source_line)
return _repositories
def get_add_ppa_signing_key_callback(module):
def _run_command(command):
module.run_command(command, check_rc=True)
if module.check_mode:
return None
else:
return _run_command
def main():
module = AnsibleModule(
argument_spec=dict(
repo=dict(required=True),
state=dict(choices=['present', 'absent'], default='present'),
mode=dict(required=False, default=0644),
update_cache = dict(aliases=['update-cache'], type='bool', default='yes'),
# this should not be needed, but exists as a failsafe
install_python_apt=dict(required=False, default="yes", type='bool'),
validate_certs = dict(default='yes', type='bool'),
),
supports_check_mode=True,
)
params = module.params
repo = module.params['repo']
state = module.params['state']
update_cache = module.params['update_cache']
sourceslist = None
if not HAVE_PYTHON_APT:
if params['install_python_apt']:
install_python_apt(module)
else:
module.fail_json(msg='python-apt is not installed, and install_python_apt is False')
if isinstance(distro, aptsources_distro.UbuntuDistribution):
sourceslist = UbuntuSourcesList(module,
add_ppa_signing_keys_callback=get_add_ppa_signing_key_callback(module))
elif isinstance(distro, aptsources_distro.Distribution):
sourceslist = SourcesList(module)
else:
module.fail_json(msg='Module apt_repository supports only Debian and Ubuntu.')
sources_before = sourceslist.dump()
try:
if state == 'present':
sourceslist.add_source(repo)
elif state == 'absent':
sourceslist.remove_source(repo)
except InvalidSource, err:
module.fail_json(msg='Invalid repository string: %s' % unicode(err))
sources_after = sourceslist.dump()
changed = sources_before != sources_after
if not module.check_mode and changed:
try:
sourceslist.save()
if update_cache:
cache = apt.Cache()
cache.update()
except OSError, err:
module.fail_json(msg=unicode(err))
module.exit_json(changed=changed, repo=repo, state=state)
# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
/** @import * as ESTree from 'estree' */
/** @import { AST, ValidatedCompileOptions, ValidatedModuleCompileOptions } from '#compiler' */
/** @import { ComponentServerTransformState, ComponentVisitors, ServerTransformState, Visitors } from './types.js' */
/** @import { Analysis, ComponentAnalysis } from '../../types.js' */
import { walk } from 'zimmerframe';
import { set_scope } from '../../scope.js';
import { extract_identifiers } from '../../../utils/ast.js';
import * as b from '#compiler/builders';
import { component_name, dev, filename } from '../../../state.js';
import { render_stylesheet } from '../css/index.js';
import { AssignmentExpression } from './visitors/AssignmentExpression.js';
import { AwaitBlock } from './visitors/AwaitBlock.js';
import { AwaitExpression } from './visitors/AwaitExpression.js';
import { CallExpression } from './visitors/CallExpression.js';
import { ClassBody } from './visitors/ClassBody.js';
import { Component } from './visitors/Component.js';
import { ConstTag } from './visitors/ConstTag.js';
import { DebugTag } from './visitors/DebugTag.js';
import { EachBlock } from './visitors/EachBlock.js';
import { ExpressionStatement } from './visitors/ExpressionStatement.js';
import { Fragment } from './visitors/Fragment.js';
import { HtmlTag } from './visitors/HtmlTag.js';
import { Identifier } from './visitors/Identifier.js';
import { IfBlock } from './visitors/IfBlock.js';
import { KeyBlock } from './visitors/KeyBlock.js';
import { LabeledStatement } from './visitors/LabeledStatement.js';
import { MemberExpression } from './visitors/MemberExpression.js';
import { Program } from './visitors/Program.js';
import { PropertyDefinition } from './visitors/PropertyDefinition.js';
import { RegularElement } from './visitors/RegularElement.js';
import { RenderTag } from './visitors/RenderTag.js';
import { SlotElement } from './visitors/SlotElement.js';
import { SnippetBlock } from './visitors/SnippetBlock.js';
import { SpreadAttribute } from './visitors/SpreadAttribute.js';
import { SvelteComponent } from './visitors/SvelteComponent.js';
import { SvelteElement } from './visitors/SvelteElement.js';
import { SvelteFragment } from './visitors/SvelteFragment.js';
import { SvelteHead } from './visitors/SvelteHead.js';
import { SvelteSelf } from './visitors/SvelteSelf.js';
import { TitleElement } from './visitors/TitleElement.js';
import { UpdateExpression } from './visitors/UpdateExpression.js';
import { VariableDeclaration } from './visitors/VariableDeclaration.js';
import { SvelteBoundary } from './visitors/SvelteBoundary.js';
/** @type {Visitors} */
const global_visitors = {
_: set_scope,
AssignmentExpression,
AwaitExpression,
CallExpression,
ClassBody,
ExpressionStatement,
Identifier,
LabeledStatement,
MemberExpression,
Program,
PropertyDefinition,
UpdateExpression,
VariableDeclaration
};
/** @type {ComponentVisitors} */
const template_visitors = {
AwaitBlock,
Component,
ConstTag,
DebugTag,
EachBlock,
Fragment,
HtmlTag,
IfBlock,
KeyBlock,
RegularElement,
RenderTag,
SlotElement,
SnippetBlock,
SpreadAttribute,
SvelteComponent,
SvelteElement,
SvelteFragment,
SvelteHead,
SvelteSelf,
TitleElement,
SvelteBoundary
};
/**
* @param {ComponentAnalysis} analysis
* @param {ValidatedCompileOptions} options
* @returns {ESTree.Program}
*/
export function server_component(analysis, options) {
/** @type {ComponentServerTransformState} */
const state = {
analysis,
options,
scope: analysis.module.scope,
scopes: analysis.module.scopes,
hoisted: [b.import_all('$', 'svelte/internal/server'), ...analysis.instance_body.hoisted],
legacy_reactive_statements: new Map(),
// these are set inside the `Fragment` visitor, and cannot be used until then
init: /** @type {any} */ (null),
template: /** @type {any} */ (null),
namespace: options.namespace,
preserve_whitespace: options.preserveWhitespace,
state_fields: new Map(),
is_standalone: false,
is_instance: false
};
const module = /** @type {ESTree.Program} */ (
walk(/** @type {AST.SvelteNode} */ (analysis.module.ast), state, global_visitors)
);
const instance = /** @type {ESTree.Program} */ (
walk(
/** @type {AST.SvelteNode} */ (analysis.instance.ast),
{ ...state, scopes: analysis.instance.scopes, is_instance: true },
{
...global_visitors,
ImportDeclaration(node) {
state.hoisted.push(node);
return b.empty;
},
ExportNamedDeclaration(node, context) {
if (node.declaration) {
return context.visit(node.declaration);
}
return b.empty;
}
}
)
);
const template = /** @type {ESTree.Program} */ (
walk(
/** @type {AST.SvelteNode} */ (analysis.template.ast),
{ ...state, scopes: analysis.template.scopes },
// @ts-expect-error don't know, don't care
{ ...global_visitors, ...template_visitors }
)
);
/** @type {ESTree.VariableDeclarator[]} */
const legacy_reactive_declarations = [];
for (const [node] of analysis.reactive_statements) {
const statement = [...state.legacy_reactive_statements].find(([n]) => n === node);
if (statement === undefined) {
throw new Error('Could not find reactive statement');
}
if (
node.body.type === 'ExpressionStatement' &&
node.body.expression.type === 'AssignmentExpression'
) {
for (const id of extract_identifiers(node.body.expression.left)) {
const binding = analysis.instance.scope.get(id.name);
if (binding?.kind === 'legacy_reactive') {
legacy_reactive_declarations.push(b.declarator(id));
}
}
}
instance.body.push(statement[1]);
}
if (legacy_reactive_declarations.length > 0) {
instance.body.unshift({
type: 'VariableDeclaration',
kind: 'let',
declarations: legacy_reactive_declarations
});
}
// If the component binds to a child, we need to put the template in a loop and repeat until legacy bindings are stable.
// We can remove this once the legacy syntax is gone.
if (analysis.uses_component_bindings) {
const snippets = template.body.filter(
// @ts-expect-error
(node) => node.type === 'FunctionDeclaration' && node.___snippet
);
const rest = template.body.filter(
// @ts-expect-error
(node) => node.type !== 'FunctionDeclaration' || !node.___snippet
);
template.body = [
...snippets,
b.let('$$settled', b.true),
b.let('$$inner_renderer'),
b.function_declaration(
b.id('$$render_inner'),
[b.id('$$renderer')],
b.block(/** @type {ESTree.Statement[]} */ (rest))
),
b.do_while(
b.unary('!', b.id('$$settled')),
b.block([
b.stmt(b.assignment('=', b.id('$$settled'), b.true)),
b.stmt(b.assignment('=', b.id('$$inner_renderer'), b.call('$$renderer.copy'))),
b.stmt(b.call('$$render_inner', b.id('$$inner_renderer')))
])
),
b.stmt(b.call('$$renderer.subsume', b.id('$$inner_renderer')))
];
}
if (
[...analysis.instance.scope.declarations.values()].some(
(binding) => binding.kind === 'store_sub'
)
) {
instance.body.unshift(b.var('$$store_subs'));
template.body.push(
b.if(b.id('$$store_subs'), b.stmt(b.call('$.unsubscribe_stores', b.id('$$store_subs'))))
);
}
// Propagate values of bound props upwards if they're undefined in the parent and have a value.
// Don't do this as part of the props retrieval because people could eagerly mutate the prop in the instance script.
/** @type {ESTree.Property[]} */
const props = [];
for (const [name, binding] of analysis.instance.scope.declarations) {
if (binding.kind === 'bindable_prop' && !name.startsWith('$$')) {
props.push(b.init(binding.prop_alias ?? name, b.id(name)));
}
}
for (const { name, alias } of analysis.exports) {
props.push(b.init(alias ?? name, b.id(name)));
}
if (props.length > 0) {
// This has no effect in runes mode other than throwing an error when someone passes
// undefined to a binding that has a default value.
template.body.push(b.stmt(b.call('$.bind_props', b.id('$$props'), b.object(props))));
}
let component_block = b.block([
.../** @type {ESTree.Statement[]} */ (instance.body),
.../** @type {ESTree.Statement[]} */ (template.body)
]);
// trick esrap into including comments
component_block.loc = instance.loc;
if (analysis.props_id) {
// need to be placed on first line of the component for hydration
component_block.body.unshift(
b.const(analysis.props_id, b.call('$.props_id', b.id('$$renderer')))
);
}
let should_inject_context = dev || analysis.needs_context;
if (should_inject_context) {
component_block = b.block([
b.stmt(
b.call(
'$$renderer.component',
b.arrow([b.id('$$renderer')], component_block, false),
dev && b.id(component_name)
)
)
]);
}
if (analysis.uses_rest_props) {
/** @type {string[]} */
const named_props = analysis.exports.map(({ name, alias }) => alias ?? name);
for (const [name, binding] of analysis.instance.scope.declarations) {
if (binding.kind === 'bindable_prop') named_props.push(binding.prop_alias ?? name);
}
component_block.body.unshift(
b.const(
'$$restProps',
b.call(
'$.rest_props',
b.id('$$sanitized_props'),
b.array(named_props.map((name) => b.literal(name)))
)
)
);
}
if (analysis.uses_props || analysis.uses_rest_props) {
component_block.body.unshift(
b.const('$$sanitized_props', b.call('$.sanitize_props', b.id('$$props')))
);
}
if (analysis.uses_slots) {
component_block.body.unshift(b.const('$$slots', b.call('$.sanitize_slots', b.id('$$props'))));
}
const body = [...state.hoisted, ...module.body];
if (analysis.css.ast !== null && options.css === 'injected' && !options.customElement) {
const hash = b.literal(analysis.css.hash);
const code = b.literal(render_stylesheet(analysis.source, analysis, options).code);
body.push(b.const('$$css', b.object([b.init('hash', hash), b.init('code', code)])));
component_block.body.unshift(b.stmt(b.call('$$renderer.global.css.add', b.id('$$css'))));
}
let should_inject_props =
should_inject_context ||
props.length > 0 ||
analysis.needs_props ||
analysis.uses_props ||
analysis.uses_rest_props ||
analysis.uses_slots ||
analysis.slot_names.size > 0;
const component_function = b.function_declaration(
b.id(analysis.name),
should_inject_props ? [b.id('$$renderer'), b.id('$$props')] : [b.id('$$renderer')],
component_block
);
if (options.compatibility.componentApi === 4) {
body.unshift(b.imports([['render', '$$_render']], 'svelte/server'));
body.push(
component_function,
b.stmt(
b.assignment(
'=',
b.member_id(`${analysis.name}.render`),
b.function(
null,
[b.id('$$props'), b.id('$$opts')],
b.block([
b.return(
b.call(
'$$_render',
b.id(analysis.name),
b.object([
b.init('props', b.id('$$props')),
b.init('context', b.member(b.id('$$opts'), 'context', false, true))
])
)
)
])
)
)
),
b.export_default(b.id(analysis.name))
);
} else if (dev) {
body.push(
component_function,
b.stmt(
b.assignment(
'=',
b.member_id(`${analysis.name}.render`),
b.function(
null,
[],
b.block([
b.throw_error(
`Component.render(...) is no longer valid in Svelte 5. ` +
'See https://svelte.dev/docs/svelte/v5-migration-guide#Components-are-no-longer-classes for more information'
)
])
)
)
),
b.export_default(b.id(analysis.name))
);
} else {
body.push(b.export_default(component_function));
}
if (dev) {
// add `App[$.FILENAME] = 'App.svelte'` so that we can print useful messages later
body.unshift(
b.stmt(
b.assignment('=', b.member(b.id(analysis.name), '$.FILENAME', true), b.literal(filename))
)
);
}
if (options.experimental.async) {
body.unshift(b.imports([], 'svelte/internal/flags/async'));
}
return {
type: 'Program',
sourceType: 'module',
body
};
}
/**
* @param {Analysis} analysis
* @param {ValidatedModuleCompileOptions} options
* @returns {ESTree.Program}
*/
export function server_module(analysis, options) {
/** @type {ServerTransformState} */
const state = {
analysis,
options,
scope: analysis.module.scope,
scopes: analysis.module.scopes,
// this is an anomaly — it can only be used in components, but it needs
// to be present for `javascript_visitors_legacy` and so is included in module
// transform state as well as component transform state
legacy_reactive_statements: new Map(),
state_fields: new Map(),
is_instance: false
};
const module = /** @type {ESTree.Program} */ (
walk(/** @type {AST.SvelteNode} */ (analysis.module.ast), state, global_visitors)
);
return {
type: 'Program',
sourceType: 'module',
body: [b.import_all('$', 'svelte/internal/server'), ...module.body]
};
}
|
javascript
|
github
|
https://github.com/sveltejs/svelte
|
packages/svelte/src/compiler/phases/3-transform/server/transform-server.js
|
/* Copyright 2016 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#ifndef TENSORFLOW_CORE_KERNELS_DEEP_CONV2D_H_
#define TENSORFLOW_CORE_KERNELS_DEEP_CONV2D_H_
#include "tensorflow/core/framework/types.h"
namespace tensorflow {
class OpKernelContext;
// DeepConv2D is a Conv2D implementation specialized for deep (i.e. large
// in_depth * out_depth product) convolutions (see deep_conv2d.cc for details).
// DeepConv2DTransform is an interface for implementing transforms for
// DeepConv2D. Implementations must specify transform matrices and
// input/output/filter shapes. DeepConv2d computes:
//
// y = C[Ad * Bg]
//
// C: output transform matrix
// A: input data transform matrix
// B: filter transform matrix
// d: vectorized 2D data tile
// g: vectorized 2D filter tile
// y: vectorized 2D output tile
template <typename T>
class DeepConv2DTransform {
public:
virtual ~DeepConv2DTransform() {}
virtual void GetFilterTransformMatrix(const int64_t rows, const int64_t cols,
T* transform_matrix) const = 0;
virtual void GetInputTransformMatrix(const int64_t rows, const int64_t cols,
T* transform_matrix) const = 0;
virtual void GetOutputTransformMatrix(const int64_t rows, const int64_t cols,
T* transform_matrix) const = 0;
struct Shape {
Shape(int64_t r, int64_t c) : rows(r), cols(c) {}
int64_t rows;
int64_t cols;
};
virtual const Shape& filter_shape() const = 0;
virtual const Shape& input_shape() const = 0;
virtual const Shape& output_shape() const = 0;
};
// Conv2D arguments used by DeepConv2D implementation.
struct Conv2DArgs {
// Input layer dimensions
int batch;
int in_rows;
int in_cols;
int in_depth;
int filter_rows;
int filter_cols;
int pad_rows;
int pad_cols;
// Output layer dimensions
int out_rows;
int out_cols;
int out_depth;
Conv2DArgs()
: batch(0),
in_rows(0),
in_cols(0),
in_depth(0),
filter_rows(0),
filter_cols(0),
pad_rows(0),
pad_cols(0),
out_rows(0),
out_cols(0),
out_depth(0) {}
};
// Returns true if convolution operation specified by function arguments
// can use DeepConv2D implementation, and false otherwise.
// May return false based on parameters, cost, or whether feature is disabled.
bool CanUseDeepConv2D(int stride_rows, int stride_cols, int filter_rows,
int filter_cols, int in_depth, int out_depth,
int out_rows, int out_cols);
namespace functor {
// Calls DeepConv2D implementation (see deep_conv2d.cc for details).
template <typename Device, typename T>
struct DeepConv2D {
void operator()(OpKernelContext* ctx, const Conv2DArgs& args, const T* input,
const T* filter, T* output);
};
} // namespace functor
} // namespace tensorflow
#endif // TENSORFLOW_CORE_KERNELS_DEEP_CONV2D_H_
|
c
|
github
|
https://github.com/tensorflow/tensorflow
|
tensorflow/core/kernels/deep_conv2d.h
|
import json
from itertools import chain
from operator import attrgetter
import pyperclip
from datetime import date,datetime
import csv
from django.contrib.auth.decorators import login_required
from django.core.exceptions import ObjectDoesNotExist
from django.core.urlresolvers import reverse, reverse_lazy
from django.http import HttpResponseRedirect
from django.shortcuts import render,HttpResponse
from django.views.generic.edit import CreateView, UpdateView, DeleteView
from forms.models import Questionnaire
from forms.views import replicate
from gymkhana.settings import DOMAIN_NAME
from .forms import *
from .models import *
from .scraper import getRecord
## ------------------------------------------------------------------------------------------------------------------ ##
############################################ DASHBOARD VIEWS ###################################################
## ------------------------------------------------------------------------------------------------------------------ ##
# main index view for user,contains all nomination to be filled by normal user,
# have club filter that filter both nomination and its group..
# is_safe
@login_required
def index(request):
if request.user.is_authenticated:
try:
today = datetime.now()
posts = Post.objects.filter(post_holders=request.user)
username = UserProfile.objects.get(user=request.user)
club_filter = ClubFilter(request.POST or None)
if club_filter.is_valid():
if club_filter.cleaned_data['club'] == 'NA':
club_filter = ClubFilter
grouped_nomi = GroupNomination.objects.filter(status='out')
nomi = Nomination.objects.filter(group_status='normal').filter(status='Nomination out')
re_nomi = Nomination.objects.filter(group_status='normal'). \
filter(status='Interview period and Nomination reopened')
nomi = nomi | re_nomi
result_query = sorted(chain(nomi, grouped_nomi), key=attrgetter('opening_date'), reverse=True)
return render(request, 'index1.html', context={'posts': posts, 'username': username,
'club_filter': club_filter, 'today': today,
'result_query': result_query})
club = Club.objects.get(pk=club_filter.cleaned_data['club'])
grouped_nomi = club.club_group.all().filter(status='out')
nomi = club.club_nomi.all().filter(group_status='normal').filter(status='Nomination out')
re_nomi = club.club_nomi.all().filter(group_status='normal').\
filter(status='Interview period and Nomination reopened')
nomi = nomi | re_nomi
result_query = sorted(chain(nomi, grouped_nomi), key=attrgetter('opening_date'), reverse=True)
return render(request, 'index1.html', context={'posts': posts, 'username': username,
'result_query': result_query, 'club_filter': club_filter,
'today': today})
grouped_nomi = GroupNomination.objects.filter(status='out')
nomi = Nomination.objects.filter(group_status='normal').filter(status='Nomination out')
re_nomi = Nomination.objects.filter(group_status='normal').\
filter(status='Interview period and Nomination reopened')
nomi = nomi | re_nomi
result_query = sorted(chain(nomi, grouped_nomi), key=attrgetter('opening_date'), reverse=True)
return render(request, 'index1.html', context={'posts': posts, 'username': username,
'club_filter': club_filter, 'today': today,
'result_query': result_query})
except ObjectDoesNotExist:
form = UserId(request.POST or None)
if form.is_valid():
try:
data = getRecord(form.cleaned_data['user_roll'])
except:
error = True
return render(request, 'register.html', context={'form': form, 'error': error})
email = str(request.user) + '@iitk.ac.in'
if email == data['email']:
profile = UserProfile.objects.create(user=request.user,name = data['name'],roll_no = data['roll'],
programme = data["program"],department = data['department'],
contact = data['mobile'], room_no = data['room'],hall = data['hall'])
pk = profile.pk
return HttpResponseRedirect(reverse('profile_update', kwargs={'pk': pk}))
else:
error = True
return render(request, 'register.html', context={'form': form, 'error': error})
error = False
return render(request, 'register.html', context={'form': form ,'error':error})
else:
return HttpResponseRedirect(reverse('login'))
# contain all nomination for which user have rights whether created by him or created by his chil post
# also shows nomination for which he has been added as interview panel
# is_safe
@login_required
def admin_portal(request):
posts = Post.objects.filter(post_holders=request.user)
username = UserProfile.objects.get(user=request.user)
admin_query = Nomination.objects.none()
for post in posts:
query = Nomination.objects.filter(nomi_approvals=post)
admin_query = admin_query | query
panel_nomi = request.user.panel.all().exclude(status='Nomination created')
admin_query = admin_query | panel_nomi
admin_query = admin_query.distinct().exclude(status='Work done')
admin_query_reverse = admin_query[::-1]
club_filter = ClubFilter(request.POST or None)
if club_filter.is_valid():
club = Club.objects.get(pk=club_filter.cleaned_data['club'])
admin_query = admin_query.filter(tags=club)
admin_query_reverse = admin_query[::-1]
return render(request, 'admin_portal.html', context={'posts': posts, 'username': username,
'admin_query': admin_query_reverse,
'club_filter': club_filter})
return render(request, 'admin_portal.html', context={'posts': posts, 'username': username,
'admin_query': admin_query_reverse,
'club_filter': club_filter})
# a view for retification purpose
# is_safe
@login_required
def senate_view(request):
nomi_ratify = Nomination.objects.filter(status='Sent for ratification')
all_posts = Post.objects.filter(post_holders=request.user)
access = False
for post in all_posts:
if post.perms == 'can ratify the post':
access = True
break
if access:
return render(request, 'senate_view.html', context={'nomi': nomi_ratify})
else:
return render(request, 'no_access.html')
@login_required
def interview_list(request):
interviews = Nomination.objects.filter(interview_panel=request.user).exclude(status = 'Work done')
return render(request, 'interviews.html', context={'interviews': interviews})
## ------------------------------------------------------------------------------------------------------------------ ##
######################################### POST RELATED VIEWS ######################################################
## ------------------------------------------------------------------------------------------------------------------ ##
'''
a view for a given post....contains all things required for working on that post...
tips...use redirect if using form as button
is_safe
'''
@login_required
def post_view(request, pk):
post = Post.objects.get(pk=pk)
child_posts = Post.objects.filter(parent=post)
child_posts_reverse = child_posts[::-1]
post_approvals = Post.objects.filter(post_approvals=post).filter(status='Post created')
post_to_be_approved = Post.objects.filter(take_approval = post).filter(status = 'Post created')
post_count = post_to_be_approved.count()
post_approvals = post_to_be_approved|post_approvals
post_approvals = post_approvals.distinct()
entity_approvals = ClubCreate.objects.filter(take_approval=post)
entity_by_me = ClubCreate.objects.filter(requested_by=post)
nomi_approvals = Nomination.objects.filter(nomi_approvals=post).filter(status='Nomination created').filter(group_status= 'normal')
re_nomi_approval = ReopenNomination.objects.filter(approvals=post).\
filter(nomi__status='Interview period and Reopening initiated')
group_nomi_approvals = GroupNomination.objects.filter(status='created').filter(approvals=post)
count = nomi_approvals.count() + group_nomi_approvals.count() + re_nomi_approval.count()
result_approvals = Nomination.objects.filter(result_approvals=post).exclude(status='Work done').\
exclude(status='Nomination created').exclude(status='Nomination out')
to_deratify = Deratification.objects.filter(deratify_approval = post).exclude(status = 'deratified')
if request.method == 'POST':
tag_form = ClubForm(request.POST)
if tag_form.is_valid():
if post.perms == "can ratify the post":
Club.objects.create(club_name=tag_form.cleaned_data['club_name'], club_parent=post.club)
else:
ClubCreate.objects.create(club_name=tag_form.cleaned_data['club_name'], club_parent=post.club,
take_approval=post.parent, requested_by=post)
return HttpResponseRedirect(reverse('post_view', kwargs={'pk': pk}))
else:
tag_form = ClubForm
if request.user in post.post_holders.all():
return render(request, 'post1.html', context={'post': post, 'child_posts': child_posts_reverse,
'post_approval': post_approvals, 'tag_form': tag_form,
'nomi_approval': nomi_approvals,
'group_nomi_approvals': group_nomi_approvals,
'entity_by_me': entity_by_me, 're_nomi_approval': re_nomi_approval,
'result_approvals': result_approvals, 'count': count,
"to_deratify": to_deratify, "post_count": post_count,
'entity_approvals': entity_approvals})
else:
return render(request, 'no_access.html')
@login_required
def add_post_holder(request, pk): # pk of the Post
post = Post.objects.get(pk=pk)
if request.method == 'POST':
post_holder_Form = PostHolderForm(request.POST)
if post_holder_Form.is_valid():
email = post_holder_Form.cleaned_data['email']
start_year = post_holder_Form.cleaned_data['session']
try:
name = User.objects.get(username=email)
post.post_holders.add(name)
session = Session.objects.filter(start_year=start_year).first()
if session is None:
session = Session.objects.create(start_year=start_year)
previous_history = PostHistory.objects.filter(post = post).filter(user = name).filter(post_session = session)
if not previous_history:
PostHistory.objects.create(post=post, user=name, post_session=session,
end=session_end_date(session.start_year))
return HttpResponseRedirect(reverse('child_post', kwargs={'pk': pk}))
except ObjectDoesNotExist:
return render(request, 'add_post_holder.html', context={'post': post, 'form': post_holder_Form})
else:
post_holder_Form = PostHolderForm
return render(request, 'add_post_holder.html', context={'post': post, 'form': post_holder_Form})
# view to create a new post, a child post for a post can be created only by the post holders of that post...
#is_safe
# parent is simply added ,goes directly to parent for approval
@login_required
def post_create(request, pk):
parent = Post.objects.get(pk=pk)
if request.method == 'POST':
post_form = PostForm(parent, request.POST)
if post_form.is_valid():
club_id = post_form.cleaned_data['club']
club = Club.objects.get(pk=club_id)
post = Post.objects.create(post_name=post_form.cleaned_data['post_name'], club=club, parent=parent,elder_brother= parent)
post.take_approval = parent.parent
post.post_approvals.add(parent.parent)
post.save()
return HttpResponseRedirect(reverse('post_view', kwargs={'pk': pk}))
else:
club = parent.club
post_form = PostForm(parent)
if request.user in parent.post_holders.all():
return render(request, 'nomi/post_form.html', context={'form': post_form, 'parent': parent})
else:
return render(request, 'no_access.html')
@login_required
def senate_post_create(request, pk):
parent = Post.objects.get(pk=pk)
if request.method == 'POST':
post_form = PostForm(parent, request.POST)
if post_form.is_valid():
club_id = post_form.cleaned_data['club']
club = Club.objects.get(pk=club_id)
elder_brother_id = post_form.cleaned_data['elder_brother']
elder_brother = Post.objects.get(pk=elder_brother_id)
Post.objects.create(post_name=post_form.cleaned_data['post_name'],
elder_brother=elder_brother, club=club, parent=parent,
perms=post_form.cleaned_data['power'], status='Post approved')
return HttpResponseRedirect(reverse('post_view', kwargs={'pk': pk}))
else:
club = parent.club
post_form = PostForm(parent)
if request.user in parent.post_holders.all():
return render(request, 'nomi/post_form.html', context={'form': post_form, 'parent': parent})
else:
return render(request, 'no_access.html')
# only parent post have access to this view
# is_safe
@login_required
def child_post_view(request, pk):
post = Post.objects.get(pk=pk)
parent = post.parent
nominations = Nomination.objects.filter(nomi_post=post)
give_form = BlankForm(request.POST or None)
if give_form.is_valid():
if post.tag_perms == 'normal':
post.tag_perms = 'Can create'
else:
post.tag_perms = 'normal'
post.save()
return HttpResponseRedirect(reverse('child_post', kwargs={'pk': pk}))
if request.user in parent.post_holders.all():
return render(request, 'child_post1.html', {'post': post, 'nominations': nominations, 'parent':parent,
'give_form': give_form})
else:
return render(request, 'no_access.html')
# the viewer_post which have access add its parent for approval of post and also add parent club as post tag..
# is_safe
@login_required
def post_approval(request, post_pk):
post = Post.objects.get(pk=post_pk)
access = False
if request.user in post.take_approval.post_holders.all():
access = True
if access:
if post.take_approval.perms == "can ratify the post":
post.status = 'Post approved'
post.save()
return HttpResponseRedirect(reverse('post_view', kwargs={'pk': post.take_approval.pk}))
else:
to_add = post.take_approval.parent
current = post.take_approval
post.post_approvals.add(to_add)
post.tags.add(to_add.club)
post.take_approval = to_add
post.save()
return HttpResponseRedirect(reverse('post_view', kwargs={'pk': current.pk}))
else:
return render(request, 'no_access.html')
def edit_post_name(request,post_pk):
post = Post.objects.get(pk=post_pk)
access = False
if request.user in post.take_approval.post_holders.all():
access = True
if access:
if request.method == 'POST':
edit_post = ChangePostName(request.POST)
if edit_post.is_valid():
post.post_name = edit_post.cleaned_data['post_name']
post.save()
return HttpResponseRedirect(reverse('edit_post_name', kwargs={'post_pk': post_pk}))
else:
edit_post = ChangePostName
return render(request, 'edit_post_name.html', {'post': post, 'edit_post': edit_post})
else:
return render(request, 'no_access.html')
# the viewer removes himself from approvals ,thus delete the post down...
# is_safe
@login_required
def post_reject(request, post_pk):
post = Post.objects.get(pk=post_pk)
access = False
if request.user in post.take_approval.post_holders.all():
access = True
view = post.take_approval
if access:
post.delete()
return HttpResponseRedirect(reverse('post_view', kwargs={'pk': view.pk}))
else:
return render(request, 'no_access.html')
## ------------------------------------------------------------------------------------------------------------------ ##
######################################### CLUB RELATED VIEWS ######################################################
## ------------------------------------------------------------------------------------------------------------------ ##
# the viewer_post which have access add its parent for approval of club
# is_safe
@login_required
def club_approval(request, club_pk):
club_create = ClubCreate.objects.get(pk=club_pk)
access = False
if request.user in club_create.take_approval.post_holders.all():
access = True
view = club_create.take_approval
if access:
if club_create.take_approval.perms == "can ratify the post":
Club.objects.create(club_name = club_create.club_name,club_parent = club_create.club_parent)
club_create.delete()
return HttpResponseRedirect(reverse('post_view', kwargs={'pk': view.pk}))
else:
to_add = club_create.take_approval.parent
club_create.take_approval = to_add
club_create.save()
return HttpResponseRedirect(reverse('post_view', kwargs={'pk': view.pk}))
else:
return render(request, 'no_access.html')
# the viewer removes himself from approvals ,thus delete the post down...
# is_safe
@login_required
def club_reject(request, club_pk):
club_reject = ClubCreate.objects.get(pk=club_pk)
access = False
if request.user in club_reject.take_approval.post_holders.all():
access = True
view = club_reject.take_approval
if access:
club_reject.delete()
return HttpResponseRedirect(reverse('post_view', kwargs={'pk': view.pk}))
## ------------------------------------------------------------------------------------------------------------------ ##
######################################### NOMINATION RELATED VIEWS ################################################
## ------------------------------------------------------------------------------------------------------------------ ##
# only post parent should create nomi...
# safe
@login_required
def nomination_create(request, pk):
post = Post.objects.get(pk=pk)
if request.user in post.parent.post_holders.all():
if request.method == 'POST':
title_form = NominationForm(request.POST)
if title_form.is_valid():
post = Post.objects.get(pk=pk)
questionnaire = Questionnaire.objects.create(name=title_form.cleaned_data['title'])
nomination = Nomination.objects.create(name=title_form.cleaned_data['title'],
description=title_form.cleaned_data['description'],
deadline=title_form.cleaned_data['deadline'],
nomi_session=title_form.cleaned_data['nomi_session'],
nomi_form=questionnaire, nomi_post=post,
)
pk = questionnaire.pk
return HttpResponseRedirect(reverse('forms:creator_form', kwargs={'pk': pk}))
else:
title_form = NominationForm()
return render(request, 'nomi/nomination_form.html', context={'form': title_form, 'post': post})
else:
return render(request, 'no_access.html')
class NominationUpdate(UpdateView):
model = Nomination
fields = ['name', 'description']
success_url = reverse_lazy('index')
class NominationDelete(DeleteView):
model = Nomination
success_url = reverse_lazy('index')
def nomi_replicate(request,nomi_pk):
nomi_to_replicate = Nomination.objects.get(pk = nomi_pk)
post = nomi_to_replicate.nomi_post
if request.user in post.parent.post_holders.all():
if request.method == 'POST':
title_form = NominationReplicationForm(request.POST)
if title_form.is_valid():
questionnaire = replicate(nomi_to_replicate.nomi_form.pk)
questionnaire.name = title_form.cleaned_data['title']
questionnaire.save()
nomination = Nomination.objects.create(name=title_form.cleaned_data['title'],
description=nomi_to_replicate.description,
deadline=title_form.cleaned_data['deadline'],
nomi_session=title_form.cleaned_data['nomi_session'],
nomi_form=questionnaire, nomi_post=post,
)
pk = questionnaire.pk
return HttpResponseRedirect(reverse('forms:creator_form', kwargs={'pk': questionnaire.pk}))
else:
title_form = NominationReplicationForm()
return render(request, 'nomi/nomination_form.html', context={'form': title_form, 'post': post})
else:
return render(request, 'no_access.html')
# ****** in use...
def get_access_and_post(request,nomi_pk):
nomi = Nomination.objects.get(pk=nomi_pk)
access = False
view_post = None
for post in nomi.nomi_approvals.all():
if request.user in post.post_holders.all():
access = True
view_post = post
break
return access,view_post
def get_access_and_post_for_result(request, nomi_pk):
nomi =Nomination.objects.get(pk=nomi_pk)
access = False
view_post = None
for post in nomi.result_approvals.all():
if request.user in post.post_holders.all():
access = True
view_post = post
break
return access, view_post
@login_required
def nomi_detail(request, nomi_pk):
nomi = Nomination.objects.get(pk=nomi_pk)
parents = nomi.nomi_post.parent.post_holders.all()
questionnaire = nomi.nomi_form
form = questionnaire.get_form(request.POST or None)
panelform = UserId(request.POST or None)
access, view_post = get_access_and_post(request, nomi_pk)
if not access:
access, view_post = get_access_and_post_for_result(request, nomi_pk)
status = [None]*7
renomi_edit = 0
p_in_rn = 0
if nomi.status == 'Nomination created':
status[0] = True
elif nomi.status == 'Nomination out':
status[1] = True
elif nomi.status == 'Interview period':
status[2] = True
elif nomi.status == 'Sent for ratification':
status[3] = True
elif nomi.status == 'Interview period and Reopening initiated':
status[4] = True
if view_post in nomi.reopennomination.approvals.all():
renomi_edit = 1
if view_post.parent in nomi.reopennomination.approvals.all():
p_in_rn = 1
elif nomi.status == 'Interview period and Nomination reopened':
status[5] = True
else:
status[6] = True
if access:
if view_post.perms == 'can approve post and send nominations to users' or view_post.perms == 'can ratify the post':
power_to_send = 1
else:
power_to_send = 0
if view_post.elder_brother in nomi.nomi_approvals.all():
sent_to_parent = 1
else:
sent_to_parent = 0
if panelform.is_valid():
try:
profile = UserProfile.objects.get(roll_no=panelform.cleaned_data["user_roll"])
user = profile.user
nomi.interview_panel.add(user)
return HttpResponseRedirect(reverse('nomi_detail', kwargs={'nomi_pk': nomi_pk}))
except ObjectDoesNotExist:
return HttpResponseRedirect(reverse('nomi_detail', kwargs={'nomi_pk': nomi_pk}))
panelists = nomi.interview_panel.all().distinct()
panelists_exclude_parent = []
for panelist in panelists:
if panelist not in parents:
panelists_exclude_parent.append(panelist)
return render(request, 'nomi_detail_admin.html', context={'nomi': nomi, 'form': form, 'panelform': panelform,
'sent_to_parent': sent_to_parent, 'status': status,
'power_to_send': power_to_send, 'parents': parents,
'panelists': panelists_exclude_parent,'renomi':renomi_edit,
'p_in_rn':p_in_rn})
elif request.user in nomi.interview_panel.all():
return render(request, 'nomi_detail_user.html', context={'nomi': nomi})
else:
if status[1] or status[5]:
return render(request, 'nomi_detail_user.html', context={'nomi': nomi})
else:
return render(request, 'no_access.html')
@login_required
def see_nomi_form(request, pk):
nomi = Nomination.objects.get(pk=pk)
if nomi.nomi_form and nomi.nomi_form.question_set.all():
questionnaire = nomi.nomi_form
form = questionnaire.get_form
return render(request, 'see_nomi_form.html', context={'form': form, 'nomi':nomi })
else:
info = "There is not any form for this nomi"
return render(request, 'nomi_done.html', context={'info': info})
@login_required
def remove_panelist(request, nomi_pk, user_pk):
nomination = Nomination.objects.get(pk=nomi_pk)
panelist = User.objects.get(pk=user_pk)
panel = nomination.interview_panel
panel.remove(panelist)
return HttpResponseRedirect(reverse('nomi_detail', kwargs={'nomi_pk': nomi_pk}))
@login_required
def nomi_approval(request, nomi_pk):
nomi = Nomination.objects.get(pk=nomi_pk)
access, view_post = get_access_and_post(request, nomi_pk)
if access:
if view_post.elder_brother:
to_add = view_post.elder_brother
nomi.nomi_approvals.add(to_add)
nomi.tags.add(view_post.parent.club)
nomi.tags.add(to_add.club)
else:
to_add = view_post.parent
nomi.nomi_approvals.add(to_add)
nomi.tags.add(to_add.club)
return HttpResponseRedirect(reverse('nomi_detail', kwargs={'nomi_pk': nomi_pk}))
else:
return render(request, 'no_access.html')
@login_required
def nomi_reject(request, nomi_pk):
nomi = Nomination.objects.get(pk=nomi_pk)
access, view_post = get_access_and_post(request, nomi_pk)
if access:
to_remove = view_post
nomi.nomi_approvals.remove(to_remove)
return HttpResponseRedirect(reverse('post_view', kwargs={'pk': view_post.pk}))
else:
return render(request, 'no_access.html')
@login_required
def final_nomi_approval(request, nomi_pk):
nomi = Nomination.objects.get(pk=nomi_pk)
access, view_post = get_access_and_post(request, nomi_pk)
if access and (view_post.perms == "can ratify the post" or view_post.perms =="can approve post and send nominations to users") :
if view_post.elder_brother:
to_add = view_post.elder_brother
nomi.nomi_approvals.add(to_add)
nomi.tags.add(to_add.club)
if view_post.parent:
to_add = view_post.parent
nomi.nomi_approvals.add(to_add)
nomi.tags.add(to_add.club)
nomi.open_to_users()
return HttpResponseRedirect(reverse('nomi_detail', kwargs={'nomi_pk': nomi_pk}))
else:
return render(request, 'no_access.html')
@login_required
def copy_nomi_link(request, pk):
url = DOMAIN_NAME + '/nominations/nomi_detail/' + str(pk) + '/'
pyperclip.copy(url)
return HttpResponseRedirect(reverse('admin_portal'))
## ------------------------------------------------------------------------------------------------------------------ ##
######################################### REOPEN NOMINATION MONITOR VIEWS ##########################################
## ------------------------------------------------------------------------------------------------------------------ ##
@login_required
def reopen_nomi(request, nomi_pk):
access , view_post = get_access_and_post(request,nomi_pk)
nomi = Nomination.objects.get(pk=nomi_pk)
if access:
re_nomi = ReopenNomination.objects.create(nomi=nomi)
re_nomi.approvals.add(view_post)
if view_post.elder_brother:
re_nomi.approvals.add(view_post.elder_brother)
re_nomi.nomi.status = 'Interview period and Reopening initiated'
re_nomi.nomi.save()
return HttpResponseRedirect(reverse('nomi_detail', kwargs={'nomi_pk': nomi_pk}))
else:
return render(request, 'no_access.html')
# ****** in use...
def get_access_and_post_for_renomi(request,re_nomi_pk):
re_nomi = ReopenNomination.objects.get(pk=re_nomi_pk)
access = False
view_post = None
for post in re_nomi.approvals.all():
if request.user in post.post_holders.all():
access = True
view_post = post
break
return access,view_post
@login_required
def re_nomi_approval(request, re_nomi_pk):
re_nomi = ReopenNomination.objects.get(pk=re_nomi_pk)
access , view_post = get_access_and_post_for_renomi(request,re_nomi_pk)
if access:
if view_post.perms == "can ratify the post" or view_post.perms =="can approve post and send nominations to users":
re_nomi.re_open_to_users()
nomi = re_nomi.nomi
re_nomi.delete()
return HttpResponseRedirect(reverse('nomi_detail', kwargs={'nomi_pk': nomi.pk}))
else:
to_add = view_post.elder_brother
re_nomi.approvals.add(to_add)
return HttpResponseRedirect(reverse('nomi_detail', kwargs={'nomi_pk': re_nomi.nomi.pk}))
else:
return render(request, 'no_access.html')
@login_required
def re_nomi_reject(request, re_nomi_pk):
re_nomi = ReopenNomination.objects.get(pk=re_nomi_pk)
access , view_post = get_access_and_post_for_renomi(request,re_nomi_pk)
if access:
nomi = re_nomi.nomi
nomi.status = 'Interview period'
nomi.save()
re_nomi.delete()
return HttpResponseRedirect(reverse('nomi_detail', kwargs={'nomi_pk': nomi.pk}))
else:
return render(request, 'no_access.html')
@login_required
def final_re_nomi_approval(request, re_nomi_pk):
re_nomi = ReopenNomination.objects.get(pk=re_nomi_pk)
access , view_post = get_access_and_post_for_renomi(request,re_nomi_pk)
if access:
re_nomi.re_open_to_users()
nomi=re_nomi.nomi
re_nomi.delete()
return HttpResponseRedirect(reverse('nomi_detail', kwargs={'nomi_pk': nomi.pk}))
else:
return render(request, 'no_access.html')
## ------------------------------------------------------------------------------------------------------------------ ##
######################################### NOMINATION MONITOR VIEWS ################################################
## ------------------------------------------------------------------------------------------------------------------ ##
# ****** in use...
@login_required
def nomi_apply(request, pk):
nomination = Nomination.objects.get(pk=pk)
count = NominationInstance.objects.filter(nomination=nomination).filter(user=request.user).count()
tick = True
if nomination.nomi_form:
ct = nomination.nomi_form.question_set.count()
tick = not ct
if not count and (nomination.status == "Nomination out" or nomination.status =="Interview period and Nomination reopened"):
if not tick:
questionnaire = nomination.nomi_form
form = questionnaire.get_form(request.POST or None)
form_confirm = SaveConfirm(request.POST or None)
if form_confirm.is_valid():
if form.is_valid():
filled_form = questionnaire.add_answer(request.user, form.cleaned_data)
if form_confirm.cleaned_data["save_or_submit"] == "only save":
NominationInstance.objects.create(user=request.user, nomination=nomination,
filled_form=filled_form,
submission_status=False, timestamp=date.today())
info = "Your application has been saved. It has not been submited. So make sure you submit it after further edits through your profile module"
else:
NominationInstance.objects.create(user=request.user, nomination=nomination, filled_form=filled_form,
submission_status = True,timestamp = date.today())
info = "Your application has been recorded. You can edit it through profile module."
return render(request, 'nomi_done.html', context={'info': info})
return render(request, 'forms/show_form.html', context={'form': form, 'form_confirm': form_confirm,
'questionnaire': questionnaire, 'pk': pk})
else:
questionnaire = nomination.nomi_form
form_confirm = ConfirmApplication(request.POST or None)
form = questionnaire.get_form(request.POST or None)
if form_confirm.is_valid():
NominationInstance.objects.create(user=request.user, nomination=nomination,submission_status = True,timestamp = date.today())
info = "Your application has been recorded."
return render(request, 'nomi_done.html', context={'info': info})
return render(request, 'forms/show_form.html', context={'form': form,'form_confirm': form_confirm, 'pk': pk,'questionnaire': questionnaire})
else:
info = "You have applied for it already.You can edit it through profile module."
if not (nomination.status == "Nomination out" or nomination.status =="Interview period and Nomination reopened"):
info = "Nomination has been closed"
return render(request, 'nomi_done.html', context={'info': info})
def nomi_answer_edit(request, pk):
application = NominationInstance.objects.get(pk=pk)
nomination = application.nomination
if application.user == request.user and (nomination.status == "Nomination out" or nomination.status =="Interview period and Nomination reopened") :
ans_form = application.filled_form
data = json.loads(ans_form.data)
applicant = application.user.userprofile
questionnaire = application.nomination.nomi_form
form = questionnaire.get_form(request.POST or data)
if nomination.nomi_form and application.submission_status== False:
form_confirm = SaveConfirm(request.POST or None)
if form_confirm.is_valid():
if form.is_valid():
info = "Your application has been edited and saved locally. Don't forget to submit it before deadline "
if form_confirm.cleaned_data["save_or_submit"] == "save and submit":
application.submission_status = True
application.timestamp = date.today()
application.save()
info = "Your application has been edited and finally submitted."
json_data = json.dumps(form.cleaned_data)
ans_form.data = json_data
ans_form.save()
application.edit_time = date.today()
application.save()
return render(request, 'nomi_done.html', context={'info': info})
else:
form_confirm = ConfirmApplication(request.POST or None)
if form_confirm.is_valid():
if form.is_valid():
json_data = json.dumps(form.cleaned_data)
ans_form.data = json_data
ans_form.save()
application.edit_time = date.today()
application.save()
info = "Your application has been edited"
return render(request, 'nomi_done.html', context={'info': info})
return render(request, 'nomi_answer_edit.html', context={'form': form, 'form_confirm': form_confirm,
'nomi': application, 'nomi_user': applicant})
else:
return render(request, 'no_access.html')
def get_mails(query_users):
mail_ids = ''
for each in query_users:
if len(mail_ids):
mail_ids = mail_ids + ', ' + str(each.user) + '@iitk.ac.in'
else:
mail_ids = str(each.user) + '@iitk.ac.in'
return mail_ids
def get_nomi_status(nomination):
status = [None] * 7
if nomination.status == 'Nomination created':
status[0] = True
elif nomination.status == 'Nomination out':
status[1] = True
elif nomination.status == 'Interview period':
status[2] = True
elif nomination.status == 'Sent for ratification':
status[3] = True
elif nomination.status == 'Interview period and Reopening initiated':
status[4] = True
elif nomination.status == 'Interview period and Nomination reopened':
status[5] = True
else:
status[6] = True
return status
def get_accepted_csv(request,nomi_pk):
# Create the HttpResponse object with the appropriate CSV header.
nomination = Nomination.objects.get(pk=nomi_pk)
accepted = NominationInstance.objects.filter(nomination=nomination).filter(status='Accepted')
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename="accepted.csv"'
writer = csv.writer(response)
writer.writerow([str(nomination.name),'SELECTED APPLICANTS', str(date.today())])
writer.writerow(['S.No','Name', 'Email','Roll','Address','Contact'])
i=1
for each in accepted:
try :
profile = each.user.userprofile
writer.writerow([str(i),each.user.userprofile,str(each.user)+'@iitk.ac.in',str(profile.roll_no),str(profile.room_no)+'/'+ str(profile.hall),str(profile.contact)])
except:
writer.writerow([str(i),each.user,str(each.user)+'@iitk.ac.in',str(each.start)])
i = i + 1
return response
@login_required
def applications(request, pk):
nomination = Nomination.objects.get(pk=pk)
applicants = NominationInstance.objects.filter(nomination=nomination).filter(submission_status = True)
accepted = NominationInstance.objects.filter(nomination=nomination).filter(submission_status = True).filter(status='Accepted')
rejected = NominationInstance.objects.filter(nomination=nomination).filter(submission_status = True).filter(status='Rejected')
pending = NominationInstance.objects.filter(nomination=nomination).filter(submission_status = True).filter(status=None)
mail_ids = [get_mails(applicants),get_mails(accepted),get_mails(rejected),get_mails(pending)]
status = get_nomi_status(nomination)
access, view_post = get_access_and_post_for_result(request, pk)
if not access:
access, view_post = get_access_and_post(request, pk)
# if user post in parent tree
if access:
permission = None
senate_permission = None
if view_post.parent:
if view_post.parent.perms == 'can ratify the post':
permission = True
senate_permission = False
elif view_post.perms == 'can ratify the post':
senate_permission = True
permission = False
# result approval things can send,has been sent, can cancel
results_approval = [None]*3
if view_post in nomination.result_approvals.all():
if view_post.parent in nomination.result_approvals.all():
results_approval[1] = True
grand_parent = view_post.parent.parent
if grand_parent not in nomination.result_approvals.all():
results_approval[2] = True
else:
results_approval[0] = True
if request.method == 'POST':
reopen = DeadlineForm(request.POST)
if reopen.is_valid():
re_nomi = ReopenNomination.objects.create(nomi=nomination)
re_nomi.approvals.add(view_post)
nomination.deadline = reopen.cleaned_data['deadline']
nomination.status = 'Interview period and Reopening initiated'
nomination.save()
return HttpResponseRedirect(reverse('nomi_detail', kwargs={'nomi_pk': pk}))
else:
reopen = DeadlineForm()
form_confirm = ConfirmApplication(request.POST or None)
if form_confirm.is_valid():
nomination.status = 'Interview period'
nomination.save()
return HttpResponseRedirect(reverse('applicants', kwargs={'pk': pk}))
return render(request, 'applicants.html', context={'nomination': nomination, 'applicants': applicants,
'form_confirm': form_confirm,'mail_ids':mail_ids,
'result_approval': results_approval,
'accepted': accepted, 'rejected': rejected, 'status': status,
'pending': pending, 'perm': permission,
'senate_perm': senate_permission,'reopen':reopen})
## if user in panel...
if request.user in nomination.interview_panel.all():
return render(request, 'applicant_panel.html', context={'nomination': nomination, 'applicants': applicants,
'accepted': accepted, 'rejected': rejected,
'pending': pending, 'status': status})
if not access:
return render(request, 'no_access.html')
@login_required
def nomination_answer(request, pk):
application = NominationInstance.objects.get(pk=pk)
ans_form = application.filled_form
data = json.loads(ans_form.data)
applicant = application.user.userprofile
questionnaire = application.nomination.nomi_form
form = questionnaire.get_form(data)
comments = Commment.objects.filter(nomi_instance=application)
comments_reverse = comments[::-1]
comment_form = CommentForm(request.POST or None)
nomination = application.nomination
status = get_nomi_status(nomination)
access = False
access, view_post = get_access_and_post(request, nomination.pk)
if not access:
access, view_post = get_access_and_post_for_result(request, nomination.pk)
all_posts = Post.objects.filter(post_holders=request.user)
senate_perm = False
for post in all_posts:
if post.perms == 'can ratify the post':
access = True
senate_perm = True
break
if application.user == request.user:
return render(request, 'nomi_answer_user.html', context={'form': form, 'nomi': application, 'nomi_user': applicant})
if access or request.user in nomination.interview_panel.all():
# result approval things send,sent,cancel
results_approval = [None]*3
if view_post in nomination.result_approvals.all():
if view_post.parent in nomination.result_approvals.all():
results_approval[1] = True
grand_parent = view_post.parent.parent
if grand_parent not in nomination.result_approvals.all():
results_approval[2] = True
else:
results_approval[0] = True
if request.user in nomination.interview_panel.all():
view_post = nomination.nomi_post.parent
if view_post.parent in nomination.result_approvals.all():
results_approval[1] = True
grand_parent = view_post.parent.parent
if grand_parent not in nomination.result_approvals.all():
results_approval[2] = True
else:
results_approval[0] = True
if comment_form.is_valid():
Commment.objects.create(comments=comment_form.cleaned_data['comment'],
nomi_instance=application, user=request.user)
return HttpResponseRedirect(reverse('nomi_answer', kwargs={'pk': pk}))
return render(request, 'nomi_answer.html', context={'form': form, 'nomi': application, 'nomi_user': applicant,
'comment_form': comment_form,
'comments': comments_reverse, 'senate_perm': senate_perm,
'status':status,
'result_approval': results_approval})
else:
return render(request, 'no_access.html')
## ------------------------------------------------------------------------------------------------------------------ ##
######################################### GROUP NOMINATION VIEWS ################################################
## ------------------------------------------------------------------------------------------------------------------ ##
@login_required
def group_nominations(request, pk):
post = Post.objects.get(pk=pk)
child_posts = Post.objects.filter(parent=post)
child_posts_reverse = child_posts[::-1]
post_approvals = Post.objects.filter(post_approvals=post).filter(status='Post created')
nomi_approvals = Nomination.objects.filter(nomi_approvals=post).filter(status='Nomination created')
if request.user in post.post_holders.all():
if request.method == 'POST':
groupform = SelectNomiForm(post, request.POST)
group_detail = GroupDetail(request.POST)
if group_detail.is_valid():
if groupform.is_valid():
group = group_detail.save()
group.approvals.add(post)
for nomi_pk in groupform.cleaned_data['group']:
# tasks to be performed on nomination
nomi = Nomination.objects.get(pk=nomi_pk)
group.nominations.add(nomi)
for tag in nomi.tags.all():
group.tags.add(tag)
nomi.group_status = 'grouped'
if post.elder_brother:
to_add = post.elder_brother
nomi.nomi_approvals.add(to_add)
if group.deadline:
nomi.deadline = group.deadline
nomi.save()
return HttpResponseRedirect(reverse('post_view', kwargs={'pk': pk}))
else:
group_detail= GroupDetail
groupform = SelectNomiForm(post)
return render(request, 'nomi_group.html', context={'post': post, 'child_posts': child_posts_reverse,
'post_approval': post_approvals, 'nomi_approval': nomi_approvals,
'form': groupform, 'title_form': group_detail})
else:
return render(request, 'no_access.html')
@login_required
def group_nomi_detail(request, pk):
group_nomi = GroupNomination.objects.get(pk=pk)
admin = 0
for post in request.user.posts.all():
if post in group_nomi.approvals.all():
admin = post
form_confirm = ConfirmApplication(request.POST or None)
if form_confirm.is_valid():
for nomi in group_nomi.nominations.all():
nomi.open_to_users()
group_nomi.status = 'out'
group_nomi.save()
return render(request, 'group_detail.html', {'group_nomi': group_nomi, 'admin': admin,
'form_confirm': form_confirm})
@login_required
def edit_or_add_to_group(request, pk, gr_pk):
post = Post.objects.get(pk=pk)
group = GroupNomination.objects.get(pk=gr_pk)
if request.user in post.post_holders.all() and post in group.approvals.all():
group_detail = GroupDetail(request.POST or None, instance=group)
if group_detail.is_valid():
group_detail.save()
if request.method == 'POST':
groupform = SelectNomiForm(post, request.POST)
if groupform.is_valid():
for nomi_pk in groupform.cleaned_data['group']:
# things to be performed on nomination
nomi = Nomination.objects.get(pk=nomi_pk)
group.nominations.add(nomi)
for tag in nomi.tags.all():
group.tags.add(tag)
nomi.group_status = 'grouped'
if post.elder_brother:
to_add = post.elder_brother
nomi.nomi_approvals.add(to_add)
if group.deadline:
nomi.deadline = group.deadline
nomi.save()
return HttpResponseRedirect(reverse('group_nomi_detail', kwargs={'pk': gr_pk}))
else:
groupform = SelectNomiForm(post)
return render(request, 'nomi_group.html', context={'post': post,'form': groupform, 'title_form': group_detail})
else:
return render(request, 'no_access.html')
@login_required
def remove_from_group(request, nomi_pk, gr_pk):
nomi = Nomination.objects.get(pk=nomi_pk)
group = GroupNomination.objects.get(pk=gr_pk)
group.nominations.remove(nomi)
nomi.group_status = 'normal'
nomi.status = 'Nomination created'
nomi.save()
return HttpResponseRedirect(reverse('group_nomi_detail', kwargs={'pk': gr_pk}))
## ------------------------------------------------------------------------------------------------------------------ ##
########################################### RATIFICATION VIEWS ###################################################
## ------------------------------------------------------------------------------------------------------------------ ##
@login_required
def ratify(request, nomi_pk):
nomi = Nomination.objects.get(pk=nomi_pk)
access, view_post = get_access_and_post_for_result(request,nomi_pk)
if access:
if view_post.perms == "can ratify the post":
nomi.append()
return HttpResponseRedirect(reverse('applicants', kwargs={'pk': nomi_pk}))
else:
return render(request, 'no_access.html')
else:
return render(request, 'no_access.html')
@login_required
def request_ratify(request, nomi_pk):
nomi = Nomination.objects.get(pk=nomi_pk)
access, view_post = get_access_and_post_for_result(request,nomi_pk)
if access:
if view_post.parent:
to_add = view_post.parent
nomi.result_approvals.add(to_add)
nomi.nomi_approvals.add(to_add)
nomi.status = 'Sent for ratification'
nomi.save()
return HttpResponseRedirect(reverse('applicants', kwargs={'pk': nomi_pk}))
else:
return render(request, 'no_access.html')
@login_required
def cancel_ratify(request, nomi_pk):
nomi = Nomination.objects.get(pk=nomi_pk)
access, view_post = get_access_and_post_for_result(request,nomi_pk)
if access:
if view_post.parent:
to_remove = view_post.parent
nomi.result_approvals.remove(to_remove)
nomi.nomi_approvals.remove(to_remove)
nomi.status = 'Interview period'
nomi.save()
return HttpResponseRedirect(reverse('applicants', kwargs={'pk': nomi_pk}))
else:
return render(request, 'no_access.html')
@login_required
def cancel_result_approval(request, nomi_pk):
nomi = Nomination.objects.get(pk=nomi_pk)
access, view_post = get_access_and_post_for_result(request,nomi_pk)
if access:
to_remove = view_post.parent
if to_remove.parent not in nomi.result_approvals.all():
nomi.result_approvals.remove(to_remove)
return HttpResponseRedirect(reverse('applicants', kwargs={'pk': nomi_pk}))
else:
return render(request, 'no_access.html')
@login_required
def result_approval(request, nomi_pk):
nomi = Nomination.objects.get(pk=nomi_pk)
access, view_post = get_access_and_post_for_result(request,nomi_pk)
if access:
if view_post == nomi.nomi_post.parent:
nomi.show_result = True
to_add = view_post.parent
nomi.result_approvals.add(to_add)
return HttpResponseRedirect(reverse('applicants', kwargs={'pk': nomi_pk}))
else:
return render(request, 'no_access.html')
@login_required
def create_deratification_request(request, post_pk, user_pk ,type):
post = Post.objects.get(pk=post_pk)
user =User.objects.get(pk=user_pk)
if request.user in post.parent.post_holders.all():
Deratification.objects.create(name=user, post=post,status = type, deratify_approval=post.parent)
return HttpResponseRedirect(reverse('child_post', kwargs={'pk': post_pk}))
@login_required
def approve_deratification_request(request,pk):
to_deratify = Deratification.objects.get(pk = pk)
view = to_deratify.deratify_approval
if request.user in view.post_holders.all():
if view.perms == "can ratify the post":
to_deratify.post.post_holders.remove(to_deratify.name)
history=PostHistory.objects.filter(user=to_deratify.name).filter(post = to_deratify.post).first()
if to_deratify.status=='remove from post':
history.delete()
to_deratify.status = 'removed'
else:
history.end = date.today()
history.save()
to_deratify.status = 'deratified'
to_deratify.save()
else:
to_deratify.deratify_approval = view.parent
to_deratify.save()
return HttpResponseRedirect(reverse('post_view', kwargs={'pk':view.pk}))
else:
return render(request, 'no_access.html')
@login_required
def reject_deratification_request(request, pk):
to_deratify = Deratification.objects.get(pk=pk)
view = to_deratify.deratify_approval
if request.user in view.post_holders.all():
to_deratify.delete()
return HttpResponseRedirect(reverse('post_view', kwargs={'pk':view.pk}))
else:
return render(request, 'no_access.html')
'''
mark_as_interviewed, reject_nomination, accept_nomination: Changes the interview status/ nomination_instance status
of the applicant
'''
def get_access_and_post_for_selection(request, nomi_pk):
nomi =Nomination.objects.get(pk=nomi_pk)
access = False
view_post = None
for post in nomi.result_approvals.all():
if request.user in post.post_holders.all():
access = True
view_post = post
break
return access, view_post
@login_required
def mark_as_interviewed(request, pk):
application = NominationInstance.objects.get(pk=pk)
id_nomi = application.nomination.pk
nomination = Nomination.objects.get(pk=id_nomi)
access, view_post = get_access_and_post_for_selection(request,id_nomi)
if access or request.user in nomination.interview_panel.all():
application.interview_status = 'Interview Done'
application.save()
return HttpResponseRedirect(reverse('nomi_answer', kwargs={'pk': pk}))
else:
return render(request, 'no_access.html')
@login_required
def accept_nomination(request, pk):
application = NominationInstance.objects.get(pk=pk)
id_accept = application.nomination.pk
nomination = Nomination.objects.get(pk=id_accept)
access, view_post = get_access_and_post_for_selection(request, id_accept)
if access or request.user in nomination.interview_panel.all():
application.status = 'Accepted'
application.save()
comment = '<strong>' + str(request.user.userprofile.name) + '</strong>' + ' Accepted '\
+ '<strong>' + str(application.user.userprofile.name) + '</strong>'
status = Commment.objects.create(comments=comment, nomi_instance=application)
return HttpResponseRedirect(reverse('applicants', kwargs={'pk': id_accept}))
else:
return render(request, 'no_access.html')
@login_required
def reject_nomination(request, pk):
application = NominationInstance.objects.get(pk=pk)
id_reject = application.nomination.pk
nomination = Nomination.objects.get(pk=id_reject)
access, view_post = get_access_and_post_for_selection(request, id_reject)
if access or request.user in nomination.interview_panel.all():
application.status = 'Rejected'
application.save()
comment = '<strong>' + str(request.user.userprofile.name) + '</strong>' + ' Rejected ' \
+ '<strong>' + str(application.user.userprofile.name) + '</strong>'
status = Commment.objects.create(comments=comment, nomi_instance=application)
return HttpResponseRedirect(reverse('applicants', kwargs={'pk': id_reject}))
else:
return render(request, 'no_access.html')
'''
append_user, replace_user: Adds and Removes the current post-holders according to their selection status
'''
@login_required
def append_user(request, pk):
posts = request.user.posts.all()
access = False
for post in posts:
if post.perms == "can ratify the post":
access = True
break
if access:
nomi = Nomination.objects.get(pk=pk)
nomi.append()
return HttpResponseRedirect(reverse('applicants', kwargs={'pk': pk}))
else:
return render(request, 'no_access.html')
@login_required
def end_tenure(request):
posts = request.user.posts.all()
access = False
for post in posts:
if post.perms == "can ratify the post":
access = True
break
if access:
posts = Post.objects.all()
for post in posts:
for holder in post.post_holders.all():
try:
history = PostHistory.objects.get(post=post, user=holder)
if history.end:
if date.today() >= history.end:
post.post_holders.remove(holder)
except ObjectDoesNotExist:
pass
return HttpResponseRedirect(reverse('index'))
else:
return render(request, 'no_access.html')
# Import all posts of all clubs
# Check if their session has expired (31-3-2018 has passed)
# Remove them from the post
# Create the post history (No need, its already created)
## ------------------------------------------------------------------------------------------------------------------ ##
############################################ PROFILE VIEWS ##################################################
## ------------------------------------------------------------------------------------------------------------------ ##
@login_required
def profile_view(request):
pk = request.user.pk
my_posts = Post.objects.filter(post_holders=request.user)
history = PostHistory.objects.filter(user=request.user).order_by('start')
pending_nomi = NominationInstance.objects.filter(user=request.user).filter(nomination__status='Nomination out')
pending_re_nomi = NominationInstance.objects.filter(user=request.user).\
filter(nomination__status='Interview period and Nomination reopened')
pending_nomi = pending_nomi | pending_re_nomi
# show the instances that user finally submitted.. not the saved one
interview_re_nomi = NominationInstance.objects.filter(user=request.user).filter(submission_status = True).filter(nomination__status='Interview period and Reopening initiated')
interview_nomi = NominationInstance.objects.filter(user=request.user).filter(submission_status = True).filter(nomination__status='Interview period')
interview_nomi = interview_nomi | interview_re_nomi
declared_nomi = NominationInstance.objects.filter(user=request.user).filter(submission_status = True).filter(nomination__status='Sent for ratification')
try:
user_profile = UserProfile.objects.get(user__id=pk)
post_exclude_history = [] # In case a post is not registered in history
post_history = []
for his in history:
post_history.append(his.post)
for post in my_posts:
if post not in post_history:
post_exclude_history.append(post)
return render(request, 'profile.html', context={'user_profile': user_profile, 'history': history,
'pending_nomi': pending_nomi, 'declared_nomi': declared_nomi,
'interview_nomi': interview_nomi, 'my_posts': my_posts,
'excluded_posts': post_exclude_history})
except ObjectDoesNotExist:
return HttpResponseRedirect('create')
@login_required
def public_profile(request, pk):
student = UserProfile.objects.get(pk=pk)
student_user = student.user
history = PostHistory.objects.filter(user=student_user)
my_posts = Post.objects.filter(post_holders=student_user)
return render(request, 'public_profile.html', context={'student': student, 'history': history,
'my_posts': my_posts})
def UserProfileUpdate(request,pk):
profile = UserProfile.objects.get(pk = pk)
if profile.user == request.user:
form = ProfileForm(request.POST or None, instance=profile)
if form.is_valid():
form.save()
return HttpResponseRedirect(reverse('profile'))
return render(request, 'nomi/userprofile_form.html', context={'form': form})
else:
return render(request, 'no_access.html')
class CommentUpdate(UpdateView):
model = Commment
fields = ['comments']
def get_success_url(self):
form_pk = self.kwargs['form_pk']
return reverse('nomi_answer', kwargs={'pk': form_pk})
class CommentDelete(DeleteView):
model = Commment
def get_success_url(self):
form_pk = self.kwargs['form_pk']
return reverse('nomi_answer', kwargs={'pk': form_pk})
def all_nominations(request):
all_nomi = Nomination.objects.all().exclude(status='Nomination created')
return render(request, 'all_nominations.html', context={'all_nomi': all_nomi})
|
unknown
|
codeparrot/codeparrot-clean
| ||
# frozen_string_literal: true
require "abstract_unit"
class DebugExceptionsTest < ActionDispatch::IntegrationTest
InterceptedErrorInstance = StandardError.new
class CustomActionableError < StandardError
include ActiveSupport::ActionableError
action "Action 1" do
nil
end
action "Action 2" do
nil
end
end
class SimpleController < ActionController::Base
def hello
self.response_body = "hello"
end
end
class Boomer
attr_accessor :closed
def initialize(detailed = false)
@detailed = detailed
@closed = false
end
# We're obliged to implement this (even though it doesn't actually
# get called here) to properly comply with the Rack SPEC
def each
end
def close
@closed = true
end
def method_that_raises
raise StandardError.new "error in framework"
end
def raise_nested_exceptions
raise_nested_exceptions_third
end
def raise_nested_exceptions_first
raise "First error"
end
def raise_nested_exceptions_second
raise_nested_exceptions_first
rescue
raise "Second error"
end
def raise_nested_exceptions_third
raise_nested_exceptions_second
rescue
raise "Third error"
end
def call(env)
env["action_dispatch.show_detailed_exceptions"] = @detailed
req = ActionDispatch::Request.new(env)
template = ActionView::Template.new(File.binread(__FILE__), __FILE__, ActionView::Template::Handlers::Raw.new, format: :html, locals: [])
case req.path
when "/pass"
[404, { ActionDispatch::Constants::X_CASCADE => "pass" }, self]
when "/not_found"
controller = SimpleController.new
raise AbstractController::ActionNotFound.new(nil, controller, :ello)
when "/runtime_error"
raise RuntimeError
when "/method_not_allowed"
raise ActionController::MethodNotAllowed
when "/intercepted_error"
raise InterceptedErrorInstance
when "/unknown_http_method"
raise ActionController::UnknownHttpMethod
when "/not_implemented"
raise ActionController::NotImplemented
when "/invalid_mimetype"
raise ActionDispatch::Http::MimeNegotiation::InvalidType
when "/not_found_original_exception"
begin
raise AbstractController::ActionNotFound.new
rescue
raise ActionView::Template::Error.new(template)
end
when "/cause_mapped_to_rescue_responses"
begin
raise ActionController::ParameterMissing, :missing_param_key
rescue
raise NameError.new("uninitialized constant Userr")
end
when "/missing_template"
raise ActionView::MissingTemplate.new(%w(foo), "foo/index", %w(foo), false, "mailer")
when "/bad_request"
raise ActionController::BadRequest
when "/missing_keys"
raise ActionController::UrlGenerationError, "No route matches"
when "/parameter_missing"
raise ActionController::ParameterMissing.new(:invalid_param_key, %w(valid_param_key))
when "/original_syntax_error"
eval "broke_syntax =" # `eval` need for raise native SyntaxError at runtime
when "/syntax_error_into_view"
begin
eval "broke_syntax ="
rescue Exception
raise ActionView::Template::Error.new(template)
end
when "/framework_raises"
method_that_raises
when "/nested_exceptions"
raise_nested_exceptions
when %r{/actionable_error}
raise CustomActionableError
when "/utf8_template_error"
begin
eval "“fancy string”"
rescue Exception
raise ActionView::Template::Error.new(template)
end
else
raise "puke!"
end
end
end
def self.build_app(app, *args)
Rack::Lint.new(
ActionDispatch::DebugExceptions.new(
Rack::Lint.new(app), *args,
),
)
end
Interceptor = proc { |request, exception| request.set_header("int", exception) }
BadInterceptor = proc { |request, exception| raise "bad" }
RoutesApp = Struct.new(:routes).new(SharedTestRoutes)
ProductionApp = build_app(Boomer.new(false), RoutesApp)
DevelopmentApp = build_app(Boomer.new(true), RoutesApp)
InterceptedApp = build_app(Boomer.new(true), RoutesApp, :default, [Interceptor])
BadInterceptedApp = build_app(Boomer.new(true), RoutesApp, :default, [BadInterceptor])
ApiApp = build_app(Boomer.new(true), RoutesApp, :api)
test "skip diagnosis if not showing detailed exceptions" do
@app = ProductionApp
assert_raise RuntimeError do
get "/", headers: { "action_dispatch.show_exceptions" => :all }
end
end
test "skip diagnosis if not showing exceptions" do
@app = DevelopmentApp
assert_raise RuntimeError do
get "/", headers: { "action_dispatch.show_exceptions" => :none }
end
end
test "raise an exception on cascade pass" do
@app = ProductionApp
assert_raise ActionController::RoutingError do
get "/pass", headers: { "action_dispatch.show_exceptions" => :all }
end
end
test "closes the response body on cascade pass" do
boomer = Boomer.new(false)
@app = self.class.build_app(boomer)
assert_raise ActionController::RoutingError do
get "/pass", headers: { "action_dispatch.show_exceptions" => :all }
end
assert boomer.closed, "Expected to close the response body"
end
test "returns empty body on HEAD cascade pass" do
@app = DevelopmentApp
head "/pass"
assert_response 404
assert_equal "", body
end
test "displays routes in a table when a RoutingError occurs" do
@app = DevelopmentApp
get "/pass", headers: { "action_dispatch.show_exceptions" => :all }
routing_table = body[/route_table.*<.table>/m]
assert_match "/:controller(/:action)(.:format)", routing_table
assert_match ":controller#:action", routing_table
assert_no_match "<|>", routing_table, "there should not be escaped HTML in the output"
end
test "displays request and response info when a RoutingError occurs" do
@app = DevelopmentApp
get "/pass", headers: { "action_dispatch.show_exceptions" => :all }
assert_select "h2", /Request/
assert_select "h2", /Response/
end
test "rescue with diagnostics message" do
@app = DevelopmentApp
get "/", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 500
assert_match(/<body>/, body)
assert_match(/puke/, body)
get "/not_found", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 404
assert_match(/<body>/, body)
assert_match(/#{AbstractController::ActionNotFound.name}/, body)
get "/method_not_allowed", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 405
assert_match(/<body>/, body)
assert_match(/ActionController::MethodNotAllowed/, body)
get "/unknown_http_method", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 405
assert_match(/<body>/, body)
assert_match(/ActionController::UnknownHttpMethod/, body)
get "/bad_request", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 400
assert_match(/<body>/, body)
assert_match(/ActionController::BadRequest/, body)
get "/parameter_missing", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 400
assert_match(/<body>/, body)
assert_match(/ActionController::ParameterMissing/, body)
get "/invalid_mimetype", headers: { "Accept" => "text/html,*", "action_dispatch.show_exceptions" => :all }
assert_response 406
assert_match(/<body>/, body)
assert_match(/ActionDispatch::Http::MimeNegotiation::InvalidType/, body)
end
test "rescue with text error for xhr request" do
@app = DevelopmentApp
xhr_request_env = { "action_dispatch.show_exceptions" => :all, "HTTP_X_REQUESTED_WITH" => "XMLHttpRequest" }
get "/", headers: xhr_request_env
assert_response 500
assert_no_match(/<header>/, body)
assert_no_match(/<body>/, body)
assert_equal "text/plain", response.media_type
assert_match(/RuntimeError\npuke/, body)
Rails.stub :root, Pathname.new(".") do
get "/", headers: xhr_request_env
assert_response 500
assert_match "Extracted source (around line #", body
assert_select "pre", { count: 0 }, body
end
get "/not_found", headers: xhr_request_env
assert_response 404
assert_no_match(/<body>/, body)
assert_equal "text/plain", response.media_type
assert_match(/#{AbstractController::ActionNotFound.name}/, body)
get "/method_not_allowed", headers: xhr_request_env
assert_response 405
assert_no_match(/<body>/, body)
assert_equal "text/plain", response.media_type
assert_match(/ActionController::MethodNotAllowed/, body)
get "/unknown_http_method", headers: xhr_request_env
assert_response 405
assert_no_match(/<body>/, body)
assert_equal "text/plain", response.media_type
assert_match(/ActionController::UnknownHttpMethod/, body)
get "/bad_request", headers: xhr_request_env
assert_response 400
assert_no_match(/<body>/, body)
assert_equal "text/plain", response.media_type
assert_match(/ActionController::BadRequest/, body)
get "/parameter_missing", headers: xhr_request_env
assert_response 400
assert_no_match(/<body>/, body)
assert_equal "text/plain", response.media_type
assert_match(/ActionController::ParameterMissing/, body)
end
test "rescue with text error and markdown format when text/markdown is preferred" do
@app = DevelopmentApp
get "/", headers: { "Accept" => "text/markdown", "action_dispatch.show_exceptions" => :all }
assert_response 500
assert_no_match(/<body>/, body)
assert_equal "text/markdown", response.media_type
assert_match(/RuntimeError/, body)
assert_match(/puke/, body)
get "/not_found", headers: { "Accept" => "text/markdown", "action_dispatch.show_exceptions" => :all }
assert_response 404
assert_no_match(/<body>/, body)
assert_equal "text/markdown", response.media_type
assert_match(/#{AbstractController::ActionNotFound.name}/, body)
get "/method_not_allowed", headers: { "Accept" => "text/markdown", "action_dispatch.show_exceptions" => :all }
assert_response 405
assert_no_match(/<body>/, body)
assert_equal "text/markdown", response.media_type
assert_match(/ActionController::MethodNotAllowed/, body)
get "/unknown_http_method", headers: { "Accept" => "text/markdown", "action_dispatch.show_exceptions" => :all }
assert_response 405
assert_no_match(/<body>/, body)
assert_equal "text/markdown", response.media_type
assert_match(/ActionController::UnknownHttpMethod/, body)
get "/bad_request", headers: { "Accept" => "text/markdown", "action_dispatch.show_exceptions" => :all }
assert_response 400
assert_no_match(/<body>/, body)
assert_equal "text/markdown", response.media_type
assert_match(/ActionController::BadRequest/, body)
get "/parameter_missing", headers: { "Accept" => "text/markdown", "action_dispatch.show_exceptions" => :all }
assert_response 400
assert_no_match(/<body>/, body)
assert_equal "text/markdown", response.media_type
assert_match(/ActionController::ParameterMissing/, body)
end
test "rescue with JSON error for JSON API request" do
@app = ApiApp
get "/", headers: { "action_dispatch.show_exceptions" => :all }, as: :json
assert_response 500
assert_no_match(/<header>/, body)
assert_no_match(/<body>/, body)
assert_equal "application/json", response.media_type
assert_match(/RuntimeError: puke/, body)
get "/not_found", headers: { "action_dispatch.show_exceptions" => :all }, as: :json
assert_response 404
assert_no_match(/<body>/, body)
assert_equal "application/json", response.media_type
assert_match(/#{AbstractController::ActionNotFound.name}/, body)
get "/method_not_allowed", headers: { "action_dispatch.show_exceptions" => :all }, as: :json
assert_response 405
assert_no_match(/<body>/, body)
assert_equal "application/json", response.media_type
assert_match(/ActionController::MethodNotAllowed/, body)
get "/unknown_http_method", headers: { "action_dispatch.show_exceptions" => :all }, as: :json
assert_response 405
assert_no_match(/<body>/, body)
assert_equal "application/json", response.media_type
assert_match(/ActionController::UnknownHttpMethod/, body)
get "/bad_request", headers: { "action_dispatch.show_exceptions" => :all }, as: :json
assert_response 400
assert_no_match(/<body>/, body)
assert_equal "application/json", response.media_type
assert_match(/ActionController::BadRequest/, body)
get "/parameter_missing", headers: { "action_dispatch.show_exceptions" => :all }, as: :json
assert_response 400
assert_no_match(/<body>/, body)
assert_equal "application/json", response.media_type
assert_match(/ActionController::ParameterMissing/, body)
get "/invalid_mimetype", headers: { "Accept" => "text/html,*", "action_dispatch.show_exceptions" => :all }, as: :json
assert_response 406
assert_no_match(/<body>/, body)
assert_equal "application/json", response.media_type
assert_match(/ActionDispatch::Http::MimeNegotiation::InvalidType/, body)
end
test "rescue with suggestions" do
@app = DevelopmentApp
get "/not_found", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 404
assert_select("b", /Did you mean\?/)
assert_select("li", "hello")
get "/parameter_missing", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 400
assert_select("b", /Did you mean\?/)
assert_select("li", "valid_param_key")
end
test "rescue with HTML format for HTML API request" do
@app = ApiApp
get "/index.html", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 500
assert_match(/<header>/, body)
assert_match(/<body>/, body)
assert_equal "text/html", response.media_type
assert_match(/puke/, body)
end
test "rescue with XML format for XML API requests" do
@app = ApiApp
get "/index.xml", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 500
assert_equal "application/xml", response.media_type
assert_match(/RuntimeError: puke/, body)
end
test "rescue with JSON format as fallback if API request format is not supported" do
Mime::Type.register "text/wibble", :wibble
ActionDispatch::IntegrationTest.register_encoder(:wibble,
param_encoder: -> params { params })
@app = ApiApp
get "/index", headers: { "action_dispatch.show_exceptions" => :all }, as: :wibble
assert_response 500
assert_equal "application/json", response.media_type
assert_match(/RuntimeError: puke/, body)
ensure
Mime::Type.unregister :wibble
end
test "does not show filtered parameters" do
@app = DevelopmentApp
get "/", params: { "foo" => "bar" }, headers: { "action_dispatch.show_exceptions" => :all,
"action_dispatch.parameter_filter" => [:foo] }
assert_response 500
assert_match(ERB::Util.html_escape({ "foo" => "[FILTERED]" }.inspect[1..-2]), body)
end
test "show registered original exception if the last exception is TemplateError" do
@app = DevelopmentApp
get "/not_found_original_exception", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 404
assert_match %r{AbstractController::ActionNotFound}, body
assert_match %r{Showing <i>.*test/dispatch/debug_exceptions_test.rb</i>}, body
end
test "show the last exception and cause even when the cause is mapped to rescue_responses" do
@app = DevelopmentApp
get "/cause_mapped_to_rescue_responses", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 500
assert_match %r{ActionController::ParameterMissing}, body
assert_match %r{NameError}, body
end
test "named URLs missing keys raise 500 level error" do
@app = DevelopmentApp
get "/missing_keys", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 500
assert_match(/ActionController::UrlGenerationError/, body)
end
test "show the controller name in the diagnostics template when controller name is present" do
@app = DevelopmentApp
get("/runtime_error", headers: {
"action_dispatch.show_exceptions" => :all,
"action_dispatch.request.parameters" => {
"action" => "show",
"id" => "unknown",
"controller" => "featured_tile"
}
})
assert_response 500
assert_match(/RuntimeError\n\s+in FeaturedTileController/, body)
end
test "show formatted params" do
@app = DevelopmentApp
params = {
"id" => "unknown",
"someparam" => {
"foo" => "bar",
"abc" => "goo"
}
}
get("/runtime_error", headers: {
"action_dispatch.show_exceptions" => :all,
"action_dispatch.request.parameters" => {
"action" => "show",
"controller" => "featured_tile"
}.merge(params)
})
assert_response 500
assert_includes(body, ERB::Util.html_escape(PP.pp(params, +"", 200)))
end
test "sets the HTTP charset parameter" do
@app = DevelopmentApp
get "/", headers: { "action_dispatch.show_exceptions" => :all }
assert_equal "text/html; charset=utf-8", response.headers["content-type"]
end
test "uses logger from env" do
@app = DevelopmentApp
output = StringIO.new
get "/", headers: { "action_dispatch.show_exceptions" => :all, "action_dispatch.logger" => Logger.new(output) }
assert_match(/puke/, output.rewind && output.read)
end
test "logs at configured log level" do
@app = DevelopmentApp
output = StringIO.new
logger = Logger.new(output)
logger.level = Logger::WARN
get "/", headers: { "action_dispatch.show_exceptions" => :all, "action_dispatch.logger" => logger, "action_dispatch.debug_exception_log_level" => Logger::INFO }
assert_no_match(/puke/, output.rewind && output.read)
get "/", headers: { "action_dispatch.show_exceptions" => :all, "action_dispatch.logger" => logger, "action_dispatch.debug_exception_log_level" => Logger::ERROR }
assert_match(/puke/, output.rewind && output.read)
end
test "logs only what is necessary" do
@app = DevelopmentApp
io = StringIO.new
logger = ActiveSupport::Logger.new(io)
_old, ActionView::Base.logger = ActionView::Base.logger, logger
begin
get "/", headers: { "action_dispatch.show_exceptions" => :all, "action_dispatch.logger" => logger }
ensure
ActionView::Base.logger = _old
end
output = io.rewind && io.read
lines = output.lines
# Other than the first three...
assert_equal([" \n", "RuntimeError (puke!):\n", " \n"], lines.slice!(0, 3))
lines.each do |line|
# .. all the remaining lines should be from the backtrace
assert_match(/:\d+:in /, line)
end
end
test "logs with non active support loggers" do
@app = DevelopmentApp
io = StringIO.new
logger = Logger.new(io)
_old, ActionView::Base.logger = ActionView::Base.logger, logger
begin
assert_nothing_raised do
get "/", headers: { "action_dispatch.show_exceptions" => :all, "action_dispatch.logger" => logger }
end
ensure
ActionView::Base.logger = _old
end
assert_match(/puke/, io.rewind && io.read)
end
test "uses backtrace cleaner from env" do
@app = DevelopmentApp
backtrace_cleaner = ActiveSupport::BacktraceCleaner.new
backtrace_cleaner.stub :clean, ["passed backtrace cleaner"] do
get "/", headers: { "action_dispatch.show_exceptions" => :all, "action_dispatch.backtrace_cleaner" => backtrace_cleaner }
assert_match(/passed backtrace cleaner/, body)
end
end
test "logs exception backtrace when all lines silenced" do
@app = DevelopmentApp
output = StringIO.new
backtrace_cleaner = ActiveSupport::BacktraceCleaner.new
backtrace_cleaner.add_silencer { true }
env = { "action_dispatch.show_exceptions" => :all,
"action_dispatch.logger" => Logger.new(output),
"action_dispatch.backtrace_cleaner" => backtrace_cleaner }
get "/", headers: env
assert_operator((output.rewind && output.read).lines.count, :>, 10)
end
test "doesn't log the framework backtrace when error type is a routing error" do
@app = ProductionApp
output = StringIO.new
backtrace_cleaner = ActiveSupport::BacktraceCleaner.new
backtrace_cleaner.add_silencer { true }
env = { "action_dispatch.show_exceptions" => :all,
"action_dispatch.logger" => Logger.new(output),
"action_dispatch.log_rescued_responses" => true,
"action_dispatch.backtrace_cleaner" => backtrace_cleaner }
assert_raises ActionController::RoutingError do
get "/pass", headers: env
end
log = output.rewind && output.read
assert_includes log, "ActionController::RoutingError (No route matches [GET] \"/pass\")"
assert_equal 3, log.lines.count
end
test "doesn't log the framework backtrace when error type is a invalid mime type" do
@app = ProductionApp
output = StringIO.new
backtrace_cleaner = ActiveSupport::BacktraceCleaner.new
backtrace_cleaner.add_silencer { true }
env = { "Accept" => "text/html,*",
"action_dispatch.show_exceptions" => :all,
"action_dispatch.logger" => Logger.new(output),
"action_dispatch.log_rescued_responses" => true,
"action_dispatch.backtrace_cleaner" => backtrace_cleaner }
assert_raises ActionDispatch::Http::MimeNegotiation::InvalidType do
get "/invalid_mimetype", headers: env
end
log = output.rewind && output.read
assert_includes log, "ActionDispatch::Http::MimeNegotiation::InvalidType (ActionDispatch::Http::MimeNegotiation::InvalidType)"
assert_equal 3, log.lines.count
end
test "skips logging when rescued and log_rescued_responses is false" do
@app = DevelopmentApp
output = StringIO.new
env = { "action_dispatch.show_exceptions" => :all,
"action_dispatch.logger" => Logger.new(output),
"action_dispatch.log_rescued_responses" => false }
get "/parameter_missing", headers: env
assert_response 400
assert_empty (output.rewind && output.read).lines
end
test "does not skip logging when rescued and log_rescued_responses is true" do
@app = DevelopmentApp
output = StringIO.new
env = { "action_dispatch.show_exceptions" => :all,
"action_dispatch.logger" => Logger.new(output),
"action_dispatch.log_rescued_responses" => true }
get "/parameter_missing", headers: env
assert_response 400
assert_not_empty (output.rewind && output.read).lines
end
test "logs exception causes" do
@app = DevelopmentApp
output = StringIO.new
env = { "action_dispatch.show_exceptions" => :all,
"action_dispatch.logger" => Logger.new(output),
"action_dispatch.log_rescued_responses" => true }
get "/nested_exceptions", headers: env
assert_response 500
log = output.rewind && output.read
# Splitting into paragraphs to be easier to see difference/error when there is one
paragraphs = log.split(/\n\s*\n/)
assert_includes(paragraphs[0], <<~MSG.strip)
RuntimeError (Third error)
Caused by: RuntimeError (Second error)
Caused by: RuntimeError (First error)
MSG
assert_includes(paragraphs[1], <<~MSG.strip)
Information for: RuntimeError (Third error):
MSG
if RUBY_VERSION >= "3.4"
# Changes to the format of exception backtraces
# https://bugs.ruby-lang.org/issues/16495 (use single quote instead of backtrace)
# https://bugs.ruby-lang.org/issues/20275 (don't have entry for rescue in)
# And probably more, they now show the class too
assert_match Regexp.new(<<~REGEX.strip), paragraphs[2]
\\A.*in '.*raise_nested_exceptions_third'
.*in '.*raise_nested_exceptions'
REGEX
assert_includes(paragraphs[3], <<~MSG.strip)
Information for cause: RuntimeError (Second error):
MSG
assert_match Regexp.new(<<~REGEX.strip), paragraphs[4]
\\A.*in '.*raise_nested_exceptions_second'
.*in '.*raise_nested_exceptions_third'
.*in '.*raise_nested_exceptions'
REGEX
assert_includes(paragraphs[5], <<~MSG.strip)
Information for cause: RuntimeError (First error):
MSG
assert_match Regexp.new(<<~REGEX.strip), paragraphs[6]
\\A.*in '.*raise_nested_exceptions_first'
.*in '.*raise_nested_exceptions_second'
.*in '.*raise_nested_exceptions_third'
.*in '.*raise_nested_exceptions'
REGEX
else
assert_match Regexp.new(<<~REGEX.strip), paragraphs[2]
\\A.*in `rescue in raise_nested_exceptions_third'
.*in `raise_nested_exceptions_third'
.*in `raise_nested_exceptions'
REGEX
assert_includes(paragraphs[3], <<~MSG.strip)
Information for cause: RuntimeError (Second error):
MSG
assert_match Regexp.new(<<~REGEX.strip), paragraphs[4]
\\A.*in `rescue in raise_nested_exceptions_second'
.*in `raise_nested_exceptions_second'
.*in `raise_nested_exceptions_third'
.*in `raise_nested_exceptions'
REGEX
assert_includes(paragraphs[5], <<~MSG.strip)
Information for cause: RuntimeError (First error):
MSG
assert_match Regexp.new(<<~REGEX.strip), paragraphs[6]
\\A.*in `raise_nested_exceptions_first'
.*in `raise_nested_exceptions_second'
.*in `raise_nested_exceptions_third'
.*in `raise_nested_exceptions'
REGEX
end
end
test "display backtrace when error type is SyntaxError" do
@app = DevelopmentApp
get "/original_syntax_error", headers: { "action_dispatch.backtrace_cleaner" => ActiveSupport::BacktraceCleaner.new }
assert_response 500
assert_select "#Application-Trace-0" do
assert_select "code", /syntax error, unexpected|syntax errors found/
end
end
test "display backtrace on template missing errors" do
@app = DevelopmentApp
get "/missing_template"
assert_select "header h1", /Template is missing/
assert_select "#container h2", /^Missing template/
assert_select "#Application-Trace-0"
assert_select "#Framework-Trace-0"
assert_select "#Full-Trace-0"
assert_select "h2", /Request/
end
test "display backtrace when error type is SyntaxError wrapped by ActionView::Template::Error" do
@app = DevelopmentApp
get "/syntax_error_into_view", headers: { "action_dispatch.backtrace_cleaner" => ActiveSupport::BacktraceCleaner.new }
assert_response 500
assert_select "#Application-Trace-0" do
assert_select "code", /syntax error, unexpected|syntax errors found/
end
assert_match %r{Showing <i>.*test/dispatch/debug_exceptions_test.rb</i>}, body
end
test "debug exceptions app shows user code that caused the error in source view" do
@app = DevelopmentApp
Rails.stub :root, Pathname.new(".") do
cleaner = ActiveSupport::BacktraceCleaner.new.tap do |bc|
bc.add_silencer { |line| line.match?(/method_that_raises/) }
bc.add_silencer { |line| !line.match?(%r{test/dispatch/debug_exceptions_test.rb}) }
end
get "/framework_raises", headers: { "action_dispatch.backtrace_cleaner" => cleaner }
# Assert correct error
assert_response 500
assert_select "div.exception-message" do
assert_select "div", /error in framework/
end
# assert source view line is the call to method_that_raises
assert_select "div.source:not(.hidden)" do
assert_select "pre .line.active", /method_that_raises/
end
# assert first source view (hidden) that throws the error
assert_select "div.source" do
assert_select "pre .line.active", /raise StandardError\.new/
end
# assert application trace refers to line that calls method_that_raises is first
assert_select "#Application-Trace-0" do
assert_select "code a:first", %r{test/dispatch/debug_exceptions_test\.rb:\d+:in .*call}
end
# assert framework trace that threw the error is first
assert_select "#Framework-Trace-0" do
assert_select "code a:first", /method_that_raises/
end
end
end
test "invoke interceptors before rendering" do
@app = InterceptedApp
get "/intercepted_error", headers: { "action_dispatch.show_exceptions" => :all }
assert_equal InterceptedErrorInstance, request.get_header("int")
end
test "bad interceptors doesn't debug exceptions" do
@app = BadInterceptedApp
get "/puke", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 500
assert_match(/puke/, body)
end
test "debug exceptions app shows all the nested exceptions in source view" do
@app = DevelopmentApp
Rails.stub :root, Pathname.new(".") do
cleaner = ActiveSupport::BacktraceCleaner.new.tap do |bc|
bc.add_silencer { |line| !line.match?(%r{test/dispatch/debug_exceptions_test.rb}) }
end
get "/nested_exceptions", headers: { "action_dispatch.backtrace_cleaner" => cleaner }
# Assert correct error
assert_response 500
assert_select "div.exception-message" do
assert_select "div", /Third error/
end
# assert source view line shows the last error
assert_select "div.source:not(.hidden)" do
assert_select "pre .line.active", /raise "Third error"/
end
if RUBY_VERSION >= "3.4"
# Possible Ruby 3.4-dev bug: https://bugs.ruby-lang.org/issues/19117#note-45
# assert application trace refers to line that raises the last exception
assert_select "#Application-Trace-0" do
assert_select "code a:first", %r{in '.*raise_nested_exceptions_third'}
end
# assert the second application trace refers to the line that raises the second exception
assert_select "#Application-Trace-1" do
assert_select "code a:first", %r{in '.*raise_nested_exceptions_second'}
end
else
# assert application trace refers to line that raises the last exception
assert_select "#Application-Trace-0" do
assert_select "code a:first", %r{in [`']rescue in .*raise_nested_exceptions_third'}
end
# assert the second application trace refers to the line that raises the second exception
assert_select "#Application-Trace-1" do
assert_select "code a:first", %r{in [`']rescue in .*raise_nested_exceptions_second'}
end
end
# assert the third application trace refers to the line that raises the first exception
assert_select "#Application-Trace-2" do
assert_select "code a:first", %r{in [`'].*raise_nested_exceptions_first'}
end
end
end
test "shows the link to edit the file in the editor" do
@app = DevelopmentApp
ActiveSupport::Editor.stub(:current, ActiveSupport::Editor.find("atom")) do
get "/actionable_error"
assert_select "code a.edit-icon"
assert_includes body, "atom://core/open"
end
end
test "editor can handle syntax errors" do
@app = DevelopmentApp
ActiveSupport::Editor.stub(:current, ActiveSupport::Editor.find("atom")) do
get "/syntax_error_into_view"
assert_response 500
assert_select "#Application-Trace-0" do
assert_select "code", /syntax error, unexpected|syntax errors found/
end
end
end
test "shows a buttons for every action in an actionable error" do
@app = DevelopmentApp
Rails.stub :root, Pathname.new(".") do
cleaner = ActiveSupport::BacktraceCleaner.new.tap do |bc|
bc.add_silencer { |line| !line.match?(%r{test/dispatch/debug_exceptions_test.rb}) }
end
get "/actionable_error", headers: { "action_dispatch.backtrace_cleaner" => cleaner }
# Assert correct error
assert_response 500
assert_select 'input[value="Action 1"]'
assert_select 'input[value="Action 2"]'
end
end
test "debug exceptions app shows diagnostics when malformed query parameters are provided" do
@app = DevelopmentApp
get "/bad_request?x[y]=1&x[y][][w]=2"
assert_response 400
assert_match "ActionController::BadRequest", body
end
test "debug exceptions app shows diagnostics when malformed query parameters are provided by XHR" do
@app = DevelopmentApp
xhr_request_env = { "action_dispatch.show_exceptions" => :all, "HTTP_X_REQUESTED_WITH" => "XMLHttpRequest" }
get "/bad_request?x[y]=1&x[y][][w]=2", headers: xhr_request_env
assert_response 400
assert_match "ActionController::BadRequest", body
end
test "debug exceptions app shows diagnostics for template errors that contain UTF-8 characters" do
@app = DevelopmentApp
io = StringIO.new
logger = ActiveSupport::Logger.new(io)
get "/utf8_template_error", headers: { "action_dispatch.logger" => logger }
assert_response 500
assert_select "#container p", /Showing #{__FILE__} where line #\d+ raised/
assert_select "#container code", /undefined local variable or method ['`]string”'/
end
test "includes copy button in error pages" do
@app = DevelopmentApp
get "/", headers: { "action_dispatch.show_exceptions" => :all }
assert_response 500
assert_match %r{<button onclick="copyAsText\.bind\(this\)\(\)">Copy as text</button>}, body
assert_match %r{<script type="text/plain" id="exception-message-for-copy">.*RuntimeError \(puke}m, body
end
test "copy button not shown for XHR requests" do
@app = DevelopmentApp
get "/", headers: {
"action_dispatch.show_exceptions" => :all,
"HTTP_X_REQUESTED_WITH" => "XMLHttpRequest"
}
assert_response 500
assert_no_match %r{<button}, body
assert_no_match %r{<script}, body
end
test "exception message includes causes for nested exceptions" do
@app = DevelopmentApp
get "/nested_exceptions", headers: { "action_dispatch.show_exceptions" => :all }
script_content = body[%r{<script type="text/plain" id="exception-message-for-copy">(.*?)</script>}m, 1]
assert_match %r{Third error}, script_content
assert_match %r{Caused by:.*Second error}m, script_content
end
end
|
ruby
|
github
|
https://github.com/rails/rails
|
actionpack/test/dispatch/debug_exceptions_test.rb
|
/// One or many patterns.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub enum Patterns {
Single(String),
List(Vec<String>),
}
impl Patterns {
pub fn is_empty(&self) -> bool {
match self {
Patterns::Single(_) => false,
Patterns::List(pats) => pats.is_empty(),
}
}
}
/// Helper trait for type that could be converted to one or more path patterns.
pub trait IntoPatterns {
fn patterns(&self) -> Patterns;
}
impl IntoPatterns for String {
fn patterns(&self) -> Patterns {
Patterns::Single(self.clone())
}
}
impl IntoPatterns for &String {
fn patterns(&self) -> Patterns {
(*self).patterns()
}
}
impl IntoPatterns for str {
fn patterns(&self) -> Patterns {
Patterns::Single(self.to_owned())
}
}
impl IntoPatterns for &str {
fn patterns(&self) -> Patterns {
(*self).patterns()
}
}
impl IntoPatterns for bytestring::ByteString {
fn patterns(&self) -> Patterns {
Patterns::Single(self.to_string())
}
}
impl IntoPatterns for Patterns {
fn patterns(&self) -> Patterns {
self.clone()
}
}
impl<T: AsRef<str>> IntoPatterns for Vec<T> {
fn patterns(&self) -> Patterns {
let mut patterns = self.iter().map(|v| v.as_ref().to_owned());
match patterns.size_hint() {
(1, _) => Patterns::Single(patterns.next().unwrap()),
_ => Patterns::List(patterns.collect()),
}
}
}
macro_rules! array_patterns_single (($tp:ty) => {
impl IntoPatterns for [$tp; 1] {
fn patterns(&self) -> Patterns {
Patterns::Single(self[0].to_owned())
}
}
});
macro_rules! array_patterns_multiple (($tp:ty, $str_fn:expr, $($num:tt) +) => {
// for each array length specified in space-separated $num
$(
impl IntoPatterns for [$tp; $num] {
fn patterns(&self) -> Patterns {
Patterns::List(self.iter().map($str_fn).collect())
}
}
)+
});
array_patterns_single!(&str);
array_patterns_multiple!(&str, |&v| v.to_owned(), 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16);
array_patterns_single!(String);
array_patterns_multiple!(String, |v| v.clone(), 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16);
|
rust
|
github
|
https://github.com/actix/actix-web
|
actix-router/src/pattern.rs
|
from __future__ import division, print_function, absolute_import
import numpy as np
class _FakeMatrix(object):
def __init__(self, data):
self._data = data
self.__array_interface__ = data.__array_interface__
class _FakeMatrix2(object):
def __init__(self, data):
self._data = data
def __array__(self):
return self._data
def _get_array(shape, dtype):
"""
Get a test array of given shape and data type.
Returned NxN matrices are posdef, and 2xN are banded-posdef.
"""
if len(shape) == 2 and shape[0] == 2:
# yield a banded positive definite one
x = np.zeros(shape, dtype=dtype)
x[0,1:] = -1
x[1] = 2
return x
elif len(shape) == 2 and shape[0] == shape[1]:
# always yield a positive definite matrix
x = np.zeros(shape, dtype=dtype)
j = np.arange(shape[0])
x[j,j] = 2
x[j[:-1],j[:-1]+1] = -1
x[j[:-1]+1,j[:-1]] = -1
return x
else:
np.random.seed(1234)
return np.random.randn(*shape).astype(dtype)
def _id(x):
return x
def assert_no_overwrite(call, shapes, dtypes=None):
"""
Test that a call does not overwrite its input arguments
"""
if dtypes is None:
dtypes = [np.float32, np.float64, np.complex64, np.complex128]
for dtype in dtypes:
for order in ["C", "F"]:
for faker in [_id, _FakeMatrix, _FakeMatrix2]:
orig_inputs = [_get_array(s, dtype) for s in shapes]
inputs = [faker(x.copy(order)) for x in orig_inputs]
call(*inputs)
msg = "call modified inputs [%r, %r]" % (dtype, faker)
for a, b in zip(inputs, orig_inputs):
np.testing.assert_equal(a, b, err_msg=msg)
|
unknown
|
codeparrot/codeparrot-clean
| ||
/*
* Copyright 2002-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.context.conversionservice;
/**
* @author Keith Donald
*/
public class Bar {
private String value;
public Bar(String value) {
this.value = value;
}
public String getValue() {
return value;
}
}
|
java
|
github
|
https://github.com/spring-projects/spring-framework
|
spring-context/src/test/java/org/springframework/context/conversionservice/Bar.java
|
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.security;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNull;
public class TestHttpCrossOriginFilterInitializer {
@Test
public void testGetFilterParameters() {
// Initialize configuration object
Configuration conf = new Configuration();
conf.set(HttpCrossOriginFilterInitializer.PREFIX + "rootparam",
"rootvalue");
conf.set(HttpCrossOriginFilterInitializer.PREFIX + "nested.param",
"nestedvalue");
conf.set("outofscopeparam", "outofscopevalue");
// call function under test
Map<String, String> filterParameters = HttpCrossOriginFilterInitializer
.getFilterParameters(conf, HttpCrossOriginFilterInitializer.PREFIX);
// retrieve values
String rootvalue = filterParameters.get("rootparam");
String nestedvalue = filterParameters.get("nested.param");
String outofscopeparam = filterParameters.get("outofscopeparam");
// verify expected values are in place
assertEquals("rootvalue", rootvalue, "Could not find filter parameter");
assertEquals("nestedvalue", nestedvalue, "Could not find filter parameter");
assertNull(outofscopeparam, "Found unexpected value in filter parameters");
}
}
|
java
|
github
|
https://github.com/apache/hadoop
|
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestHttpCrossOriginFilterInitializer.java
|
// Copyright 2016 The Cockroach Authors.
//
// Use of this software is governed by the CockroachDB Software License
// included in the /LICENSE file.
package base
import (
"context"
"strconv"
"sync/atomic"
"github.com/cockroachdb/cockroach/pkg/roachpb"
"github.com/cockroachdb/cockroach/pkg/util"
"github.com/cockroachdb/errors"
"github.com/cockroachdb/redact"
)
// NodeIDContainer is used to share a single roachpb.NodeID or
// SQLInstanceID instance between multiple layers. It allows setting
// and getting the value. Once a value is set, the value cannot
// change.
// Note: we plan to rename it to denote its generic nature, see
// https://github.com/cockroachdb/cockroach/pull/73309
type NodeIDContainer struct {
_ util.NoCopy
// nodeID represents either a NodeID or a SQLInstanceID (if
// sqlInstance is set). It is accessed atomically.
nodeID int32
// sqlInstance is set to true when the node is meant to be a
// standalone SQL instance.
standaloneSQLInstance bool
// If nodeID has been set, str represents nodeID converted to string. We
// precompute this value to speed up String() and keep it from allocating
// memory dynamically.
str atomic.Value
// OnSet, if non-nil, is called after the ID is set with the new value.
OnSet func(roachpb.NodeID)
}
// String returns the node ID, or "?" if it is unset.
func (n *NodeIDContainer) String() string {
s := n.str.Load()
if s == nil {
if n.standaloneSQLInstance {
return "sql?"
}
return "?"
}
return s.(string)
}
var _ redact.SafeValue = &NodeIDContainer{}
// SafeValue implements the redact.SafeValue interface.
func (n *NodeIDContainer) SafeValue() {}
// Get returns the current node ID; 0 if it is unset.
//
// Note that Get() returns a value of type roachpb.NodeID even though
// the container is configured to store SQL instance IDs. This is
// because components that call Get() do so in a context where the
// type distinction between NodeID and SQLInstanceID does not matter,
// and we benefit from using a single type instead of duplicating the
// code. See for example the `rpc` package, where server-to-server
// RPCs get addressed with server IDs regardless of whether they are
// KV nodes or SQL instances.
// See also: https://github.com/cockroachdb/cockroach/pull/73309
func (n *NodeIDContainer) Get() roachpb.NodeID {
return roachpb.NodeID(atomic.LoadInt32(&n.nodeID))
}
// Set sets the current node ID. If it is already set, the value must match.
func (n *NodeIDContainer) Set(ctx context.Context, val roachpb.NodeID) {
n.setInternal(ctx, int32(val))
}
func (n *NodeIDContainer) setInternal(ctx context.Context, val int32) {
if val <= 0 {
panic(errors.AssertionFailedf("trying to set invalid NodeID: %d", val))
}
oldVal := atomic.SwapInt32(&n.nodeID, val)
if oldVal != 0 && oldVal != val {
panic(errors.AssertionFailedf("different IDs set: %d, then %d", oldVal, val))
}
prefix := ""
if n.standaloneSQLInstance {
prefix = "sql"
}
n.str.Store(prefix + strconv.Itoa(int(val)))
if oldVal == 0 && n.OnSet != nil {
n.OnSet(roachpb.NodeID(val))
}
}
// Reset changes the NodeID regardless of the old value.
//
// Should only be used in testing code.
func (n *NodeIDContainer) Reset(val roachpb.NodeID) {
atomic.StoreInt32(&n.nodeID, int32(val))
n.str.Store(strconv.Itoa(int(val)))
}
// StoreIDContainer is added as a logtag in the pebbleLogger's context.
// The storeID value is later set atomically. The storeID can only be
// set after engine creation because the storeID is determined only after the
// pebbleLogger's context is created.
type StoreIDContainer struct {
_ util.NoCopy
// storeID is accessed atomically.
storeID int32
// If storeID has been set, str represents storeID converted to string. We
// precompute this value to speed up String() and keep it from allocating
// memory dynamically.
str atomic.Value
}
// TempStoreID is used as the store id for a temp pebble engine's log
const TempStoreID = -1
// String returns "temp" for temp stores, and the storeID for main
// stores if they haven't been initialized. If a main store hasn't
// been initialized, then "?" is returned.
func (s *StoreIDContainer) String() string {
str := s.str.Load()
if str == nil {
return "?"
}
return str.(string)
}
var _ redact.SafeValue = &StoreIDContainer{}
// SafeValue implements the redact.SafeValue interface.
func (s *StoreIDContainer) SafeValue() {}
// Get returns the current storeID; 0 if it is unset.
func (s *StoreIDContainer) Get() int32 {
return atomic.LoadInt32(&s.storeID)
}
// Set sets the current storeID. If it is already set, the value should match.
func (s *StoreIDContainer) Set(ctx context.Context, val int32) {
if val != TempStoreID && val <= 0 {
panic(errors.AssertionFailedf("trying to set invalid storeID for the store in the Pebble log: %d", val))
}
oldVal := atomic.SwapInt32(&s.storeID, val)
if oldVal != 0 && oldVal != val {
panic(errors.AssertionFailedf("different storeIDs set for the store in the Pebble log: %d, then %d",
oldVal, val))
}
if val == TempStoreID {
s.str.Store("temp")
} else {
s.str.Store(strconv.Itoa(int(val)))
}
}
// A SQLInstanceID is an ephemeral ID assigned to a running instance of the SQL
// server. This is distinct from a NodeID, which is a long-lived identifier
// assigned to a node in the KV layer which is unique across all KV nodes in the
// cluster and persists across restarts. Instead, a Instance is similar to a
// process ID from the unix world: an integer assigned to the SQL server
// on process start which is unique across all SQL server processes running
// on behalf of the tenant, while the SQL server is running.
type SQLInstanceID int32
func (s SQLInstanceID) String() string {
if s == 0 {
return "?"
}
return strconv.Itoa(int(s))
}
// SafeValue implements the redact.SafeValue interface.
func (s SQLInstanceID) SafeValue() {}
// SQLIDContainer is a variant of NodeIDContainer that contains SQL instance IDs.
type SQLIDContainer NodeIDContainer
// NewSQLIDContainerForNode sets up a SQLIDContainer which serves the underlying
// NodeID as the SQL instance ID.
func NewSQLIDContainerForNode(nodeID *NodeIDContainer) *SQLIDContainer {
if nodeID.standaloneSQLInstance {
// This assertion exists to prevent misuse of the API, where a
// caller would call NewSQLIDContainerForNode() once, cast the
// result type to `*NodeIDContainer`, then mistakenly call
// NewSQLIDContainerForNode() again.
panic(errors.AssertionFailedf("programming error: container is already for a standalone SQL instance"))
}
return (*SQLIDContainer)(nodeID)
}
// SwitchToSQLIDContainer changes a NodeIDContainer to become able to
// store SQL instance IDs for standalone SQL instances.
//
// After it has been switched, the original container will report the
// SQL instance ID value as NodeID via its Get() method, under the
// assumption that anything using that ID actually needs the SQL
// Instance ID.
func (n *NodeIDContainer) SwitchToSQLIDContainerForStandaloneSQLInstance() *SQLIDContainer {
sc := NewSQLIDContainerForNode(n)
sc.standaloneSQLInstance = true
return sc
}
// SetSQLInstanceID sets the SQL instance ID. It returns an error if
// we attempt to set an instance ID when the nodeID has already been
// initialized.
func (c *SQLIDContainer) SetSQLInstanceID(ctx context.Context, sqlInstanceID SQLInstanceID) error {
(*NodeIDContainer)(c).setInternal(ctx, int32(sqlInstanceID))
return nil
}
// OptionalNodeID returns the NodeID and true, if the former is exposed.
// Otherwise, returns zero and false.
func (c *SQLIDContainer) OptionalNodeID() (roachpb.NodeID, bool) {
if (*NodeIDContainer)(c).standaloneSQLInstance {
return 0, false
}
return (*NodeIDContainer)(c).Get(), true
}
// SQLInstanceID returns the wrapped SQLInstanceID.
func (c *SQLIDContainer) SQLInstanceID() SQLInstanceID {
return SQLInstanceID((*NodeIDContainer)(c).Get())
}
// SafeValue implements the redact.SafeValue interface.
func (c *SQLIDContainer) SafeValue() {}
func (c *SQLIDContainer) String() string { return (*NodeIDContainer)(c).String() }
// TestingIDContainer is an SQLIDContainer with hard-coded SQLInstanceID of 10.
var TestingIDContainer = func() *SQLIDContainer {
var c NodeIDContainer
sc := c.SwitchToSQLIDContainerForStandaloneSQLInstance()
if err := sc.SetSQLInstanceID(context.Background(), 10); err != nil {
panic(err)
}
return sc
}()
|
go
|
github
|
https://github.com/cockroachdb/cockroach
|
pkg/base/node_id.go
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# File: multigpu.py
# Author: Yuxin Wu <ppwwyyxxc@gmail.com>
import tensorflow as tf
import itertools, re
from six.moves import zip, range
from ..models import TowerContext
from ..utils import *
from ..utils.concurrency import LoopThread
from ..tfutils.summary import summary_moving_average
from ..tfutils.modelutils import describe_model
from ..tfutils import *
from .trainer import QueueInputTrainer
__all__ = ['AsyncMultiGPUTrainer', 'SyncMultiGPUTrainer']
class MultiGPUTrainer(QueueInputTrainer):
""" Base class for multi-gpu training"""
def __init__(self, config, input_queue=None, predict_tower=None):
super(MultiGPUTrainer, self).__init__(config, input_queue, predict_tower)
self.dequed_inputs = []
@staticmethod
def _average_grads(tower_grads):
ret = []
with tf.name_scope('AvgGrad'):
for grad_and_vars in zip(*tower_grads):
v = grad_and_vars[0][1]
try:
grad = tf.add_n([x[0] for x in grad_and_vars]) / float(len(tower_grads))
except:
logger.error("Error while processing gradients of {}".format(v.name))
raise
ret.append((grad, v))
return ret
def _multi_tower_grads(self):
logger.info("Training a model of {} tower".format(
len(self.config.tower)))
grad_list = []
for idx, t in enumerate(self.config.tower):
with tf.device('/gpu:{}'.format(t)), \
TowerContext('tower{}'.format(idx)) as scope:
logger.info("Building graph for training tower {}...".format(idx))
model_inputs = self._get_model_inputs() # each tower dequeue from input queue
self.dequed_inputs.append(model_inputs)
self.model.build_graph(model_inputs)
cost_var = self.model.get_cost() # build tower
# TODO gate_gradienst=0 seems to be faster?
grad_list.append(
self.config.optimizer.compute_gradients(cost_var, gate_gradients=0))
if idx == 0:
tf.add_to_collection(MOVING_SUMMARY_VARS_KEY, cost_var)
tf.get_variable_scope().reuse_variables()
# avoid repeated summary from each device
backup = backup_collection(SUMMARY_BACKUP_KEYS)
restore_collection(backup)
return grad_list
class SyncMultiGPUTrainer(MultiGPUTrainer):
def train(self):
self.init_session_and_coord()
self._build_enque_thread()
grad_list = self._multi_tower_grads()
grads = MultiGPUTrainer._average_grads(grad_list)
grads = self.process_grads(grads)
self.train_op = tf.group(
self.config.optimizer.apply_gradients(grads, get_global_step_var()),
summary_moving_average(), name='train_op')
describe_model()
# [debug]: do nothing in training
#self.train_op = self.dequed_inputs[0][0] + self.dequed_inputs[1][0]
self.main_loop()
class AsyncMultiGPUTrainer(MultiGPUTrainer):
def train(self):
self.init_session_and_coord()
self._build_enque_thread()
grad_list = self._multi_tower_grads()
# pretend to average the grads, in order to make async and
# sync have consistent effective learning rate
def scale(grads):
with tf.name_scope('AsyncScaleGrad'):
return [(grad / len(self.config.tower) if grad is not None else None, var)
for grad, var in grads]
grad_list = map(scale, grad_list)
grad_list = [self.process_grads(g) for g in grad_list]
# use grad from the first tower for iteration in main thread
self.train_op = tf.group(
self.config.optimizer.apply_gradients(grad_list[0], get_global_step_var()),
summary_moving_average(), name='train_op')
describe_model()
self._start_async_threads(grad_list)
self.main_loop()
def _start_async_threads(self, grad_list):
# prepare train_op for the rest of the towers
# itertools.count is atomic w.r.t. python threads
self.async_step_counter = itertools.count()
self.training_threads = []
for k in range(1, len(self.config.tower)):
train_op = self.config.optimizer.apply_gradients(grad_list[k])
def f(op=train_op): # avoid late-binding
self.sess.run([op])
next(self.async_step_counter)
th = LoopThread(f)
th.pause()
th.start()
self.training_threads.append(th)
self.async_running = False
def run_step(self):
if not self.async_running:
self.async_running = True
for th in self.training_threads: # resume all threads
th.resume()
next(self.async_step_counter)
super(AsyncMultiGPUTrainer, self).run_step()
def _trigger_epoch(self):
self.async_running = False
for th in self.training_threads:
th.pause()
try:
async_step_total_cnt = int(re.findall(
'[0-9]+', self.async_step_counter.__str__())[0])
self.write_scalar_summary(
'async_global_step', async_step_total_cnt)
except:
pass
super(AsyncMultiGPUTrainer, self)._trigger_epoch()
|
unknown
|
codeparrot/codeparrot-clean
| ||
# -*- coding: utf-8 -*-
"""
@author: Jeff Cavner
@contact: jcavner@ku.edu
@license: gpl2
@copyright: Copyright (C) 2014, University of Kansas Center for Research
Lifemapper Project, lifemapper [at] ku [dot] edu,
Biodiversity Institute,
1345 Jayhawk Boulevard, Lawrence, Kansas, 66045, USA
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or (at
your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301, USA.
"""
import os
import types
import zipfile
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from qgis.core import *
from lifemapperTools.tools.ui_listPALayersDialog import Ui_Dialog
from lifemapperTools.tools.controller import _Controller
from lifemapperTools.common.pluginconstants import ListExperiments
from lifemapperTools.tools.uploadLayers import UploadDialog
from lifemapperTools.tools.radTable import RADTable
class ListPALayersDialog(_Controller,QDialog, Ui_Dialog):
# .............................................................................
# Constructor
# .............................................................................
def __init__(self, iface, RADids=None, inputs=None, client=None, epsg=None,
mapunits=None,resume=False):
QDialog.__init__(self)
#self.setWindowFlags(self.windowFlags() & Qt.WindowMinimizeButtonHint)
self.setupUi()
self.interface = iface
self.client = client
if resume:
self.interface.addProject(resume)
cc = self.rejectBut
bok = QWidget()
self.inputs = inputs
_Controller.__init__(self, iface, BASE_URL=ListExperiments.BASE_URL,
STATUS_URL=ListExperiments.STATUS_URL,
REST_URL=ListExperiments.REST_URL,
cancel_close=cc, ids=RADids, okayButton=bok,
initializeWithData=True,outputfunc=self.showTable,
requestfunc=self.client.rad.getPALayers, inputs=inputs,
client=client)
#try:
# items = self.listPALayers(inputs)
#except:
# message = "There is a problem with the presense absence listing service"
# msgBox = QMessageBox.information(self,
# "Problem...",
# message,
# QMessageBox.Ok)
#else:
# self.showTable(items)
# this may need to list layers outside of the controller also
self.expEPSG = epsg
self.mapunits = mapunits
# ..........................................................................
def listPALayers(self,inputs):
"""
@summary: lists rad experiments
@return: returns a list of experiment atoms
"""
items = None
try:
items = self.client.rad.getPALayers(**inputs)
except:
items = None
message = "There is a problem with the presence absence listing service"
messageBox = QMessageBox.warning(self,
"Problem...",
message,
QMessageBox.Ok)
return items
# ..........................................................................
def deletePALayers(self):
# get the selection model
selModel = self.table.tableView.selectionModel()
if not(selModel.hasSelection()):
QMessageBox.warning(self,"status: ",
"Please select one experiment")
return
rowIdxs = [row.row() for row in selModel.selectedRows()]
paLyrIds = [self.table.tableView.model().data[idx][1] for idx in rowIdxs]
for paLyrId in paLyrIds:
try:
print self.inputs['expId']," ",paLyrId
r = self.client.rad.removePALayerFromExperiment(self.inputs['expId'],paLyrId)
except Exception,e:
print str(e)
lyrs = self.listPALayers(self.inputs)
self.showTable(lyrs)
# ...........................................................................
def intersectPAM(self):
self.close()
inputs = {}
QMessageBox.warning(self,"status: ",
str(self.table.tableView.model().data))
# ...........................................................................
def setParams(self):
update = 0
layercount = len(self.table.tableView.model().data)
for record in enumerate(self.table.tableView.model().data):
try:
inputs = {'attrPresence':record[1][3],'minPresence':record[1][4],
'maxPresence':record[1][5], 'percentPresence':record[1][6]}
# this adds the expId
inputs.update(self.inputs)
inputs.update({'lyrId':record[1][1]})
addresponse = self.client.rad.addPALayer(**inputs)
except:
message = 'Could not add layer '+str(record[1][0]) +" to experiment"
QMessageBox.warning(self,"status: ", message)
else:
if addresponse:
update += 1
message = 'Updated '+str(update) +" of "+ str(layercount) +" layers"
QMessageBox.information(self,"status: ", message)
# ...........................................................................
def cleanInputGridLayout(self):
"""@summary: cleans out the input grid layout"""
if not(self.gridLayout_input.isEmpty()):
for childindex in range(0,self.gridLayout_input.count()):
item = self.gridLayout_input.takeAt(0)
if not(type(item) is types.NoneType):
item.widget().deleteLater()
self.gridLayout_input.update()
# ...........................................................................
def getCount(self):
try:
pass
except:
pass
# ...........................................................................
def showTable(self, items, model):
try:
if len(items) == 0:
raise Exception,"No Layers"
data = [[o.name,o.id, o.attrPresence,
o.minPresence,o.maxPresence,o.percentPresence] for o in items]
self.table = RADTable(data)
headerList = ['Layer title', 'Lyr id', 'Attr Presence','min Presence','max Presence',
'Percent Presence']
self.tableview = self.table.createTable(headerList,editsIndexList=[999])
self.tableview.setSelectionBehavior(QAbstractItemView.SelectRows)
self.tableview.setSelectionMode(QAbstractItemView.MultiSelection)
self.holderGroup.hide()
self.gridLayout.addWidget(self.tableview,1,1,1,1)
header = self.tableview.horizontalHeader()
#self.ColumnSet.setEnabled(False)
#self.setAllColumnButton.setEnabled(False)
#self.setParamsBut.setEnabled(False)
except Exception, e:
self.loadTabelLabel.setText("No layers to view")
self.addLayersBut = QPushButton("Add Layers",self)
self.buttonBox.addButton(self.addLayersBut, QDialogButtonBox.ActionRole)
#QObject.connect(self.addLayersBut, SIGNAL("clicked()"), self.openAddLayers)
self.addLayersBut.clicked.connect(self.openAddLayers)
message = "There are no presence absence layers for this experiment"
msgBox = QMessageBox.information(self,
"Problem...",
message,
QMessageBox.Ok)
# ...........................................................................
def openAddLayers(self):
self.close()
d = UploadDialog( self.iface, inputs = self.inputs,
client = self.client, epsg=self.expEPSG,
mapunits=self.mapunits )
d.exec_()
# ...........................................................................
def makeEditable(self, section):
if 'id' not in self.table.tableModel.headerdata[section] and \
section not in self.table.tableModel.controlIndexes:
self.currentsection = section
self.table.tableModel.editIndexes = [section]
else:
self.table.tableModel.editIndexes = []
# ...........................................................................
def addWMS(self,index):
if index.column() in self.table.tableModel.controlIndexes:
message = "This functionality will be available in a later release"
msgBox = QMessageBox.information(self,
"Info...",
message,
QMessageBox.Ok)
return
# ...........................................................................
def help(self):
self.help = QWidget()
self.help.setWindowTitle('Lifemapper Help')
self.help.resize(600, 400)
self.help.setMinimumSize(600,400)
self.help.setMaximumSize(1000,1000)
layout = QVBoxLayout()
helpDialog = QTextBrowser()
helpDialog.setOpenExternalLinks(True)
#helpDialog.setSearchPaths(['documents'])
helppath = os.path.dirname(os.path.realpath(__file__))+'/documents/help.html'
helpDialog.setSource(QUrl.fromLocalFile(helppath))
helpDialog.scrollToAnchor('listPALayers')
layout.addWidget(helpDialog)
self.help.setLayout(layout)
if self.isModal():
self.setModal(False)
self.help.show()
if __name__ == "__main__":
#
import sys
import_path = "/home/jcavner/ghWorkspace/LmQGIS.git/lifemapperTools/"
sys.path.append(os.path.join(import_path, 'LmShared'))
configPath = os.path.join(import_path, 'config', 'config.ini')
os.environ["LIFEMAPPER_CONFIG_FILE"] = configPath
from LmClient.lmClientLib import LMClient
client = LMClient()
client.login(userId='', pwd='')
qApp = QApplication(sys.argv)
d = ListPALayersDialog(None,inputs={'expId':1055},client=client)
#d = AdvancedAlgo()
d.show()
sys.exit(qApp.exec_())
|
unknown
|
codeparrot/codeparrot-clean
| ||
## Input
```javascript
function Foo(props) {
const onFoo = useCallback(
reason => {
log(props.router.location);
},
[props.router.location]
);
return onFoo;
}
```
## Code
```javascript
import { c as _c } from "react/compiler-runtime";
function Foo(props) {
const $ = _c(2);
let t0;
if ($[0] !== props.router.location) {
t0 = (reason) => {
log(props.router.location);
};
$[0] = props.router.location;
$[1] = t0;
} else {
t0 = $[1];
}
const onFoo = t0;
return onFoo;
}
```
|
unknown
|
github
|
https://github.com/facebook/react
|
compiler/packages/babel-plugin-react-compiler/src/__tests__/fixtures/compiler/capturing-function-member-expr-arguments.expect.md
|
# coding=utf-8
from __future__ import unicode_literals
from ..es import Provider as AddressProvider
class Provider(AddressProvider):
building_number_formats = ('%', '%#', '%#', '%#', '%##')
street_prefixes = (
'Plaza', 'Calle', 'Avenida', 'Via', 'Vial', 'Rambla', 'Glorieta', 'Urbanización', 'Callejón', 'Cañada',
'Alameda', 'Acceso', 'C.', 'Ronda', 'Pasaje', 'Cuesta', 'Pasadizo', 'Paseo', 'Camino'
)
postcode_formats = ('#####', )
states = (
'Álava', 'Albacete', 'Alicante', 'Almería', 'Asturias', 'Ávila', 'Badajoz',
'Baleares', 'Barcelona', 'Burgos', 'Cáceres', 'Cádiz', 'Cantabria', 'Castellón',
'Ceuta', 'Ciudad', 'Córdoba', 'Cuenca', 'Girona', 'Granada', 'Guadalajara',
'Guipúzcoa', 'Huelva', 'Huesca', 'Jaén', 'La Coruña', 'La Rioja', 'Las Palmas',
'León', 'Lleida', 'Lugo', 'Madrid', 'Málaga', 'Melilla', 'Murcia', 'Navarra',
'Ourense', 'Palencia', 'Pontevedra', 'Salamanca', 'Santa Cruz de Tenerife',
'Segovia', 'Sevilla', 'Soria', 'Tarragona', 'Teruel', 'Toledo', 'Valencia',
'Valladolid', 'Vizcaya', 'Zamora', 'Zaragoza'
)
city_formats = (
'{{state_name}}',
)
street_name_formats = (
'{{street_prefix}} {{first_name}} {{last_name}}',
'{{street_prefix}} de {{first_name}} {{last_name}}',
)
street_address_formats = (
'{{street_name}} {{building_number}}',
'{{street_name}} {{building_number}} {{secondary_address}} ',
)
address_formats = (
"{{street_address}}\n{{city}}, {{postcode}}",
)
secondary_address_formats = ('Apt. ##', 'Piso #', 'Puerta #')
@classmethod
def state_name(cls):
return cls.random_element(cls.states)
@classmethod
def street_prefix(cls):
return cls.random_element(cls.street_prefixes)
@classmethod
def secondary_address(cls):
return cls.numerify(cls.random_element(cls.secondary_address_formats))
@classmethod
def state(cls):
return cls.random_element(cls.states)
|
unknown
|
codeparrot/codeparrot-clean
| ||
---
layout: step
title: Liquid
position: 2
---
Liquid is where Jekyll starts to get more interesting. It is a templating
language which has three main components:
* [objects](#objects)
* [tags](#tags)
* [filters](#filters)
## Objects
Objects tell Liquid to output predefined [variables](../../variables/) as content on a page. Use double curly braces for objects: {% raw %}`{{`{% endraw %} and {% raw %}`}}`{% endraw %}.
For example, {% raw %}`{{ page.title }}`{% endraw %} displays the `page.title` variable.
## Tags
Tags define the logic and control flow for templates. Use curly
braces and percent signs for tags: {% raw %}`{%`{% endraw %} and
{% raw %}`%}`{% endraw %}.
For example:
{% raw %}
```liquid
{% if page.show_sidebar %}
<div class="sidebar">
sidebar content
</div>
{% endif %}
```
{% endraw %}
This displays the sidebar if the value of the `show_sidebar` page variable is true.
Learn more about the tags available in Jekyll [here](/docs/liquid/tags/).
## Filters
Filters change the output of a Liquid object. They are used within an output
and are separated by a `|`.
For example:
{% raw %}
```liquid
{{ "hi" | capitalize }}
```
{% endraw %}
This displays `Hi` instead of `hi`.
[Learn more about the filters](/docs/liquid/filters/) available.
## Use Liquid
Now, use Liquid to make your `Hello World!` text from [Setup](../01-setup/) lowercase:
{% raw %}
```liquid
...
<h1>{{ "Hello World!" | downcase }}</h1>
...
```
{% endraw %}
To make Jekyll process your changes, add [front matter](../03-front-matter/) to the top of the page:
```yaml
---
# front matter tells Jekyll to process Liquid
---
```
Your HTML document should look like this:
{% raw %}
```html
---
---
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Home</title>
</head>
<body>
<h1>{{ "Hello World!" | downcase }}</h1>
</body>
</html>
```
{% endraw %}
When you reload your browser, you should see `hello world!`.
Much of Jekyll's power comes from combining Liquid with other features. Add frontmatter to pages to make Jekyll process the Liquid on those pages.
Next, you'll learn more about frontmatter.
|
unknown
|
github
|
https://github.com/jekyll/jekyll
|
docs/_docs/step-by-step/02-liquid.md
|
#!/usr/bin/env python
"""
crate_anon/crateweb/manage.py
===============================================================================
Copyright (C) 2015-2021 Rudolf Cardinal (rudolf@pobox.com).
This file is part of CRATE.
CRATE is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
CRATE is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with CRATE. If not, see <http://www.gnu.org/licenses/>.
===============================================================================
**Command-line entry point so we can call Django management commands directly
from the command line if we want without the ``crate_django_manage <COMMAND>``
syntax.**
"""
import logging
import os
import shlex
import sys
from typing import List
import django
from django.core.management import execute_from_command_line
from crate_anon.crateweb.config.constants import CHERRYPY_EXTRA_ARGS_ENV_VAR
log = logging.getLogger(__name__)
os.environ.setdefault("DJANGO_SETTINGS_MODULE",
"crate_anon.crateweb.config.settings")
# from crate_anon.crateweb.config.settings import MIDDLEWARE_CLASSES
# print(f"1. MIDDLEWARE_CLASSES: {id(MIDDLEWARE_CLASSES)}")
# print(f"1. MIDDLEWARE_CLASSES: {MIDDLEWARE_CLASSES}")
django.setup()
# from crate_anon.crateweb.config.settings import MIDDLEWARE_CLASSES
# print(f"2. MIDDLEWARE_CLASSES: {id(MIDDLEWARE_CLASSES)}")
# print(f"2. MIDDLEWARE_CLASSES: {MIDDLEWARE_CLASSES}")
# print(f"sys.path: {sys.path}")
# print(f"os.environ['DJANGO_SETTINGS_MODULE']: "
# f"{os.environ['DJANGO_SETTINGS_MODULE']}")
# print(f"os.environ['{CRATEWEB_CONFIG_ENV_VAR}']: "
# f"{os.environ[CRATEWEB_CONFIG_ENV_VAR]}")
def main(argv: List[str] = None) -> None:
"""
Command-line entry point. Calls the Django command-line processor.
"""
if argv is None:
argv = sys.argv
# print(argv)
execute_from_command_line(argv)
def runserver() -> None:
"""
Launch the Django development web server. (Not for proper use.)
Modifies ``argv`` and calls :func:`main`.
"""
argv = sys.argv[:] # copy
argv.insert(1, 'runserver')
main(argv)
def runcpserver() -> None:
"""
Launch the CherryPy web server.
Modifies ``argv`` and calls :func:`main`.
"""
argv = sys.argv[:] # copy
argv.insert(1, 'runcpserver')
extraargs = shlex.split(os.environ.get(CHERRYPY_EXTRA_ARGS_ENV_VAR, ''))
# log.critical(extraargs)
argv.extend(extraargs)
main(argv)
_ = '''
def fetch_optouts() -> None:
"""
Fetch details of patients opting out.
Modifies ``argv`` and calls :func:`main`.
"""
argv = sys.argv[:] # copy
argv.insert(1, 'fetch_optouts')
extraargs = shlex.split(os.environ.get(CHERRYPY_EXTRA_ARGS_ENV_VAR, ''))
# log.critical(extraargs)
argv.extend(extraargs)
main(argv)
'''
def email_rdbm() -> None:
"""
E-mails the RDBM.
Modifies ``argv`` and calls :func:`main`.
"""
argv = sys.argv[:] # copy
argv.insert(1, 'email_rdbm')
main(argv)
if __name__ == "__main__":
main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import collections
import os
import re
import shutil
import netaddr
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import importutils
import six
from neutron.agent.linux import external_process
from neutron.agent.linux import ip_lib
from neutron.agent.linux import iptables_manager
from neutron.agent.linux import utils
from neutron.common import constants
from neutron.common import exceptions
from neutron.common import ipv6_utils
from neutron.common import utils as commonutils
from neutron.i18n import _LE, _LI, _LW
from neutron.openstack.common import uuidutils
LOG = logging.getLogger(__name__)
UDP = 'udp'
TCP = 'tcp'
DNS_PORT = 53
DHCPV4_PORT = 67
DHCPV6_PORT = 547
METADATA_DEFAULT_PREFIX = 16
METADATA_DEFAULT_IP = '169.254.169.254'
METADATA_DEFAULT_CIDR = '%s/%d' % (METADATA_DEFAULT_IP,
METADATA_DEFAULT_PREFIX)
METADATA_PORT = 80
WIN2k3_STATIC_DNS = 249
NS_PREFIX = 'qdhcp-'
DNSMASQ_SERVICE_NAME = 'dnsmasq'
class DictModel(dict):
"""Convert dict into an object that provides attribute access to values."""
def __init__(self, *args, **kwargs):
"""Convert dict values to DictModel values."""
super(DictModel, self).__init__(*args, **kwargs)
def needs_upgrade(item):
"""Check if `item` is a dict and needs to be changed to DictModel.
"""
return isinstance(item, dict) and not isinstance(item, DictModel)
def upgrade(item):
"""Upgrade item if it needs to be upgraded."""
if needs_upgrade(item):
return DictModel(item)
else:
return item
for key, value in self.iteritems():
if isinstance(value, (list, tuple)):
# Keep the same type but convert dicts to DictModels
self[key] = type(value)(
(upgrade(item) for item in value)
)
elif needs_upgrade(value):
# Change dict instance values to DictModel instance values
self[key] = DictModel(value)
def __getattr__(self, name):
try:
return self[name]
except KeyError as e:
raise AttributeError(e)
def __setattr__(self, name, value):
self[name] = value
def __delattr__(self, name):
del self[name]
class NetModel(DictModel):
def __init__(self, use_namespaces, d):
super(NetModel, self).__init__(d)
self._ns_name = (use_namespaces and
"%s%s" % (NS_PREFIX, self.id) or None)
@property
def namespace(self):
return self._ns_name
@six.add_metaclass(abc.ABCMeta)
class DhcpBase(object):
def __init__(self, conf, network, process_monitor,
version=None, plugin=None):
self.conf = conf
self.network = network
self.process_monitor = process_monitor
self.device_manager = DeviceManager(self.conf, plugin)
self.version = version
@abc.abstractmethod
def enable(self):
"""Enables DHCP for this network."""
@abc.abstractmethod
def disable(self, retain_port=False):
"""Disable dhcp for this network."""
def restart(self):
"""Restart the dhcp service for the network."""
self.disable(retain_port=True)
self.enable()
@abc.abstractproperty
def active(self):
"""Boolean representing the running state of the DHCP server."""
@abc.abstractmethod
def reload_allocations(self):
"""Force the DHCP server to reload the assignment database."""
@classmethod
def existing_dhcp_networks(cls, conf):
"""Return a list of existing networks ids that we have configs for."""
raise NotImplementedError()
@classmethod
def check_version(cls):
"""Execute version checks on DHCP server."""
raise NotImplementedError()
@classmethod
def get_isolated_subnets(cls, network):
"""Returns a dict indicating whether or not a subnet is isolated"""
raise NotImplementedError()
@classmethod
def should_enable_metadata(cls, conf, network):
"""True if the metadata-proxy should be enabled for the network."""
raise NotImplementedError()
class DhcpLocalProcess(DhcpBase):
PORTS = []
def __init__(self, conf, network, process_monitor, version=None,
plugin=None):
super(DhcpLocalProcess, self).__init__(conf, network, process_monitor,
version, plugin)
self.confs_dir = self.get_confs_dir(conf)
self.network_conf_dir = os.path.join(self.confs_dir, network.id)
utils.ensure_dir(self.network_conf_dir)
@staticmethod
def get_confs_dir(conf):
return os.path.abspath(os.path.normpath(conf.dhcp_confs))
def get_conf_file_name(self, kind):
"""Returns the file name for a given kind of config file."""
return os.path.join(self.network_conf_dir, kind)
def _remove_config_files(self):
shutil.rmtree(self.network_conf_dir, ignore_errors=True)
def _enable_dhcp(self):
"""check if there is a subnet within the network with dhcp enabled."""
for subnet in self.network.subnets:
if subnet.enable_dhcp:
return True
return False
def enable(self):
"""Enables DHCP for this network by spawning a local process."""
if self.active:
self.restart()
elif self._enable_dhcp():
utils.ensure_dir(self.network_conf_dir)
interface_name = self.device_manager.setup(self.network)
self.interface_name = interface_name
self.spawn_process()
def _get_process_manager(self, cmd_callback=None):
return external_process.ProcessManager(
conf=self.conf,
uuid=self.network.id,
namespace=self.network.namespace,
default_cmd_callback=cmd_callback,
pid_file=self.get_conf_file_name('pid'))
def disable(self, retain_port=False):
"""Disable DHCP for this network by killing the local process."""
self.process_monitor.unregister(self.network.id, DNSMASQ_SERVICE_NAME)
self._get_process_manager().disable()
if not retain_port:
self._destroy_namespace_and_port()
self._remove_config_files()
def _destroy_namespace_and_port(self):
try:
self.device_manager.destroy(self.network, self.interface_name)
except RuntimeError:
LOG.warning(_LW('Failed trying to delete interface: %s'),
self.interface_name)
if self.conf.dhcp_delete_namespaces and self.network.namespace:
ns_ip = ip_lib.IPWrapper(namespace=self.network.namespace)
try:
ns_ip.netns.delete(self.network.namespace)
except RuntimeError:
LOG.warning(_LW('Failed trying to delete namespace: %s'),
self.network.namespace)
def _get_value_from_conf_file(self, kind, converter=None):
"""A helper function to read a value from one of the state files."""
file_name = self.get_conf_file_name(kind)
msg = _('Error while reading %s')
try:
with open(file_name, 'r') as f:
try:
return converter(f.read()) if converter else f.read()
except ValueError:
msg = _('Unable to convert value in %s')
except IOError:
msg = _('Unable to access %s')
LOG.debug(msg, file_name)
return None
@property
def interface_name(self):
return self._get_value_from_conf_file('interface')
@interface_name.setter
def interface_name(self, value):
interface_file_path = self.get_conf_file_name('interface')
utils.replace_file(interface_file_path, value)
@property
def active(self):
return self._get_process_manager().active
@abc.abstractmethod
def spawn_process(self):
pass
class Dnsmasq(DhcpLocalProcess):
# The ports that need to be opened when security policies are active
# on the Neutron port used for DHCP. These are provided as a convenience
# for users of this class.
PORTS = {constants.IP_VERSION_4:
[(UDP, DNS_PORT), (TCP, DNS_PORT), (UDP, DHCPV4_PORT)],
constants.IP_VERSION_6:
[(UDP, DNS_PORT), (TCP, DNS_PORT), (UDP, DHCPV6_PORT)],
}
_TAG_PREFIX = 'tag%d'
@classmethod
def check_version(cls):
pass
@classmethod
def existing_dhcp_networks(cls, conf):
"""Return a list of existing networks ids that we have configs for."""
confs_dir = cls.get_confs_dir(conf)
try:
return [
c for c in os.listdir(confs_dir)
if uuidutils.is_uuid_like(c)
]
except OSError:
return []
def _build_cmdline_callback(self, pid_file):
cmd = [
'dnsmasq',
'--no-hosts',
'--no-resolv',
'--strict-order',
'--bind-interfaces',
'--interface=%s' % self.interface_name,
'--except-interface=lo',
'--pid-file=%s' % pid_file,
'--dhcp-hostsfile=%s' % self.get_conf_file_name('host'),
'--addn-hosts=%s' % self.get_conf_file_name('addn_hosts'),
'--dhcp-optsfile=%s' % self.get_conf_file_name('opts'),
'--leasefile-ro',
'--dhcp-authoritative',
]
possible_leases = 0
for i, subnet in enumerate(self.network.subnets):
mode = None
# if a subnet is specified to have dhcp disabled
if not subnet.enable_dhcp:
continue
if subnet.ip_version == 4:
mode = 'static'
else:
# Note(scollins) If the IPv6 attributes are not set, set it as
# static to preserve previous behavior
addr_mode = getattr(subnet, 'ipv6_address_mode', None)
ra_mode = getattr(subnet, 'ipv6_ra_mode', None)
if (addr_mode in [constants.DHCPV6_STATEFUL,
constants.DHCPV6_STATELESS] or
not addr_mode and not ra_mode):
mode = 'static'
cidr = netaddr.IPNetwork(subnet.cidr)
if self.conf.dhcp_lease_duration == -1:
lease = 'infinite'
else:
lease = '%ss' % self.conf.dhcp_lease_duration
# mode is optional and is not set - skip it
if mode:
if subnet.ip_version == 4:
cmd.append('--dhcp-range=%s%s,%s,%s,%s' %
('set:', self._TAG_PREFIX % i,
cidr.network, mode, lease))
else:
cmd.append('--dhcp-range=%s%s,%s,%s,%d,%s' %
('set:', self._TAG_PREFIX % i,
cidr.network, mode,
cidr.prefixlen, lease))
possible_leases += cidr.size
if cfg.CONF.advertise_mtu:
mtu = self.network.mtu
# Do not advertise unknown mtu
if mtu > 0:
cmd.append('--dhcp-option-force=option:mtu,%d' % mtu)
# Cap the limit because creating lots of subnets can inflate
# this possible lease cap.
cmd.append('--dhcp-lease-max=%d' %
min(possible_leases, self.conf.dnsmasq_lease_max))
cmd.append('--conf-file=%s' % self.conf.dnsmasq_config_file)
if self.conf.dnsmasq_dns_servers:
cmd.extend(
'--server=%s' % server
for server in self.conf.dnsmasq_dns_servers)
if self.conf.dhcp_domain:
cmd.append('--domain=%s' % self.conf.dhcp_domain)
if self.conf.dhcp_broadcast_reply:
cmd.append('--dhcp-broadcast')
return cmd
def spawn_process(self):
"""Spawn the process, if it's not spawned already."""
self._spawn_or_reload_process(reload_with_HUP=False)
def _spawn_or_reload_process(self, reload_with_HUP):
"""Spawns or reloads a Dnsmasq process for the network.
When reload_with_HUP is True, dnsmasq receives a HUP signal,
or it's reloaded if the process is not running.
"""
self._output_config_files()
pm = self._get_process_manager(
cmd_callback=self._build_cmdline_callback)
pm.enable(reload_cfg=reload_with_HUP)
self.process_monitor.register(uuid=self.network.id,
service_name=DNSMASQ_SERVICE_NAME,
monitored_process=pm)
def _release_lease(self, mac_address, ip):
"""Release a DHCP lease."""
cmd = ['dhcp_release', self.interface_name, ip, mac_address]
ip_wrapper = ip_lib.IPWrapper(namespace=self.network.namespace)
ip_wrapper.netns.execute(cmd)
def _output_config_files(self):
self._output_hosts_file()
self._output_addn_hosts_file()
self._output_opts_file()
def reload_allocations(self):
"""Rebuild the dnsmasq config and signal the dnsmasq to reload."""
# If all subnets turn off dhcp, kill the process.
if not self._enable_dhcp():
self.disable()
LOG.debug('Killing dnsmasq for network since all subnets have '
'turned off DHCP: %s', self.network.id)
return
self._release_unused_leases()
self._spawn_or_reload_process(reload_with_HUP=True)
LOG.debug('Reloading allocations for network: %s', self.network.id)
self.device_manager.update(self.network, self.interface_name)
def _iter_hosts(self):
"""Iterate over hosts.
For each host on the network we yield a tuple containing:
(
port, # a DictModel instance representing the port.
alloc, # a DictModel instance of the allocated ip and subnet.
# if alloc is None, it means there is no need to allocate
# an IPv6 address because of stateless DHCPv6 network.
host_name, # Host name.
name, # Canonical hostname in the format 'hostname[.domain]'.
)
"""
v6_nets = dict((subnet.id, subnet) for subnet in
self.network.subnets if subnet.ip_version == 6)
for port in self.network.ports:
for alloc in port.fixed_ips:
# Note(scollins) Only create entries that are
# associated with the subnet being managed by this
# dhcp agent
if alloc.subnet_id in v6_nets:
addr_mode = v6_nets[alloc.subnet_id].ipv6_address_mode
if addr_mode == constants.IPV6_SLAAC:
continue
elif addr_mode == constants.DHCPV6_STATELESS:
alloc = hostname = fqdn = None
yield (port, alloc, hostname, fqdn)
continue
hostname = 'host-%s' % alloc.ip_address.replace(
'.', '-').replace(':', '-')
fqdn = hostname
if self.conf.dhcp_domain:
fqdn = '%s.%s' % (fqdn, self.conf.dhcp_domain)
yield (port, alloc, hostname, fqdn)
def _output_hosts_file(self):
"""Writes a dnsmasq compatible dhcp hosts file.
The generated file is sent to the --dhcp-hostsfile option of dnsmasq,
and lists the hosts on the network which should receive a dhcp lease.
Each line in this file is in the form::
'mac_address,FQDN,ip_address'
IMPORTANT NOTE: a dnsmasq instance does not resolve hosts defined in
this file if it did not give a lease to a host listed in it (e.g.:
multiple dnsmasq instances on the same network if this network is on
multiple network nodes). This file is only defining hosts which
should receive a dhcp lease, the hosts resolution in itself is
defined by the `_output_addn_hosts_file` method.
"""
buf = six.StringIO()
filename = self.get_conf_file_name('host')
LOG.debug('Building host file: %s', filename)
dhcp_enabled_subnet_ids = [s.id for s in self.network.subnets
if s.enable_dhcp]
for (port, alloc, hostname, name) in self._iter_hosts():
if not alloc:
if getattr(port, 'extra_dhcp_opts', False):
buf.write('%s,%s%s\n' %
(port.mac_address, 'set:', port.id))
continue
# don't write ip address which belongs to a dhcp disabled subnet.
if alloc.subnet_id not in dhcp_enabled_subnet_ids:
continue
# (dzyu) Check if it is legal ipv6 address, if so, need wrap
# it with '[]' to let dnsmasq to distinguish MAC address from
# IPv6 address.
ip_address = alloc.ip_address
if netaddr.valid_ipv6(ip_address):
ip_address = '[%s]' % ip_address
if getattr(port, 'extra_dhcp_opts', False):
buf.write('%s,%s,%s,%s%s\n' %
(port.mac_address, name, ip_address,
'set:', port.id))
else:
buf.write('%s,%s,%s\n' %
(port.mac_address, name, ip_address))
utils.replace_file(filename, buf.getvalue())
LOG.debug('Done building host file %s with contents:\n%s', filename,
buf.getvalue())
return filename
def _read_hosts_file_leases(self, filename):
leases = set()
if os.path.exists(filename):
with open(filename) as f:
for l in f.readlines():
host = l.strip().split(',')
leases.add((host[2].strip('[]'), host[0]))
return leases
def _release_unused_leases(self):
filename = self.get_conf_file_name('host')
old_leases = self._read_hosts_file_leases(filename)
new_leases = set()
for port in self.network.ports:
for alloc in port.fixed_ips:
new_leases.add((alloc.ip_address, port.mac_address))
for ip, mac in old_leases - new_leases:
self._release_lease(mac, ip)
def _output_addn_hosts_file(self):
"""Writes a dnsmasq compatible additional hosts file.
The generated file is sent to the --addn-hosts option of dnsmasq,
and lists the hosts on the network which should be resolved even if
the dnsmaq instance did not give a lease to the host (see the
`_output_hosts_file` method).
Each line in this file is in the same form as a standard /etc/hosts
file.
"""
buf = six.StringIO()
for (port, alloc, hostname, fqdn) in self._iter_hosts():
# It is compulsory to write the `fqdn` before the `hostname` in
# order to obtain it in PTR responses.
if alloc:
buf.write('%s\t%s %s\n' % (alloc.ip_address, fqdn, hostname))
addn_hosts = self.get_conf_file_name('addn_hosts')
utils.replace_file(addn_hosts, buf.getvalue())
return addn_hosts
def _output_opts_file(self):
"""Write a dnsmasq compatible options file."""
options, subnet_index_map = self._generate_opts_per_subnet()
options += self._generate_opts_per_port(subnet_index_map)
name = self.get_conf_file_name('opts')
utils.replace_file(name, '\n'.join(options))
return name
def _generate_opts_per_subnet(self):
options = []
subnet_index_map = {}
if self.conf.enable_isolated_metadata:
subnet_to_interface_ip = self._make_subnet_interface_ip_map()
isolated_subnets = self.get_isolated_subnets(self.network)
for i, subnet in enumerate(self.network.subnets):
if (not subnet.enable_dhcp or
(subnet.ip_version == 6 and
getattr(subnet, 'ipv6_address_mode', None)
in [None, constants.IPV6_SLAAC])):
continue
if subnet.dns_nameservers:
options.append(
self._format_option(
subnet.ip_version, i, 'dns-server',
','.join(
Dnsmasq._convert_to_literal_addrs(
subnet.ip_version, subnet.dns_nameservers))))
else:
# use the dnsmasq ip as nameservers only if there is no
# dns-server submitted by the server
subnet_index_map[subnet.id] = i
if self.conf.dhcp_domain and subnet.ip_version == 6:
options.append('tag:tag%s,option6:domain-search,%s' %
(i, ''.join(self.conf.dhcp_domain)))
gateway = subnet.gateway_ip
host_routes = []
for hr in subnet.host_routes:
if hr.destination == constants.IPv4_ANY:
if not gateway:
gateway = hr.nexthop
else:
host_routes.append("%s,%s" % (hr.destination, hr.nexthop))
# Add host routes for isolated network segments
if (isolated_subnets[subnet.id] and
self.conf.enable_isolated_metadata and
subnet.ip_version == 4):
subnet_dhcp_ip = subnet_to_interface_ip[subnet.id]
host_routes.append(
'%s/32,%s' % (METADATA_DEFAULT_IP, subnet_dhcp_ip)
)
if subnet.ip_version == 4:
host_routes.extend(["%s,0.0.0.0" % (s.cidr) for s in
self.network.subnets
if (s.ip_version == 4 and
s.cidr != subnet.cidr)])
if host_routes:
if gateway:
host_routes.append("%s,%s" % (constants.IPv4_ANY,
gateway))
options.append(
self._format_option(subnet.ip_version, i,
'classless-static-route',
','.join(host_routes)))
options.append(
self._format_option(subnet.ip_version, i,
WIN2k3_STATIC_DNS,
','.join(host_routes)))
if gateway:
options.append(self._format_option(subnet.ip_version,
i, 'router',
gateway))
else:
options.append(self._format_option(subnet.ip_version,
i, 'router'))
return options, subnet_index_map
def _generate_opts_per_port(self, subnet_index_map):
options = []
dhcp_ips = collections.defaultdict(list)
for port in self.network.ports:
if getattr(port, 'extra_dhcp_opts', False):
port_ip_versions = set(
[netaddr.IPAddress(ip.ip_address).version
for ip in port.fixed_ips])
for opt in port.extra_dhcp_opts:
opt_ip_version = opt.ip_version
if opt_ip_version in port_ip_versions:
options.append(
self._format_option(opt_ip_version, port.id,
opt.opt_name, opt.opt_value))
else:
LOG.info(_LI("Cannot apply dhcp option %(opt)s "
"because it's ip_version %(version)d "
"is not in port's address IP versions"),
{'opt': opt.opt_name,
'version': opt_ip_version})
# provides all dnsmasq ip as dns-server if there is more than
# one dnsmasq for a subnet and there is no dns-server submitted
# by the server
if port.device_owner == constants.DEVICE_OWNER_DHCP:
for ip in port.fixed_ips:
i = subnet_index_map.get(ip.subnet_id)
if i is None:
continue
dhcp_ips[i].append(ip.ip_address)
for i, ips in dhcp_ips.items():
for ip_version in (4, 6):
vx_ips = [ip for ip in ips
if netaddr.IPAddress(ip).version == ip_version]
if vx_ips:
options.append(
self._format_option(
ip_version, i, 'dns-server',
','.join(
Dnsmasq._convert_to_literal_addrs(ip_version,
vx_ips))))
return options
def _make_subnet_interface_ip_map(self):
ip_dev = ip_lib.IPDevice(self.interface_name,
namespace=self.network.namespace)
subnet_lookup = dict(
(netaddr.IPNetwork(subnet.cidr), subnet.id)
for subnet in self.network.subnets
)
retval = {}
for addr in ip_dev.addr.list():
ip_net = netaddr.IPNetwork(addr['cidr'])
if ip_net in subnet_lookup:
retval[subnet_lookup[ip_net]] = addr['cidr'].split('/')[0]
return retval
def _format_option(self, ip_version, tag, option, *args):
"""Format DHCP option by option name or code."""
option = str(option)
pattern = "(tag:(.*),)?(.*)$"
matches = re.match(pattern, option)
extra_tag = matches.groups()[0]
option = matches.groups()[2]
if isinstance(tag, int):
tag = self._TAG_PREFIX % tag
if not option.isdigit():
if ip_version == 4:
option = 'option:%s' % option
else:
option = 'option6:%s' % option
if extra_tag:
tags = ('tag:' + tag, extra_tag[:-1], '%s' % option)
else:
tags = ('tag:' + tag, '%s' % option)
return ','.join(tags + args)
@staticmethod
def _convert_to_literal_addrs(ip_version, ips):
if ip_version == 4:
return ips
return ['[' + ip + ']' for ip in ips]
@classmethod
def get_isolated_subnets(cls, network):
"""Returns a dict indicating whether or not a subnet is isolated
A subnet is considered non-isolated if there is a port connected to
the subnet, and the port's ip address matches that of the subnet's
gateway. The port must be owned by a nuetron router.
"""
isolated_subnets = collections.defaultdict(lambda: True)
subnets = dict((subnet.id, subnet) for subnet in network.subnets)
for port in network.ports:
if port.device_owner not in constants.ROUTER_INTERFACE_OWNERS:
continue
for alloc in port.fixed_ips:
if subnets[alloc.subnet_id].gateway_ip == alloc.ip_address:
isolated_subnets[alloc.subnet_id] = False
return isolated_subnets
@classmethod
def should_enable_metadata(cls, conf, network):
"""Determine whether the metadata proxy is needed for a network
This method returns True for truly isolated networks (ie: not attached
to a router), when the enable_isolated_metadata flag is True.
This method also returns True when enable_metadata_network is True,
and the network passed as a parameter has a subnet in the link-local
CIDR, thus characterizing it as a "metadata" network. The metadata
network is used by solutions which do not leverage the l3 agent for
providing access to the metadata service via logical routers built
with 3rd party backends.
"""
if conf.enable_metadata_network and conf.enable_isolated_metadata:
# check if the network has a metadata subnet
meta_cidr = netaddr.IPNetwork(METADATA_DEFAULT_CIDR)
if any(netaddr.IPNetwork(s.cidr) in meta_cidr
for s in network.subnets):
return True
if not conf.use_namespaces or not conf.enable_isolated_metadata:
return False
isolated_subnets = cls.get_isolated_subnets(network)
return any(isolated_subnets[subnet.id] for subnet in network.subnets)
class DeviceManager(object):
def __init__(self, conf, plugin):
self.conf = conf
self.plugin = plugin
if not conf.interface_driver:
LOG.error(_LE('An interface driver must be specified'))
raise SystemExit(1)
try:
self.driver = importutils.import_object(
conf.interface_driver, conf)
except Exception as e:
LOG.error(_LE("Error importing interface driver '%(driver)s': "
"%(inner)s"),
{'driver': conf.interface_driver,
'inner': e})
raise SystemExit(1)
def get_interface_name(self, network, port):
"""Return interface(device) name for use by the DHCP process."""
return self.driver.get_device_name(port)
def get_device_id(self, network):
"""Return a unique DHCP device ID for this host on the network."""
# There could be more than one dhcp server per network, so create
# a device id that combines host and network ids
return commonutils.get_dhcp_agent_device_id(network.id, self.conf.host)
def _set_default_route(self, network, device_name):
"""Sets the default gateway for this dhcp namespace.
This method is idempotent and will only adjust the route if adjusting
it would change it from what it already is. This makes it safe to call
and avoids unnecessary perturbation of the system.
"""
device = ip_lib.IPDevice(device_name, namespace=network.namespace)
gateway = device.route.get_gateway()
if gateway:
gateway = gateway['gateway']
for subnet in network.subnets:
skip_subnet = (
subnet.ip_version != 4
or not subnet.enable_dhcp
or subnet.gateway_ip is None)
if skip_subnet:
continue
if gateway != subnet.gateway_ip:
LOG.debug('Setting gateway for dhcp netns on net %(n)s to '
'%(ip)s',
{'n': network.id, 'ip': subnet.gateway_ip})
device.route.add_gateway(subnet.gateway_ip)
return
# No subnets on the network have a valid gateway. Clean it up to avoid
# confusion from seeing an invalid gateway here.
if gateway is not None:
LOG.debug('Removing gateway for dhcp netns on net %s', network.id)
device.route.delete_gateway(gateway)
def setup_dhcp_port(self, network):
"""Create/update DHCP port for the host if needed and return port."""
device_id = self.get_device_id(network)
subnets = {}
dhcp_enabled_subnet_ids = []
for subnet in network.subnets:
if subnet.enable_dhcp:
dhcp_enabled_subnet_ids.append(subnet.id)
subnets[subnet.id] = subnet
dhcp_port = None
for port in network.ports:
port_device_id = getattr(port, 'device_id', None)
if port_device_id == device_id:
port_fixed_ips = []
for fixed_ip in port.fixed_ips:
port_fixed_ips.append({'subnet_id': fixed_ip.subnet_id,
'ip_address': fixed_ip.ip_address})
if fixed_ip.subnet_id in dhcp_enabled_subnet_ids:
dhcp_enabled_subnet_ids.remove(fixed_ip.subnet_id)
# If there are dhcp_enabled_subnet_ids here that means that
# we need to add those to the port and call update.
if dhcp_enabled_subnet_ids:
port_fixed_ips.extend(
[dict(subnet_id=s) for s in dhcp_enabled_subnet_ids])
dhcp_port = self.plugin.update_dhcp_port(
port.id, {'port': {'network_id': network.id,
'fixed_ips': port_fixed_ips}})
if not dhcp_port:
raise exceptions.Conflict()
else:
dhcp_port = port
# break since we found port that matches device_id
break
# check for a reserved DHCP port
if dhcp_port is None:
LOG.debug('DHCP port %(device_id)s on network %(network_id)s'
' does not yet exist. Checking for a reserved port.',
{'device_id': device_id, 'network_id': network.id})
for port in network.ports:
port_device_id = getattr(port, 'device_id', None)
if port_device_id == constants.DEVICE_ID_RESERVED_DHCP_PORT:
dhcp_port = self.plugin.update_dhcp_port(
port.id, {'port': {'network_id': network.id,
'device_id': device_id}})
if dhcp_port:
break
# DHCP port has not yet been created.
if dhcp_port is None:
LOG.debug('DHCP port %(device_id)s on network %(network_id)s'
' does not yet exist.', {'device_id': device_id,
'network_id': network.id})
port_dict = dict(
name='',
admin_state_up=True,
device_id=device_id,
network_id=network.id,
tenant_id=network.tenant_id,
fixed_ips=[dict(subnet_id=s) for s in dhcp_enabled_subnet_ids])
dhcp_port = self.plugin.create_dhcp_port({'port': port_dict})
if not dhcp_port:
raise exceptions.Conflict()
# Convert subnet_id to subnet dict
fixed_ips = [dict(subnet_id=fixed_ip.subnet_id,
ip_address=fixed_ip.ip_address,
subnet=subnets[fixed_ip.subnet_id])
for fixed_ip in dhcp_port.fixed_ips]
ips = [DictModel(item) if isinstance(item, dict) else item
for item in fixed_ips]
dhcp_port.fixed_ips = ips
return dhcp_port
def setup(self, network):
"""Create and initialize a device for network's DHCP on this host."""
port = self.setup_dhcp_port(network)
interface_name = self.get_interface_name(network, port)
if ip_lib.ensure_device_is_ready(interface_name,
namespace=network.namespace):
LOG.debug('Reusing existing device: %s.', interface_name)
else:
self.driver.plug(network.id,
port.id,
interface_name,
port.mac_address,
namespace=network.namespace)
self.fill_dhcp_udp_checksums(namespace=network.namespace)
ip_cidrs = []
for fixed_ip in port.fixed_ips:
subnet = fixed_ip.subnet
if not ipv6_utils.is_auto_address_subnet(subnet):
net = netaddr.IPNetwork(subnet.cidr)
ip_cidr = '%s/%s' % (fixed_ip.ip_address, net.prefixlen)
ip_cidrs.append(ip_cidr)
if (self.conf.enable_isolated_metadata and
self.conf.use_namespaces):
ip_cidrs.append(METADATA_DEFAULT_CIDR)
self.driver.init_l3(interface_name, ip_cidrs,
namespace=network.namespace)
# ensure that the dhcp interface is first in the list
if network.namespace is None:
device = ip_lib.IPDevice(interface_name)
device.route.pullup_route(interface_name)
if self.conf.use_namespaces:
self._set_default_route(network, interface_name)
return interface_name
def update(self, network, device_name):
"""Update device settings for the network's DHCP on this host."""
if self.conf.use_namespaces:
self._set_default_route(network, device_name)
def destroy(self, network, device_name):
"""Destroy the device used for the network's DHCP on this host."""
self.driver.unplug(device_name, namespace=network.namespace)
self.plugin.release_dhcp_port(network.id,
self.get_device_id(network))
def fill_dhcp_udp_checksums(self, namespace):
"""Ensure DHCP reply packets always have correct UDP checksums."""
iptables_mgr = iptables_manager.IptablesManager(use_ipv6=False,
namespace=namespace)
ipv4_rule = ('-p udp --dport %d -j CHECKSUM --checksum-fill'
% constants.DHCP_RESPONSE_PORT)
iptables_mgr.ipv4['mangle'].add_rule('POSTROUTING', ipv4_rule)
iptables_mgr.apply()
|
unknown
|
codeparrot/codeparrot-clean
| ||
from ctypes import *
import os
import sys
import unittest
import test.support
from ctypes.util import find_library
libc_name = None
def setUpModule():
global libc_name
if os.name == "nt":
libc_name = find_library("c")
elif os.name == "ce":
libc_name = "coredll"
elif sys.platform == "cygwin":
libc_name = "cygwin1.dll"
else:
libc_name = find_library("c")
if test.support.verbose:
print("libc_name is", libc_name)
class LoaderTest(unittest.TestCase):
unknowndll = "xxrandomnamexx"
def test_load(self):
if libc_name is None:
self.skipTest('could not find libc')
CDLL(libc_name)
CDLL(os.path.basename(libc_name))
self.assertRaises(OSError, CDLL, self.unknowndll)
def test_load_version(self):
if libc_name is None:
self.skipTest('could not find libc')
if os.path.basename(libc_name) != 'libc.so.6':
self.skipTest('wrong libc path for test')
cdll.LoadLibrary("libc.so.6")
# linux uses version, libc 9 should not exist
self.assertRaises(OSError, cdll.LoadLibrary, "libc.so.9")
self.assertRaises(OSError, cdll.LoadLibrary, self.unknowndll)
def test_find(self):
for name in ("c", "m"):
lib = find_library(name)
if lib:
cdll.LoadLibrary(lib)
CDLL(lib)
@unittest.skipUnless(os.name in ("nt", "ce"),
'test specific to Windows (NT/CE)')
def test_load_library(self):
self.assertIsNotNone(libc_name)
if test.support.verbose:
print(find_library("kernel32"))
print(find_library("user32"))
if os.name == "nt":
windll.kernel32.GetModuleHandleW
windll["kernel32"].GetModuleHandleW
windll.LoadLibrary("kernel32").GetModuleHandleW
WinDLL("kernel32").GetModuleHandleW
elif os.name == "ce":
windll.coredll.GetModuleHandleW
windll["coredll"].GetModuleHandleW
windll.LoadLibrary("coredll").GetModuleHandleW
WinDLL("coredll").GetModuleHandleW
@unittest.skipUnless(os.name in ("nt", "ce"),
'test specific to Windows (NT/CE)')
def test_load_ordinal_functions(self):
import _ctypes_test
dll = WinDLL(_ctypes_test.__file__)
# We load the same function both via ordinal and name
func_ord = dll[2]
func_name = dll.GetString
# addressof gets the address where the function pointer is stored
a_ord = addressof(func_ord)
a_name = addressof(func_name)
f_ord_addr = c_void_p.from_address(a_ord).value
f_name_addr = c_void_p.from_address(a_name).value
self.assertEqual(hex(f_ord_addr), hex(f_name_addr))
self.assertRaises(AttributeError, dll.__getitem__, 1234)
@unittest.skipUnless(os.name == "nt", 'Windows-specific test')
def test_1703286_A(self):
from _ctypes import LoadLibrary, FreeLibrary
# On winXP 64-bit, advapi32 loads at an address that does
# NOT fit into a 32-bit integer. FreeLibrary must be able
# to accept this address.
# These are tests for http://www.python.org/sf/1703286
handle = LoadLibrary("advapi32")
FreeLibrary(handle)
@unittest.skipUnless(os.name == "nt", 'Windows-specific test')
def test_1703286_B(self):
# Since on winXP 64-bit advapi32 loads like described
# above, the (arbitrarily selected) CloseEventLog function
# also has a high address. 'call_function' should accept
# addresses so large.
from _ctypes import call_function
advapi32 = windll.advapi32
# Calling CloseEventLog with a NULL argument should fail,
# but the call should not segfault or so.
self.assertEqual(0, advapi32.CloseEventLog(None))
windll.kernel32.GetProcAddress.argtypes = c_void_p, c_char_p
windll.kernel32.GetProcAddress.restype = c_void_p
proc = windll.kernel32.GetProcAddress(advapi32._handle,
b"CloseEventLog")
self.assertTrue(proc)
# This is the real test: call the function via 'call_function'
self.assertEqual(0, call_function(proc, (None,)))
if __name__ == "__main__":
unittest.main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
/**
* This header is used internally by all current supported SIMD extensions,
* except for AVX512.
*/
#ifndef NPY_SIMD
#error "Not a standalone header, use simd/simd.h instead"
#endif
#ifndef _NPY_SIMD_EMULATE_MASKOP_H
#define _NPY_SIMD_EMULATE_MASKOP_H
/**
* Implements conditional addition and subtraction.
* e.g. npyv_ifadd_f32(mask, a, b, c) -> mask ? a + b : c
* e.g. npyv_ifsub_f32(mask, a, b, c) -> mask ? a - b : c
*/
#define NPYV_IMPL_EMULATE_MASK_ADDSUB(SFX, BSFX) \
NPY_FINLINE npyv_##SFX npyv_ifadd_##SFX \
(npyv_##BSFX m, npyv_##SFX a, npyv_##SFX b, npyv_##SFX c) \
{ \
npyv_##SFX add = npyv_add_##SFX(a, b); \
return npyv_select_##SFX(m, add, c); \
} \
NPY_FINLINE npyv_##SFX npyv_ifsub_##SFX \
(npyv_##BSFX m, npyv_##SFX a, npyv_##SFX b, npyv_##SFX c) \
{ \
npyv_##SFX sub = npyv_sub_##SFX(a, b); \
return npyv_select_##SFX(m, sub, c); \
}
NPYV_IMPL_EMULATE_MASK_ADDSUB(u8, b8)
NPYV_IMPL_EMULATE_MASK_ADDSUB(s8, b8)
NPYV_IMPL_EMULATE_MASK_ADDSUB(u16, b16)
NPYV_IMPL_EMULATE_MASK_ADDSUB(s16, b16)
NPYV_IMPL_EMULATE_MASK_ADDSUB(u32, b32)
NPYV_IMPL_EMULATE_MASK_ADDSUB(s32, b32)
NPYV_IMPL_EMULATE_MASK_ADDSUB(u64, b64)
NPYV_IMPL_EMULATE_MASK_ADDSUB(s64, b64)
#if NPY_SIMD_F32
NPYV_IMPL_EMULATE_MASK_ADDSUB(f32, b32)
#endif
#if NPY_SIMD_F64
NPYV_IMPL_EMULATE_MASK_ADDSUB(f64, b64)
#endif
#if NPY_SIMD_F32
// conditional division, m ? a / b : c
NPY_FINLINE npyv_f32
npyv_ifdiv_f32(npyv_b32 m, npyv_f32 a, npyv_f32 b, npyv_f32 c)
{
const npyv_f32 one = npyv_setall_f32(1.0f);
npyv_f32 div = npyv_div_f32(a, npyv_select_f32(m, b, one));
return npyv_select_f32(m, div, c);
}
// conditional division, m ? a / b : 0
NPY_FINLINE npyv_f32
npyv_ifdivz_f32(npyv_b32 m, npyv_f32 a, npyv_f32 b)
{
const npyv_f32 zero = npyv_zero_f32();
return npyv_ifdiv_f32(m, a, b, zero);
}
#endif
#if NPY_SIMD_F64
// conditional division, m ? a / b : c
NPY_FINLINE npyv_f64
npyv_ifdiv_f64(npyv_b64 m, npyv_f64 a, npyv_f64 b, npyv_f64 c)
{
const npyv_f64 one = npyv_setall_f64(1.0);
npyv_f64 div = npyv_div_f64(a, npyv_select_f64(m, b, one));
return npyv_select_f64(m, div, c);
}
// conditional division, m ? a / b : 0
NPY_FINLINE npyv_f64
npyv_ifdivz_f64(npyv_b64 m, npyv_f64 a, npyv_f64 b)
{
const npyv_f64 zero = npyv_zero_f64();
return npyv_ifdiv_f64(m, a, b, zero);
}
#endif
#endif // _NPY_SIMD_EMULATE_MASKOP_H
|
c
|
github
|
https://github.com/numpy/numpy
|
numpy/_core/src/common/simd/emulate_maskop.h
|
use crate::abi::GCThreadTLS;
use crate::upcalls;
use crate::utils::ChunkedVecCollector;
use crate::Ruby;
use crate::RubySlot;
use mmtk::memory_manager;
use mmtk::scheduler::GCWork;
use mmtk::scheduler::GCWorker;
use mmtk::scheduler::WorkBucketStage;
use mmtk::util::ObjectReference;
use mmtk::util::VMWorkerThread;
use mmtk::vm::ObjectTracer;
use mmtk::vm::RootsWorkFactory;
use mmtk::vm::Scanning;
use mmtk::vm::SlotVisitor;
use mmtk::Mutator;
pub struct VMScanning {}
impl Scanning<Ruby> for VMScanning {
const UNIQUE_OBJECT_ENQUEUING: bool = true;
fn support_slot_enqueuing(_tls: VMWorkerThread, _object: ObjectReference) -> bool {
false
}
fn scan_object<EV: SlotVisitor<RubySlot>>(
_tls: VMWorkerThread,
_object: ObjectReference,
_slot_visitor: &mut EV,
) {
unreachable!("We have not enabled slot enqueuing for any types, yet.");
}
fn scan_object_and_trace_edges<OT: ObjectTracer>(
tls: VMWorkerThread,
object: ObjectReference,
object_tracer: &mut OT,
) {
debug_assert!(
mmtk::memory_manager::is_mmtk_object(object.to_raw_address()).is_some(),
"Not an MMTk object: {object}",
);
let gc_tls = unsafe { GCThreadTLS::from_vwt_check(tls) };
let visit_object = |_worker, target_object: ObjectReference, pin| {
trace!(
"Tracing edge: {} -> {}{}",
object,
target_object,
if pin { " pin" } else { "" }
);
debug_assert!(
mmtk::memory_manager::is_mmtk_object(target_object.to_raw_address()).is_some(),
"Destination is not an MMTk object. Src: {object} dst: {target_object}"
);
debug_assert!(
// If we are in a moving GC, all objects should be pinned by PinningRegistry.
// If it is requested that target_object be pinned but it is not pinned, then
// it is a bug because it could be moved.
if crate::mmtk().get_plan().current_gc_may_move_object() && pin {
memory_manager::is_pinned(target_object)
} else {
true
},
"Object {object} is trying to pin {target_object}"
);
let forwarded_target = object_tracer.trace_object(target_object);
if forwarded_target != target_object {
trace!(" Forwarded target {target_object} -> {forwarded_target}");
}
forwarded_target
};
gc_tls
.object_closure
.set_temporarily_and_run_code(visit_object, || {
(upcalls().call_gc_mark_children)(object);
if crate::mmtk().get_plan().current_gc_may_move_object() {
(upcalls().update_object_references)(object);
}
});
}
fn notify_initial_thread_scan_complete(_partial_scan: bool, _tls: VMWorkerThread) {
// Do nothing
}
fn scan_roots_in_mutator_thread(
_tls: VMWorkerThread,
_mutator: &'static mut Mutator<Ruby>,
mut _factory: impl RootsWorkFactory<RubySlot>,
) {
// Do nothing. All stacks (including Ruby stacks and machine stacks) are reachable from
// `rb_vm_t` -> ractor -> thread -> fiber -> stacks. It is part of `ScanGCRoots` which
// calls `rb_gc_mark_roots` -> `rb_vm_mark`.
}
fn scan_vm_specific_roots(tls: VMWorkerThread, factory: impl RootsWorkFactory<RubySlot>) {
let gc_tls = unsafe { GCThreadTLS::from_vwt_check(tls) };
let root_scanning_work_packets: Vec<Box<dyn GCWork<Ruby>>> = vec![
Box::new(ScanGCRoots::new(factory.clone())),
Box::new(ScanObjspace::new(factory.clone())),
];
gc_tls.worker().scheduler().work_buckets[WorkBucketStage::Prepare]
.bulk_add(root_scanning_work_packets);
// Generate WB-unprotected roots scanning work packets
'gen_wb_unprotected_work: {
let is_nursery_gc = (crate::mmtk().get_plan().generational())
.is_some_and(|gen| gen.is_current_gc_nursery());
if !is_nursery_gc {
break 'gen_wb_unprotected_work;
}
let vecs = {
let guard = crate::binding()
.wb_unprotected_objects
.try_lock()
.expect("Someone is holding the lock of wb_unprotected_objects?");
if guard.is_empty() {
break 'gen_wb_unprotected_work;
}
let mut collector = ChunkedVecCollector::new(128);
collector.extend(guard.iter().copied());
collector.into_vecs()
};
let packets = vecs
.into_iter()
.map(|objects| {
let factory = factory.clone();
Box::new(ScanWbUnprotectedRoots { factory, objects }) as _
})
.collect::<Vec<_>>();
gc_tls.worker().scheduler().work_buckets[WorkBucketStage::Prepare].bulk_add(packets);
}
}
fn supports_return_barrier() -> bool {
false
}
fn prepare_for_roots_re_scanning() {
todo!()
}
fn process_weak_refs(
worker: &mut GCWorker<Ruby>,
tracer_context: impl mmtk::vm::ObjectTracerContext<Ruby>,
) -> bool {
crate::binding()
.weak_proc
.process_weak_stuff(worker, tracer_context);
crate::binding().pinning_registry.cleanup(worker);
false
}
fn forward_weak_refs(
_worker: &mut GCWorker<Ruby>,
_tracer_context: impl mmtk::vm::ObjectTracerContext<Ruby>,
) {
panic!("We can't use MarkCompact in Ruby.");
}
}
impl VMScanning {
const OBJECT_BUFFER_SIZE: usize = 4096;
fn collect_object_roots_in<F: FnOnce()>(
root_scan_kind: &str,
gc_tls: &mut GCThreadTLS,
factory: &mut impl RootsWorkFactory<RubySlot>,
callback: F,
) {
let mut buffer: Vec<ObjectReference> = Vec::new();
let visit_object = |_, object: ObjectReference, pin| {
debug!(
"[{}] Visiting object: {}{}",
root_scan_kind,
object,
if pin {
"(unmovable root)"
} else {
"(movable, but we pin it anyway)"
}
);
debug_assert!(
mmtk::memory_manager::is_mmtk_object(object.to_raw_address()).is_some(),
"Root does not point to MMTk object. object: {object}"
);
buffer.push(object);
if buffer.len() >= Self::OBJECT_BUFFER_SIZE {
factory.create_process_pinning_roots_work(std::mem::take(&mut buffer));
}
object
};
gc_tls
.object_closure
.set_temporarily_and_run_code(visit_object, callback);
if !buffer.is_empty() {
factory.create_process_pinning_roots_work(buffer);
}
}
}
trait GlobaRootScanningWork {
type F: RootsWorkFactory<RubySlot>;
const NAME: &'static str;
fn new(factory: Self::F) -> Self;
fn scan_roots();
fn roots_work_factory(&mut self) -> &mut Self::F;
fn do_work(&mut self, worker: &mut GCWorker<Ruby>, _mmtk: &'static mmtk::MMTK<Ruby>) {
let gc_tls = unsafe { GCThreadTLS::from_vwt_check(worker.tls) };
let factory = self.roots_work_factory();
VMScanning::collect_object_roots_in(Self::NAME, gc_tls, factory, || {
Self::scan_roots();
});
}
}
macro_rules! define_global_root_scanner {
($name: ident, $code: expr) => {
struct $name<F: RootsWorkFactory<RubySlot>> {
factory: F,
}
impl<F: RootsWorkFactory<RubySlot>> GlobaRootScanningWork for $name<F> {
type F = F;
const NAME: &'static str = stringify!($name);
fn new(factory: Self::F) -> Self {
Self { factory }
}
fn scan_roots() {
$code
}
fn roots_work_factory(&mut self) -> &mut Self::F {
&mut self.factory
}
}
impl<F: RootsWorkFactory<RubySlot>> GCWork<Ruby> for $name<F> {
fn do_work(&mut self, worker: &mut GCWorker<Ruby>, mmtk: &'static mmtk::MMTK<Ruby>) {
GlobaRootScanningWork::do_work(self, worker, mmtk);
}
}
};
}
define_global_root_scanner!(ScanGCRoots, {
(crate::upcalls().scan_gc_roots)();
});
define_global_root_scanner!(ScanObjspace, {
(crate::upcalls().scan_objspace)();
});
struct ScanWbUnprotectedRoots<F: RootsWorkFactory<RubySlot>> {
factory: F,
objects: Vec<ObjectReference>,
}
impl<F: RootsWorkFactory<RubySlot>> GCWork<Ruby> for ScanWbUnprotectedRoots<F> {
fn do_work(&mut self, worker: &mut GCWorker<Ruby>, _mmtk: &'static mmtk::MMTK<Ruby>) {
let gc_tls = unsafe { GCThreadTLS::from_vwt_check(worker.tls) };
VMScanning::collect_object_roots_in("wb_unprot_roots", gc_tls, &mut self.factory, || {
for object in self.objects.iter().copied() {
if object.is_reachable() {
debug!("[wb_unprot_roots] Visiting WB-unprotected object (parent): {object}");
(upcalls().call_gc_mark_children)(object);
if crate::mmtk().get_plan().current_gc_may_move_object() {
(upcalls().update_object_references)(object);
}
} else {
debug!(
"[wb_unprot_roots] Skipping young WB-unprotected object (parent): {object}"
);
}
}
});
}
}
|
rust
|
github
|
https://github.com/ruby/ruby
|
gc/mmtk/src/scanning.rs
|
#!/usr/bin/env python3
"""
Generate powers of ten using William Clinger's ``AlgorithmM`` for use in
decimal to floating point conversions.
Specifically, computes and outputs (as Rust code) a table of 10^e for some
range of exponents e. The output is one array of 64 bit significands and
another array of corresponding base two exponents. The approximations are
normalized and rounded perfectly, i.e., within 0.5 ULP of the true value.
The representation ([u64], [i16]) instead of the more natural [(u64, i16)]
is used because (u64, i16) has a ton of padding which would make the table
even larger, and it's already uncomfortably large (6 KiB).
"""
from __future__ import print_function
from math import ceil, log
from fractions import Fraction
from collections import namedtuple
N = 64 # Size of the significand field in bits
MIN_SIG = 2 ** (N - 1)
MAX_SIG = (2 ** N) - 1
# Hand-rolled fp representation without arithmetic or any other operations.
# The significand is normalized and always N bit, but the exponent is
# unrestricted in range.
Fp = namedtuple('Fp', 'sig exp')
def algorithm_m(f, e):
assert f > 0
if e < 0:
u = f
v = 10 ** abs(e)
else:
u = f * 10 ** e
v = 1
k = 0
x = u // v
while True:
if x < MIN_SIG:
u <<= 1
k -= 1
elif x >= MAX_SIG:
v <<= 1
k += 1
else:
break
x = u // v
return ratio_to_float(u, v, k)
def ratio_to_float(u, v, k):
q, r = divmod(u, v)
v_r = v - r
z = Fp(q, k)
if r < v_r:
return z
elif r > v_r:
return next_float(z)
elif q % 2 == 0:
return z
else:
return next_float(z)
def next_float(z):
if z.sig == MAX_SIG:
return Fp(MIN_SIG, z.exp + 1)
else:
return Fp(z.sig + 1, z.exp)
def error(f, e, z):
decimal = f * Fraction(10) ** e
binary = z.sig * Fraction(2) ** z.exp
abs_err = abs(decimal - binary)
# The unit in the last place has value z.exp
ulp_err = abs_err / Fraction(2) ** z.exp
return float(ulp_err)
HEADER = """
//! Tables of approximations of powers of ten.
//! DO NOT MODIFY: Generated by `src/etc/dec2flt_table.py`
"""
def main():
print(HEADER.strip())
print()
print_proper_powers()
print()
print_short_powers(32, 24)
print()
print_short_powers(64, 53)
def print_proper_powers():
MIN_E = -305
MAX_E = 305
e_range = range(MIN_E, MAX_E+1)
powers = []
for e in e_range:
z = algorithm_m(1, e)
err = error(1, e, z)
assert err < 0.5
powers.append(z)
print("pub const MIN_E: i16 = {};".format(MIN_E))
print("pub const MAX_E: i16 = {};".format(MAX_E))
print()
print("#[rustfmt::skip]")
typ = "([u64; {0}], [i16; {0}])".format(len(powers))
print("pub static POWERS: ", typ, " = (", sep='')
print(" [")
for z in powers:
print(" 0x{:x},".format(z.sig))
print(" ],")
print(" [")
for z in powers:
print(" {},".format(z.exp))
print(" ],")
print(");")
def print_short_powers(num_bits, significand_size):
max_sig = 2**significand_size - 1
# The fast path bails out for exponents >= ceil(log5(max_sig))
max_e = int(ceil(log(max_sig, 5)))
e_range = range(max_e)
typ = "[f{}; {}]".format(num_bits, len(e_range))
print("#[rustfmt::skip]")
print("pub const F", num_bits, "_SHORT_POWERS: ", typ, " = [", sep='')
for e in e_range:
print(" 1e{},".format(e))
print("];")
if __name__ == '__main__':
main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
#!/usr/bin/python
#coding: utf-8 -*-
# (c) 2016 Krzysztof Magosa <krzysztof@magosa.pl>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'version': '1.0'}
DOCUMENTATION = '''
---
module: tempfile
version_added: "2.3"
author:
- Krzysztof Magosa
short_description: Creates temporary files and directories.
description:
- The C(tempfile) module creates temporary files and directories. C(mktemp) command takes different parameters on various systems, this module helps to avoid troubles related to that. Files/directories created by module are accessible only by creator. In case you need to make them world-accessible you need to use M(file) module.
options:
state:
description:
- Whether to create file or directory.
required: false
choices: [ "file", "directory" ]
default: file
path:
description:
- Location where temporary file or directory should be created. If path is not specified default system temporary directory will be used.
required: false
default: null
prefix:
description:
- Prefix of file/directory name created by module.
required: false
default: ansible.
suffix:
description:
- Suffix of file/directory name created by module.
required: false
default: ""
'''
EXAMPLES = """
- name: create temporary build directory
tempfile:
state: directory
suffix: build
- name: create temporary file
tempfile:
state: file
suffix: temp
"""
RETURN = '''
path:
description: Path to created file or directory
returned: success
type: string
sample: "/tmp/ansible.bMlvdk"
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.pycompat24 import get_exception
from tempfile import mkstemp, mkdtemp
from os import close
def main():
module = AnsibleModule(
argument_spec = dict(
state = dict(default='file', choices=['file', 'directory']),
path = dict(default=None),
prefix = dict(default='ansible.'),
suffix = dict(default='')
)
)
try:
if module.params['state'] == 'file':
handle, path = mkstemp(
prefix=module.params['prefix'],
suffix=module.params['suffix'],
dir=module.params['path']
)
close(handle)
elif module.params['state'] == 'directory':
path = mkdtemp(
prefix=module.params['prefix'],
suffix=module.params['suffix'],
dir=module.params['path']
)
module.exit_json(changed=True, path=path)
except Exception:
e = get_exception()
module.fail_json(msg=str(e))
if __name__ == '__main__':
main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
from setuptools import setup, find_packages
exec(open('thunderfish/version.py').read())
long_description = """
# ThunderFish
Algorithms and programs for analysing electric field recordings of
weakly electric fish.
[Documentation](https://bendalab.github.io/thunderfish) |
[API Reference](https://bendalab.github.io/thunderfish/api)
Weakly electric fish generate an electric organ discharge (EOD). In
wave-type fish the EOD resembles a sinewave of a specific frequency
and with higher harmonics. In pulse-type fish EODs have a distinct
waveform and are separated in time. The thunderfish package provides
algorithms and tools for analysing both wavefish and pulsefish EODs.
"""
setup(
name = 'thunderfish',
version = __version__,
author = 'Jan Benda, Juan F. Sehuanes, Till Raab, Jörg Henninger, Jan Grewe, Fabian Sinz, Liz Weerdmeester',
author_email = "jan.benda@uni-tuebingen.de",
description = 'Algorithms and scripts for analyzing recordings of electric fish waveforms.',
long_description = long_description,
long_description_content_type = "text/markdown",
url = "https://github.com/bendalab/thunderfish",
license = "GPLv3",
classifiers = [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering",
"Topic :: Software Development :: Libraries :: Python Modules",
],
packages = find_packages(exclude = ['contrib', 'docs', 'tests*']),
entry_points = {
'console_scripts': [
'thunderfish = thunderfish.thunderfish:main',
'fishfinder = thunderfish.fishfinder:main',
'collectfish = thunderfish.collectfish:main',
'eodexplorer = thunderfish.eodexplorer:main',
]},
python_requires = '>=3.4',
install_requires = ['sklearn', 'scipy', 'numpy', 'matplotlib', 'audioio'],
)
|
unknown
|
codeparrot/codeparrot-clean
| ||
"""Adds xref targets to the top of files."""
import sys
import os
testing = False
DONT_TOUCH = (
'./index.txt',
)
def target_name(fn):
if fn.endswith('.txt'):
fn = fn[:-4]
return '_' + fn.lstrip('./').replace('/', '-')
def process_file(fn, lines):
lines.insert(0, '\n')
lines.insert(0, '.. %s:\n' % target_name(fn))
try:
with open(fn, 'w') as fp:
fp.writelines(lines)
except IOError:
print("Can't open %s for writing. Not touching it." % fn)
def has_target(fn):
try:
with open(fn, 'r') as fp:
lines = fp.readlines()
except IOError:
print("Can't open or read %s. Not touching it." % fn)
return (True, None)
#print fn, len(lines)
if len(lines) < 1:
print("Not touching empty file %s." % fn)
return (True, None)
if lines[0].startswith('.. _'):
return (True, None)
return (False, lines)
def main(argv=None):
if argv is None:
argv = sys.argv
if len(argv) == 1:
argv.extend('.')
files = []
for root in argv[1:]:
for (dirpath, dirnames, filenames) in os.walk(root):
files.extend([(dirpath, f) for f in filenames])
files.sort()
files = [os.path.join(p, fn) for p, fn in files if fn.endswith('.txt')]
#print files
for fn in files:
if fn in DONT_TOUCH:
print("Skipping blacklisted file %s." % fn)
continue
target_found, lines = has_target(fn)
if not target_found:
if testing:
print('%s: %s' % (fn, lines[0]))
else:
print("Adding xref to %s" % fn)
process_file(fn, lines)
else:
print("Skipping %s: already has a xref" % fn)
if __name__ == '__main__':
sys.exit(main())
|
unknown
|
codeparrot/codeparrot-clean
| ||
#----------------------------------------------------------------------
# Name: wx.lib.mixins.gridlabelrenderer
# Purpose: A Grid mixin that enables renderers to be plugged in
# for drawing the row and col labels, similar to how the
# cell renderers work.
#
# Author: Robin Dunn
#
# Created: 20-Mar-2009
# RCS-ID: $Id: gridlabelrenderer.py 63321 2010-01-30 00:59:12Z RD $
# Copyright: (c) 2009 by Total Control Software
# Licence: wxWindows license
#----------------------------------------------------------------------
"""
A Grid mixin that enables renderers to be plugged in for drawing the
row and col labels, similar to how the cell renderers work.
"""
import wx
class GridWithLabelRenderersMixin(object):
"""
This class can be mixed with wx.grid.Grid to add the ability to plugin
label renderer objects for the row, column and corner labels, similar to
how the cell renderers work in the main Grid class.
"""
def __init__(self):
self.GetGridRowLabelWindow().Bind(wx.EVT_PAINT, self._onPaintRowLabels)
self.GetGridColLabelWindow().Bind(wx.EVT_PAINT, self._onPaintColLabels)
self.GetGridCornerLabelWindow().Bind(wx.EVT_PAINT, self._onPaintCornerLabel)
self._rowRenderers = dict()
self._colRenderers = dict()
self._cornderRenderer = None
self._defRowRenderer = None
self._defColRenderer = None
def SetRowLabelRenderer(self, row, renderer):
"""
Register a renderer to be used for drawing the label for the
given row.
"""
if renderer is None:
if row in self._rowRenderers:
del self._rowRenderers[row]
else:
self._rowRenderers[row] = renderer
def SetDefaultRowLabelRenderer(self, renderer):
"""
Set the row label renderer that should be used for any row
that does not have an explicitly set renderer. Defaults to
an instance of `GridDefaultRowLabelRenderer`.
"""
self._defRowRenderer = renderer
def SetColLabelRenderer(self, col, renderer):
"""
Register a renderer to be used for drawing the label for the
given column.
"""
if renderer is None:
if col in self._colRenderers:
del self._colRenderers[col]
else:
self._colRenderers[col] = renderer
def SetDefaultColLabelRenderer(self, renderer):
"""
Set the column label renderer that should be used for any
column that does not have an explicitly set renderer.
Defaults to an instance of `GridDefaultColLabelRenderer`.
"""
self._defColRenderer = renderer
def SetCornerLabelRenderer(self, renderer):
"""
Sets the renderer that should be used for drawing the area in
the upper left corner of the Grid, between the row labels and
the column labels. Defaults to an instance of
`GridDefaultCornerLabelRenderer`
"""
self._cornderRenderer = renderer
#----------------------------------------------------------------
def _onPaintRowLabels(self, evt):
window = evt.GetEventObject()
dc = wx.PaintDC(window)
rows = self.CalcRowLabelsExposed(window.GetUpdateRegion())
x, y = self.CalcUnscrolledPosition((0,0))
pt = dc.GetDeviceOrigin()
dc.SetDeviceOrigin(pt.x, pt.y-y)
for row in rows:
top, bottom = self._getRowTopBottom(row)
rect = wx.Rect()
rect.top = top
rect.bottom = bottom
rect.x = 0
rect.width = self.GetRowLabelSize()
renderer = self._rowRenderers.get(row, None) or \
self._defRowRenderer or GridDefaultRowLabelRenderer()
renderer.Draw(self, dc, rect, row)
def _onPaintColLabels(self, evt):
window = evt.GetEventObject()
dc = wx.PaintDC(window)
cols = self.CalcColLabelsExposed(window.GetUpdateRegion())
x, y = self.CalcUnscrolledPosition((0,0))
pt = dc.GetDeviceOrigin()
dc.SetDeviceOrigin(pt.x-x, pt.y)
for col in cols:
left, right = self._getColLeftRight(col)
rect = wx.Rect()
rect.left = left
rect.right = right
rect.y = 0
rect.height = self.GetColLabelSize()
renderer = self._colRenderers.get(col, None) or \
self._defColRenderer or GridDefaultColLabelRenderer()
renderer.Draw(self, dc, rect, col)
def _onPaintCornerLabel(self, evt):
window = evt.GetEventObject()
dc = wx.PaintDC(window)
w, h = window.GetSize()
rect = wx.Rect(0, 0, w, h)
renderer = self._cornderRenderer or GridDefaultCornerLabelRenderer()
renderer.Draw(self, dc, rect, -1)
# NOTE: These helpers or something like them should probably be publicly
# available in the C++ wxGrid class, but they are currently protected so
# for now we will have to calculate them ourselves.
def _getColLeftRight(self, col):
c = 0
left = 0
while c < col:
left += self.GetColSize(c)
c += 1
right = left + self.GetColSize(col)
return left, right
def _getRowTopBottom(self, row):
r = 0
top = 0
while r < row:
top += self.GetRowSize(r)
r += 1
bottom = top + self.GetRowSize(row) - 1
return top, bottom
class GridLabelRenderer(object):
"""
Base class for row, col or corner label renderers.
"""
def Draw(self, grid, dc, rect, row_or_col):
"""
Override this method in derived classes to do the actual
drawing of the label.
"""
raise NotImplementedError
# These two can be used to duplicate the default wxGrid label drawing
def DrawBorder(self, grid, dc, rect):
"""
Draw a standard border around the label, to give a simple 3D
effect like the stock wx.grid.Grid labels do.
"""
top = rect.top
bottom = rect.bottom
left = rect.left
right = rect.right
dc.SetPen(wx.Pen(wx.SystemSettings.GetColour(wx.SYS_COLOUR_3DSHADOW)))
dc.DrawLine(right, top, right, bottom)
dc.DrawLine(left, top, left, bottom)
dc.DrawLine(left, bottom, right, bottom)
dc.SetPen(wx.WHITE_PEN)
dc.DrawLine(left+1, top, left+1, bottom)
dc.DrawLine(left+1, top, right, top)
def DrawText(self, grid, dc, rect, text, hAlign, vAlign):
"""
Draw the label's text in the rectangle, using the alignment
flags, and the grid's specified label font and color.
"""
dc.SetBackgroundMode(wx.TRANSPARENT)
dc.SetTextForeground(grid.GetLabelTextColour())
dc.SetFont(grid.GetLabelFont())
rect = wx.Rect(*rect)
rect.Deflate(2,2)
grid.DrawTextRectangle(dc, text, rect, hAlign, vAlign)
# These classes draw approximately the same things that the built-in
# label windows do in C++, but are adapted to fit into this label
# renderer scheme.
class GridDefaultRowLabelRenderer(GridLabelRenderer):
def Draw(self, grid, dc, rect, row):
hAlign, vAlign = grid.GetRowLabelAlignment()
text = grid.GetRowLabelValue(row)
self.DrawBorder(grid, dc, rect)
self.DrawText(grid, dc, rect, text, hAlign, vAlign)
class GridDefaultColLabelRenderer(GridLabelRenderer):
def Draw(self, grid, dc, rect, col):
hAlign, vAlign = grid.GetColLabelAlignment()
text = grid.GetColLabelValue(col)
self.DrawBorder(grid, dc, rect)
self.DrawText(grid, dc, rect, text, hAlign, vAlign)
class GridDefaultCornerLabelRenderer(GridLabelRenderer):
def Draw(self, grid, dc, rect, row_or_col):
self.DrawBorder(grid, dc, rect)
#---------------------------------------------------------------------------
|
unknown
|
codeparrot/codeparrot-clean
| ||
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by conversion-gen. DO NOT EDIT.
package v1
import (
unsafe "unsafe"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
conversion "k8s.io/apimachinery/pkg/conversion"
runtime "k8s.io/apimachinery/pkg/runtime"
rbac "k8s.io/kubernetes/pkg/apis/rbac"
)
func init() {
localSchemeBuilder.Register(RegisterConversions)
}
// RegisterConversions adds conversion functions to the given scheme.
// Public to allow building arbitrary schemes.
func RegisterConversions(s *runtime.Scheme) error {
if err := s.AddGeneratedConversionFunc((*rbacv1.AggregationRule)(nil), (*rbac.AggregationRule)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_v1_AggregationRule_To_rbac_AggregationRule(a.(*rbacv1.AggregationRule), b.(*rbac.AggregationRule), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbac.AggregationRule)(nil), (*rbacv1.AggregationRule)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_rbac_AggregationRule_To_v1_AggregationRule(a.(*rbac.AggregationRule), b.(*rbacv1.AggregationRule), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbacv1.ClusterRole)(nil), (*rbac.ClusterRole)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_v1_ClusterRole_To_rbac_ClusterRole(a.(*rbacv1.ClusterRole), b.(*rbac.ClusterRole), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbac.ClusterRole)(nil), (*rbacv1.ClusterRole)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_rbac_ClusterRole_To_v1_ClusterRole(a.(*rbac.ClusterRole), b.(*rbacv1.ClusterRole), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbacv1.ClusterRoleBinding)(nil), (*rbac.ClusterRoleBinding)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_v1_ClusterRoleBinding_To_rbac_ClusterRoleBinding(a.(*rbacv1.ClusterRoleBinding), b.(*rbac.ClusterRoleBinding), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbac.ClusterRoleBinding)(nil), (*rbacv1.ClusterRoleBinding)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_rbac_ClusterRoleBinding_To_v1_ClusterRoleBinding(a.(*rbac.ClusterRoleBinding), b.(*rbacv1.ClusterRoleBinding), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbacv1.ClusterRoleBindingList)(nil), (*rbac.ClusterRoleBindingList)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_v1_ClusterRoleBindingList_To_rbac_ClusterRoleBindingList(a.(*rbacv1.ClusterRoleBindingList), b.(*rbac.ClusterRoleBindingList), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbac.ClusterRoleBindingList)(nil), (*rbacv1.ClusterRoleBindingList)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_rbac_ClusterRoleBindingList_To_v1_ClusterRoleBindingList(a.(*rbac.ClusterRoleBindingList), b.(*rbacv1.ClusterRoleBindingList), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbacv1.ClusterRoleList)(nil), (*rbac.ClusterRoleList)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_v1_ClusterRoleList_To_rbac_ClusterRoleList(a.(*rbacv1.ClusterRoleList), b.(*rbac.ClusterRoleList), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbac.ClusterRoleList)(nil), (*rbacv1.ClusterRoleList)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_rbac_ClusterRoleList_To_v1_ClusterRoleList(a.(*rbac.ClusterRoleList), b.(*rbacv1.ClusterRoleList), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbacv1.PolicyRule)(nil), (*rbac.PolicyRule)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_v1_PolicyRule_To_rbac_PolicyRule(a.(*rbacv1.PolicyRule), b.(*rbac.PolicyRule), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbac.PolicyRule)(nil), (*rbacv1.PolicyRule)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_rbac_PolicyRule_To_v1_PolicyRule(a.(*rbac.PolicyRule), b.(*rbacv1.PolicyRule), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbacv1.Role)(nil), (*rbac.Role)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_v1_Role_To_rbac_Role(a.(*rbacv1.Role), b.(*rbac.Role), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbac.Role)(nil), (*rbacv1.Role)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_rbac_Role_To_v1_Role(a.(*rbac.Role), b.(*rbacv1.Role), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbacv1.RoleBinding)(nil), (*rbac.RoleBinding)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_v1_RoleBinding_To_rbac_RoleBinding(a.(*rbacv1.RoleBinding), b.(*rbac.RoleBinding), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbac.RoleBinding)(nil), (*rbacv1.RoleBinding)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_rbac_RoleBinding_To_v1_RoleBinding(a.(*rbac.RoleBinding), b.(*rbacv1.RoleBinding), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbacv1.RoleBindingList)(nil), (*rbac.RoleBindingList)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_v1_RoleBindingList_To_rbac_RoleBindingList(a.(*rbacv1.RoleBindingList), b.(*rbac.RoleBindingList), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbac.RoleBindingList)(nil), (*rbacv1.RoleBindingList)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_rbac_RoleBindingList_To_v1_RoleBindingList(a.(*rbac.RoleBindingList), b.(*rbacv1.RoleBindingList), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbacv1.RoleList)(nil), (*rbac.RoleList)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_v1_RoleList_To_rbac_RoleList(a.(*rbacv1.RoleList), b.(*rbac.RoleList), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbac.RoleList)(nil), (*rbacv1.RoleList)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_rbac_RoleList_To_v1_RoleList(a.(*rbac.RoleList), b.(*rbacv1.RoleList), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbacv1.RoleRef)(nil), (*rbac.RoleRef)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_v1_RoleRef_To_rbac_RoleRef(a.(*rbacv1.RoleRef), b.(*rbac.RoleRef), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbac.RoleRef)(nil), (*rbacv1.RoleRef)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_rbac_RoleRef_To_v1_RoleRef(a.(*rbac.RoleRef), b.(*rbacv1.RoleRef), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbacv1.Subject)(nil), (*rbac.Subject)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_v1_Subject_To_rbac_Subject(a.(*rbacv1.Subject), b.(*rbac.Subject), scope)
}); err != nil {
return err
}
if err := s.AddGeneratedConversionFunc((*rbac.Subject)(nil), (*rbacv1.Subject)(nil), func(a, b interface{}, scope conversion.Scope) error {
return Convert_rbac_Subject_To_v1_Subject(a.(*rbac.Subject), b.(*rbacv1.Subject), scope)
}); err != nil {
return err
}
return nil
}
func autoConvert_v1_AggregationRule_To_rbac_AggregationRule(in *rbacv1.AggregationRule, out *rbac.AggregationRule, s conversion.Scope) error {
out.ClusterRoleSelectors = *(*[]metav1.LabelSelector)(unsafe.Pointer(&in.ClusterRoleSelectors))
return nil
}
// Convert_v1_AggregationRule_To_rbac_AggregationRule is an autogenerated conversion function.
func Convert_v1_AggregationRule_To_rbac_AggregationRule(in *rbacv1.AggregationRule, out *rbac.AggregationRule, s conversion.Scope) error {
return autoConvert_v1_AggregationRule_To_rbac_AggregationRule(in, out, s)
}
func autoConvert_rbac_AggregationRule_To_v1_AggregationRule(in *rbac.AggregationRule, out *rbacv1.AggregationRule, s conversion.Scope) error {
out.ClusterRoleSelectors = *(*[]metav1.LabelSelector)(unsafe.Pointer(&in.ClusterRoleSelectors))
return nil
}
// Convert_rbac_AggregationRule_To_v1_AggregationRule is an autogenerated conversion function.
func Convert_rbac_AggregationRule_To_v1_AggregationRule(in *rbac.AggregationRule, out *rbacv1.AggregationRule, s conversion.Scope) error {
return autoConvert_rbac_AggregationRule_To_v1_AggregationRule(in, out, s)
}
func autoConvert_v1_ClusterRole_To_rbac_ClusterRole(in *rbacv1.ClusterRole, out *rbac.ClusterRole, s conversion.Scope) error {
out.ObjectMeta = in.ObjectMeta
out.Rules = *(*[]rbac.PolicyRule)(unsafe.Pointer(&in.Rules))
out.AggregationRule = (*rbac.AggregationRule)(unsafe.Pointer(in.AggregationRule))
return nil
}
// Convert_v1_ClusterRole_To_rbac_ClusterRole is an autogenerated conversion function.
func Convert_v1_ClusterRole_To_rbac_ClusterRole(in *rbacv1.ClusterRole, out *rbac.ClusterRole, s conversion.Scope) error {
return autoConvert_v1_ClusterRole_To_rbac_ClusterRole(in, out, s)
}
func autoConvert_rbac_ClusterRole_To_v1_ClusterRole(in *rbac.ClusterRole, out *rbacv1.ClusterRole, s conversion.Scope) error {
out.ObjectMeta = in.ObjectMeta
out.Rules = *(*[]rbacv1.PolicyRule)(unsafe.Pointer(&in.Rules))
out.AggregationRule = (*rbacv1.AggregationRule)(unsafe.Pointer(in.AggregationRule))
return nil
}
// Convert_rbac_ClusterRole_To_v1_ClusterRole is an autogenerated conversion function.
func Convert_rbac_ClusterRole_To_v1_ClusterRole(in *rbac.ClusterRole, out *rbacv1.ClusterRole, s conversion.Scope) error {
return autoConvert_rbac_ClusterRole_To_v1_ClusterRole(in, out, s)
}
func autoConvert_v1_ClusterRoleBinding_To_rbac_ClusterRoleBinding(in *rbacv1.ClusterRoleBinding, out *rbac.ClusterRoleBinding, s conversion.Scope) error {
out.ObjectMeta = in.ObjectMeta
out.Subjects = *(*[]rbac.Subject)(unsafe.Pointer(&in.Subjects))
if err := Convert_v1_RoleRef_To_rbac_RoleRef(&in.RoleRef, &out.RoleRef, s); err != nil {
return err
}
return nil
}
// Convert_v1_ClusterRoleBinding_To_rbac_ClusterRoleBinding is an autogenerated conversion function.
func Convert_v1_ClusterRoleBinding_To_rbac_ClusterRoleBinding(in *rbacv1.ClusterRoleBinding, out *rbac.ClusterRoleBinding, s conversion.Scope) error {
return autoConvert_v1_ClusterRoleBinding_To_rbac_ClusterRoleBinding(in, out, s)
}
func autoConvert_rbac_ClusterRoleBinding_To_v1_ClusterRoleBinding(in *rbac.ClusterRoleBinding, out *rbacv1.ClusterRoleBinding, s conversion.Scope) error {
out.ObjectMeta = in.ObjectMeta
out.Subjects = *(*[]rbacv1.Subject)(unsafe.Pointer(&in.Subjects))
if err := Convert_rbac_RoleRef_To_v1_RoleRef(&in.RoleRef, &out.RoleRef, s); err != nil {
return err
}
return nil
}
// Convert_rbac_ClusterRoleBinding_To_v1_ClusterRoleBinding is an autogenerated conversion function.
func Convert_rbac_ClusterRoleBinding_To_v1_ClusterRoleBinding(in *rbac.ClusterRoleBinding, out *rbacv1.ClusterRoleBinding, s conversion.Scope) error {
return autoConvert_rbac_ClusterRoleBinding_To_v1_ClusterRoleBinding(in, out, s)
}
func autoConvert_v1_ClusterRoleBindingList_To_rbac_ClusterRoleBindingList(in *rbacv1.ClusterRoleBindingList, out *rbac.ClusterRoleBindingList, s conversion.Scope) error {
out.ListMeta = in.ListMeta
out.Items = *(*[]rbac.ClusterRoleBinding)(unsafe.Pointer(&in.Items))
return nil
}
// Convert_v1_ClusterRoleBindingList_To_rbac_ClusterRoleBindingList is an autogenerated conversion function.
func Convert_v1_ClusterRoleBindingList_To_rbac_ClusterRoleBindingList(in *rbacv1.ClusterRoleBindingList, out *rbac.ClusterRoleBindingList, s conversion.Scope) error {
return autoConvert_v1_ClusterRoleBindingList_To_rbac_ClusterRoleBindingList(in, out, s)
}
func autoConvert_rbac_ClusterRoleBindingList_To_v1_ClusterRoleBindingList(in *rbac.ClusterRoleBindingList, out *rbacv1.ClusterRoleBindingList, s conversion.Scope) error {
out.ListMeta = in.ListMeta
out.Items = *(*[]rbacv1.ClusterRoleBinding)(unsafe.Pointer(&in.Items))
return nil
}
// Convert_rbac_ClusterRoleBindingList_To_v1_ClusterRoleBindingList is an autogenerated conversion function.
func Convert_rbac_ClusterRoleBindingList_To_v1_ClusterRoleBindingList(in *rbac.ClusterRoleBindingList, out *rbacv1.ClusterRoleBindingList, s conversion.Scope) error {
return autoConvert_rbac_ClusterRoleBindingList_To_v1_ClusterRoleBindingList(in, out, s)
}
func autoConvert_v1_ClusterRoleList_To_rbac_ClusterRoleList(in *rbacv1.ClusterRoleList, out *rbac.ClusterRoleList, s conversion.Scope) error {
out.ListMeta = in.ListMeta
out.Items = *(*[]rbac.ClusterRole)(unsafe.Pointer(&in.Items))
return nil
}
// Convert_v1_ClusterRoleList_To_rbac_ClusterRoleList is an autogenerated conversion function.
func Convert_v1_ClusterRoleList_To_rbac_ClusterRoleList(in *rbacv1.ClusterRoleList, out *rbac.ClusterRoleList, s conversion.Scope) error {
return autoConvert_v1_ClusterRoleList_To_rbac_ClusterRoleList(in, out, s)
}
func autoConvert_rbac_ClusterRoleList_To_v1_ClusterRoleList(in *rbac.ClusterRoleList, out *rbacv1.ClusterRoleList, s conversion.Scope) error {
out.ListMeta = in.ListMeta
out.Items = *(*[]rbacv1.ClusterRole)(unsafe.Pointer(&in.Items))
return nil
}
// Convert_rbac_ClusterRoleList_To_v1_ClusterRoleList is an autogenerated conversion function.
func Convert_rbac_ClusterRoleList_To_v1_ClusterRoleList(in *rbac.ClusterRoleList, out *rbacv1.ClusterRoleList, s conversion.Scope) error {
return autoConvert_rbac_ClusterRoleList_To_v1_ClusterRoleList(in, out, s)
}
func autoConvert_v1_PolicyRule_To_rbac_PolicyRule(in *rbacv1.PolicyRule, out *rbac.PolicyRule, s conversion.Scope) error {
out.Verbs = *(*[]string)(unsafe.Pointer(&in.Verbs))
out.APIGroups = *(*[]string)(unsafe.Pointer(&in.APIGroups))
out.Resources = *(*[]string)(unsafe.Pointer(&in.Resources))
out.ResourceNames = *(*[]string)(unsafe.Pointer(&in.ResourceNames))
out.NonResourceURLs = *(*[]string)(unsafe.Pointer(&in.NonResourceURLs))
return nil
}
// Convert_v1_PolicyRule_To_rbac_PolicyRule is an autogenerated conversion function.
func Convert_v1_PolicyRule_To_rbac_PolicyRule(in *rbacv1.PolicyRule, out *rbac.PolicyRule, s conversion.Scope) error {
return autoConvert_v1_PolicyRule_To_rbac_PolicyRule(in, out, s)
}
func autoConvert_rbac_PolicyRule_To_v1_PolicyRule(in *rbac.PolicyRule, out *rbacv1.PolicyRule, s conversion.Scope) error {
out.Verbs = *(*[]string)(unsafe.Pointer(&in.Verbs))
out.APIGroups = *(*[]string)(unsafe.Pointer(&in.APIGroups))
out.Resources = *(*[]string)(unsafe.Pointer(&in.Resources))
out.ResourceNames = *(*[]string)(unsafe.Pointer(&in.ResourceNames))
out.NonResourceURLs = *(*[]string)(unsafe.Pointer(&in.NonResourceURLs))
return nil
}
// Convert_rbac_PolicyRule_To_v1_PolicyRule is an autogenerated conversion function.
func Convert_rbac_PolicyRule_To_v1_PolicyRule(in *rbac.PolicyRule, out *rbacv1.PolicyRule, s conversion.Scope) error {
return autoConvert_rbac_PolicyRule_To_v1_PolicyRule(in, out, s)
}
func autoConvert_v1_Role_To_rbac_Role(in *rbacv1.Role, out *rbac.Role, s conversion.Scope) error {
out.ObjectMeta = in.ObjectMeta
out.Rules = *(*[]rbac.PolicyRule)(unsafe.Pointer(&in.Rules))
return nil
}
// Convert_v1_Role_To_rbac_Role is an autogenerated conversion function.
func Convert_v1_Role_To_rbac_Role(in *rbacv1.Role, out *rbac.Role, s conversion.Scope) error {
return autoConvert_v1_Role_To_rbac_Role(in, out, s)
}
func autoConvert_rbac_Role_To_v1_Role(in *rbac.Role, out *rbacv1.Role, s conversion.Scope) error {
out.ObjectMeta = in.ObjectMeta
out.Rules = *(*[]rbacv1.PolicyRule)(unsafe.Pointer(&in.Rules))
return nil
}
// Convert_rbac_Role_To_v1_Role is an autogenerated conversion function.
func Convert_rbac_Role_To_v1_Role(in *rbac.Role, out *rbacv1.Role, s conversion.Scope) error {
return autoConvert_rbac_Role_To_v1_Role(in, out, s)
}
func autoConvert_v1_RoleBinding_To_rbac_RoleBinding(in *rbacv1.RoleBinding, out *rbac.RoleBinding, s conversion.Scope) error {
out.ObjectMeta = in.ObjectMeta
out.Subjects = *(*[]rbac.Subject)(unsafe.Pointer(&in.Subjects))
if err := Convert_v1_RoleRef_To_rbac_RoleRef(&in.RoleRef, &out.RoleRef, s); err != nil {
return err
}
return nil
}
// Convert_v1_RoleBinding_To_rbac_RoleBinding is an autogenerated conversion function.
func Convert_v1_RoleBinding_To_rbac_RoleBinding(in *rbacv1.RoleBinding, out *rbac.RoleBinding, s conversion.Scope) error {
return autoConvert_v1_RoleBinding_To_rbac_RoleBinding(in, out, s)
}
func autoConvert_rbac_RoleBinding_To_v1_RoleBinding(in *rbac.RoleBinding, out *rbacv1.RoleBinding, s conversion.Scope) error {
out.ObjectMeta = in.ObjectMeta
out.Subjects = *(*[]rbacv1.Subject)(unsafe.Pointer(&in.Subjects))
if err := Convert_rbac_RoleRef_To_v1_RoleRef(&in.RoleRef, &out.RoleRef, s); err != nil {
return err
}
return nil
}
// Convert_rbac_RoleBinding_To_v1_RoleBinding is an autogenerated conversion function.
func Convert_rbac_RoleBinding_To_v1_RoleBinding(in *rbac.RoleBinding, out *rbacv1.RoleBinding, s conversion.Scope) error {
return autoConvert_rbac_RoleBinding_To_v1_RoleBinding(in, out, s)
}
func autoConvert_v1_RoleBindingList_To_rbac_RoleBindingList(in *rbacv1.RoleBindingList, out *rbac.RoleBindingList, s conversion.Scope) error {
out.ListMeta = in.ListMeta
out.Items = *(*[]rbac.RoleBinding)(unsafe.Pointer(&in.Items))
return nil
}
// Convert_v1_RoleBindingList_To_rbac_RoleBindingList is an autogenerated conversion function.
func Convert_v1_RoleBindingList_To_rbac_RoleBindingList(in *rbacv1.RoleBindingList, out *rbac.RoleBindingList, s conversion.Scope) error {
return autoConvert_v1_RoleBindingList_To_rbac_RoleBindingList(in, out, s)
}
func autoConvert_rbac_RoleBindingList_To_v1_RoleBindingList(in *rbac.RoleBindingList, out *rbacv1.RoleBindingList, s conversion.Scope) error {
out.ListMeta = in.ListMeta
out.Items = *(*[]rbacv1.RoleBinding)(unsafe.Pointer(&in.Items))
return nil
}
// Convert_rbac_RoleBindingList_To_v1_RoleBindingList is an autogenerated conversion function.
func Convert_rbac_RoleBindingList_To_v1_RoleBindingList(in *rbac.RoleBindingList, out *rbacv1.RoleBindingList, s conversion.Scope) error {
return autoConvert_rbac_RoleBindingList_To_v1_RoleBindingList(in, out, s)
}
func autoConvert_v1_RoleList_To_rbac_RoleList(in *rbacv1.RoleList, out *rbac.RoleList, s conversion.Scope) error {
out.ListMeta = in.ListMeta
out.Items = *(*[]rbac.Role)(unsafe.Pointer(&in.Items))
return nil
}
// Convert_v1_RoleList_To_rbac_RoleList is an autogenerated conversion function.
func Convert_v1_RoleList_To_rbac_RoleList(in *rbacv1.RoleList, out *rbac.RoleList, s conversion.Scope) error {
return autoConvert_v1_RoleList_To_rbac_RoleList(in, out, s)
}
func autoConvert_rbac_RoleList_To_v1_RoleList(in *rbac.RoleList, out *rbacv1.RoleList, s conversion.Scope) error {
out.ListMeta = in.ListMeta
out.Items = *(*[]rbacv1.Role)(unsafe.Pointer(&in.Items))
return nil
}
// Convert_rbac_RoleList_To_v1_RoleList is an autogenerated conversion function.
func Convert_rbac_RoleList_To_v1_RoleList(in *rbac.RoleList, out *rbacv1.RoleList, s conversion.Scope) error {
return autoConvert_rbac_RoleList_To_v1_RoleList(in, out, s)
}
func autoConvert_v1_RoleRef_To_rbac_RoleRef(in *rbacv1.RoleRef, out *rbac.RoleRef, s conversion.Scope) error {
out.APIGroup = in.APIGroup
out.Kind = in.Kind
out.Name = in.Name
return nil
}
// Convert_v1_RoleRef_To_rbac_RoleRef is an autogenerated conversion function.
func Convert_v1_RoleRef_To_rbac_RoleRef(in *rbacv1.RoleRef, out *rbac.RoleRef, s conversion.Scope) error {
return autoConvert_v1_RoleRef_To_rbac_RoleRef(in, out, s)
}
func autoConvert_rbac_RoleRef_To_v1_RoleRef(in *rbac.RoleRef, out *rbacv1.RoleRef, s conversion.Scope) error {
out.APIGroup = in.APIGroup
out.Kind = in.Kind
out.Name = in.Name
return nil
}
// Convert_rbac_RoleRef_To_v1_RoleRef is an autogenerated conversion function.
func Convert_rbac_RoleRef_To_v1_RoleRef(in *rbac.RoleRef, out *rbacv1.RoleRef, s conversion.Scope) error {
return autoConvert_rbac_RoleRef_To_v1_RoleRef(in, out, s)
}
func autoConvert_v1_Subject_To_rbac_Subject(in *rbacv1.Subject, out *rbac.Subject, s conversion.Scope) error {
out.Kind = in.Kind
out.APIGroup = in.APIGroup
out.Name = in.Name
out.Namespace = in.Namespace
return nil
}
// Convert_v1_Subject_To_rbac_Subject is an autogenerated conversion function.
func Convert_v1_Subject_To_rbac_Subject(in *rbacv1.Subject, out *rbac.Subject, s conversion.Scope) error {
return autoConvert_v1_Subject_To_rbac_Subject(in, out, s)
}
func autoConvert_rbac_Subject_To_v1_Subject(in *rbac.Subject, out *rbacv1.Subject, s conversion.Scope) error {
out.Kind = in.Kind
out.APIGroup = in.APIGroup
out.Name = in.Name
out.Namespace = in.Namespace
return nil
}
// Convert_rbac_Subject_To_v1_Subject is an autogenerated conversion function.
func Convert_rbac_Subject_To_v1_Subject(in *rbac.Subject, out *rbacv1.Subject, s conversion.Scope) error {
return autoConvert_rbac_Subject_To_v1_Subject(in, out, s)
}
|
go
|
github
|
https://github.com/kubernetes/kubernetes
|
pkg/apis/rbac/v1/zz_generated.conversion.go
|
from __future__ import absolute_import
from django.conf import settings
from django.core.cache import cache
from functools import partial
from twitter import Twitter, OAuth, TwitterHTTPError
from twitter.stream import TwitterStream
from urlparse import parse_qs
import json
import re
from utils.datatools import chunk
from logging import getLogger
log = getLogger(__name__)
# ============================================================
# Exceptions
# ============================================================
class SocialMediaException (Exception):
pass
# ============================================================
# The user information services
# ============================================================
class BaseService (object):
# ==================================================================
# User specific cache keys
# ==================================================================
def get_user_cache_key_prefix(self, user):
return 'user-%s' % user.pk
def get_user_cache_key(self, user, extra):
return ':'.join([self.get_user_cache_key_prefix(user), extra])
# ==================================================================
# User specific info
# ==================================================================
def get_avatar_url(self, user, on_behalf_of):
user_info = self.get_user_info(user, on_behalf_of)
return user_info['avatar_url']
def get_full_name(self, user, on_behalf_of):
user_info = self.get_user_info(user, on_behalf_of)
return user_info['full_name']
def get_bio(self, user, on_behalf_of):
user_info = self.get_user_info(user, on_behalf_of)
return user_info['bio']
# ==================================================================
# User-specific info, from the database, used for authenticating
# against Twitter on behalf of a specific user
# ==================================================================
def get_social_auth(self, user):
cache_key = self.get_user_cache_key(user, 'db-object')
social_auth = cache.get(cache_key)
if social_auth is None:
if hasattr(user, 'preferred_provider') and user.preferred_provider:
try:
social_auth = user.social_auth.get(provider=user.preferred_provider)
except Exception as e:
raise SocialMediaException(
'Could not get the social media account for user %s -- %s: %s'
% (user, type(e), e))
else:
try:
# Assume the first one is the one we want
social_auth = user.social_auth.all()[0]
except IndexError:
# If we don't have any, just return empty
raise SocialMediaException(
'User %s is not authenticated with a social media account'
% (user,))
# Cache for just long enough to complete the current batch of
# lookups without having to hit the DB again.
cache.set(cache_key, social_auth.pk, 60)
else:
social_auth = user.social_auth.get(pk=social_auth)
return social_auth
def get_user_id(self, user):
social_auth = self.get_social_auth(user)
return social_auth.uid
def get_preferred_provider(self, user):
social_auth = self.get_social_auth(user)
return social_auth.provider
class CacheService (BaseService):
provider = 'cache'
@classmethod
def get_social_media_service(cls, provider):
if provider == 'twitter':
return TwitterService()
elif provider == 'facebook':
return FacebookService()
elif provider == 'publicstuff':
return PublicStuffService()
else:
raise SocialMediaException(
'Invalid social media account provider: %r' % (provider,))
# ==================================================================
# User specific info
# ==================================================================
def get_user_info(self, user, on_behalf_of=None):
cache_key = self.get_user_cache_key(user, 'info')
info = cache.get(cache_key)
if info is None:
provider = self.get_preferred_provider(user)
service = self.get_social_media_service(provider)
raw_info = service.get_user_info(user, on_behalf_of)
info = {
'avatar_url': service.extract_avatar_url(raw_info),
'full_name': service.extract_full_name(raw_info),
'bio': service.extract_bio(raw_info)
}
cache.set(cache_key, info)
return info
#
# Meta-programming for the specific extract_... methods
#
def extract_user_attr(self, attr, user_info):
return user_info[attr]
def __getattr__(self, name):
if name.startswith('extract_'):
skip = len('extract_')
attr = name[skip:]
if attr in ('avatar_url', 'full_name', 'bio'):
return partial(self.extract_user_attr, name[skip:])
raise AttributeError(name)
# ============================================================
# The Twitter service
# ============================================================
class TwitterService (BaseService):
provider = 'twitter'
# ==================================================================
# General Twitter info, cached
# ==================================================================
def get_config_cache_key(self):
return 'twitter-config'
def get_config(self, on_behalf_of=None):
cache_key = self.get_config_cache_key()
config = cache.get(cache_key)
if config is None:
t = self.get_api(on_behalf_of)
config = t.help.configuration()
config = dict(config.items())
cache.set(cache_key, config)
return config
def get_url_length(self, url, on_behalf_of=None):
config = self.get_config(on_behalf_of)
if url.startswith('http://localhost') or url.startswith('http://127.0.0.1'):
return len(url)
if url.startswith('https'):
return config['short_url_length_https']
else:
return config['short_url_length']
# ==================================================================
# User specific info, from Twitter
# ==================================================================
def get_user_info(self, user, on_behalf_of=None):
user_id = self.get_user_id(user)
log_string = (
'\n'
'============================================================\n'
'Hitting the API for %s to get info on %s (%s)\n'
'============================================================\n'
) % (
on_behalf_of.username if on_behalf_of else 'the app',
user.username, user_id
)
log.info(log_string)
t = self.get_api(on_behalf_of)
raw_info = t.users.show(user_id=user_id)
return raw_info # raw_info is a WrappedTwitterResponse
def get_users_info(self, users, on_behalf_of=None):
# Build a mapping from cache_key => user_id
data = {}
for user in users:
try:
cache_key = self.get_user_cache_key(user, 'info')
user_id = self.get_user_id(user)
data[cache_key] = user_id
except SocialMediaException as e:
log.warning(e)
pass
# Build a reverse mapping from user_id => cache_key
reverse_data = dict([
(user_id, cache_key)
for cache_key, user_id in data.items()
])
# Get all the user info that is currently cached for the given users
all_info = cache.get_many(data.keys())
# Build a list of keys that have no cached data
uncached_keys = filter(lambda key: key not in all_info, data.keys())
if uncached_keys:
log_string = (
'\n'
'============================================================\n'
'Hitting the API for %s to get info on %s user(s)\n'
'IDs: %s\n'
'============================================================\n'
) % (
on_behalf_of.username if on_behalf_of else 'the app',
len(uncached_keys),
','.join([str(data[k]) for k in uncached_keys])
)
log.info(log_string)
# If there are uncached keys, fetch the user info for those users
# in chunks of 100
t = self.get_api(on_behalf_of)
user_ids = [data[key] for key in uncached_keys]
new_info = {}
for id_group in chunk(user_ids, 100):
bulk_info = t.users.lookup(user_id=','.join([str(user_id) for user_id in id_group]))
for info in bulk_info:
cache_key = reverse_data[str(info['id'])]
new_info[cache_key] = info
# Store any new information gotten in the cache
cache.set_many(new_info)
# Add the new info to the already cached info
all_info.update(new_info)
return all_info.values()
def extract_avatar_url(self, user_info):
url = user_info['profile_image_url']
url_pattern = r'^(?P<path>.*?)(?:_normal|_mini|_bigger|)(?P<ext>(?:\.[^\/]*)?)$'
match = re.match(url_pattern, url)
if match:
return match.group('path') + '_bigger' + match.group('ext')
else:
return url
def extract_full_name(self, user_info):
return user_info['name']
def extract_bio(self, user_info):
return user_info['description']
# ==================================================================
# User-specific info, from the database, used for authenticating
# against Twitter on behalf of a specific user
# ==================================================================
def get_user_oauth(self, user):
social_auth = self.get_social_auth(user)
if social_auth.provider == 'twitter':
extra_data = social_auth.extra_data
access_token = parse_qs(extra_data['access_token'])
else:
raise SocialMediaException(
('Can\'t get info for a user authenticated with a %r '
'provider') % social_auth.provider
)
oauth_args = (
access_token['oauth_token'][0],
access_token['oauth_token_secret'][0],
settings.TWITTER_CONSUMER_KEY,
settings.TWITTER_CONSUMER_SECRET
)
return OAuth(*oauth_args)
# ==================================================================
# App-specific info, from the database, used for authenticating
# against Twitter on behalf of the app
# ==================================================================
def get_app_oauth(self):
return OAuth(
settings.TWITTER_ACCESS_TOKEN,
settings.TWITTER_ACCESS_SECRET,
settings.TWITTER_CONSUMER_KEY,
settings.TWITTER_CONSUMER_SECRET,
)
def get_api(self, on_behalf_of=None):
# If user is None, tweet from the app's account
if on_behalf_of is None:
oauth = self.get_app_oauth()
# Otherwise, tweet from the user's twitter account
else:
oauth = self.get_user_oauth(on_behalf_of)
return Twitter(auth=oauth)
def get_stream(self, on_behalf_of=None):
# If user is None, tweet from the app's account
if on_behalf_of is None:
oauth = self.get_app_oauth()
# Otherwise, tweet from the user's twitter account
else:
oauth = self.get_user_oauth(on_behalf_of)
return TwitterStream(auth=oauth)
# ==================================================================
# Twitter actions
# ==================================================================
def tweet(self, text, on_behalf_of=None, **extra):
t = self.get_api(on_behalf_of)
try:
return True, t.statuses.update(status=text, **extra)
except TwitterHTTPError as e:
return False, e.response_data
def add_favorite(self, on_behalf_of, tweet_id, **extra):
t = self.get_api(on_behalf_of)
try:
return True, t.favorites.create(_id=tweet_id, **extra)
except TwitterHTTPError as e:
return False, e.response_data
def remove_favorite(self, on_behalf_of, tweet_id, **extra):
t = self.get_api(on_behalf_of)
try:
return True, t.favorites.destroy(_id=tweet_id, **extra)
except TwitterHTTPError as e:
return False, e.response_data
def retweet(self, tweet_id, on_behalf_of, **extra):
t = self.get_api(on_behalf_of)
try:
return True, t.statuses.retweet(id=tweet_id, **extra)
except TwitterHTTPError as e:
return False, e.response_data
#
# Streaming
#
def itertweets(self, on_behalf_of=None, **extra):
s = self.get_stream(on_behalf_of)
return s.statuses.filter(**extra)
# ============================================================
# The Facebook service
# ============================================================
class FacebookService (BaseService):
provider = 'facebook'
def extract_avatar_url(self, user_info):
url = user_info['picture']['data']['url']
return url
def extract_full_name(self, user_info):
return user_info['name']
def extract_bio(self, user_info):
return user_info['bio']
# ============================================================
# The PublicStuff
# ============================================================
class PublicStuffService (BaseService):
provider = 'publicstuff'
def extract_avatar_url(self, user_info):
return user_info.get('image', '')
def extract_full_name(self, user_info):
return ' '.join(n for n in [user_info['firstname'] or '', user_info['lastname']] if n)
def extract_bio(self, user_info):
return ''
default_twitter_service = TwitterService()
|
unknown
|
codeparrot/codeparrot-clean
| ||
from django.db.models.fields.related import (
RECURSIVE_RELATIONSHIP_CONSTANT, ManyToManyField, ManyToManyRel,
RelatedField, ReverseManyRelatedObjectsDescriptor,
create_many_to_many_intermediary_model,
)
from django.utils.functional import curry
class CustomManyToManyField(RelatedField):
"""
Ticket #24104 - Need to have a custom ManyToManyField,
which is not an inheritor of ManyToManyField.
"""
many_to_many = True
def __init__(self, to, db_constraint=True, swappable=True, **kwargs):
try:
to._meta
except AttributeError:
to = str(to)
kwargs['rel'] = ManyToManyRel(
self, to,
related_name=kwargs.pop('related_name', None),
related_query_name=kwargs.pop('related_query_name', None),
limit_choices_to=kwargs.pop('limit_choices_to', None),
symmetrical=kwargs.pop('symmetrical', to == RECURSIVE_RELATIONSHIP_CONSTANT),
through=kwargs.pop('through', None),
through_fields=kwargs.pop('through_fields', None),
db_constraint=db_constraint,
)
self.swappable = swappable
self.db_table = kwargs.pop('db_table', None)
if kwargs['rel'].through is not None:
assert self.db_table is None, "Cannot specify a db_table if an intermediary model is used."
super(CustomManyToManyField, self).__init__(**kwargs)
def contribute_to_class(self, cls, name, **kwargs):
if self.rel.symmetrical and (self.rel.to == "self" or self.rel.to == cls._meta.object_name):
self.rel.related_name = "%s_rel_+" % name
super(CustomManyToManyField, self).contribute_to_class(cls, name, **kwargs)
if not self.rel.through and not cls._meta.abstract and not cls._meta.swapped:
self.rel.through = create_many_to_many_intermediary_model(self, cls)
setattr(cls, self.name, ReverseManyRelatedObjectsDescriptor(self))
self.m2m_db_table = curry(self._get_m2m_db_table, cls._meta)
def get_internal_type(self):
return 'ManyToManyField'
# Copy those methods from ManyToManyField because they don't call super() internally
contribute_to_related_class = ManyToManyField.__dict__['contribute_to_related_class']
_get_m2m_attr = ManyToManyField.__dict__['_get_m2m_attr']
_get_m2m_reverse_attr = ManyToManyField.__dict__['_get_m2m_reverse_attr']
_get_m2m_db_table = ManyToManyField.__dict__['_get_m2m_db_table']
class InheritedManyToManyField(ManyToManyField):
pass
|
unknown
|
codeparrot/codeparrot-clean
| ||
{
"apiVersion": "v1",
"kind": "APIGroup",
"name": "apps",
"preferredVersion": {
"groupVersion": "apps/v1",
"version": "v1"
},
"versions": [
{
"groupVersion": "apps/v1",
"version": "v1"
}
]
}
|
json
|
github
|
https://github.com/kubernetes/kubernetes
|
api/discovery/apis__apps.json
|
# -*- coding: utf-8 -*-
from hachoir_core.compatibility import any, sorted
from hachoir_core.endian import endian_name
from hachoir_core.tools import makePrintable, makeUnicode
from hachoir_core.dict import Dict
from hachoir_core.error import error, HACHOIR_ERRORS
from hachoir_core.i18n import _
from hachoir_core.log import Logger
from hachoir_metadata.metadata_item import (
MIN_PRIORITY, MAX_PRIORITY, QUALITY_NORMAL)
from hachoir_metadata.register import registerAllItems
extractors = {}
class Metadata(Logger):
header = u"Metadata"
def __init__(self, parent, quality=QUALITY_NORMAL):
assert isinstance(self.header, unicode)
# Limit to 0.0 .. 1.0
if parent:
quality = parent.quality
else:
quality = min(max(0.0, quality), 1.0)
object.__init__(self)
object.__setattr__(self, "_Metadata__data", {})
object.__setattr__(self, "quality", quality)
header = self.__class__.header
object.__setattr__(self, "_Metadata__header", header)
registerAllItems(self)
def _logger(self):
pass
def __setattr__(self, key, value):
"""
Add a new value to data with name 'key'. Skip duplicates.
"""
# Invalid key?
if key not in self.__data:
raise KeyError(_("%s has no metadata '%s'") % (self.__class__.__name__, key))
# Skip duplicates
self.__data[key].add(value)
def setHeader(self, text):
object.__setattr__(self, "header", text)
def getItems(self, key):
try:
return self.__data[key]
except LookupError:
raise ValueError("Metadata has no value '%s'" % key)
def getItem(self, key, index):
try:
return self.getItems(key)[index]
except (LookupError, ValueError):
return None
def has(self, key):
return 1 <= len(self.getItems(key))
def get(self, key, default=None, index=0):
"""
Read first value of tag with name 'key'.
>>> from datetime import timedelta
>>> a = RootMetadata()
>>> a.duration = timedelta(seconds=2300)
>>> a.get('duration')
datetime.timedelta(0, 2300)
>>> a.get('author', u'Anonymous')
u'Anonymous'
"""
item = self.getItem(key, index)
if item is None:
if default is None:
raise ValueError("Metadata has no value '%s' (index %s)" % (key, index))
else:
return default
return item.value
def getValues(self, key):
try:
data = self.__data[key]
except LookupError:
raise ValueError("Metadata has no value '%s'" % key)
return [ item.value for item in data ]
def getText(self, key, default=None, index=0):
"""
Read first value, as unicode string, of tag with name 'key'.
>>> from datetime import timedelta
>>> a = RootMetadata()
>>> a.duration = timedelta(seconds=2300)
>>> a.getText('duration')
u'38 min 20 sec'
>>> a.getText('titre', u'Unknown')
u'Unknown'
"""
item = self.getItem(key, index)
if item is not None:
return item.text
else:
return default
def register(self, data):
assert data.key not in self.__data
data.metadata = self
self.__data[data.key] = data
def __iter__(self):
return self.__data.itervalues()
def __str__(self):
r"""
Create a multi-line ASCII string (end of line is "\n") which
represents all datas.
>>> a = RootMetadata()
>>> a.author = "haypo"
>>> a.copyright = unicode("© Hachoir", "UTF-8")
>>> print a
Metadata:
- Author: haypo
- Copyright: \xa9 Hachoir
@see __unicode__() and exportPlaintext()
"""
text = self.exportPlaintext()
return "\n".join( makePrintable(line, "ASCII") for line in text )
def __unicode__(self):
r"""
Create a multi-line Unicode string (end of line is "\n") which
represents all datas.
>>> a = RootMetadata()
>>> a.copyright = unicode("© Hachoir", "UTF-8")
>>> print repr(unicode(a))
u'Metadata:\n- Copyright: \xa9 Hachoir'
@see __str__() and exportPlaintext()
"""
return "\n".join(self.exportPlaintext())
def exportPlaintext(self, priority=None, human=True, line_prefix=u"- ", title=None):
r"""
Convert metadata to multi-line Unicode string and skip datas
with priority lower than specified priority.
Default priority is Metadata.MAX_PRIORITY. If human flag is True, data
key are translated to better human name (eg. "bit_rate" becomes
"Bit rate") which may be translated using gettext.
If priority is too small, metadata are empty and so None is returned.
>>> print RootMetadata().exportPlaintext()
None
>>> meta = RootMetadata()
>>> meta.copyright = unicode("© Hachoir", "UTF-8")
>>> print repr(meta.exportPlaintext())
[u'Metadata:', u'- Copyright: \xa9 Hachoir']
@see __str__() and __unicode__()
"""
if priority is not None:
priority = max(priority, MIN_PRIORITY)
priority = min(priority, MAX_PRIORITY)
else:
priority = MAX_PRIORITY
if not title:
title = self.header
text = ["%s:" % title]
for data in sorted(self):
if priority < data.priority:
break
if not data.values:
continue
if human:
title = data.description
else:
title = data.key
for item in data.values:
if human:
value = item.text
else:
value = makeUnicode(item.value)
text.append("%s%s: %s" % (line_prefix, title, value))
if 1 < len(text):
return text
else:
return None
def __nonzero__(self):
return any(item for item in self.__data.itervalues())
class RootMetadata(Metadata):
def __init__(self, quality=QUALITY_NORMAL):
Metadata.__init__(self, None, quality)
class MultipleMetadata(RootMetadata):
header = _("Common")
def __init__(self, quality=QUALITY_NORMAL):
RootMetadata.__init__(self, quality)
object.__setattr__(self, "_MultipleMetadata__groups", Dict())
object.__setattr__(self, "_MultipleMetadata__key_counter", {})
def __contains__(self, key):
return key in self.__groups
def __getitem__(self, key):
return self.__groups[key]
def iterGroups(self):
return self.__groups.itervalues()
def __nonzero__(self):
if RootMetadata.__nonzero__(self):
return True
return any(bool(group) for group in self.__groups)
def addGroup(self, key, metadata, header=None):
"""
Add a new group (metadata of a sub-document).
Returns False if the group is skipped, True if it has been added.
"""
if not metadata:
self.warning("Skip empty group %s" % key)
return False
if key.endswith("[]"):
key = key[:-2]
if key in self.__key_counter:
self.__key_counter[key] += 1
else:
self.__key_counter[key] = 1
key += "[%u]" % self.__key_counter[key]
if header:
metadata.setHeader(header)
self.__groups.append(key, metadata)
return True
def exportPlaintext(self, priority=None, human=True, line_prefix=u"- "):
common = Metadata.exportPlaintext(self, priority, human, line_prefix)
if common:
text = common
else:
text = []
for key, metadata in self.__groups.iteritems():
if not human:
title = key
else:
title = None
value = metadata.exportPlaintext(priority, human, line_prefix, title=title)
if value:
text.extend(value)
if len(text):
return text
else:
return None
def registerExtractor(parser, extractor):
assert parser not in extractors
assert issubclass(extractor, RootMetadata)
extractors[parser] = extractor
def extractMetadata(parser, quality=QUALITY_NORMAL):
"""
Create a Metadata class from a parser. Returns None if no metadata
extractor does exist for the parser class.
"""
try:
extractor = extractors[parser.__class__]
except KeyError:
return None
metadata = extractor(quality)
try:
metadata.extract(parser)
except HACHOIR_ERRORS, err:
error("Error during metadata extraction: %s" % unicode(err))
return None
except Exception, err:
error("Error during metadata extraction: %s" % unicode(err))
return None
if metadata:
metadata.mime_type = parser.mime_type
metadata.endian = endian_name[parser.endian]
return metadata
|
unknown
|
codeparrot/codeparrot-clean
| ||
"""
===================================================================
Decision Tree Regression
===================================================================
A 1D regression with decision tree.
The :ref:`decision trees <tree>` is
used to fit a sine curve with addition noisy observation. As a result, it
learns local linear regressions approximating the sine curve.
We can see that if the maximum depth of the tree (controlled by the
`max_depth` parameter) is set too high, the decision trees learn too fine
details of the training data and learn from the noise, i.e. they overfit.
"""
print(__doc__)
# Import the necessary modules and libraries
import numpy as np
from sklearn.tree import DecisionTreeRegressor
import matplotlib.pyplot as plt
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(16))
# Fit regression model
regr_1 = DecisionTreeRegressor(max_depth=2)
regr_2 = DecisionTreeRegressor(max_depth=5)
regr_1.fit(X, y)
regr_2.fit(X, y)
# Predict
X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
y_1 = regr_1.predict(X_test)
y_2 = regr_2.predict(X_test)
# Plot the results
plt.figure()
plt.scatter(X, y, c="k", label="data")
plt.plot(X_test, y_1, c="g", label="max_depth=2", linewidth=2)
plt.plot(X_test, y_2, c="r", label="max_depth=5", linewidth=2)
plt.xlabel("data")
plt.ylabel("target")
plt.title("Decision Tree Regression")
plt.legend()
plt.show()
|
unknown
|
codeparrot/codeparrot-clean
| ||
from __future__ import annotations
import numbers
from typing import (
TYPE_CHECKING,
ClassVar,
Self,
cast,
)
import numpy as np
from pandas._libs import (
lib,
missing as libmissing,
)
from pandas.util._decorators import set_module
from pandas.core.dtypes.common import is_list_like
from pandas.core.dtypes.dtypes import register_extension_dtype
from pandas.core.dtypes.missing import isna
from pandas.core import ops
from pandas.core.array_algos import masked_accumulations
from pandas.core.arrays.masked import (
BaseMaskedArray,
BaseMaskedDtype,
)
if TYPE_CHECKING:
import pyarrow
from pandas._typing import (
DtypeObj,
npt,
type_t,
)
from pandas.core.dtypes.dtypes import ExtensionDtype
@register_extension_dtype
@set_module("pandas")
class BooleanDtype(BaseMaskedDtype):
"""
Extension dtype for boolean data.
This is a pandas Extension dtype for boolean data with support for
missing values. BooleanDtype is the dtype companion to :class:`.BooleanArray`,
which implements Kleene logic (sometimes called three-value logic) for
logical operations. See :ref:`boolean.kleene` for more.
.. warning::
BooleanDtype is considered experimental. The implementation and
parts of the API may change without warning.
Attributes
----------
None
Methods
-------
None
See Also
--------
arrays.BooleanArray : Array of boolean (True/False) data with missing values.
Int64Dtype : Extension dtype for int64 integer data.
StringDtype : Extension dtype for string data.
Examples
--------
>>> pd.BooleanDtype()
BooleanDtype
>>> pd.array([True, False, None], dtype=pd.BooleanDtype())
<BooleanArray>
[True, False, <NA>]
Length: 3, dtype: boolean
>>> pd.array([True, False, None], dtype="boolean")
<BooleanArray>
[True, False, <NA>]
Length: 3, dtype: boolean
"""
name: ClassVar[str] = "boolean"
# The value used to fill '_data' to avoid upcasting
_internal_fill_value = False
# https://github.com/python/mypy/issues/4125
# error: Signature of "type" incompatible with supertype "BaseMaskedDtype"
@property
def type(self) -> type: # type: ignore[override]
return np.bool_
@property
def kind(self) -> str:
return "b"
@property
def numpy_dtype(self) -> np.dtype:
return np.dtype("bool")
def construct_array_type(self) -> type_t[BooleanArray]:
"""
Return the array type associated with this dtype.
Returns
-------
type
"""
return BooleanArray
def __repr__(self) -> str:
return "BooleanDtype"
@property
def _is_boolean(self) -> bool:
return True
@property
def _is_numeric(self) -> bool:
return True
def __from_arrow__(
self, array: pyarrow.Array | pyarrow.ChunkedArray
) -> BooleanArray:
"""
Construct BooleanArray from pyarrow Array/ChunkedArray.
"""
import pyarrow
if array.type != pyarrow.bool_() and not pyarrow.types.is_null(array.type):
raise TypeError(f"Expected array of boolean type, got {array.type} instead")
if isinstance(array, pyarrow.Array):
chunks = [array]
length = len(array)
else:
# pyarrow.ChunkedArray
chunks = array.chunks
length = array.length()
if pyarrow.types.is_null(array.type):
mask = np.ones(length, dtype=bool)
# No need to init data, since all null
data = np.empty(length, dtype=bool)
return BooleanArray(data, mask)
results = []
for arr in chunks:
buflist = arr.buffers()
data = pyarrow.BooleanArray.from_buffers(
arr.type, len(arr), [None, buflist[1]], offset=arr.offset
).to_numpy(zero_copy_only=False)
if arr.null_count != 0:
mask = pyarrow.BooleanArray.from_buffers(
arr.type, len(arr), [None, buflist[0]], offset=arr.offset
).to_numpy(zero_copy_only=False)
mask = ~mask
else:
mask = np.zeros(len(arr), dtype=bool)
bool_arr = BooleanArray(data, mask)
results.append(bool_arr)
if not results:
return BooleanArray(
np.array([], dtype=np.bool_), np.array([], dtype=np.bool_)
)
else:
return BooleanArray._concat_same_type(results)
def coerce_to_array(
values, mask=None, copy: bool = False
) -> tuple[np.ndarray, np.ndarray]:
"""
Coerce the input values array to numpy arrays with a mask.
Parameters
----------
values : 1D list-like
mask : bool 1D array, optional
copy : bool, default False
if True, copy the input
Returns
-------
tuple of (values, mask)
"""
if isinstance(values, BooleanArray):
if mask is not None:
raise ValueError("cannot pass mask for BooleanArray input")
values, mask = values._data, values._mask
if copy:
values = values.copy()
mask = mask.copy()
return values, mask
mask_values = None
if isinstance(values, np.ndarray) and values.dtype == np.bool_:
if copy:
values = values.copy()
elif isinstance(values, np.ndarray) and values.dtype.kind in "iufcb":
mask_values = isna(values)
values_bool = np.zeros(len(values), dtype=bool)
values_bool[~mask_values] = values[~mask_values].astype(bool)
if not np.all(
values_bool[~mask_values].astype(values.dtype) == values[~mask_values]
):
raise TypeError("Need to pass bool-like values")
values = values_bool
else:
values_object = np.asarray(values, dtype=object)
inferred_dtype = lib.infer_dtype(values_object, skipna=True)
integer_like = ("floating", "integer", "mixed-integer-float")
if inferred_dtype not in ("boolean", "empty", *integer_like):
raise TypeError("Need to pass bool-like values")
# mypy does not narrow the type of mask_values to npt.NDArray[np.bool_]
# within this branch, it assumes it can also be None
mask_values = cast("npt.NDArray[np.bool_]", isna(values_object))
values = np.zeros(len(values), dtype=bool)
values[~mask_values] = values_object[~mask_values].astype(bool)
# if the values were integer-like, validate it were actually 0/1's
if (inferred_dtype in integer_like) and not (
np.all(
values[~mask_values].astype(float)
== values_object[~mask_values].astype(float)
)
):
raise TypeError("Need to pass bool-like values")
if mask is None and mask_values is None:
mask = np.zeros(values.shape, dtype=bool)
elif mask is None:
mask = mask_values
elif isinstance(mask, np.ndarray) and mask.dtype == np.bool_:
if mask_values is not None:
mask = mask | mask_values
elif copy:
mask = mask.copy()
else:
mask = np.array(mask, dtype=bool)
if mask_values is not None:
mask = mask | mask_values
if values.shape != mask.shape:
raise ValueError("values.shape and mask.shape must match")
return values, mask
@set_module("pandas.arrays")
class BooleanArray(BaseMaskedArray):
"""
Array of boolean (True/False) data with missing values.
This is a pandas Extension array for boolean data, under the hood
represented by 2 numpy arrays: a boolean array with the data and
a boolean array with the mask (True indicating missing).
BooleanArray implements Kleene logic (sometimes called three-value
logic) for logical operations. See :ref:`boolean.kleene` for more.
To construct a BooleanArray from generic array-like input, use
:func:`pandas.array` specifying ``dtype="boolean"`` (see examples
below).
.. warning::
BooleanArray is considered experimental. The implementation and
parts of the API may change without warning.
Parameters
----------
values : numpy.ndarray
A 1-d boolean-dtype array with the data.
mask : numpy.ndarray
A 1-d boolean-dtype array indicating missing values (True
indicates missing).
copy : bool, default False
Whether to copy the `values` and `mask` arrays.
Attributes
----------
None
Methods
-------
None
Returns
-------
BooleanArray
See Also
--------
array : Create an array from data with the appropriate dtype.
BooleanDtype : Extension dtype for boolean data.
Series : One-dimensional ndarray with axis labels (including time series).
DataFrame : Two-dimensional, size-mutable, potentially heterogeneous tabular data.
Examples
--------
Create a BooleanArray with :func:`pandas.array`:
>>> pd.array([True, False, None], dtype="boolean")
<BooleanArray>
[True, False, <NA>]
Length: 3, dtype: boolean
"""
_TRUE_VALUES = {"True", "TRUE", "true", "1", "1.0"}
_FALSE_VALUES = {"False", "FALSE", "false", "0", "0.0"}
@classmethod
def _simple_new(cls, values: np.ndarray, mask: npt.NDArray[np.bool_]) -> Self:
result = super()._simple_new(values, mask)
result._dtype = BooleanDtype()
return result
def __init__(
self, values: np.ndarray, mask: np.ndarray, copy: bool = False
) -> None:
if not (isinstance(values, np.ndarray) and values.dtype == np.bool_):
raise TypeError(
"values should be boolean numpy array. Use "
"the 'pd.array' function instead"
)
self._dtype = BooleanDtype()
super().__init__(values, mask, copy=copy)
@property
def dtype(self) -> BooleanDtype:
return self._dtype
@classmethod
def _from_sequence_of_strings(
cls,
strings: list[str],
*,
dtype: ExtensionDtype,
copy: bool = False,
true_values: list[str] | None = None,
false_values: list[str] | None = None,
none_values: list[str] | None = None,
) -> BooleanArray:
true_values_union = cls._TRUE_VALUES.union(true_values or [])
false_values_union = cls._FALSE_VALUES.union(false_values or [])
if none_values is None:
none_values = []
def map_string(s) -> bool | None:
if s in true_values_union:
return True
elif s in false_values_union:
return False
elif s in none_values:
return None
else:
raise ValueError(f"{s} cannot be cast to bool")
scalars = np.array(strings, dtype=object)
mask = isna(scalars)
scalars[~mask] = list(map(map_string, scalars[~mask]))
return cls._from_sequence(scalars, dtype=dtype, copy=copy)
_HANDLED_TYPES = (np.ndarray, numbers.Number, bool, np.bool_)
@classmethod
def _coerce_to_array(
cls, value, *, dtype: DtypeObj, copy: bool = False
) -> tuple[np.ndarray, np.ndarray]:
if dtype:
assert dtype == "boolean"
return coerce_to_array(value, copy=copy)
def _logical_method(self, other, op):
assert op.__name__ in {"or_", "ror_", "and_", "rand_", "xor", "rxor"}
other_is_scalar = lib.is_scalar(other)
mask = None
if isinstance(other, BooleanArray):
other, mask = other._data, other._mask
elif is_list_like(other):
other = np.asarray(other, dtype="bool")
if other.ndim > 1:
return NotImplemented
other, mask = coerce_to_array(other, copy=False)
elif isinstance(other, np.bool_):
other = other.item()
if other_is_scalar and other is not libmissing.NA and not lib.is_bool(other):
raise TypeError(
"'other' should be pandas.NA or a bool. "
f"Got {type(other).__name__} instead."
)
if not other_is_scalar and len(self) != len(other):
raise ValueError("Lengths must match")
if op.__name__ in {"or_", "ror_"}:
result, mask = ops.kleene_or(self._data, other, self._mask, mask)
elif op.__name__ in {"and_", "rand_"}:
result, mask = ops.kleene_and(self._data, other, self._mask, mask)
else:
# i.e. xor, rxor
result, mask = ops.kleene_xor(self._data, other, self._mask, mask)
# i.e. BooleanArray
return self._maybe_mask_result(result, mask)
def _accumulate(
self, name: str, *, skipna: bool = True, **kwargs
) -> BaseMaskedArray:
data = self._data
mask = self._mask
if name in ("cummin", "cummax"):
op = getattr(masked_accumulations, name)
data, mask = op(data, mask, skipna=skipna, **kwargs)
return self._simple_new(data, mask)
else:
from pandas.core.arrays import IntegerArray
return IntegerArray(data.astype(int), mask)._accumulate(
name, skipna=skipna, **kwargs
)
|
python
|
github
|
https://github.com/pandas-dev/pandas
|
pandas/core/arrays/boolean.py
|
from django.conf.urls.defaults import patterns, url
urlpatterns = patterns("symposion.reviews.views",
url(r"^section/(?P<section_slug>[\w\-]+)/all/$", "review_section", {"reviewed": "all"}, name="review_section"),
url(r"^section/(?P<section_slug>[\w\-]+)/reviewed/$", "review_section", {"reviewed": "reviewed"}, name="user_reviewed"),
url(r"^section/(?P<section_slug>[\w\-]+)/not_reviewed/$", "review_section", {"reviewed": "not_reviewed"}, name="user_not_reviewed"),
url(r"^section/(?P<section_slug>[\w\-]+)/assignments/$", "review_section", {"assigned": True}, name="review_section_assignments"),
url(r"^section/(?P<section_slug>[\w\-]+)/status/$", "review_status", name="review_status"),
url(r"^section/(?P<section_slug>[\w\-]+)/status/(?P<key>\w+)/$", "review_status", name="review_status"),
url(r"^section/(?P<section_slug>[\w\-]+)/list/(?P<user_pk>\d+)/$", "review_list", name="review_list_user"),
url(r"^section/(?P<section_slug>[\w\-]+)/admin/$", "review_admin", name="review_admin"),
url(r"^section/(?P<section_slug>[\w\-]+)/admin/accept/$", "review_bulk_accept", name="review_bulk_accept"),
url(r"^section/(?P<section_slug>[\w\-]+)/notification/(?P<status>\w+)/$", "result_notification", name="result_notification"),
url(r"^section/(?P<section_slug>[\w\-]+)/notification/(?P<status>\w+)/prepare/$", "result_notification_prepare", name="result_notification_prepare"),
url(r"^section/(?P<section_slug>[\w\-]+)/notification/(?P<status>\w+)/send/$", "result_notification_send", name="result_notification_send"),
url(r"^review/(?P<pk>\d+)/$", "review_detail", name="review_detail"),
url(r"^(?P<pk>\d+)/delete/$", "review_delete", name="review_delete"),
url(r"^assignments/$", "review_assignments", name="review_assignments"),
url(r"^assignment/(?P<pk>\d+)/opt-out/$", "review_assignment_opt_out", name="review_assignment_opt_out"),
)
|
unknown
|
codeparrot/codeparrot-clean
| ||
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.security;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.token.TokenInfo;
/**
* Constructs SecurityInfo from Annotations provided in protocol interface.
*/
public class AnnotatedSecurityInfo extends SecurityInfo {
@Override
public KerberosInfo getKerberosInfo(Class<?> protocol, Configuration conf) {
return protocol.getAnnotation(KerberosInfo.class);
}
@Override
public TokenInfo getTokenInfo(Class<?> protocol, Configuration conf) {
return protocol.getAnnotation(TokenInfo.class);
}
}
|
java
|
github
|
https://github.com/apache/hadoop
|
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AnnotatedSecurityInfo.java
|
# Add the localize package
To take advantage of the localization features of Angular, use the [Angular CLI][CliMain] to add the `@angular/localize` package to your project.
To add the `@angular/localize` package, use the following command to update the `package.json` and TypeScript configuration files in your project.
<docs-code language="shell" path="adev/src/content/examples/i18n/doc-files/commands.sh" region="add-localize"/>
It adds `types: ["@angular/localize"]` in the TypeScript configuration files.
It also adds line `/// <reference types="@angular/localize" />` at the top of the `main.ts` file which is the reference to the type definition.
HELPFUL: For more information about `package.json` and `tsconfig.json` files, see [Workspace npm dependencies][GuideNpmPackages] and [TypeScript Configuration][GuideTsConfig]. To learn about Triple-slash Directives visit [Typescript Handbook](https://www.typescriptlang.org/docs/handbook/triple-slash-directives.html#-reference-types-).
If `@angular/localize` is not installed and you try to build a localized version of your project (for example, while using the `i18n` attributes in templates), the [Angular CLI][CliMain] will generate an error, which would contain the steps that you can take to enable i18n for your project.
## Options
| OPTION | DESCRIPTION | VALUE TYPE | DEFAULT VALUE |
| :----------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------- | :------------ |
| `--project` | The name of the project. | `string` |
| `--use-at-runtime` | If set, then `$localize` can be used at runtime. Also `@angular/localize` gets included in the `dependencies` section of `package.json`, rather than `devDependencies`, which is the default. | `boolean` | `false` |
For more available options, see `ng add` in [Angular CLI][CliMain].
## What's next
<docs-pill-row>
<docs-pill href="guide/i18n/locale-id" title="Refer to locales by ID"/>
</docs-pill-row>
[CliMain]: cli 'CLI Overview and Command Reference | Angular'
[GuideNpmPackages]: reference/configs/npm-packages 'Workspace npm dependencies | Angular'
[GuideTsConfig]: https://www.typescriptlang.org/docs/handbook/tsconfig-json.html 'TypeScript Configuration'
|
unknown
|
github
|
https://github.com/angular/angular
|
adev/src/content/guide/i18n/add-package.md
|
# SIL Initializer Conventions
A nominal type can define a number of initializers, some of which may
delegate initialization to another initializer. There are specific calling
conventions for these initializers within SIL that make up a part of the ABI
for a type. This document aims to summarize the key calling conventions for
these initializers.
# Structs and Enums
The delegation status for the initializer of a struct or enum is not encoded
in the definitions of these initializers. Thus, all of these initializers
have an implicit `metatype` argument for the instance to be passed in as the
last argument to the initializer. Using `<...>` as a stand-in for other
arguments that are part of the usual function calling convention, consider this
example:
```swift
// the non-delegating init MyStruct.init(final:)
sil hidden [ossa] @$s4test8MyStructV5finalACSi_tcfC : $@convention(method) (<...>, @thin MyStruct.Type) -> MyStruct {
bb0(<...>, %meta : $@thin MyStruct.Type):
%a = alloc_box ${ var MyStruct }, var, name "self"
%b = mark_uninitialized [rootself] %a : ${ var MyStruct }
%c = begin_borrow [lexical] %b : ${ var MyStruct }
%d = project_box %c : ${ var MyStruct }, 0
// ... initialize properties, etc ...
%end = load [trivial] %d : $*MyStruct
end_borrow %c : ${ var MyStruct }
destroy_value %b : ${ var MyStruct }
return %end : $MyStruct
}
// the delegating init MyStruct.init(delegates:)
sil hidden [ossa] @$s4test8MyStructV9delegatesACyt_tcfC : $@convention(method) (<...>, @thin MyStruct.Type) -> MyStruct {
bb0(<...>, %meta : $@thin MyStruct.Type):
// Same allocation as the non-delegating:
%a = alloc_box ${ var MyStruct }, var, name "self"
%b = mark_uninitialized [rootself] %a : ${ var MyStruct }
%c = begin_borrow [lexical] %b : ${ var MyStruct }
%d = project_box %c : ${ var MyStruct }, 0
// ... delegate to MyStruct.init(final:) ...
%ctor = function_ref @$s4test8MyStructV5finalACSi_tcfC : $@convention(method) (Int, @thin MyStruct.Type) -> MyStruct
%ret = apply %ctor(<...>, %meta) : $@convention(method) (Int, @thin MyStruct.Type) -> MyStruct
assign %ret to %d : $*MyStruct
%end = load [trivial] %d : $*MyStruct
end_borrow %c : ${ var MyStruct }
destroy_value %b : ${ var MyStruct }
return %end : $MyStruct
}
```
It's important to note that all initializers take a metadata argument,
regardless of whether it is a delegating initializer. There is also no
separation between allocating and non-allocating initializer entrypoints.
All initializers may perform allocation.
# Classes
Every designated initializer has two entry-points. One performs allocation
(i.e., the "allocating" entry) before continuing at the second entrypoint
which does the initialization (i.e., the "initializing" entrypoint).
Here's an example of `MyClass.init(final:)`, which is a designated initializer,
with its two entry-points:
```swift
// MyClass.__allocating_init(final:)
sil hidden [exact_self_class] [ossa] @$s4test7MyClassC5finalACSi_tcfC : $@convention(method) (<...>, @thick MyClass.Type) -> @owned MyClass {
bb0(%0 : $Int, %1 : $@thick MyClass.Type):
%2 = alloc_ref $MyClass
// function_ref MyClass.init(final:)
%3 = function_ref @$s4test7MyClassC5finalACSi_tcfc : $@convention(method) (Int, @owned MyClass) -> @owned MyClass
%4 = apply %3(%0, %2) : $@convention(method) (Int, @owned MyClass) -> @owned MyClass // user: %5
return %4 : $MyClass
}
// MyClass.init(final:)
sil hidden [ossa] @$s4test7MyClassC5finalACSi_tcfc : $@convention(method) (Int, @owned MyClass) -> @owned MyClass {
bb0(<...>, %1 : @owned $MyClass):
%4 = mark_uninitialized [rootself] %1 : $MyClass
// ... initialize MyClass ...
%11 = copy_value %4 : $MyClass
destroy_value %4 : $MyClass
return %11 : $MyClass
}
```
In the mangling of these entrypoint labels, the uppercase `C` suffix indicates
that it's the allocating entrypoint, whereas the lowercase `c` is the
initializing entrypoint. Only the allocating entrypoint is published in the
type's vtable:
```swift
sil_vtable MyClass {
// ...
#MyClass.init!allocator: (MyClass.Type) -> (<...>) -> MyClass : @$s4test7MyClassC5finalACSi_tcfC // MyClass.__allocating_init(final:)
}
```
The initializing entrypoint is only referenced by either it's corresponding
allocating entrypoint, or by a sub-class that is delegating up in a `super.init`
call. For example, if we had:
```swift
class MyClass {
var x: Int
init(final x: Int) {
self.x = x
}
}
class MyDerivedClass: MyClass {
var y: Int
init(subFinal y: Int) {
self.y = y
super.init(final: y)
}
}
```
Then the `super.init(final: y)` call directly invokes `MyClass.init(final:)`'s
initializing entrypoint, bypassing its allocating init. Here's what that looks
like in SIL:
```
// MyDerivedClass.__allocating_init(final:)
sil hidden [exact_self_class] [ossa] @$s4test14MyDerivedClassC5finalACSi_tcfC : $@convention(method) (Int, @thick MyDerivedClass.Type) -> @owned MyDerivedClass {
// ... calls $s4test14MyDerivedClassC5finalACSi_tcfc in the usual way ...
}
// MyDerivedClass.init(final:)
sil hidden [ossa] @$s4test14MyDerivedClassC5finalACSi_tcfc : $@convention(method) (Int, @owned MyDerivedClass) -> @owned MyDerivedClass {
bb0(%0 : $Int, %1 : @owned $MyDerivedClass):
%2 = alloc_box ${ var MyDerivedClass }, let, name "self"
%3 = mark_uninitialized [derivedself] %2 : ${ var MyDerivedClass }
%4 = begin_borrow [lexical] %3 : ${ var MyDerivedClass }
%5 = project_box %4 : ${ var MyDerivedClass }, 0
debug_value %0 : $Int, let, name "y", argno 1
store %1 to [init] %5 : $*MyDerivedClass
// ... initialize self.y ...
// perform the super call. notice the ownership transfer to the super.init.
%14 = load [take] %5 : $*MyDerivedClass
%15 = upcast %14 : $MyDerivedClass to $MyClass
// function_ref MyClass.init(final:)
%16 = function_ref @$s4test7MyClassC5finalACSi_tcfc : $@convention(method) (Int, @owned MyClass) -> @owned MyClass // user: %17
%17 = apply %16(%0, %15) : $@convention(method) (Int, @owned MyClass) -> @owned MyClass // user: %18
%18 = unchecked_ref_cast %17 : $MyClass to $MyDerivedClass
store %18 to [init] %5 : $*MyDerivedClass // id: %19
// return as usual
%20 = load [copy] %5 : $*MyDerivedClass
end_borrow %4 : ${ var MyDerivedClass }
destroy_value %3 : ${ var MyDerivedClass }
return %20 : $MyDerivedClass
}
```
# Actors
There does not exist a sub-actor that inherits from some other actor in the type
system. As a result, the `convenience` keyword is not required for actor
initializers in the source code. Without inheritance, only the allocating
entry-points can ever be used by an actor.
Nevertheless, internally the compiler will still differentiate between
convenience and designated initializers. So everything discussed
earlier for classes also apply to actors. The body of the initializer determines
whether the compiler internally treats it as `convenience` or not. For example,
an internally designated initializer for an actor still emits two entry-points,
but the initializing entrypoint is exclusively used by its corresponding
allocating entrypoint.
|
unknown
|
github
|
https://github.com/apple/swift
|
docs/SIL/SILInitializerConventions.md
|
# -*- coding: utf-8 -*-
from django.utils import timezone
from website import settings
def no_addon(email):
return len([addon for addon in email.user.get_addons() if addon.config.short_name != 'osfstorage']) == 0
def no_login(email):
from osf.models.queued_mail import QueuedMail, NO_LOGIN_TYPE
sent = QueuedMail.objects.filter(user=email.user, email_type=NO_LOGIN_TYPE).exclude(_id=email._id)
if sent.exists():
return False
return email.user.date_last_login < timezone.now() - settings.NO_LOGIN_WAIT_TIME
def new_public_project(email):
""" Will check to make sure the project that triggered this presend is still public
before sending the email. It also checks to make sure this is the first (and only)
new public project email to be sent
:param email: QueuedMail object, with 'nid' in its data field
:return: boolean based on whether the email should be sent
"""
# In line import to prevent circular importing
from osf.models import AbstractNode
node = AbstractNode.load(email.data['nid'])
if not node:
return False
public = email.find_sent_of_same_type_and_user()
return node.is_public and not len(public)
def welcome_osf4m(email):
""" presend has two functions. First is to make sure that the user has not
converted to a regular OSF user by logging in. Second is to populate the
data field with downloads by finding the file/project (node_settings) and
counting downloads of all files within that project
:param email: QueuedMail object with data field including fid
:return: boolean based on whether the email should be sent
"""
# In line import to prevent circular importing
from addons.osfstorage.models import OsfStorageFileNode
if email.user.date_last_login:
if email.user.date_last_login > timezone.now() - settings.WELCOME_OSF4M_WAIT_TIME_GRACE:
return False
upload = OsfStorageFileNode.load(email.data['fid'])
if upload:
email.data['downloads'] = upload.get_download_count()
else:
email.data['downloads'] = 0
email.save()
return True
|
unknown
|
codeparrot/codeparrot-clean
| ||
from __future__ import absolute_import, division, unicode_literals
import os
import sys
import traceback
import warnings
import re
warnings.simplefilter('error')
from .support import get_data_files
from .support import TestData, convert, convertExpected, treeTypes
from html5lib import html5parser, constants
# Run the parse error checks
checkParseErrors = False
# XXX - There should just be one function here but for some reason the testcase
# format differs from the treedump format by a single space character
def convertTreeDump(data):
return '\n'.join(convert(3)(data).split('\n')[1:])
namespaceExpected = re.compile(r'^(\s*)<(\S+)>', re.M).sub
def runParserTest(innerHTML, input, expected, errors, treeClass,
namespaceHTMLElements):
with warnings.catch_warnings(record=True) as caughtWarnings:
warnings.simplefilter('always')
p = html5parser.HTMLParser(tree=treeClass,
namespaceHTMLElements=namespaceHTMLElements)
try:
if innerHTML:
document = p.parseFragment(input, innerHTML)
else:
document = p.parse(input)
except:
errorMsg = '\n'.join(['\n\nInput:', input, '\nExpected:', expected,
'\nTraceback:', traceback.format_exc()])
assert False, errorMsg
otherWarnings = [x for x in caughtWarnings
if not issubclass(x.category, constants.DataLossWarning)]
assert len(otherWarnings) == 0, [(x.category, x.message) for x in otherWarnings]
if len(caughtWarnings):
return
output = convertTreeDump(p.tree.testSerializer(document))
expected = convertExpected(expected)
if namespaceHTMLElements:
expected = namespaceExpected(r'\1<html \2>', expected)
errorMsg = '\n'.join(['\n\nInput:', input, '\nExpected:', expected,
'\nReceived:', output])
assert expected == output, errorMsg
errStr = []
for (line, col), errorcode, datavars in p.errors:
assert isinstance(datavars, dict), '%s, %s' % (errorcode, repr(datavars))
errStr.append('Line: %i Col: %i %s' % (line, col,
constants.E[errorcode] % datavars))
errorMsg2 = '\n'.join(['\n\nInput:', input,
'\nExpected errors (' + str(len(errors)) + '):\n' + '\n'.join(errors),
'\nActual errors (' + str(len(p.errors)) + '):\n' + '\n'.join(errStr)])
if checkParseErrors:
assert len(p.errors) == len(errors), errorMsg2
def test_parser():
sys.stderr.write('Testing tree builders ' + ' '.join(list(treeTypes.keys())) + '\n')
files = get_data_files('tree-construction')
for filename in files:
testName = os.path.basename(filename).replace('.dat', '')
if testName in ('template',):
continue
tests = TestData(filename, 'data')
for index, test in enumerate(tests):
input, errors, innerHTML, expected = [test[key] for key in
('data', 'errors',
'document-fragment',
'document')]
if errors:
errors = errors.split('\n')
for treeName, treeCls in treeTypes.items():
for namespaceHTMLElements in (True, False):
yield (runParserTest, innerHTML, input, expected, errors, treeCls,
namespaceHTMLElements)
|
unknown
|
codeparrot/codeparrot-clean
| ||
{
"kind": "Dashboard",
"apiVersion": "dashboard.grafana.app/v0alpha1",
"metadata": {
"name": "v13.minimal_graph_config.v42"
},
"spec": {
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations \u0026 Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"links": [],
"panels": [
{
"autoMigrateFrom": "graph",
"datasource": {
"type": "prometheus",
"uid": "gdev-prometheus"
},
"id": 4,
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "gdev-prometheus"
},
"editorMode": "builder",
"expr": "{\"a.utf8.metric 🤘\", job=\"prometheus-utf8\"}",
"instant": false,
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"type": "timeseries"
}
],
"refresh": "",
"schemaVersion": 42,
"tags": [],
"templating": {
"list": []
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "Dashboard with minimal graph panel settings",
"weekStart": ""
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
}
|
json
|
github
|
https://github.com/grafana/grafana
|
apps/dashboard/pkg/migration/conversion/testdata/migrated_dashboards_output/v1beta1-mig-v13.minimal_graph_config.v42.v0alpha1.json
|
"""
Integrity check debugging tool for IMAP accounts.
Run as:
python -m inbox.util.consistency_check --help
"""
from __future__ import absolute_import, division, print_function
import argparse
import errno
import os
import pkg_resources
import subprocess
import sys
from fnmatch import fnmatch
from inbox.models import Account, Namespace
from inbox.models.session import session_scope
from inbox.sqlalchemy_ext.util import (
b36_to_bin,
int128_to_b36, # XXX: should probably be called bin_to_b36
)
from .sqlite3_db import connect_sqlite3_db, init_sqlite3_db
class _ALL_ACCOUNTS(object):
def __str__(self):
return "all accounts" # for --help
ALL_ACCOUNTS = _ALL_ACCOUNTS()
def execute_hooks(plugins, method_name):
def f(*args, **kwargs):
results = []
for name, plugin in plugins:
res = execute_hook(plugin, method_name)(*args, **kwargs)
results.append(res)
return results
return f
def execute_hook(plugin, method_name):
def f(*args, **kwargs):
func = getattr(plugin, method_name, None)
if func is None:
return None
return func(*args, **kwargs)
return f
def main():
# Load plugins
group = 'inbox.consistency_check_plugins'
plugins = [] # see ListPlugin as an example
for entry_point in pkg_resources.iter_entry_points(group):
plugin_factory = entry_point.load() # usually a python class
plugin = plugin_factory()
plugins.append((entry_point.name, plugin))
# Create argument parser
# NOTE: In the future, the interface may change to accept namespace
# public_ids instead of account public_ids.
parser = argparse.ArgumentParser(
description="""
Shows differences between metadata fetched from the specified
account(s) and what's stored in the local Inbox database.
""",
epilog = """
Only Gmail accounts are currently supported.
""")
parser.add_argument(
"public_ids", nargs='*', metavar="PUBLIC_ID",
type=lambda x: int128_to_b36(b36_to_bin(x)), default=ALL_ACCOUNTS,
help="account(s) to check (default: %(default)s)")
parser.add_argument(
'--cache-dir', default='./cache',
help="cache directory (default: %(default)s)")
parser.add_argument(
'--no-overwrite', action='store_false', dest='force_overwrite',
help="skip cache files already generated (default: overwrite them)")
parser.add_argument(
'--no-fetch', action='store_false', dest='do_slurp',
help="don't fetch")
parser.add_argument(
'--no-dump', action='store_false', dest='do_dump',
help="don't dump")
parser.add_argument(
'--no-diff', action='store_false', dest='do_diff',
help="don't diff")
execute_hooks(plugins, 'argparse_addoption')(parser)
# Parse arguments
args = parser.parse_args()
execute_hooks(plugins, 'argparse_args')(args)
# Make sure the cache directory exists.
if not os.path.exists(args.cache_dir):
os.mkdir(args.cache_dir)
with session_scope() as db_session:
# Query the list of accounts
query = db_session.query(Account)
if args.public_ids is not ALL_ACCOUNTS:
query = query.filter(Account.public_id.in_(args.public_ids))
accounts = query.all()
# list.py uses this hook to show a list of accounts
execute_hooks(plugins, 'process_accounts')(accounts)
# hax
if args.do_list:
return
# Query namespaces
query = (
db_session.query(Namespace, Account)
.filter(Namespace.account_id == Account.id)
.order_by(Namespace.id)
)
if args.public_ids is not ALL_ACCOUNTS:
query = query.filter(Namespace.public_id.in_(args.public_ids))
nnaa = query.all()
# check for discrepancies
missing_accounts = (set(a.public_id for ns, a in nnaa) ^
set(a.public_id for a in accounts))
if missing_accounts:
raise AssertionError("Missing accounts: %r" % (missing_accounts,))
# Fetch metadata for each account and save it into a sqlite3 database
# in the cache_dir.
# - See imap_gm.py & local_gm.py
# - See sqlite3_db.py for sqlite3 database schema.
# This creates files like:
# - cache/<account.public_id>.<namespace.public_id>.imap_gm.sqlite3
# - cache/<account.public_id>.<namespace.public_id>.local_gm.sqlite3
if args.do_slurp:
for namespace, account in nnaa:
can_slurp = execute_hooks(plugins, 'can_slurp_namespace')(
namespace=namespace,
account=account)
for i, (plugin_name, plugin) in enumerate(plugins):
if not can_slurp[i]:
continue
db_path = os.path.join(
args.cache_dir,
cachefile_basename(
namespace=namespace,
account=account,
plugin_name=plugin_name,
ext='.sqlite3'))
if os.path.exists(db_path):
if not args.force_overwrite:
# already saved
print(
"skipping {0}: already exists".format(db_path),
file=sys.stderr)
continue
os.unlink(db_path)
db = init_sqlite3_db(connect_sqlite3_db(db_path))
with db:
execute_hook(plugin, 'slurp_namespace')(
namespace=namespace,
account=account,
db=db)
# Generate canonical-format text files from the sqlite3 databases.
# - See dump_gm.py
# This creates files like:
# - cache/<account.public_id>.<namespace.public_id>.imap_gm.txt
# - cache/<account.public_id>.<namespace.public_id>.local_gm.txt
if args.do_dump:
for namespace, account in nnaa:
can_dump = execute_hooks(plugins, 'can_dump_namespace')(
namespace=namespace,
account=account)
for i, (plugin_name, plugin) in enumerate(plugins):
if not can_dump[i]:
continue
db_path = os.path.join(args.cache_dir, cachefile_basename(
namespace=namespace,
account=account,
plugin_name=plugin_name,
ext='.sqlite3'))
txt_path = os.path.join(args.cache_dir, cachefile_basename(
namespace=namespace,
account=account,
plugin_name=plugin_name,
ext='.txt'))
try:
db_stat = os.stat(db_path)
except OSError as e:
if e.errno != errno.ENOENT:
raise
db_stat = None
try:
txt_stat = os.stat(txt_path)
except OSError as e:
if e.errno != errno.ENOENT:
raise
txt_stat = None
if (db_stat and txt_stat and
db_stat.st_mtime < txt_stat.st_mtime):
print(
"skipping {0}: already exists".format(txt_path),
file=sys.stderr)
continue
db = connect_sqlite3_db(db_path)
with db, open(txt_path, "w") as txtfile:
execute_hook(plugin, 'dump_namespace')(
db=db,
txtfile=txtfile)
# Show differences between the text files in the cache directory.
# Basically, this runs something like the following for each account:
# vimdiff cache/${acct_pubid}.${ns_pubid).imap_gm.txt cache/${acct_pubid}.${ns_pubid).local_gm.txt
if args.do_diff:
if os.system("which vimdiff >/dev/null") == 0:
diff_cmd = ['vimdiff']
else:
diff_cmd = ['diff', '-u']
for namespace, account in nnaa:
# plugins here would be nice here, too
# This is such a hack
files_to_diff = sorted(
os.path.join(args.cache_dir, f)
for f in os.listdir(args.cache_dir)
if fnmatch(f, cachefile_basename(
namespace=namespace,
account=account,
plugin_name='*',
ext='.txt')))
if files_to_diff:
status = subprocess.call(diff_cmd + files_to_diff)
if status not in (0, 1):
raise AssertionError("error running diff")
def cachefile_basename(namespace, account, plugin_name, ext=''):
return '{acct_pubid}.{ns_pubid}.{plugin_name}{ext}'.format(
acct_pubid=account.public_id,
ns_pubid=namespace.public_id,
plugin_name=plugin_name,
ext=ext)
if __name__ == '__main__':
main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
#!/usr/bin/env python3
# Author: Dennis Strasser mailto:dennis.f.strasser@gmail.com
import configparser as cp
__version__ = "1.0"
class ConfigReader(object):
"""This class holds information on devices in the network
This class holds information about people and their contact information: first- and last name,
the cell phone number and the email address.
"""
def __init__(self, config_file="./OECluster.cfg"):
"""Initializes and declares class attributes from the given parameters
:param config_file: The config file to be parsed by ConfigReader
"""
self._config_file = config_file
self._config_parser = cp.ConfigParser()
self._config_parser.read(config_file)
def get_config_section(self, section):
dict1 = {}
options = self._config_parser.options(section)
for option in options:
try:
dict1[option] = self._config_parser.get(section, option)
if dict1[option] == -1:
print("skip: %s" % option)
except:
print("exception on %s!" % option)
dict1[option] = None
return dict1
if __name__ == "__main__":
print("This class should not be called directly.")
|
unknown
|
codeparrot/codeparrot-clean
| ||
use crate::runtime::scheduler::multi_thread::{queue, Stats};
use std::cell::RefCell;
use std::thread;
use std::time::Duration;
#[allow(unused)]
macro_rules! assert_metrics {
($stats:ident, $field:ident == $v:expr) => {
#[cfg(target_has_atomic = "64")]
{
use crate::runtime::WorkerMetrics;
use std::sync::atomic::Ordering::Relaxed;
let worker = WorkerMetrics::new();
$stats.submit(&worker);
let expect = $v;
let actual = worker.$field.load(Relaxed);
assert!(actual == expect, "expect = {}; actual = {}", expect, actual)
}
};
}
fn new_stats() -> Stats {
use crate::runtime::WorkerMetrics;
Stats::new(&WorkerMetrics::new())
}
#[test]
fn fits_256_one_at_a_time() {
let (_, mut local) = queue::local();
let inject = RefCell::new(vec![]);
let mut stats = new_stats();
for _ in 0..256 {
let (task, _) = super::unowned(async {});
local.push_back_or_overflow(task, &inject, &mut stats);
}
cfg_unstable_metrics! {
assert_metrics!(stats, overflow_count == 0);
}
assert!(inject.borrow_mut().pop().is_none());
while local.pop().is_some() {}
}
#[test]
fn fits_256_all_at_once() {
let (_, mut local) = queue::local();
let mut tasks = (0..256)
.map(|_| super::unowned(async {}).0)
.collect::<Vec<_>>();
local.push_back(tasks.drain(..));
let mut i = 0;
while local.pop().is_some() {
i += 1;
}
assert_eq!(i, 256);
}
#[test]
fn fits_256_all_in_chunks() {
let (_, mut local) = queue::local();
let mut tasks = (0..256)
.map(|_| super::unowned(async {}).0)
.collect::<Vec<_>>();
local.push_back(tasks.drain(..10));
local.push_back(tasks.drain(..100));
local.push_back(tasks.drain(..46));
local.push_back(tasks.drain(..100));
let mut i = 0;
while local.pop().is_some() {
i += 1;
}
assert_eq!(i, 256);
}
#[test]
fn overflow() {
let (_, mut local) = queue::local();
let inject = RefCell::new(vec![]);
let mut stats = new_stats();
for _ in 0..257 {
let (task, _) = super::unowned(async {});
local.push_back_or_overflow(task, &inject, &mut stats);
}
cfg_unstable_metrics! {
assert_metrics!(stats, overflow_count == 1);
}
let mut n = 0;
n += inject.borrow_mut().drain(..).count();
while local.pop().is_some() {
n += 1;
}
assert_eq!(n, 257);
}
#[test]
fn steal_batch() {
let mut stats = new_stats();
let (steal1, mut local1) = queue::local();
let (_, mut local2) = queue::local();
let inject = RefCell::new(vec![]);
for _ in 0..4 {
let (task, _) = super::unowned(async {});
local1.push_back_or_overflow(task, &inject, &mut stats);
}
assert!(steal1.steal_into(&mut local2, &mut stats).is_some());
cfg_unstable_metrics! {
assert_metrics!(stats, steal_count == 2);
}
for _ in 0..1 {
assert!(local2.pop().is_some());
}
assert!(local2.pop().is_none());
for _ in 0..2 {
assert!(local1.pop().is_some());
}
assert!(local1.pop().is_none());
}
const fn normal_or_miri(normal: usize, miri: usize) -> usize {
if cfg!(miri) {
miri
} else {
normal
}
}
#[test]
fn stress1() {
const NUM_ITER: usize = 5;
const NUM_STEAL: usize = normal_or_miri(1_000, 10);
const NUM_LOCAL: usize = normal_or_miri(1_000, 10);
const NUM_PUSH: usize = normal_or_miri(500, 10);
const NUM_POP: usize = normal_or_miri(250, 10);
let mut stats = new_stats();
for _ in 0..NUM_ITER {
let (steal, mut local) = queue::local();
let inject = RefCell::new(vec![]);
let th = thread::spawn(move || {
let mut stats = new_stats();
let (_, mut local) = queue::local();
let mut n = 0;
for _ in 0..NUM_STEAL {
if steal.steal_into(&mut local, &mut stats).is_some() {
n += 1;
}
while local.pop().is_some() {
n += 1;
}
thread::yield_now();
}
cfg_unstable_metrics! {
assert_metrics!(stats, steal_count == n as _);
}
n
});
let mut n = 0;
for _ in 0..NUM_LOCAL {
for _ in 0..NUM_PUSH {
let (task, _) = super::unowned(async {});
local.push_back_or_overflow(task, &inject, &mut stats);
}
for _ in 0..NUM_POP {
if local.pop().is_some() {
n += 1;
} else {
break;
}
}
}
n += inject.borrow_mut().drain(..).count();
n += th.join().unwrap();
assert_eq!(n, NUM_LOCAL * NUM_PUSH);
}
}
#[test]
fn stress2() {
const NUM_ITER: usize = 1;
const NUM_TASKS: usize = normal_or_miri(1_000_000, 50);
const NUM_STEAL: usize = normal_or_miri(1_000, 10);
let mut stats = new_stats();
for _ in 0..NUM_ITER {
let (steal, mut local) = queue::local();
let inject = RefCell::new(vec![]);
let th = thread::spawn(move || {
let mut stats = new_stats();
let (_, mut local) = queue::local();
let mut n = 0;
for _ in 0..NUM_STEAL {
if steal.steal_into(&mut local, &mut stats).is_some() {
n += 1;
}
while local.pop().is_some() {
n += 1;
}
thread::sleep(Duration::from_micros(10));
}
n
});
let mut num_pop = 0;
for i in 0..NUM_TASKS {
let (task, _) = super::unowned(async {});
local.push_back_or_overflow(task, &inject, &mut stats);
if i % 128 == 0 && local.pop().is_some() {
num_pop += 1;
}
num_pop += inject.borrow_mut().drain(..).count();
}
num_pop += th.join().unwrap();
while local.pop().is_some() {
num_pop += 1;
}
num_pop += inject.borrow_mut().drain(..).count();
assert_eq!(num_pop, NUM_TASKS);
}
}
|
rust
|
github
|
https://github.com/tokio-rs/tokio
|
tokio/src/runtime/tests/queue.rs
|
/*
* Copyright 2012-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.boot;
import java.time.Duration;
import org.jspecify.annotations.Nullable;
import org.springframework.boot.bootstrap.ConfigurableBootstrapContext;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.io.support.SpringFactoriesLoader;
/**
* Listener for the {@link SpringApplication} {@code run} method.
* {@link SpringApplicationRunListener}s are loaded through the
* {@link SpringFactoriesLoader} and should declare a public constructor that accepts a
* {@link SpringApplication} instance and a {@code String[]} of arguments. A new
* {@link SpringApplicationRunListener} instance will be created for each run.
*
* @author Phillip Webb
* @author Dave Syer
* @author Andy Wilkinson
* @author Chris Bono
* @since 1.0.0
*/
public interface SpringApplicationRunListener {
/**
* Called immediately when the run method has first started. Can be used for very
* early initialization.
* @param bootstrapContext the bootstrap context
*/
default void starting(ConfigurableBootstrapContext bootstrapContext) {
}
/**
* Called once the environment has been prepared, but before the
* {@link ApplicationContext} has been created.
* @param bootstrapContext the bootstrap context
* @param environment the environment
*/
default void environmentPrepared(ConfigurableBootstrapContext bootstrapContext,
ConfigurableEnvironment environment) {
}
/**
* Called once the {@link ApplicationContext} has been created and prepared, but
* before sources have been loaded.
* @param context the application context
*/
default void contextPrepared(ConfigurableApplicationContext context) {
}
/**
* Called once the application context has been loaded but before it has been
* refreshed.
* @param context the application context
*/
default void contextLoaded(ConfigurableApplicationContext context) {
}
/**
* The context has been refreshed and the application has started but
* {@link CommandLineRunner CommandLineRunners} and {@link ApplicationRunner
* ApplicationRunners} have not been called.
* @param context the application context.
* @param timeTaken the time taken to start the application or {@code null} if unknown
* @since 2.6.0
*/
default void started(ConfigurableApplicationContext context, @Nullable Duration timeTaken) {
}
/**
* Called immediately before the run method finishes, when the application context has
* been refreshed and all {@link CommandLineRunner CommandLineRunners} and
* {@link ApplicationRunner ApplicationRunners} have been called.
* @param context the application context.
* @param timeTaken the time taken for the application to be ready or {@code null} if
* unknown
* @since 2.6.0
*/
default void ready(ConfigurableApplicationContext context, @Nullable Duration timeTaken) {
}
/**
* Called when a failure occurs when running the application.
* @param context the application context or {@code null} if a failure occurred before
* the context was created
* @param exception the failure
* @since 2.0.0
*/
default void failed(@Nullable ConfigurableApplicationContext context, Throwable exception) {
}
}
|
java
|
github
|
https://github.com/spring-projects/spring-boot
|
core/spring-boot/src/main/java/org/springframework/boot/SpringApplicationRunListener.java
|
# Copyright 2011 OpenStack Foundation # All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests For Scheduler Host Filters.
"""
from nova.scheduler import filters
from nova.scheduler.filters import all_hosts_filter
from nova.scheduler.filters import compute_filter
from nova import test
from nova.tests.unit.scheduler import fakes
class HostFiltersTestCase(test.NoDBTestCase):
def test_filter_handler(self):
# Double check at least a couple of known filters exist
filter_handler = filters.HostFilterHandler()
classes = filter_handler.get_matching_classes(
['nova.scheduler.filters.all_filters'])
self.assertIn(all_hosts_filter.AllHostsFilter, classes)
self.assertIn(compute_filter.ComputeFilter, classes)
def test_all_host_filter(self):
filt_cls = all_hosts_filter.AllHostsFilter()
host = fakes.FakeHostState('host1', 'node1', {})
self.assertTrue(filt_cls.host_passes(host, {}))
|
unknown
|
codeparrot/codeparrot-clean
| ||
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import urllib
from random import random
from time import time
from os.path import join
from swift import gettext_ as _
import hashlib
from eventlet import sleep, Timeout
from eventlet.greenpool import GreenPool
from swift.common.daemon import Daemon
from swift.common.internal_client import InternalClient
from swift.common.utils import get_logger, dump_recon_cache
from swift.common.http import HTTP_NOT_FOUND, HTTP_CONFLICT, \
HTTP_PRECONDITION_FAILED
class ObjectExpirer(Daemon):
"""
Daemon that queries the internal hidden expiring_objects_account to
discover objects that need to be deleted.
:param conf: The daemon configuration.
"""
def __init__(self, conf):
self.conf = conf
self.logger = get_logger(conf, log_route='object-expirer')
self.interval = int(conf.get('interval') or 300)
self.expiring_objects_account = \
(conf.get('auto_create_account_prefix') or '.') + \
'expiring_objects'
conf_path = conf.get('__file__') or '/etc/swift/object-expirer.conf'
request_tries = int(conf.get('request_tries') or 3)
self.swift = InternalClient(conf_path,
'Swift Object Expirer',
request_tries)
self.report_interval = int(conf.get('report_interval') or 300)
self.report_first_time = self.report_last_time = time()
self.report_objects = 0
self.recon_cache_path = conf.get('recon_cache_path',
'/var/cache/swift')
self.rcache = join(self.recon_cache_path, 'object.recon')
self.concurrency = int(conf.get('concurrency', 1))
if self.concurrency < 1:
raise ValueError("concurrency must be set to at least 1")
self.processes = int(self.conf.get('processes', 0))
self.process = int(self.conf.get('process', 0))
def report(self, final=False):
"""
Emits a log line report of the progress so far, or the final progress
is final=True.
:param final: Set to True for the last report once the expiration pass
has completed.
"""
if final:
elapsed = time() - self.report_first_time
self.logger.info(_('Pass completed in %ds; %d objects expired') %
(elapsed, self.report_objects))
dump_recon_cache({'object_expiration_pass': elapsed,
'expired_last_pass': self.report_objects},
self.rcache, self.logger)
elif time() - self.report_last_time >= self.report_interval:
elapsed = time() - self.report_first_time
self.logger.info(_('Pass so far %ds; %d objects expired') %
(elapsed, self.report_objects))
self.report_last_time = time()
def run_once(self, *args, **kwargs):
"""
Executes a single pass, looking for objects to expire.
:param args: Extra args to fulfill the Daemon interface; this daemon
has no additional args.
:param kwargs: Extra keyword args to fulfill the Daemon interface; this
daemon accepts processes and process keyword args.
These will override the values from the config file if
provided.
"""
processes, process = self.get_process_values(kwargs)
pool = GreenPool(self.concurrency)
containers_to_delete = []
self.report_first_time = self.report_last_time = time()
self.report_objects = 0
try:
self.logger.debug(_('Run begin'))
containers, objects = \
self.swift.get_account_info(self.expiring_objects_account)
self.logger.info(_('Pass beginning; %s possible containers; %s '
'possible objects') % (containers, objects))
for c in self.swift.iter_containers(self.expiring_objects_account):
container = c['name']
timestamp = int(container)
if timestamp > int(time()):
break
containers_to_delete.append(container)
for o in self.swift.iter_objects(self.expiring_objects_account,
container):
obj = o['name'].encode('utf8')
if processes > 0:
obj_process = int(
hashlib.md5('%s/%s' % (container, obj)).
hexdigest(), 16)
if obj_process % processes != process:
continue
timestamp, actual_obj = obj.split('-', 1)
timestamp = int(timestamp)
if timestamp > int(time()):
break
pool.spawn_n(
self.delete_object, actual_obj, timestamp,
container, obj)
pool.waitall()
for container in containers_to_delete:
try:
self.swift.delete_container(
self.expiring_objects_account,
container,
acceptable_statuses=(2, HTTP_NOT_FOUND, HTTP_CONFLICT))
except (Exception, Timeout) as err:
self.logger.exception(
_('Exception while deleting container %s %s') %
(container, str(err)))
self.logger.debug(_('Run end'))
self.report(final=True)
except (Exception, Timeout):
self.logger.exception(_('Unhandled exception'))
def run_forever(self, *args, **kwargs):
"""
Executes passes forever, looking for objects to expire.
:param args: Extra args to fulfill the Daemon interface; this daemon
has no additional args.
:param kwargs: Extra keyword args to fulfill the Daemon interface; this
daemon has no additional keyword args.
"""
sleep(random() * self.interval)
while True:
begin = time()
try:
self.run_once(*args, **kwargs)
except (Exception, Timeout):
self.logger.exception(_('Unhandled exception'))
elapsed = time() - begin
if elapsed < self.interval:
sleep(random() * (self.interval - elapsed))
def get_process_values(self, kwargs):
"""
Gets the processes, process from the kwargs if those values exist.
Otherwise, return processes, process set in the config file.
:param kwargs: Keyword args passed into the run_forever(), run_once()
methods. They have values specified on the command
line when the daemon is run.
"""
if kwargs.get('processes') is not None:
processes = int(kwargs['processes'])
else:
processes = self.processes
if kwargs.get('process') is not None:
process = int(kwargs['process'])
else:
process = self.process
if process < 0:
raise ValueError(
'process must be an integer greater than or equal to 0')
if processes < 0:
raise ValueError(
'processes must be an integer greater than or equal to 0')
if processes and process >= processes:
raise ValueError(
'process must be less than or equal to processes')
return processes, process
def delete_object(self, actual_obj, timestamp, container, obj):
start_time = time()
try:
self.delete_actual_object(actual_obj, timestamp)
self.swift.delete_object(self.expiring_objects_account,
container, obj)
self.report_objects += 1
self.logger.increment('objects')
except (Exception, Timeout) as err:
self.logger.increment('errors')
self.logger.exception(
_('Exception while deleting object %s %s %s') %
(container, obj, str(err)))
self.logger.timing_since('timing', start_time)
self.report()
def delete_actual_object(self, actual_obj, timestamp):
"""
Deletes the end-user object indicated by the actual object name given
'<account>/<container>/<object>' if and only if the X-Delete-At value
of the object is exactly the timestamp given.
:param actual_obj: The name of the end-user object to delete:
'<account>/<container>/<object>'
:param timestamp: The timestamp the X-Delete-At value must match to
perform the actual delete.
"""
path = '/v1/' + urllib.quote(actual_obj.lstrip('/'))
self.swift.make_request('DELETE', path,
{'X-If-Delete-At': str(timestamp)},
(2, HTTP_NOT_FOUND, HTTP_PRECONDITION_FAILED))
|
unknown
|
codeparrot/codeparrot-clean
| ||
'''
Created on: Jun 28, 2013
@author: qwang
'''
import hashlib
import unittest
# configure django settings for test
from django.conf import settings
#NOTICE configure in weibonews/test.py in setUpTests
#settings.configure()
from weibonews.utils.web.decorators import authenticate
class MockRequestFunc(object):
'''
Mock object to test decorators
'''
def __init__(self):
self.called = False
def __call__(self, request, *args):
self.called = True
return {'sta': 0}
class TestAuthenticate(unittest.TestCase):
def setUp(self):
self.salt = 'dzone test salt'
self.uid = 2646150270
self.did = '1234567abcdefg'
def _mock_settings(self):
settings.AUTHENTICATION_ENABLED = True
settings.AUTHENTICATION_FAIL_INFO = 'auth failed'
settings.COOKIE_PATH = None
settings.COOKIE_DOMAIN = None
settings.DOLPHIN_SALT = self.salt
settings.DEBUG = True
class mock_weibodb(object):
'''
Mock weibodb class to test authenticate
'''
token = 'dzone test token'
def get_weibo_user_token(self, user_id):
try:
int(user_id)
except ValueError:
return None
if user_id == 2646150270:
return self.token
else:
# for other uid, return None
return None
class mock_request(object):
def __init__(self, path, method='GET', params=None, user_key=None):
params = {} if params is None else params
self.path = path
self.method = method
if method == 'GET':
self.GET = params
elif method == 'POST':
self.POST = params
self.COOKIES = {'user_key': user_key}
self.raw_post_data = {}
def build_absolute_uri(self):
return self.path
def _gen_user_key(self, weibodb, did=None):
if did is not None:
token = did
else:
token = weibodb.token
return hashlib.md5("".join([token, self.salt])).hexdigest()
def test_authenticate_with_get_request_and_anonymous_user(self):
'''
Test auth for anonymous user, with uid, did, user_key all None
'''
self._mock_settings()
func = MockRequestFunc()
weibodb = TestAuthenticate.mock_weibodb()
auth_decorator = authenticate(weibodb, settings)
user_key = None
request = TestAuthenticate.mock_request('/api/infostream.json', user_key=user_key)
response = auth_decorator(func)(request)
self.assertTrue(func.called)
self.assertTrue(type(response) in [dict, list])
def test_authenticate_with_get_request_and_key_in_param_with_uid(self):
'''
Test auth for get request and auth id in param, auth with uid. did also in param, but because
uid is provided, will auth with uid. Here the user is already authed, so token will be found
'''
self._mock_settings()
func = MockRequestFunc()
weibodb = TestAuthenticate.mock_weibodb()
auth_decorator = authenticate(weibodb, settings)
user_key = self._gen_user_key(weibodb)
request = TestAuthenticate.mock_request('/api/infostream.json', params={'uid': self.uid, 'did': self.did}, user_key=user_key)
response = auth_decorator(func)(request)
self.assertTrue(func.called)
self.assertTrue(type(response) in [dict, list])
def test_authenticate_with_get_request_and_key_in_param_with_wrong_uid(self):
'''
Test auth for get request and auth id in param, auth with uid. did also in param, but because
uid is provided, will auth with uid. Here the user is not authed, so token will be None, will
auth failed
'''
self._mock_settings()
func = MockRequestFunc()
weibodb = TestAuthenticate.mock_weibodb()
auth_decorator = authenticate(weibodb, settings)
user_key = self._gen_user_key(weibodb)
request = TestAuthenticate.mock_request('/api/infostream.json', params={'uid': 123456, 'did': self.did}, user_key=user_key)
auth_decorator(func)(request)
self.assertTrue(not func.called)
def test_authenticate_with_get_request_and_key_in_param_with_did(self):
'''
Test auth for get request, and auth id in param, auth with did. Here no uid because the user
is not authed, just auth with did
'''
self._mock_settings()
func = MockRequestFunc()
weibodb = TestAuthenticate.mock_weibodb()
auth_decorator = authenticate(weibodb, settings)
user_key = self._gen_user_key(weibodb, did=self.did)
request = TestAuthenticate.mock_request('/api/infostream.json', params={'did': self.did}, user_key=user_key)
response = auth_decorator(func)(request)
self.assertTrue(func.called)
self.assertTrue(type(response) in [dict, list])
def test_authenticate_with_get_request_and_key_in_param_fail(self):
'''
Test auth for get request, auth id in request params, with wrong user_key, will auth fail
'''
self._mock_settings()
func = MockRequestFunc()
weibodb = TestAuthenticate.mock_weibodb()
auth_decorator = authenticate(weibodb, settings)
user_key = self._gen_user_key(weibodb) + 'aa'
request = TestAuthenticate.mock_request('/api/infostream.json', params={'uid': self.uid, 'did': self.did}, user_key=user_key)
auth_decorator(func)(request)
self.assertTrue(not func.called)
def test_authenticate_with_get_request_and_key_in_path_with_uid(self):
'''
Test auth for get request, user id in request uri, auth with user id
'''
self._mock_settings()
func = MockRequestFunc()
weibodb = TestAuthenticate.mock_weibodb()
auth_decorator = authenticate(weibodb, settings, key_pos=1)
user_key = self._gen_user_key(weibodb)
request = TestAuthenticate.mock_request('/api/weibolist/%d.json' % self.uid, user_key=user_key)
response = auth_decorator(func)(request, self.uid)
self.assertTrue(func.called)
self.assertTrue(type(response) in [dict, list])
# Do not test did in path, we don't use that kind of api any more.
#def test_authenticate_with_get_request_and_key_in_path_with_did(self):
# '''
# Test auth for get request, device id in request uri, auth with device id
# '''
# self._mock_settings()
# func = MockRequestFunc()
# weibodb = TestAuthenticate.mock_weibodb()
# auth_decorator = authenticate(weibodb, settings, key_pos=1)
# user_key = self._gen_user_key(weibodb, did=self.did)
# request = TestAuthenticate.mock_request('/api/weibolist/%s.json' % self.did, user_key=user_key)
# response = auth_decorator(func)(request, self.did)
# self.assertTrue(func.called)
# self.assertTrue(type(response) in [dict, list])
def test_authenticate_with_post_request_and_key_in_param_with_uid(self):
'''
Test auth for post request, auth id in post body, auth with uid. Here the uid already authed, token will be found
'''
self._mock_settings()
func = MockRequestFunc()
weibodb = TestAuthenticate.mock_weibodb()
auth_decorator = authenticate(weibodb, settings, method='POST')
user_key = self._gen_user_key(weibodb)
request = TestAuthenticate.mock_request('/api/weibo/pub', method='POST', user_key=user_key, params={'uid': self.uid, 'did': self.did})
response = auth_decorator(func)(request)
self.assertTrue(func.called)
self.assertTrue(type(response) in [dict, list])
def test_authenticate_with_post_request_and_key_in_param_with_wrong_uid(self):
'''
Test auth for post request, auth id in post body, auth with wrong uid. Here uid is not authed, will found None token, auth will failed
'''
self._mock_settings()
func = MockRequestFunc()
weibodb = TestAuthenticate.mock_weibodb()
auth_decorator = authenticate(weibodb, settings, method='POST')
user_key = self._gen_user_key(weibodb)
request = TestAuthenticate.mock_request('/api/weibo/pub', method='POST', user_key=user_key, params={'uid': 123456, 'did': self.did})
auth_decorator(func)(request)
self.assertTrue(not func.called)
def test_authenticate_with_post_request_and_key_in_param_with_did(self):
'''
Test auth for post request, auth id in post body, auth with device id
'''
self._mock_settings()
func = MockRequestFunc()
weibodb = TestAuthenticate.mock_weibodb()
auth_decorator = authenticate(weibodb, settings, method='POST')
user_key = self._gen_user_key(weibodb, did=self.did)
request = TestAuthenticate.mock_request('/api/weibo/pub', method='POST', user_key=user_key, params={'did': self.did})
response = auth_decorator(func)(request)
self.assertTrue(func.called)
self.assertTrue(type(response) in [dict, list])
if __name__ == '__main__':
unittest.main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
#!/usr/bin/env python
"""
This file defines a set of system_info classes for getting
information about various resources (libraries, library directories,
include directories, etc.) in the system. Currently, the following
classes are available:
atlas_info
atlas_threads_info
atlas_blas_info
atlas_blas_threads_info
lapack_atlas_info
lapack_atlas_threads_info
atlas_3_10_info
atlas_3_10_threads_info
atlas_3_10_blas_info,
atlas_3_10_blas_threads_info,
lapack_atlas_3_10_info
lapack_atlas_3_10_threads_info
blas_info
lapack_info
openblas_info
blis_info
blas_opt_info # usage recommended
lapack_opt_info # usage recommended
fftw_info,dfftw_info,sfftw_info
fftw_threads_info,dfftw_threads_info,sfftw_threads_info
djbfft_info
x11_info
lapack_src_info
blas_src_info
numpy_info
numarray_info
numpy_info
boost_python_info
agg2_info
wx_info
gdk_pixbuf_xlib_2_info
gdk_pixbuf_2_info
gdk_x11_2_info
gtkp_x11_2_info
gtkp_2_info
xft_info
freetype2_info
umfpack_info
Usage:
info_dict = get_info(<name>)
where <name> is a string 'atlas','x11','fftw','lapack','blas',
'lapack_src', 'blas_src', etc. For a complete list of allowed names,
see the definition of get_info() function below.
Returned info_dict is a dictionary which is compatible with
distutils.setup keyword arguments. If info_dict == {}, then the
asked resource is not available (system_info could not find it).
Several *_info classes specify an environment variable to specify
the locations of software. When setting the corresponding environment
variable to 'None' then the software will be ignored, even when it
is available in system.
Global parameters:
system_info.search_static_first - search static libraries (.a)
in precedence to shared ones (.so, .sl) if enabled.
system_info.verbosity - output the results to stdout if enabled.
The file 'site.cfg' is looked for in
1) Directory of main setup.py file being run.
2) Home directory of user running the setup.py file as ~/.numpy-site.cfg
3) System wide directory (location of this file...)
The first one found is used to get system configuration options The
format is that used by ConfigParser (i.e., Windows .INI style). The
section ALL has options that are the default for each section. The
available sections are fftw, atlas, and x11. Appropriate defaults are
used if nothing is specified.
The order of finding the locations of resources is the following:
1. environment variable
2. section in site.cfg
3. ALL section in site.cfg
Only the first complete match is returned.
Example:
----------
[ALL]
library_dirs = /usr/lib:/usr/local/lib:/opt/lib
include_dirs = /usr/include:/usr/local/include:/opt/include
src_dirs = /usr/local/src:/opt/src
# search static libraries (.a) in preference to shared ones (.so)
search_static_first = 0
[fftw]
fftw_libs = rfftw, fftw
fftw_opt_libs = rfftw_threaded, fftw_threaded
# if the above aren't found, look for {s,d}fftw_libs and {s,d}fftw_opt_libs
[atlas]
library_dirs = /usr/lib/3dnow:/usr/lib/3dnow/atlas
# for overriding the names of the atlas libraries
atlas_libs = lapack, f77blas, cblas, atlas
[x11]
library_dirs = /usr/X11R6/lib
include_dirs = /usr/X11R6/include
----------
Authors:
Pearu Peterson <pearu@cens.ioc.ee>, February 2002
David M. Cooke <cookedm@physics.mcmaster.ca>, April 2002
Copyright 2002 Pearu Peterson all rights reserved,
Pearu Peterson <pearu@cens.ioc.ee>
Permission to use, modify, and distribute this software is given under the
terms of the NumPy (BSD style) license. See LICENSE.txt that came with
this distribution for specifics.
NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.
"""
from __future__ import division, absolute_import, print_function
import sys
import os
import re
import copy
import warnings
from glob import glob
from functools import reduce
if sys.version_info[0] < 3:
from ConfigParser import NoOptionError
from ConfigParser import RawConfigParser as ConfigParser
else:
from configparser import NoOptionError
from configparser import RawConfigParser as ConfigParser
# It seems that some people are importing ConfigParser from here so is
# good to keep its class name. Use of RawConfigParser is needed in
# order to be able to load path names with percent in them, like
# `feature%2Fcool` which is common on git flow branch names.
from distutils.errors import DistutilsError
from distutils.dist import Distribution
import distutils.sysconfig
from distutils import log
from distutils.util import get_platform
from numpy.distutils.exec_command import (
find_executable, exec_command, get_pythonexe)
from numpy.distutils.misc_util import (is_sequence, is_string,
get_shared_lib_extension)
from numpy.distutils.command.config import config as cmd_config
from numpy.distutils.compat import get_exception
import distutils.ccompiler
import tempfile
import shutil
# Determine number of bits
import platform
_bits = {'32bit': 32, '64bit': 64}
platform_bits = _bits[platform.architecture()[0]]
def libpaths(paths, bits):
"""Return a list of library paths valid on 32 or 64 bit systems.
Inputs:
paths : sequence
A sequence of strings (typically paths)
bits : int
An integer, the only valid values are 32 or 64. A ValueError exception
is raised otherwise.
Examples:
Consider a list of directories
>>> paths = ['/usr/X11R6/lib','/usr/X11/lib','/usr/lib']
For a 32-bit platform, this is already valid:
>>> np.distutils.system_info.libpaths(paths,32)
['/usr/X11R6/lib', '/usr/X11/lib', '/usr/lib']
On 64 bits, we prepend the '64' postfix
>>> np.distutils.system_info.libpaths(paths,64)
['/usr/X11R6/lib64', '/usr/X11R6/lib', '/usr/X11/lib64', '/usr/X11/lib',
'/usr/lib64', '/usr/lib']
"""
if bits not in (32, 64):
raise ValueError("Invalid bit size in libpaths: 32 or 64 only")
# Handle 32bit case
if bits == 32:
return paths
# Handle 64bit case
out = []
for p in paths:
out.extend([p + '64', p])
return out
if sys.platform == 'win32':
default_lib_dirs = ['C:\\',
os.path.join(distutils.sysconfig.EXEC_PREFIX,
'libs')]
default_runtime_dirs = []
default_include_dirs = []
default_src_dirs = ['.']
default_x11_lib_dirs = []
default_x11_include_dirs = []
else:
default_lib_dirs = libpaths(['/usr/local/lib', '/opt/lib', '/usr/lib',
'/opt/local/lib', '/sw/lib'], platform_bits)
default_runtime_dirs = []
default_include_dirs = ['/usr/local/include',
'/opt/include', '/usr/include',
# path of umfpack under macports
'/opt/local/include/ufsparse',
'/opt/local/include', '/sw/include',
'/usr/include/suitesparse']
default_src_dirs = ['.', '/usr/local/src', '/opt/src', '/sw/src']
default_x11_lib_dirs = libpaths(['/usr/X11R6/lib', '/usr/X11/lib',
'/usr/lib'], platform_bits)
default_x11_include_dirs = ['/usr/X11R6/include', '/usr/X11/include',
'/usr/include']
if os.path.exists('/usr/lib/X11'):
globbed_x11_dir = glob('/usr/lib/*/libX11.so')
if globbed_x11_dir:
x11_so_dir = os.path.split(globbed_x11_dir[0])[0]
default_x11_lib_dirs.extend([x11_so_dir, '/usr/lib/X11'])
default_x11_include_dirs.extend(['/usr/lib/X11/include',
'/usr/include/X11'])
import subprocess as sp
tmp = None
try:
# Explicitly open/close file to avoid ResourceWarning when
# tests are run in debug mode Python 3.
tmp = open(os.devnull, 'w')
p = sp.Popen(["gcc", "-print-multiarch"], stdout=sp.PIPE,
stderr=tmp)
except (OSError, DistutilsError):
# OSError if gcc is not installed, or SandboxViolation (DistutilsError
# subclass) if an old setuptools bug is triggered (see gh-3160).
pass
else:
triplet = str(p.communicate()[0].decode().strip())
if p.returncode == 0:
# gcc supports the "-print-multiarch" option
default_x11_lib_dirs += [os.path.join("/usr/lib/", triplet)]
default_lib_dirs += [os.path.join("/usr/lib/", triplet)]
finally:
if tmp is not None:
tmp.close()
if os.path.join(sys.prefix, 'lib') not in default_lib_dirs:
default_lib_dirs.insert(0, os.path.join(sys.prefix, 'lib'))
default_include_dirs.append(os.path.join(sys.prefix, 'include'))
default_src_dirs.append(os.path.join(sys.prefix, 'src'))
default_lib_dirs = [_m for _m in default_lib_dirs if os.path.isdir(_m)]
default_runtime_dirs = [_m for _m in default_runtime_dirs if os.path.isdir(_m)]
default_include_dirs = [_m for _m in default_include_dirs if os.path.isdir(_m)]
default_src_dirs = [_m for _m in default_src_dirs if os.path.isdir(_m)]
so_ext = get_shared_lib_extension()
def get_standard_file(fname):
"""Returns a list of files named 'fname' from
1) System-wide directory (directory-location of this module)
2) Users HOME directory (os.environ['HOME'])
3) Local directory
"""
# System-wide file
filenames = []
try:
f = __file__
except NameError:
f = sys.argv[0]
else:
sysfile = os.path.join(os.path.split(os.path.abspath(f))[0],
fname)
if os.path.isfile(sysfile):
filenames.append(sysfile)
# Home directory
# And look for the user config file
try:
f = os.path.expanduser('~')
except KeyError:
pass
else:
user_file = os.path.join(f, fname)
if os.path.isfile(user_file):
filenames.append(user_file)
# Local file
if os.path.isfile(fname):
filenames.append(os.path.abspath(fname))
return filenames
def get_info(name, notfound_action=0):
"""
notfound_action:
0 - do nothing
1 - display warning message
2 - raise error
"""
cl = {'atlas': atlas_info, # use lapack_opt or blas_opt instead
'atlas_threads': atlas_threads_info, # ditto
'atlas_blas': atlas_blas_info,
'atlas_blas_threads': atlas_blas_threads_info,
'lapack_atlas': lapack_atlas_info, # use lapack_opt instead
'lapack_atlas_threads': lapack_atlas_threads_info, # ditto
'atlas_3_10': atlas_3_10_info, # use lapack_opt or blas_opt instead
'atlas_3_10_threads': atlas_3_10_threads_info, # ditto
'atlas_3_10_blas': atlas_3_10_blas_info,
'atlas_3_10_blas_threads': atlas_3_10_blas_threads_info,
'lapack_atlas_3_10': lapack_atlas_3_10_info, # use lapack_opt instead
'lapack_atlas_3_10_threads': lapack_atlas_3_10_threads_info, # ditto
'mkl': mkl_info,
# openblas which may or may not have embedded lapack
'openblas': openblas_info, # use blas_opt instead
# openblas with embedded lapack
'openblas_lapack': openblas_lapack_info, # use blas_opt instead
'blis': blis_info, # use blas_opt instead
'lapack_mkl': lapack_mkl_info, # use lapack_opt instead
'blas_mkl': blas_mkl_info, # use blas_opt instead
'x11': x11_info,
'fft_opt': fft_opt_info,
'fftw': fftw_info,
'fftw2': fftw2_info,
'fftw3': fftw3_info,
'dfftw': dfftw_info,
'sfftw': sfftw_info,
'fftw_threads': fftw_threads_info,
'dfftw_threads': dfftw_threads_info,
'sfftw_threads': sfftw_threads_info,
'djbfft': djbfft_info,
'blas': blas_info, # use blas_opt instead
'lapack': lapack_info, # use lapack_opt instead
'lapack_src': lapack_src_info,
'blas_src': blas_src_info,
'numpy': numpy_info,
'f2py': f2py_info,
'Numeric': Numeric_info,
'numeric': Numeric_info,
'numarray': numarray_info,
'numerix': numerix_info,
'lapack_opt': lapack_opt_info,
'blas_opt': blas_opt_info,
'boost_python': boost_python_info,
'agg2': agg2_info,
'wx': wx_info,
'gdk_pixbuf_xlib_2': gdk_pixbuf_xlib_2_info,
'gdk-pixbuf-xlib-2.0': gdk_pixbuf_xlib_2_info,
'gdk_pixbuf_2': gdk_pixbuf_2_info,
'gdk-pixbuf-2.0': gdk_pixbuf_2_info,
'gdk': gdk_info,
'gdk_2': gdk_2_info,
'gdk-2.0': gdk_2_info,
'gdk_x11_2': gdk_x11_2_info,
'gdk-x11-2.0': gdk_x11_2_info,
'gtkp_x11_2': gtkp_x11_2_info,
'gtk+-x11-2.0': gtkp_x11_2_info,
'gtkp_2': gtkp_2_info,
'gtk+-2.0': gtkp_2_info,
'xft': xft_info,
'freetype2': freetype2_info,
'umfpack': umfpack_info,
'amd': amd_info,
}.get(name.lower(), system_info)
return cl().get_info(notfound_action)
class NotFoundError(DistutilsError):
"""Some third-party program or library is not found."""
class AtlasNotFoundError(NotFoundError):
"""
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable."""
class LapackNotFoundError(NotFoundError):
"""
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable."""
class LapackSrcNotFoundError(LapackNotFoundError):
"""
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable."""
class BlasNotFoundError(NotFoundError):
"""
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable."""
class BlasSrcNotFoundError(BlasNotFoundError):
"""
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable."""
class FFTWNotFoundError(NotFoundError):
"""
FFTW (http://www.fftw.org/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [fftw]) or by setting
the FFTW environment variable."""
class DJBFFTNotFoundError(NotFoundError):
"""
DJBFFT (http://cr.yp.to/djbfft.html) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [djbfft]) or by setting
the DJBFFT environment variable."""
class NumericNotFoundError(NotFoundError):
"""
Numeric (http://www.numpy.org/) module not found.
Get it from above location, install it, and retry setup.py."""
class X11NotFoundError(NotFoundError):
"""X11 libraries not found."""
class UmfpackNotFoundError(NotFoundError):
"""
UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/)
not found. Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [umfpack]) or by setting
the UMFPACK environment variable."""
class system_info(object):
""" get_info() is the only public method. Don't use others.
"""
section = 'ALL'
dir_env_var = None
search_static_first = 0 # XXX: disabled by default, may disappear in
# future unless it is proved to be useful.
verbosity = 1
saved_results = {}
notfounderror = NotFoundError
def __init__(self,
default_lib_dirs=default_lib_dirs,
default_include_dirs=default_include_dirs,
verbosity=1,
):
self.__class__.info = {}
self.local_prefixes = []
defaults = {'library_dirs': os.pathsep.join(default_lib_dirs),
'include_dirs': os.pathsep.join(default_include_dirs),
'runtime_library_dirs': os.pathsep.join(default_runtime_dirs),
'rpath': '',
'src_dirs': os.pathsep.join(default_src_dirs),
'search_static_first': str(self.search_static_first),
'extra_compile_args': '', 'extra_link_args': ''}
self.cp = ConfigParser(defaults)
self.files = []
self.files.extend(get_standard_file('.numpy-site.cfg'))
self.files.extend(get_standard_file('site.cfg'))
self.parse_config_files()
if self.section is not None:
self.search_static_first = self.cp.getboolean(
self.section, 'search_static_first')
assert isinstance(self.search_static_first, int)
def parse_config_files(self):
self.cp.read(self.files)
if not self.cp.has_section(self.section):
if self.section is not None:
self.cp.add_section(self.section)
def calc_libraries_info(self):
libs = self.get_libraries()
dirs = self.get_lib_dirs()
# The extensions use runtime_library_dirs
r_dirs = self.get_runtime_lib_dirs()
# Intrinsic distutils use rpath, we simply append both entries
# as though they were one entry
r_dirs.extend(self.get_runtime_lib_dirs(key='rpath'))
info = {}
for lib in libs:
i = self.check_libs(dirs, [lib])
if i is not None:
dict_append(info, **i)
else:
log.info('Library %s was not found. Ignoring' % (lib))
if r_dirs:
i = self.check_libs(r_dirs, [lib])
if i is not None:
# Swap library keywords found to runtime_library_dirs
# the libraries are insisting on the user having defined
# them using the library_dirs, and not necessarily by
# runtime_library_dirs
del i['libraries']
i['runtime_library_dirs'] = i.pop('library_dirs')
dict_append(info, **i)
else:
log.info('Runtime library %s was not found. Ignoring' % (lib))
return info
def set_info(self, **info):
if info:
lib_info = self.calc_libraries_info()
dict_append(info, **lib_info)
# Update extra information
extra_info = self.calc_extra_info()
dict_append(info, **extra_info)
self.saved_results[self.__class__.__name__] = info
def has_info(self):
return self.__class__.__name__ in self.saved_results
def calc_extra_info(self):
""" Updates the information in the current information with
respect to these flags:
extra_compile_args
extra_link_args
"""
info = {}
for key in ['extra_compile_args', 'extra_link_args']:
# Get values
opt = self.cp.get(self.section, key)
if opt:
tmp = {key : [opt]}
dict_append(info, **tmp)
return info
def get_info(self, notfound_action=0):
""" Return a dictonary with items that are compatible
with numpy.distutils.setup keyword arguments.
"""
flag = 0
if not self.has_info():
flag = 1
log.info(self.__class__.__name__ + ':')
if hasattr(self, 'calc_info'):
self.calc_info()
if notfound_action:
if not self.has_info():
if notfound_action == 1:
warnings.warn(self.notfounderror.__doc__, stacklevel=2)
elif notfound_action == 2:
raise self.notfounderror(self.notfounderror.__doc__)
else:
raise ValueError(repr(notfound_action))
if not self.has_info():
log.info(' NOT AVAILABLE')
self.set_info()
else:
log.info(' FOUND:')
res = self.saved_results.get(self.__class__.__name__)
if self.verbosity > 0 and flag:
for k, v in res.items():
v = str(v)
if k in ['sources', 'libraries'] and len(v) > 270:
v = v[:120] + '...\n...\n...' + v[-120:]
log.info(' %s = %s', k, v)
log.info('')
return copy.deepcopy(res)
def get_paths(self, section, key):
dirs = self.cp.get(section, key).split(os.pathsep)
env_var = self.dir_env_var
if env_var:
if is_sequence(env_var):
e0 = env_var[-1]
for e in env_var:
if e in os.environ:
e0 = e
break
if not env_var[0] == e0:
log.info('Setting %s=%s' % (env_var[0], e0))
env_var = e0
if env_var and env_var in os.environ:
d = os.environ[env_var]
if d == 'None':
log.info('Disabled %s: %s',
self.__class__.__name__, '(%s is None)'
% (env_var,))
return []
if os.path.isfile(d):
dirs = [os.path.dirname(d)] + dirs
l = getattr(self, '_lib_names', [])
if len(l) == 1:
b = os.path.basename(d)
b = os.path.splitext(b)[0]
if b[:3] == 'lib':
log.info('Replacing _lib_names[0]==%r with %r' \
% (self._lib_names[0], b[3:]))
self._lib_names[0] = b[3:]
else:
ds = d.split(os.pathsep)
ds2 = []
for d in ds:
if os.path.isdir(d):
ds2.append(d)
for dd in ['include', 'lib']:
d1 = os.path.join(d, dd)
if os.path.isdir(d1):
ds2.append(d1)
dirs = ds2 + dirs
default_dirs = self.cp.get(self.section, key).split(os.pathsep)
dirs.extend(default_dirs)
ret = []
for d in dirs:
if len(d) > 0 and not os.path.isdir(d):
warnings.warn('Specified path %s is invalid.' % d, stacklevel=2)
continue
if d not in ret:
ret.append(d)
log.debug('( %s = %s )', key, ':'.join(ret))
return ret
def get_lib_dirs(self, key='library_dirs'):
return self.get_paths(self.section, key)
def get_runtime_lib_dirs(self, key='runtime_library_dirs'):
path = self.get_paths(self.section, key)
if path == ['']:
path = []
return path
def get_include_dirs(self, key='include_dirs'):
return self.get_paths(self.section, key)
def get_src_dirs(self, key='src_dirs'):
return self.get_paths(self.section, key)
def get_libs(self, key, default):
try:
libs = self.cp.get(self.section, key)
except NoOptionError:
if not default:
return []
if is_string(default):
return [default]
return default
return [b for b in [a.strip() for a in libs.split(',')] if b]
def get_libraries(self, key='libraries'):
if hasattr(self, '_lib_names'):
return self.get_libs(key, default=self._lib_names)
else:
return self.get_libs(key, '')
def library_extensions(self):
static_exts = ['.a']
if sys.platform == 'win32':
static_exts.append('.lib') # .lib is used by MSVC
if self.search_static_first:
exts = static_exts + [so_ext]
else:
exts = [so_ext] + static_exts
if sys.platform == 'cygwin':
exts.append('.dll.a')
if sys.platform == 'darwin':
exts.append('.dylib')
return exts
def check_libs(self, lib_dirs, libs, opt_libs=[]):
"""If static or shared libraries are available then return
their info dictionary.
Checks for all libraries as shared libraries first, then
static (or vice versa if self.search_static_first is True).
"""
exts = self.library_extensions()
info = None
for ext in exts:
info = self._check_libs(lib_dirs, libs, opt_libs, [ext])
if info is not None:
break
if not info:
log.info(' libraries %s not found in %s', ','.join(libs),
lib_dirs)
return info
def check_libs2(self, lib_dirs, libs, opt_libs=[]):
"""If static or shared libraries are available then return
their info dictionary.
Checks each library for shared or static.
"""
exts = self.library_extensions()
info = self._check_libs(lib_dirs, libs, opt_libs, exts)
if not info:
log.info(' libraries %s not found in %s', ','.join(libs),
lib_dirs)
return info
def _find_lib(self, lib_dir, lib, exts):
assert is_string(lib_dir)
# under windows first try without 'lib' prefix
if sys.platform == 'win32':
lib_prefixes = ['', 'lib']
else:
lib_prefixes = ['lib']
# for each library name, see if we can find a file for it.
for ext in exts:
for prefix in lib_prefixes:
p = self.combine_paths(lib_dir, prefix + lib + ext)
if p:
break
if p:
assert len(p) == 1
# ??? splitext on p[0] would do this for cygwin
# doesn't seem correct
if ext == '.dll.a':
lib += '.dll'
return lib
return False
def _find_libs(self, lib_dirs, libs, exts):
# make sure we preserve the order of libs, as it can be important
found_dirs, found_libs = [], []
for lib in libs:
for lib_dir in lib_dirs:
found_lib = self._find_lib(lib_dir, lib, exts)
if found_lib:
found_libs.append(found_lib)
if lib_dir not in found_dirs:
found_dirs.append(lib_dir)
break
return found_dirs, found_libs
def _check_libs(self, lib_dirs, libs, opt_libs, exts):
"""Find mandatory and optional libs in expected paths.
Missing optional libraries are silently forgotten.
"""
if not is_sequence(lib_dirs):
lib_dirs = [lib_dirs]
# First, try to find the mandatory libraries
found_dirs, found_libs = self._find_libs(lib_dirs, libs, exts)
if len(found_libs) > 0 and len(found_libs) == len(libs):
# Now, check for optional libraries
opt_found_dirs, opt_found_libs = self._find_libs(lib_dirs, opt_libs, exts)
found_libs.extend(opt_found_libs)
for lib_dir in opt_found_dirs:
if lib_dir not in found_dirs:
found_dirs.append(lib_dir)
info = {'libraries': found_libs, 'library_dirs': found_dirs}
return info
else:
return None
def combine_paths(self, *args):
"""Return a list of existing paths composed by all combinations
of items from the arguments.
"""
return combine_paths(*args, **{'verbosity': self.verbosity})
class fft_opt_info(system_info):
def calc_info(self):
info = {}
fftw_info = get_info('fftw3') or get_info('fftw2') or get_info('dfftw')
djbfft_info = get_info('djbfft')
if fftw_info:
dict_append(info, **fftw_info)
if djbfft_info:
dict_append(info, **djbfft_info)
self.set_info(**info)
return
class fftw_info(system_info):
#variables to override
section = 'fftw'
dir_env_var = 'FFTW'
notfounderror = FFTWNotFoundError
ver_info = [{'name':'fftw3',
'libs':['fftw3'],
'includes':['fftw3.h'],
'macros':[('SCIPY_FFTW3_H', None)]},
{'name':'fftw2',
'libs':['rfftw', 'fftw'],
'includes':['fftw.h', 'rfftw.h'],
'macros':[('SCIPY_FFTW_H', None)]}]
def calc_ver_info(self, ver_param):
"""Returns True on successful version detection, else False"""
lib_dirs = self.get_lib_dirs()
incl_dirs = self.get_include_dirs()
libs = self.get_libs(self.section + '_libs', ver_param['libs'])
info = self.check_libs(lib_dirs, libs)
if info is not None:
flag = 0
for d in incl_dirs:
if len(self.combine_paths(d, ver_param['includes'])) \
== len(ver_param['includes']):
dict_append(info, include_dirs=[d])
flag = 1
incl_dirs = [d]
break
if flag:
dict_append(info, define_macros=ver_param['macros'])
else:
info = None
if info is not None:
self.set_info(**info)
return True
else:
log.info(' %s not found' % (ver_param['name']))
return False
def calc_info(self):
for i in self.ver_info:
if self.calc_ver_info(i):
break
class fftw2_info(fftw_info):
#variables to override
section = 'fftw'
dir_env_var = 'FFTW'
notfounderror = FFTWNotFoundError
ver_info = [{'name':'fftw2',
'libs':['rfftw', 'fftw'],
'includes':['fftw.h', 'rfftw.h'],
'macros':[('SCIPY_FFTW_H', None)]}
]
class fftw3_info(fftw_info):
#variables to override
section = 'fftw3'
dir_env_var = 'FFTW3'
notfounderror = FFTWNotFoundError
ver_info = [{'name':'fftw3',
'libs':['fftw3'],
'includes':['fftw3.h'],
'macros':[('SCIPY_FFTW3_H', None)]},
]
class dfftw_info(fftw_info):
section = 'fftw'
dir_env_var = 'FFTW'
ver_info = [{'name':'dfftw',
'libs':['drfftw', 'dfftw'],
'includes':['dfftw.h', 'drfftw.h'],
'macros':[('SCIPY_DFFTW_H', None)]}]
class sfftw_info(fftw_info):
section = 'fftw'
dir_env_var = 'FFTW'
ver_info = [{'name':'sfftw',
'libs':['srfftw', 'sfftw'],
'includes':['sfftw.h', 'srfftw.h'],
'macros':[('SCIPY_SFFTW_H', None)]}]
class fftw_threads_info(fftw_info):
section = 'fftw'
dir_env_var = 'FFTW'
ver_info = [{'name':'fftw threads',
'libs':['rfftw_threads', 'fftw_threads'],
'includes':['fftw_threads.h', 'rfftw_threads.h'],
'macros':[('SCIPY_FFTW_THREADS_H', None)]}]
class dfftw_threads_info(fftw_info):
section = 'fftw'
dir_env_var = 'FFTW'
ver_info = [{'name':'dfftw threads',
'libs':['drfftw_threads', 'dfftw_threads'],
'includes':['dfftw_threads.h', 'drfftw_threads.h'],
'macros':[('SCIPY_DFFTW_THREADS_H', None)]}]
class sfftw_threads_info(fftw_info):
section = 'fftw'
dir_env_var = 'FFTW'
ver_info = [{'name':'sfftw threads',
'libs':['srfftw_threads', 'sfftw_threads'],
'includes':['sfftw_threads.h', 'srfftw_threads.h'],
'macros':[('SCIPY_SFFTW_THREADS_H', None)]}]
class djbfft_info(system_info):
section = 'djbfft'
dir_env_var = 'DJBFFT'
notfounderror = DJBFFTNotFoundError
def get_paths(self, section, key):
pre_dirs = system_info.get_paths(self, section, key)
dirs = []
for d in pre_dirs:
dirs.extend(self.combine_paths(d, ['djbfft']) + [d])
return [d for d in dirs if os.path.isdir(d)]
def calc_info(self):
lib_dirs = self.get_lib_dirs()
incl_dirs = self.get_include_dirs()
info = None
for d in lib_dirs:
p = self.combine_paths(d, ['djbfft.a'])
if p:
info = {'extra_objects': p}
break
p = self.combine_paths(d, ['libdjbfft.a', 'libdjbfft' + so_ext])
if p:
info = {'libraries': ['djbfft'], 'library_dirs': [d]}
break
if info is None:
return
for d in incl_dirs:
if len(self.combine_paths(d, ['fftc8.h', 'fftfreq.h'])) == 2:
dict_append(info, include_dirs=[d],
define_macros=[('SCIPY_DJBFFT_H', None)])
self.set_info(**info)
return
return
class mkl_info(system_info):
section = 'mkl'
dir_env_var = 'MKLROOT'
_lib_mkl = ['mkl_rt']
def get_mkl_rootdir(self):
mklroot = os.environ.get('MKLROOT', None)
if mklroot is not None:
return mklroot
paths = os.environ.get('LD_LIBRARY_PATH', '').split(os.pathsep)
ld_so_conf = '/etc/ld.so.conf'
if os.path.isfile(ld_so_conf):
with open(ld_so_conf, 'r') as f:
for d in f:
d = d.strip()
if d:
paths.append(d)
intel_mkl_dirs = []
for path in paths:
path_atoms = path.split(os.sep)
for m in path_atoms:
if m.startswith('mkl'):
d = os.sep.join(path_atoms[:path_atoms.index(m) + 2])
intel_mkl_dirs.append(d)
break
for d in paths:
dirs = glob(os.path.join(d, 'mkl', '*'))
dirs += glob(os.path.join(d, 'mkl*'))
for d in dirs:
if os.path.isdir(os.path.join(d, 'lib')):
return d
return None
def __init__(self):
mklroot = self.get_mkl_rootdir()
if mklroot is None:
system_info.__init__(self)
else:
from .cpuinfo import cpu
if cpu.is_Itanium():
plt = '64'
elif cpu.is_Intel() and cpu.is_64bit():
plt = 'intel64'
else:
plt = '32'
system_info.__init__(
self,
default_lib_dirs=[os.path.join(mklroot, 'lib', plt)],
default_include_dirs=[os.path.join(mklroot, 'include')])
def calc_info(self):
lib_dirs = self.get_lib_dirs()
incl_dirs = self.get_include_dirs()
mkl_libs = self.get_libs('mkl_libs', self._lib_mkl)
info = self.check_libs2(lib_dirs, mkl_libs)
if info is None:
return
dict_append(info,
define_macros=[('SCIPY_MKL_H', None),
('HAVE_CBLAS', None)],
include_dirs=incl_dirs)
if sys.platform == 'win32':
pass # win32 has no pthread library
else:
dict_append(info, libraries=['pthread'])
self.set_info(**info)
class lapack_mkl_info(mkl_info):
pass
class blas_mkl_info(mkl_info):
pass
class atlas_info(system_info):
section = 'atlas'
dir_env_var = 'ATLAS'
_lib_names = ['f77blas', 'cblas']
if sys.platform[:7] == 'freebsd':
_lib_atlas = ['atlas_r']
_lib_lapack = ['alapack_r']
else:
_lib_atlas = ['atlas']
_lib_lapack = ['lapack']
notfounderror = AtlasNotFoundError
def get_paths(self, section, key):
pre_dirs = system_info.get_paths(self, section, key)
dirs = []
for d in pre_dirs:
dirs.extend(self.combine_paths(d, ['atlas*', 'ATLAS*',
'sse', '3dnow', 'sse2']) + [d])
return [d for d in dirs if os.path.isdir(d)]
def calc_info(self):
lib_dirs = self.get_lib_dirs()
info = {}
atlas_libs = self.get_libs('atlas_libs',
self._lib_names + self._lib_atlas)
lapack_libs = self.get_libs('lapack_libs', self._lib_lapack)
atlas = None
lapack = None
atlas_1 = None
for d in lib_dirs:
atlas = self.check_libs2(d, atlas_libs, [])
lapack_atlas = self.check_libs2(d, ['lapack_atlas'], [])
if atlas is not None:
lib_dirs2 = [d] + self.combine_paths(d, ['atlas*', 'ATLAS*'])
lapack = self.check_libs2(lib_dirs2, lapack_libs, [])
if lapack is not None:
break
if atlas:
atlas_1 = atlas
log.info(self.__class__)
if atlas is None:
atlas = atlas_1
if atlas is None:
return
include_dirs = self.get_include_dirs()
h = (self.combine_paths(lib_dirs + include_dirs, 'cblas.h') or [None])
h = h[0]
if h:
h = os.path.dirname(h)
dict_append(info, include_dirs=[h])
info['language'] = 'c'
if lapack is not None:
dict_append(info, **lapack)
dict_append(info, **atlas)
elif 'lapack_atlas' in atlas['libraries']:
dict_append(info, **atlas)
dict_append(info,
define_macros=[('ATLAS_WITH_LAPACK_ATLAS', None)])
self.set_info(**info)
return
else:
dict_append(info, **atlas)
dict_append(info, define_macros=[('ATLAS_WITHOUT_LAPACK', None)])
message = """
*********************************************************************
Could not find lapack library within the ATLAS installation.
*********************************************************************
"""
warnings.warn(message, stacklevel=2)
self.set_info(**info)
return
# Check if lapack library is complete, only warn if it is not.
lapack_dir = lapack['library_dirs'][0]
lapack_name = lapack['libraries'][0]
lapack_lib = None
lib_prefixes = ['lib']
if sys.platform == 'win32':
lib_prefixes.append('')
for e in self.library_extensions():
for prefix in lib_prefixes:
fn = os.path.join(lapack_dir, prefix + lapack_name + e)
if os.path.exists(fn):
lapack_lib = fn
break
if lapack_lib:
break
if lapack_lib is not None:
sz = os.stat(lapack_lib)[6]
if sz <= 4000 * 1024:
message = """
*********************************************************************
Lapack library (from ATLAS) is probably incomplete:
size of %s is %sk (expected >4000k)
Follow the instructions in the KNOWN PROBLEMS section of the file
numpy/INSTALL.txt.
*********************************************************************
""" % (lapack_lib, sz / 1024)
warnings.warn(message, stacklevel=2)
else:
info['language'] = 'f77'
atlas_version, atlas_extra_info = get_atlas_version(**atlas)
dict_append(info, **atlas_extra_info)
self.set_info(**info)
class atlas_blas_info(atlas_info):
_lib_names = ['f77blas', 'cblas']
def calc_info(self):
lib_dirs = self.get_lib_dirs()
info = {}
atlas_libs = self.get_libs('atlas_libs',
self._lib_names + self._lib_atlas)
atlas = self.check_libs2(lib_dirs, atlas_libs, [])
if atlas is None:
return
include_dirs = self.get_include_dirs()
h = (self.combine_paths(lib_dirs + include_dirs, 'cblas.h') or [None])
h = h[0]
if h:
h = os.path.dirname(h)
dict_append(info, include_dirs=[h])
info['language'] = 'c'
info['define_macros'] = [('HAVE_CBLAS', None)]
atlas_version, atlas_extra_info = get_atlas_version(**atlas)
dict_append(atlas, **atlas_extra_info)
dict_append(info, **atlas)
self.set_info(**info)
return
class atlas_threads_info(atlas_info):
dir_env_var = ['PTATLAS', 'ATLAS']
_lib_names = ['ptf77blas', 'ptcblas']
class atlas_blas_threads_info(atlas_blas_info):
dir_env_var = ['PTATLAS', 'ATLAS']
_lib_names = ['ptf77blas', 'ptcblas']
class lapack_atlas_info(atlas_info):
_lib_names = ['lapack_atlas'] + atlas_info._lib_names
class lapack_atlas_threads_info(atlas_threads_info):
_lib_names = ['lapack_atlas'] + atlas_threads_info._lib_names
class atlas_3_10_info(atlas_info):
_lib_names = ['satlas']
_lib_atlas = _lib_names
_lib_lapack = _lib_names
class atlas_3_10_blas_info(atlas_3_10_info):
_lib_names = ['satlas']
def calc_info(self):
lib_dirs = self.get_lib_dirs()
info = {}
atlas_libs = self.get_libs('atlas_libs',
self._lib_names)
atlas = self.check_libs2(lib_dirs, atlas_libs, [])
if atlas is None:
return
include_dirs = self.get_include_dirs()
h = (self.combine_paths(lib_dirs + include_dirs, 'cblas.h') or [None])
h = h[0]
if h:
h = os.path.dirname(h)
dict_append(info, include_dirs=[h])
info['language'] = 'c'
info['define_macros'] = [('HAVE_CBLAS', None)]
atlas_version, atlas_extra_info = get_atlas_version(**atlas)
dict_append(atlas, **atlas_extra_info)
dict_append(info, **atlas)
self.set_info(**info)
return
class atlas_3_10_threads_info(atlas_3_10_info):
dir_env_var = ['PTATLAS', 'ATLAS']
_lib_names = ['tatlas']
_lib_atlas = _lib_names
_lib_lapack = _lib_names
class atlas_3_10_blas_threads_info(atlas_3_10_blas_info):
dir_env_var = ['PTATLAS', 'ATLAS']
_lib_names = ['tatlas']
class lapack_atlas_3_10_info(atlas_3_10_info):
pass
class lapack_atlas_3_10_threads_info(atlas_3_10_threads_info):
pass
class lapack_info(system_info):
section = 'lapack'
dir_env_var = 'LAPACK'
_lib_names = ['lapack']
notfounderror = LapackNotFoundError
def calc_info(self):
lib_dirs = self.get_lib_dirs()
lapack_libs = self.get_libs('lapack_libs', self._lib_names)
info = self.check_libs(lib_dirs, lapack_libs, [])
if info is None:
return
info['language'] = 'f77'
self.set_info(**info)
class lapack_src_info(system_info):
section = 'lapack_src'
dir_env_var = 'LAPACK_SRC'
notfounderror = LapackSrcNotFoundError
def get_paths(self, section, key):
pre_dirs = system_info.get_paths(self, section, key)
dirs = []
for d in pre_dirs:
dirs.extend([d] + self.combine_paths(d, ['LAPACK*/SRC', 'SRC']))
return [d for d in dirs if os.path.isdir(d)]
def calc_info(self):
src_dirs = self.get_src_dirs()
src_dir = ''
for d in src_dirs:
if os.path.isfile(os.path.join(d, 'dgesv.f')):
src_dir = d
break
if not src_dir:
#XXX: Get sources from netlib. May be ask first.
return
# The following is extracted from LAPACK-3.0/SRC/Makefile.
# Added missing names from lapack-lite-3.1.1/SRC/Makefile
# while keeping removed names for Lapack-3.0 compatibility.
allaux = '''
ilaenv ieeeck lsame lsamen xerbla
iparmq
''' # *.f
laux = '''
bdsdc bdsqr disna labad lacpy ladiv lae2 laebz laed0 laed1
laed2 laed3 laed4 laed5 laed6 laed7 laed8 laed9 laeda laev2
lagtf lagts lamch lamrg lanst lapy2 lapy3 larnv larrb larre
larrf lartg laruv las2 lascl lasd0 lasd1 lasd2 lasd3 lasd4
lasd5 lasd6 lasd7 lasd8 lasd9 lasda lasdq lasdt laset lasq1
lasq2 lasq3 lasq4 lasq5 lasq6 lasr lasrt lassq lasv2 pttrf
stebz stedc steqr sterf
larra larrc larrd larr larrk larrj larrr laneg laisnan isnan
lazq3 lazq4
''' # [s|d]*.f
lasrc = '''
gbbrd gbcon gbequ gbrfs gbsv gbsvx gbtf2 gbtrf gbtrs gebak
gebal gebd2 gebrd gecon geequ gees geesx geev geevx gegs gegv
gehd2 gehrd gelq2 gelqf gels gelsd gelss gelsx gelsy geql2
geqlf geqp3 geqpf geqr2 geqrf gerfs gerq2 gerqf gesc2 gesdd
gesv gesvd gesvx getc2 getf2 getrf getri getrs ggbak ggbal
gges ggesx ggev ggevx ggglm gghrd gglse ggqrf ggrqf ggsvd
ggsvp gtcon gtrfs gtsv gtsvx gttrf gttrs gtts2 hgeqz hsein
hseqr labrd lacon laein lags2 lagtm lahqr lahrd laic1 lals0
lalsa lalsd langb lange langt lanhs lansb lansp lansy lantb
lantp lantr lapll lapmt laqgb laqge laqp2 laqps laqsb laqsp
laqsy lar1v lar2v larf larfb larfg larft larfx largv larrv
lartv larz larzb larzt laswp lasyf latbs latdf latps latrd
latrs latrz latzm lauu2 lauum pbcon pbequ pbrfs pbstf pbsv
pbsvx pbtf2 pbtrf pbtrs pocon poequ porfs posv posvx potf2
potrf potri potrs ppcon ppequ pprfs ppsv ppsvx pptrf pptri
pptrs ptcon pteqr ptrfs ptsv ptsvx pttrs ptts2 spcon sprfs
spsv spsvx sptrf sptri sptrs stegr stein sycon syrfs sysv
sysvx sytf2 sytrf sytri sytrs tbcon tbrfs tbtrs tgevc tgex2
tgexc tgsen tgsja tgsna tgsy2 tgsyl tpcon tprfs tptri tptrs
trcon trevc trexc trrfs trsen trsna trsyl trti2 trtri trtrs
tzrqf tzrzf
lacn2 lahr2 stemr laqr0 laqr1 laqr2 laqr3 laqr4 laqr5
''' # [s|c|d|z]*.f
sd_lasrc = '''
laexc lag2 lagv2 laln2 lanv2 laqtr lasy2 opgtr opmtr org2l
org2r orgbr orghr orgl2 orglq orgql orgqr orgr2 orgrq orgtr
orm2l orm2r ormbr ormhr orml2 ormlq ormql ormqr ormr2 ormr3
ormrq ormrz ormtr rscl sbev sbevd sbevx sbgst sbgv sbgvd sbgvx
sbtrd spev spevd spevx spgst spgv spgvd spgvx sptrd stev stevd
stevr stevx syev syevd syevr syevx sygs2 sygst sygv sygvd
sygvx sytd2 sytrd
''' # [s|d]*.f
cz_lasrc = '''
bdsqr hbev hbevd hbevx hbgst hbgv hbgvd hbgvx hbtrd hecon heev
heevd heevr heevx hegs2 hegst hegv hegvd hegvx herfs hesv
hesvx hetd2 hetf2 hetrd hetrf hetri hetrs hpcon hpev hpevd
hpevx hpgst hpgv hpgvd hpgvx hprfs hpsv hpsvx hptrd hptrf
hptri hptrs lacgv lacp2 lacpy lacrm lacrt ladiv laed0 laed7
laed8 laesy laev2 lahef lanhb lanhe lanhp lanht laqhb laqhe
laqhp larcm larnv lartg lascl laset lasr lassq pttrf rot spmv
spr stedc steqr symv syr ung2l ung2r ungbr unghr ungl2 unglq
ungql ungqr ungr2 ungrq ungtr unm2l unm2r unmbr unmhr unml2
unmlq unmql unmqr unmr2 unmr3 unmrq unmrz unmtr upgtr upmtr
''' # [c|z]*.f
#######
sclaux = laux + ' econd ' # s*.f
dzlaux = laux + ' secnd ' # d*.f
slasrc = lasrc + sd_lasrc # s*.f
dlasrc = lasrc + sd_lasrc # d*.f
clasrc = lasrc + cz_lasrc + ' srot srscl ' # c*.f
zlasrc = lasrc + cz_lasrc + ' drot drscl ' # z*.f
oclasrc = ' icmax1 scsum1 ' # *.f
ozlasrc = ' izmax1 dzsum1 ' # *.f
sources = ['s%s.f' % f for f in (sclaux + slasrc).split()] \
+ ['d%s.f' % f for f in (dzlaux + dlasrc).split()] \
+ ['c%s.f' % f for f in (clasrc).split()] \
+ ['z%s.f' % f for f in (zlasrc).split()] \
+ ['%s.f' % f for f in (allaux + oclasrc + ozlasrc).split()]
sources = [os.path.join(src_dir, f) for f in sources]
# Lapack 3.1:
src_dir2 = os.path.join(src_dir, '..', 'INSTALL')
sources += [os.path.join(src_dir2, p + 'lamch.f') for p in 'sdcz']
# Lapack 3.2.1:
sources += [os.path.join(src_dir, p + 'larfp.f') for p in 'sdcz']
sources += [os.path.join(src_dir, 'ila' + p + 'lr.f') for p in 'sdcz']
sources += [os.path.join(src_dir, 'ila' + p + 'lc.f') for p in 'sdcz']
# Should we check here actual existence of source files?
# Yes, the file listing is different between 3.0 and 3.1
# versions.
sources = [f for f in sources if os.path.isfile(f)]
info = {'sources': sources, 'language': 'f77'}
self.set_info(**info)
atlas_version_c_text = r'''
/* This file is generated from numpy/distutils/system_info.py */
void ATL_buildinfo(void);
int main(void) {
ATL_buildinfo();
return 0;
}
'''
_cached_atlas_version = {}
def get_atlas_version(**config):
libraries = config.get('libraries', [])
library_dirs = config.get('library_dirs', [])
key = (tuple(libraries), tuple(library_dirs))
if key in _cached_atlas_version:
return _cached_atlas_version[key]
c = cmd_config(Distribution())
atlas_version = None
info = {}
try:
s, o = c.get_output(atlas_version_c_text,
libraries=libraries, library_dirs=library_dirs,
use_tee=(system_info.verbosity > 0))
if s and re.search(r'undefined reference to `_gfortran', o, re.M):
s, o = c.get_output(atlas_version_c_text,
libraries=libraries + ['gfortran'],
library_dirs=library_dirs,
use_tee=(system_info.verbosity > 0))
if not s:
warnings.warn("""
*****************************************************
Linkage with ATLAS requires gfortran. Use
python setup.py config_fc --fcompiler=gnu95 ...
when building extension libraries that use ATLAS.
Make sure that -lgfortran is used for C++ extensions.
*****************************************************
""", stacklevel=2)
dict_append(info, language='f90',
define_macros=[('ATLAS_REQUIRES_GFORTRAN', None)])
except Exception: # failed to get version from file -- maybe on Windows
# look at directory name
for o in library_dirs:
m = re.search(r'ATLAS_(?P<version>\d+[.]\d+[.]\d+)_', o)
if m:
atlas_version = m.group('version')
if atlas_version is not None:
break
# final choice --- look at ATLAS_VERSION environment
# variable
if atlas_version is None:
atlas_version = os.environ.get('ATLAS_VERSION', None)
if atlas_version:
dict_append(info, define_macros=[(
'ATLAS_INFO', '"\\"%s\\""' % atlas_version)
])
else:
dict_append(info, define_macros=[('NO_ATLAS_INFO', -1)])
return atlas_version or '?.?.?', info
if not s:
m = re.search(r'ATLAS version (?P<version>\d+[.]\d+[.]\d+)', o)
if m:
atlas_version = m.group('version')
if atlas_version is None:
if re.search(r'undefined symbol: ATL_buildinfo', o, re.M):
atlas_version = '3.2.1_pre3.3.6'
else:
log.info('Status: %d', s)
log.info('Output: %s', o)
if atlas_version == '3.2.1_pre3.3.6':
dict_append(info, define_macros=[('NO_ATLAS_INFO', -2)])
else:
dict_append(info, define_macros=[(
'ATLAS_INFO', '"\\"%s\\""' % atlas_version)
])
result = _cached_atlas_version[key] = atlas_version, info
return result
class lapack_opt_info(system_info):
notfounderror = LapackNotFoundError
def calc_info(self):
lapack_mkl_info = get_info('lapack_mkl')
if lapack_mkl_info:
self.set_info(**lapack_mkl_info)
return
openblas_info = get_info('openblas_lapack')
if openblas_info:
self.set_info(**openblas_info)
return
atlas_info = get_info('atlas_3_10_threads')
if not atlas_info:
atlas_info = get_info('atlas_3_10')
if not atlas_info:
atlas_info = get_info('atlas_threads')
if not atlas_info:
atlas_info = get_info('atlas')
if sys.platform == 'darwin' and not (atlas_info or openblas_info or
lapack_mkl_info):
# Use the system lapack from Accelerate or vecLib under OSX
args = []
link_args = []
if get_platform()[-4:] == 'i386' or 'intel' in get_platform() or \
'x86_64' in get_platform() or \
'i386' in platform.platform():
intel = 1
else:
intel = 0
if os.path.exists('/System/Library/Frameworks'
'/Accelerate.framework/'):
if intel:
args.extend(['-msse3'])
else:
args.extend(['-faltivec'])
link_args.extend(['-Wl,-framework', '-Wl,Accelerate'])
elif os.path.exists('/System/Library/Frameworks'
'/vecLib.framework/'):
if intel:
args.extend(['-msse3'])
else:
args.extend(['-faltivec'])
link_args.extend(['-Wl,-framework', '-Wl,vecLib'])
if args:
self.set_info(extra_compile_args=args,
extra_link_args=link_args,
define_macros=[('NO_ATLAS_INFO', 3),
('HAVE_CBLAS', None)])
return
need_lapack = 0
need_blas = 0
info = {}
if atlas_info:
l = atlas_info.get('define_macros', [])
if ('ATLAS_WITH_LAPACK_ATLAS', None) in l \
or ('ATLAS_WITHOUT_LAPACK', None) in l:
need_lapack = 1
info = atlas_info
else:
warnings.warn(AtlasNotFoundError.__doc__, stacklevel=2)
need_blas = 1
need_lapack = 1
dict_append(info, define_macros=[('NO_ATLAS_INFO', 1)])
if need_lapack:
lapack_info = get_info('lapack')
#lapack_info = {} ## uncomment for testing
if lapack_info:
dict_append(info, **lapack_info)
else:
warnings.warn(LapackNotFoundError.__doc__, stacklevel=2)
lapack_src_info = get_info('lapack_src')
if not lapack_src_info:
warnings.warn(LapackSrcNotFoundError.__doc__, stacklevel=2)
return
dict_append(info, libraries=[('flapack_src', lapack_src_info)])
if need_blas:
blas_info = get_info('blas')
if blas_info:
dict_append(info, **blas_info)
else:
warnings.warn(BlasNotFoundError.__doc__, stacklevel=2)
blas_src_info = get_info('blas_src')
if not blas_src_info:
warnings.warn(BlasSrcNotFoundError.__doc__, stacklevel=2)
return
dict_append(info, libraries=[('fblas_src', blas_src_info)])
self.set_info(**info)
return
class blas_opt_info(system_info):
notfounderror = BlasNotFoundError
def calc_info(self):
blas_mkl_info = get_info('blas_mkl')
if blas_mkl_info:
self.set_info(**blas_mkl_info)
return
blis_info = get_info('blis')
if blis_info:
self.set_info(**blis_info)
return
openblas_info = get_info('openblas')
if openblas_info:
self.set_info(**openblas_info)
return
atlas_info = get_info('atlas_3_10_blas_threads')
if not atlas_info:
atlas_info = get_info('atlas_3_10_blas')
if not atlas_info:
atlas_info = get_info('atlas_blas_threads')
if not atlas_info:
atlas_info = get_info('atlas_blas')
if sys.platform == 'darwin' and not (atlas_info or openblas_info or
blas_mkl_info or blis_info):
# Use the system BLAS from Accelerate or vecLib under OSX
args = []
link_args = []
if get_platform()[-4:] == 'i386' or 'intel' in get_platform() or \
'x86_64' in get_platform() or \
'i386' in platform.platform():
intel = 1
else:
intel = 0
if os.path.exists('/System/Library/Frameworks'
'/Accelerate.framework/'):
if intel:
args.extend(['-msse3'])
else:
args.extend(['-faltivec'])
args.extend([
'-I/System/Library/Frameworks/vecLib.framework/Headers'])
link_args.extend(['-Wl,-framework', '-Wl,Accelerate'])
elif os.path.exists('/System/Library/Frameworks'
'/vecLib.framework/'):
if intel:
args.extend(['-msse3'])
else:
args.extend(['-faltivec'])
args.extend([
'-I/System/Library/Frameworks/vecLib.framework/Headers'])
link_args.extend(['-Wl,-framework', '-Wl,vecLib'])
if args:
self.set_info(extra_compile_args=args,
extra_link_args=link_args,
define_macros=[('NO_ATLAS_INFO', 3),
('HAVE_CBLAS', None)])
return
need_blas = 0
info = {}
if atlas_info:
info = atlas_info
else:
warnings.warn(AtlasNotFoundError.__doc__, stacklevel=2)
need_blas = 1
dict_append(info, define_macros=[('NO_ATLAS_INFO', 1)])
if need_blas:
blas_info = get_info('blas')
if blas_info:
dict_append(info, **blas_info)
else:
warnings.warn(BlasNotFoundError.__doc__, stacklevel=2)
blas_src_info = get_info('blas_src')
if not blas_src_info:
warnings.warn(BlasSrcNotFoundError.__doc__, stacklevel=2)
return
dict_append(info, libraries=[('fblas_src', blas_src_info)])
self.set_info(**info)
return
class blas_info(system_info):
section = 'blas'
dir_env_var = 'BLAS'
_lib_names = ['blas']
notfounderror = BlasNotFoundError
def calc_info(self):
lib_dirs = self.get_lib_dirs()
blas_libs = self.get_libs('blas_libs', self._lib_names)
info = self.check_libs(lib_dirs, blas_libs, [])
if info is None:
return
else:
info['include_dirs'] = self.get_include_dirs()
if platform.system() == 'Windows':
# The check for windows is needed because has_cblas uses the
# same compiler that was used to compile Python and msvc is
# often not installed when mingw is being used. This rough
# treatment is not desirable, but windows is tricky.
info['language'] = 'f77' # XXX: is it generally true?
else:
lib = self.has_cblas(info)
if lib is not None:
info['language'] = 'c'
info['libraries'] = [lib]
info['define_macros'] = [('HAVE_CBLAS', None)]
self.set_info(**info)
def has_cblas(self, info):
# primitive cblas check by looking for the header and trying to link
# cblas or blas
res = False
c = distutils.ccompiler.new_compiler()
c.customize('')
tmpdir = tempfile.mkdtemp()
s = """#include <cblas.h>
int main(int argc, const char *argv[])
{
double a[4] = {1,2,3,4};
double b[4] = {5,6,7,8};
return cblas_ddot(4, a, 1, b, 1) > 10;
}"""
src = os.path.join(tmpdir, 'source.c')
try:
with open(src, 'wt') as f:
f.write(s)
try:
# check we can compile (find headers)
obj = c.compile([src], output_dir=tmpdir,
include_dirs=self.get_include_dirs())
# check we can link (find library)
# some systems have separate cblas and blas libs. First
# check for cblas lib, and if not present check for blas lib.
try:
c.link_executable(obj, os.path.join(tmpdir, "a.out"),
libraries=["cblas"],
library_dirs=info['library_dirs'],
extra_postargs=info.get('extra_link_args', []))
res = "cblas"
except distutils.ccompiler.LinkError:
c.link_executable(obj, os.path.join(tmpdir, "a.out"),
libraries=["blas"],
library_dirs=info['library_dirs'],
extra_postargs=info.get('extra_link_args', []))
res = "blas"
except distutils.ccompiler.CompileError:
res = None
finally:
shutil.rmtree(tmpdir)
return res
class openblas_info(blas_info):
section = 'openblas'
dir_env_var = 'OPENBLAS'
_lib_names = ['openblas']
notfounderror = BlasNotFoundError
def check_embedded_lapack(self, info):
return True
def calc_info(self):
lib_dirs = self.get_lib_dirs()
openblas_libs = self.get_libs('libraries', self._lib_names)
if openblas_libs == self._lib_names: # backward compat with 1.8.0
openblas_libs = self.get_libs('openblas_libs', self._lib_names)
info = self.check_libs(lib_dirs, openblas_libs, [])
if info is None:
return
# Add extra info for OpenBLAS
extra_info = self.calc_extra_info()
dict_append(info, **extra_info)
if not self.check_embedded_lapack(info):
return
info['language'] = 'c'
info['define_macros'] = [('HAVE_CBLAS', None)]
self.set_info(**info)
class openblas_lapack_info(openblas_info):
section = 'openblas'
dir_env_var = 'OPENBLAS'
_lib_names = ['openblas']
notfounderror = BlasNotFoundError
def check_embedded_lapack(self, info):
res = False
c = distutils.ccompiler.new_compiler()
c.customize('')
tmpdir = tempfile.mkdtemp()
s = """void zungqr();
int main(int argc, const char *argv[])
{
zungqr_();
return 0;
}"""
src = os.path.join(tmpdir, 'source.c')
out = os.path.join(tmpdir, 'a.out')
# Add the additional "extra" arguments
try:
extra_args = info['extra_link_args']
except:
extra_args = []
try:
with open(src, 'wt') as f:
f.write(s)
obj = c.compile([src], output_dir=tmpdir)
try:
c.link_executable(obj, out, libraries=info['libraries'],
library_dirs=info['library_dirs'],
extra_postargs=extra_args)
res = True
except distutils.ccompiler.LinkError:
res = False
finally:
shutil.rmtree(tmpdir)
return res
class blis_info(blas_info):
section = 'blis'
dir_env_var = 'BLIS'
_lib_names = ['blis']
notfounderror = BlasNotFoundError
def calc_info(self):
lib_dirs = self.get_lib_dirs()
blis_libs = self.get_libs('libraries', self._lib_names)
if blis_libs == self._lib_names:
blis_libs = self.get_libs('blis_libs', self._lib_names)
info = self.check_libs2(lib_dirs, blis_libs, [])
if info is None:
return
# Add include dirs
incl_dirs = self.get_include_dirs()
dict_append(info,
language='c',
define_macros=[('HAVE_CBLAS', None)],
include_dirs=incl_dirs)
self.set_info(**info)
class blas_src_info(system_info):
section = 'blas_src'
dir_env_var = 'BLAS_SRC'
notfounderror = BlasSrcNotFoundError
def get_paths(self, section, key):
pre_dirs = system_info.get_paths(self, section, key)
dirs = []
for d in pre_dirs:
dirs.extend([d] + self.combine_paths(d, ['blas']))
return [d for d in dirs if os.path.isdir(d)]
def calc_info(self):
src_dirs = self.get_src_dirs()
src_dir = ''
for d in src_dirs:
if os.path.isfile(os.path.join(d, 'daxpy.f')):
src_dir = d
break
if not src_dir:
#XXX: Get sources from netlib. May be ask first.
return
blas1 = '''
caxpy csscal dnrm2 dzasum saxpy srotg zdotc ccopy cswap drot
dznrm2 scasum srotm zdotu cdotc dasum drotg icamax scnrm2
srotmg zdrot cdotu daxpy drotm idamax scopy sscal zdscal crotg
dcabs1 drotmg isamax sdot sswap zrotg cscal dcopy dscal izamax
snrm2 zaxpy zscal csrot ddot dswap sasum srot zcopy zswap
scabs1
'''
blas2 = '''
cgbmv chpmv ctrsv dsymv dtrsv sspr2 strmv zhemv ztpmv cgemv
chpr dgbmv dsyr lsame ssymv strsv zher ztpsv cgerc chpr2 dgemv
dsyr2 sgbmv ssyr xerbla zher2 ztrmv cgeru ctbmv dger dtbmv
sgemv ssyr2 zgbmv zhpmv ztrsv chbmv ctbsv dsbmv dtbsv sger
stbmv zgemv zhpr chemv ctpmv dspmv dtpmv ssbmv stbsv zgerc
zhpr2 cher ctpsv dspr dtpsv sspmv stpmv zgeru ztbmv cher2
ctrmv dspr2 dtrmv sspr stpsv zhbmv ztbsv
'''
blas3 = '''
cgemm csymm ctrsm dsyrk sgemm strmm zhemm zsyr2k chemm csyr2k
dgemm dtrmm ssymm strsm zher2k zsyrk cher2k csyrk dsymm dtrsm
ssyr2k zherk ztrmm cherk ctrmm dsyr2k ssyrk zgemm zsymm ztrsm
'''
sources = [os.path.join(src_dir, f + '.f') \
for f in (blas1 + blas2 + blas3).split()]
#XXX: should we check here actual existence of source files?
sources = [f for f in sources if os.path.isfile(f)]
info = {'sources': sources, 'language': 'f77'}
self.set_info(**info)
class x11_info(system_info):
section = 'x11'
notfounderror = X11NotFoundError
def __init__(self):
system_info.__init__(self,
default_lib_dirs=default_x11_lib_dirs,
default_include_dirs=default_x11_include_dirs)
def calc_info(self):
if sys.platform in ['win32']:
return
lib_dirs = self.get_lib_dirs()
include_dirs = self.get_include_dirs()
x11_libs = self.get_libs('x11_libs', ['X11'])
info = self.check_libs(lib_dirs, x11_libs, [])
if info is None:
return
inc_dir = None
for d in include_dirs:
if self.combine_paths(d, 'X11/X.h'):
inc_dir = d
break
if inc_dir is not None:
dict_append(info, include_dirs=[inc_dir])
self.set_info(**info)
class _numpy_info(system_info):
section = 'Numeric'
modulename = 'Numeric'
notfounderror = NumericNotFoundError
def __init__(self):
include_dirs = []
try:
module = __import__(self.modulename)
prefix = []
for name in module.__file__.split(os.sep):
if name == 'lib':
break
prefix.append(name)
# Ask numpy for its own include path before attempting
# anything else
try:
include_dirs.append(getattr(module, 'get_include')())
except AttributeError:
pass
include_dirs.append(distutils.sysconfig.get_python_inc(
prefix=os.sep.join(prefix)))
except ImportError:
pass
py_incl_dir = distutils.sysconfig.get_python_inc()
include_dirs.append(py_incl_dir)
py_pincl_dir = distutils.sysconfig.get_python_inc(plat_specific=True)
if py_pincl_dir not in include_dirs:
include_dirs.append(py_pincl_dir)
for d in default_include_dirs:
d = os.path.join(d, os.path.basename(py_incl_dir))
if d not in include_dirs:
include_dirs.append(d)
system_info.__init__(self,
default_lib_dirs=[],
default_include_dirs=include_dirs)
def calc_info(self):
try:
module = __import__(self.modulename)
except ImportError:
return
info = {}
macros = []
for v in ['__version__', 'version']:
vrs = getattr(module, v, None)
if vrs is None:
continue
macros = [(self.modulename.upper() + '_VERSION',
'"\\"%s\\""' % (vrs)),
(self.modulename.upper(), None)]
break
dict_append(info, define_macros=macros)
include_dirs = self.get_include_dirs()
inc_dir = None
for d in include_dirs:
if self.combine_paths(d,
os.path.join(self.modulename,
'arrayobject.h')):
inc_dir = d
break
if inc_dir is not None:
dict_append(info, include_dirs=[inc_dir])
if info:
self.set_info(**info)
return
class numarray_info(_numpy_info):
section = 'numarray'
modulename = 'numarray'
class Numeric_info(_numpy_info):
section = 'Numeric'
modulename = 'Numeric'
class numpy_info(_numpy_info):
section = 'numpy'
modulename = 'numpy'
class numerix_info(system_info):
section = 'numerix'
def calc_info(self):
which = None, None
if os.getenv("NUMERIX"):
which = os.getenv("NUMERIX"), "environment var"
# If all the above fail, default to numpy.
if which[0] is None:
which = "numpy", "defaulted"
try:
import numpy
which = "numpy", "defaulted"
except ImportError:
msg1 = str(get_exception())
try:
import Numeric
which = "numeric", "defaulted"
except ImportError:
msg2 = str(get_exception())
try:
import numarray
which = "numarray", "defaulted"
except ImportError:
msg3 = str(get_exception())
log.info(msg1)
log.info(msg2)
log.info(msg3)
which = which[0].strip().lower(), which[1]
if which[0] not in ["numeric", "numarray", "numpy"]:
raise ValueError("numerix selector must be either 'Numeric' "
"or 'numarray' or 'numpy' but the value obtained"
" from the %s was '%s'." % (which[1], which[0]))
os.environ['NUMERIX'] = which[0]
self.set_info(**get_info(which[0]))
class f2py_info(system_info):
def calc_info(self):
try:
import numpy.f2py as f2py
except ImportError:
return
f2py_dir = os.path.join(os.path.dirname(f2py.__file__), 'src')
self.set_info(sources=[os.path.join(f2py_dir, 'fortranobject.c')],
include_dirs=[f2py_dir])
return
class boost_python_info(system_info):
section = 'boost_python'
dir_env_var = 'BOOST'
def get_paths(self, section, key):
pre_dirs = system_info.get_paths(self, section, key)
dirs = []
for d in pre_dirs:
dirs.extend([d] + self.combine_paths(d, ['boost*']))
return [d for d in dirs if os.path.isdir(d)]
def calc_info(self):
src_dirs = self.get_src_dirs()
src_dir = ''
for d in src_dirs:
if os.path.isfile(os.path.join(d, 'libs', 'python', 'src',
'module.cpp')):
src_dir = d
break
if not src_dir:
return
py_incl_dirs = [distutils.sysconfig.get_python_inc()]
py_pincl_dir = distutils.sysconfig.get_python_inc(plat_specific=True)
if py_pincl_dir not in py_incl_dirs:
py_incl_dirs.append(py_pincl_dir)
srcs_dir = os.path.join(src_dir, 'libs', 'python', 'src')
bpl_srcs = glob(os.path.join(srcs_dir, '*.cpp'))
bpl_srcs += glob(os.path.join(srcs_dir, '*', '*.cpp'))
info = {'libraries': [('boost_python_src',
{'include_dirs': [src_dir] + py_incl_dirs,
'sources':bpl_srcs}
)],
'include_dirs': [src_dir],
}
if info:
self.set_info(**info)
return
class agg2_info(system_info):
section = 'agg2'
dir_env_var = 'AGG2'
def get_paths(self, section, key):
pre_dirs = system_info.get_paths(self, section, key)
dirs = []
for d in pre_dirs:
dirs.extend([d] + self.combine_paths(d, ['agg2*']))
return [d for d in dirs if os.path.isdir(d)]
def calc_info(self):
src_dirs = self.get_src_dirs()
src_dir = ''
for d in src_dirs:
if os.path.isfile(os.path.join(d, 'src', 'agg_affine_matrix.cpp')):
src_dir = d
break
if not src_dir:
return
if sys.platform == 'win32':
agg2_srcs = glob(os.path.join(src_dir, 'src', 'platform',
'win32', 'agg_win32_bmp.cpp'))
else:
agg2_srcs = glob(os.path.join(src_dir, 'src', '*.cpp'))
agg2_srcs += [os.path.join(src_dir, 'src', 'platform',
'X11',
'agg_platform_support.cpp')]
info = {'libraries':
[('agg2_src',
{'sources': agg2_srcs,
'include_dirs': [os.path.join(src_dir, 'include')],
}
)],
'include_dirs': [os.path.join(src_dir, 'include')],
}
if info:
self.set_info(**info)
return
class _pkg_config_info(system_info):
section = None
config_env_var = 'PKG_CONFIG'
default_config_exe = 'pkg-config'
append_config_exe = ''
version_macro_name = None
release_macro_name = None
version_flag = '--modversion'
cflags_flag = '--cflags'
def get_config_exe(self):
if self.config_env_var in os.environ:
return os.environ[self.config_env_var]
return self.default_config_exe
def get_config_output(self, config_exe, option):
cmd = config_exe + ' ' + self.append_config_exe + ' ' + option
s, o = exec_command(cmd, use_tee=0)
if not s:
return o
def calc_info(self):
config_exe = find_executable(self.get_config_exe())
if not config_exe:
log.warn('File not found: %s. Cannot determine %s info.' \
% (config_exe, self.section))
return
info = {}
macros = []
libraries = []
library_dirs = []
include_dirs = []
extra_link_args = []
extra_compile_args = []
version = self.get_config_output(config_exe, self.version_flag)
if version:
macros.append((self.__class__.__name__.split('.')[-1].upper(),
'"\\"%s\\""' % (version)))
if self.version_macro_name:
macros.append((self.version_macro_name + '_%s'
% (version.replace('.', '_')), None))
if self.release_macro_name:
release = self.get_config_output(config_exe, '--release')
if release:
macros.append((self.release_macro_name + '_%s'
% (release.replace('.', '_')), None))
opts = self.get_config_output(config_exe, '--libs')
if opts:
for opt in opts.split():
if opt[:2] == '-l':
libraries.append(opt[2:])
elif opt[:2] == '-L':
library_dirs.append(opt[2:])
else:
extra_link_args.append(opt)
opts = self.get_config_output(config_exe, self.cflags_flag)
if opts:
for opt in opts.split():
if opt[:2] == '-I':
include_dirs.append(opt[2:])
elif opt[:2] == '-D':
if '=' in opt:
n, v = opt[2:].split('=')
macros.append((n, v))
else:
macros.append((opt[2:], None))
else:
extra_compile_args.append(opt)
if macros:
dict_append(info, define_macros=macros)
if libraries:
dict_append(info, libraries=libraries)
if library_dirs:
dict_append(info, library_dirs=library_dirs)
if include_dirs:
dict_append(info, include_dirs=include_dirs)
if extra_link_args:
dict_append(info, extra_link_args=extra_link_args)
if extra_compile_args:
dict_append(info, extra_compile_args=extra_compile_args)
if info:
self.set_info(**info)
return
class wx_info(_pkg_config_info):
section = 'wx'
config_env_var = 'WX_CONFIG'
default_config_exe = 'wx-config'
append_config_exe = ''
version_macro_name = 'WX_VERSION'
release_macro_name = 'WX_RELEASE'
version_flag = '--version'
cflags_flag = '--cxxflags'
class gdk_pixbuf_xlib_2_info(_pkg_config_info):
section = 'gdk_pixbuf_xlib_2'
append_config_exe = 'gdk-pixbuf-xlib-2.0'
version_macro_name = 'GDK_PIXBUF_XLIB_VERSION'
class gdk_pixbuf_2_info(_pkg_config_info):
section = 'gdk_pixbuf_2'
append_config_exe = 'gdk-pixbuf-2.0'
version_macro_name = 'GDK_PIXBUF_VERSION'
class gdk_x11_2_info(_pkg_config_info):
section = 'gdk_x11_2'
append_config_exe = 'gdk-x11-2.0'
version_macro_name = 'GDK_X11_VERSION'
class gdk_2_info(_pkg_config_info):
section = 'gdk_2'
append_config_exe = 'gdk-2.0'
version_macro_name = 'GDK_VERSION'
class gdk_info(_pkg_config_info):
section = 'gdk'
append_config_exe = 'gdk'
version_macro_name = 'GDK_VERSION'
class gtkp_x11_2_info(_pkg_config_info):
section = 'gtkp_x11_2'
append_config_exe = 'gtk+-x11-2.0'
version_macro_name = 'GTK_X11_VERSION'
class gtkp_2_info(_pkg_config_info):
section = 'gtkp_2'
append_config_exe = 'gtk+-2.0'
version_macro_name = 'GTK_VERSION'
class xft_info(_pkg_config_info):
section = 'xft'
append_config_exe = 'xft'
version_macro_name = 'XFT_VERSION'
class freetype2_info(_pkg_config_info):
section = 'freetype2'
append_config_exe = 'freetype2'
version_macro_name = 'FREETYPE2_VERSION'
class amd_info(system_info):
section = 'amd'
dir_env_var = 'AMD'
_lib_names = ['amd']
def calc_info(self):
lib_dirs = self.get_lib_dirs()
amd_libs = self.get_libs('amd_libs', self._lib_names)
info = self.check_libs(lib_dirs, amd_libs, [])
if info is None:
return
include_dirs = self.get_include_dirs()
inc_dir = None
for d in include_dirs:
p = self.combine_paths(d, 'amd.h')
if p:
inc_dir = os.path.dirname(p[0])
break
if inc_dir is not None:
dict_append(info, include_dirs=[inc_dir],
define_macros=[('SCIPY_AMD_H', None)],
swig_opts=['-I' + inc_dir])
self.set_info(**info)
return
class umfpack_info(system_info):
section = 'umfpack'
dir_env_var = 'UMFPACK'
notfounderror = UmfpackNotFoundError
_lib_names = ['umfpack']
def calc_info(self):
lib_dirs = self.get_lib_dirs()
umfpack_libs = self.get_libs('umfpack_libs', self._lib_names)
info = self.check_libs(lib_dirs, umfpack_libs, [])
if info is None:
return
include_dirs = self.get_include_dirs()
inc_dir = None
for d in include_dirs:
p = self.combine_paths(d, ['', 'umfpack'], 'umfpack.h')
if p:
inc_dir = os.path.dirname(p[0])
break
if inc_dir is not None:
dict_append(info, include_dirs=[inc_dir],
define_macros=[('SCIPY_UMFPACK_H', None)],
swig_opts=['-I' + inc_dir])
amd = get_info('amd')
dict_append(info, **get_info('amd'))
self.set_info(**info)
return
def combine_paths(*args, **kws):
""" Return a list of existing paths composed by all combinations of
items from arguments.
"""
r = []
for a in args:
if not a:
continue
if is_string(a):
a = [a]
r.append(a)
args = r
if not args:
return []
if len(args) == 1:
result = reduce(lambda a, b: a + b, map(glob, args[0]), [])
elif len(args) == 2:
result = []
for a0 in args[0]:
for a1 in args[1]:
result.extend(glob(os.path.join(a0, a1)))
else:
result = combine_paths(*(combine_paths(args[0], args[1]) + args[2:]))
log.debug('(paths: %s)', ','.join(result))
return result
language_map = {'c': 0, 'c++': 1, 'f77': 2, 'f90': 3}
inv_language_map = {0: 'c', 1: 'c++', 2: 'f77', 3: 'f90'}
def dict_append(d, **kws):
languages = []
for k, v in kws.items():
if k == 'language':
languages.append(v)
continue
if k in d:
if k in ['library_dirs', 'include_dirs',
'extra_compile_args', 'extra_link_args',
'runtime_library_dirs', 'define_macros']:
[d[k].append(vv) for vv in v if vv not in d[k]]
else:
d[k].extend(v)
else:
d[k] = v
if languages:
l = inv_language_map[max([language_map.get(l, 0) for l in languages])]
d['language'] = l
return
def parseCmdLine(argv=(None,)):
import optparse
parser = optparse.OptionParser("usage: %prog [-v] [info objs]")
parser.add_option('-v', '--verbose', action='store_true', dest='verbose',
default=False,
help='be verbose and print more messages')
opts, args = parser.parse_args(args=argv[1:])
return opts, args
def show_all(argv=None):
import inspect
if argv is None:
argv = sys.argv
opts, args = parseCmdLine(argv)
if opts.verbose:
log.set_threshold(log.DEBUG)
else:
log.set_threshold(log.INFO)
show_only = []
for n in args:
if n[-5:] != '_info':
n = n + '_info'
show_only.append(n)
show_all = not show_only
_gdict_ = globals().copy()
for name, c in _gdict_.items():
if not inspect.isclass(c):
continue
if not issubclass(c, system_info) or c is system_info:
continue
if not show_all:
if name not in show_only:
continue
del show_only[show_only.index(name)]
conf = c()
conf.verbosity = 2
r = conf.get_info()
if show_only:
log.info('Info classes not defined: %s', ','.join(show_only))
if __name__ == "__main__":
show_all()
|
unknown
|
codeparrot/codeparrot-clean
| ||
<?php
namespace Illuminate\Support\Testing\Fakes;
use Closure;
use Exception;
use Illuminate\Contracts\Notifications\Dispatcher as NotificationDispatcher;
use Illuminate\Contracts\Notifications\Factory as NotificationFactory;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Contracts\Translation\HasLocalePreference;
use Illuminate\Notifications\AnonymousNotifiable;
use Illuminate\Support\Collection;
use Illuminate\Support\Str;
use Illuminate\Support\Traits\Macroable;
use Illuminate\Support\Traits\ReflectsClosures;
use PHPUnit\Framework\Assert as PHPUnit;
class NotificationFake implements Fake, NotificationDispatcher, NotificationFactory
{
use Macroable, ReflectsClosures;
/**
* All of the notifications that have been sent.
*
* @var array
*/
protected $notifications = [];
/**
* Locale used when sending notifications.
*
* @var string|null
*/
public $locale;
/**
* Indicates if notifications should be serialized and restored when pushed to the queue.
*
* @var bool
*/
protected $serializeAndRestore = false;
/**
* Assert if a notification was sent on-demand based on a truth-test callback.
*
* @param string|\Closure $notification
* @param callable|null $callback
* @return void
*
* @throws \Exception
*/
public function assertSentOnDemand($notification, $callback = null)
{
$this->assertSentTo(new AnonymousNotifiable, $notification, $callback);
}
/**
* Assert if a notification was sent based on a truth-test callback.
*
* @param mixed $notifiable
* @param string|\Closure $notification
* @param callable|null $callback
* @return void
*
* @throws \Exception
*/
public function assertSentTo($notifiable, $notification, $callback = null)
{
if (is_array($notifiable) || $notifiable instanceof Collection) {
if (count($notifiable) === 0) {
throw new Exception('No notifiable given.');
}
foreach ($notifiable as $singleNotifiable) {
$this->assertSentTo($singleNotifiable, $notification, $callback);
}
return;
}
if ($notification instanceof Closure) {
[$notification, $callback] = [$this->firstClosureParameterType($notification), $notification];
}
if (is_numeric($callback)) {
return $this->assertSentToTimes($notifiable, $notification, $callback);
}
PHPUnit::assertTrue(
$this->sent($notifiable, $notification, $callback)->count() > 0,
"The expected [{$notification}] notification was not sent."
);
}
/**
* Assert if a notification was sent on-demand a number of times.
*
* @param string $notification
* @param int $times
* @return void
*/
public function assertSentOnDemandTimes($notification, $times = 1)
{
$this->assertSentToTimes(new AnonymousNotifiable, $notification, $times);
}
/**
* Assert if a notification was sent a number of times.
*
* @param mixed $notifiable
* @param string $notification
* @param int $times
* @return void
*/
public function assertSentToTimes($notifiable, $notification, $times = 1)
{
$count = $this->sent($notifiable, $notification)->count();
PHPUnit::assertSame(
$times, $count,
"Expected [{$notification}] to be sent {$times} times, but was sent {$count} times."
);
}
/**
* Determine if a notification was sent based on a truth-test callback.
*
* @param mixed $notifiable
* @param string|\Closure $notification
* @param callable|null $callback
* @return void
*
* @throws \Exception
*/
public function assertNotSentTo($notifiable, $notification, $callback = null)
{
if (is_array($notifiable) || $notifiable instanceof Collection) {
if (count($notifiable) === 0) {
throw new Exception('No notifiable given.');
}
foreach ($notifiable as $singleNotifiable) {
$this->assertNotSentTo($singleNotifiable, $notification, $callback);
}
return;
}
if ($notification instanceof Closure) {
[$notification, $callback] = [$this->firstClosureParameterType($notification), $notification];
}
PHPUnit::assertCount(
0, $this->sent($notifiable, $notification, $callback),
"The unexpected [{$notification}] notification was sent."
);
}
/**
* Assert that no notifications were sent.
*
* @return void
*/
public function assertNothingSent()
{
$notificationNames = (new Collection($this->notifications))
->map(fn ($notifiableModels) => (new Collection($notifiableModels))
->map(fn ($notifiables) => (new Collection($notifiables))->keys())
)
->flatten()->join("\n- ");
PHPUnit::assertEmpty($this->notifications, "The following notifications were sent unexpectedly:\n\n- $notificationNames\n");
}
/**
* Assert that no notifications were sent to the given notifiable.
*
* @param mixed $notifiable
* @return void
*
* @throws \Exception
*/
public function assertNothingSentTo($notifiable)
{
if (is_array($notifiable) || $notifiable instanceof Collection) {
if (count($notifiable) === 0) {
throw new Exception('No notifiable given.');
}
foreach ($notifiable as $singleNotifiable) {
$this->assertNothingSentTo($singleNotifiable);
}
return;
}
PHPUnit::assertEmpty(
$this->notifications[get_class($notifiable)][$notifiable->getKey() ?? ''] ?? [],
'Notifications were sent unexpectedly.',
);
}
/**
* Assert the total amount of times a notification was sent.
*
* @param string $notification
* @param int $expectedCount
* @return void
*/
public function assertSentTimes($notification, $expectedCount)
{
$actualCount = (new Collection($this->notifications))
->flatten(1)
->reduce(fn ($count, $sent) => $count + count($sent[$notification] ?? []), 0);
PHPUnit::assertSame(
$expectedCount, $actualCount,
sprintf(
"Expected [{$notification}] to be sent {$expectedCount} %s, but was sent {$actualCount} %s.",
Str::plural('time', $expectedCount),
Str::plural('time', $actualCount)
)
);
}
/**
* Assert the total count of notification that were sent.
*
* @param int $expectedCount
* @return void
*/
public function assertCount($expectedCount)
{
$actualCount = (new Collection($this->notifications))->flatten(3)->count();
PHPUnit::assertSame(
$expectedCount, $actualCount,
"Expected {$expectedCount} notifications to be sent, but {$actualCount} were sent."
);
}
/**
* Get all of the notifications matching a truth-test callback.
*
* @param mixed $notifiable
* @param string $notification
* @param callable|null $callback
* @return \Illuminate\Support\Collection
*/
public function sent($notifiable, $notification, $callback = null)
{
if (! $this->hasSent($notifiable, $notification)) {
return new Collection;
}
$callback = $callback ?: fn () => true;
$notifications = new Collection($this->notificationsFor($notifiable, $notification));
return $notifications->filter(
fn ($arguments) => $callback(...array_values($arguments))
)->pluck('notification');
}
/**
* Determine if there are more notifications left to inspect.
*
* @param mixed $notifiable
* @param string $notification
* @return bool
*/
public function hasSent($notifiable, $notification)
{
return ! empty($this->notificationsFor($notifiable, $notification));
}
/**
* Get all of the notifications for a notifiable entity by type.
*
* @param mixed $notifiable
* @param string $notification
* @return array
*/
protected function notificationsFor($notifiable, $notification)
{
return $this->notifications[get_class($notifiable)][(string) $notifiable->getKey()][$notification] ?? [];
}
/**
* Send the given notification to the given notifiable entities.
*
* @param \Illuminate\Support\Collection|mixed $notifiables
* @param mixed $notification
* @return void
*/
public function send($notifiables, $notification)
{
$this->sendNow($notifiables, $notification);
}
/**
* Send the given notification immediately.
*
* @param \Illuminate\Support\Collection|mixed $notifiables
* @param mixed $notification
* @param array|null $channels
* @return void
*/
public function sendNow($notifiables, $notification, ?array $channels = null)
{
if (! $notifiables instanceof Collection && ! is_array($notifiables)) {
$notifiables = [$notifiables];
}
foreach ($notifiables as $notifiable) {
if (! $notification->id) {
$notification->id = (string) Str::uuid();
}
$notifiableChannels = $channels ?: $notification->via($notifiable);
if (method_exists($notification, 'shouldSend')) {
$notifiableChannels = array_filter(
$notifiableChannels,
fn ($channel) => $notification->shouldSend($notifiable, $channel) !== false
);
}
if (empty($notifiableChannels)) {
continue;
}
$this->notifications[get_class($notifiable)][(string) $notifiable->getKey()][get_class($notification)][] = [
'notification' => $this->serializeAndRestore && $notification instanceof ShouldQueue
? $this->serializeAndRestoreNotification($notification)
: $notification,
'channels' => $notifiableChannels,
'notifiable' => $notifiable,
'locale' => $notification->locale ?? $this->locale ?? value(function () use ($notifiable) {
if ($notifiable instanceof HasLocalePreference) {
return $notifiable->preferredLocale();
}
}),
];
}
}
/**
* Get a channel instance by name.
*
* @param string|null $name
* @return mixed
*/
public function channel($name = null)
{
//
}
/**
* Set the locale of notifications.
*
* @param string $locale
* @return $this
*/
public function locale($locale)
{
$this->locale = $locale;
return $this;
}
/**
* Specify if notification should be serialized and restored when being "pushed" to the queue.
*
* @param bool $serializeAndRestore
* @return $this
*/
public function serializeAndRestore(bool $serializeAndRestore = true)
{
$this->serializeAndRestore = $serializeAndRestore;
return $this;
}
/**
* Serialize and unserialize the notification to simulate the queueing process.
*
* @param mixed $notification
* @return mixed
*/
protected function serializeAndRestoreNotification($notification)
{
return unserialize(serialize($notification));
}
/**
* Get the notifications that have been sent.
*
* @return array
*/
public function sentNotifications()
{
return $this->notifications;
}
}
|
php
|
github
|
https://github.com/laravel/framework
|
src/Illuminate/Support/Testing/Fakes/NotificationFake.php
|
config = {
"interfaces": {
"google.container.v1.ClusterManager": {
"retry_codes": {
"idempotent": ["DEADLINE_EXCEEDED", "UNAVAILABLE"],
"non_idempotent": [],
},
"retry_params": {
"default": {
"initial_retry_delay_millis": 100,
"retry_delay_multiplier": 1.3,
"max_retry_delay_millis": 60000,
"initial_rpc_timeout_millis": 20000,
"rpc_timeout_multiplier": 1.0,
"max_rpc_timeout_millis": 20000,
"total_timeout_millis": 600000,
}
},
"methods": {
"ListClusters": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default",
},
"GetCluster": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default",
},
"CreateCluster": {
"timeout_millis": 45000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"UpdateCluster": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"UpdateNodePool": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"SetNodePoolAutoscaling": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"SetLoggingService": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"SetMonitoringService": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"SetAddonsConfig": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"SetLocations": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"UpdateMaster": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"SetMasterAuth": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"DeleteCluster": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"ListOperations": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default",
},
"GetOperation": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default",
},
"CancelOperation": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"GetServerConfig": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default",
},
"ListNodePools": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default",
},
"GetNodePool": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default",
},
"CreateNodePool": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"DeleteNodePool": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"RollbackNodePoolUpgrade": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"SetNodePoolManagement": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"SetLabels": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"SetLegacyAbac": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"StartIPRotation": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"CompleteIPRotation": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"SetNodePoolSize": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"SetNetworkPolicy": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"SetMaintenancePolicy": {
"timeout_millis": 60000,
"retry_codes_name": "non_idempotent",
"retry_params_name": "default",
},
"ListUsableSubnetworks": {
"timeout_millis": 60000,
"retry_codes_name": "idempotent",
"retry_params_name": "default",
},
},
}
}
}
|
unknown
|
codeparrot/codeparrot-clean
| ||
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (C) 2017 Red Hat Inc.
# Copyright (C) 2017 Lenovo.
#
# GNU General Public License v3.0+
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#
# Module to Collect facts from Lenovo Switches running Lenovo ENOS commands
# Lenovo Networking
#
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: enos_facts
version_added: "2.5"
author: "Anil Kumar Muraleedharan (@amuraleedhar)"
short_description: Collect facts from remote devices running Lenovo ENOS
description:
- Collects a base set of device facts from a remote Lenovo device
running on ENOS. This module prepends all of the
base network fact keys with C(ansible_net_<fact>). The facts
module will always collect a base set of facts from the device
and can enable or disable collection of additional facts.
extends_documentation_fragment: enos
notes:
- Tested against ENOS 8.4.1
options:
gather_subset:
description:
- When supplied, this argument will restrict the facts collected
to a given subset. Possible values for this argument include
all, hardware, config, and interfaces. Can specify a list of
values to include a larger subset. Values can also be used
with an initial C(M(!)) to specify that a specific subset should
not be collected.
required: false
default: '!config'
'''
EXAMPLES = '''
Tasks: The following are examples of using the module enos_facts.
---
- name: Test Enos Facts
enos_facts:
provider={{ cli }}
vars:
cli:
host: "{{ inventory_hostname }}"
port: 22
username: admin
password: admin
transport: cli
timeout: 30
authorize: True
auth_pass:
---
# Collect all facts from the device
- enos_facts:
gather_subset: all
provider: "{{ cli }}"
# Collect only the config and default facts
- enos_facts:
gather_subset:
- config
provider: "{{ cli }}"
# Do not collect hardware facts
- enos_facts:
gather_subset:
- "!hardware"
provider: "{{ cli }}"
'''
RETURN = '''
ansible_net_gather_subset:
description: The list of fact subsets collected from the device
returned: always
type: list
# default
ansible_net_model:
description: The model name returned from the Lenovo ENOS device
returned: always
type: str
ansible_net_serialnum:
description: The serial number of the Lenovo ENOS device
returned: always
type: str
ansible_net_version:
description: The ENOS operating system version running on the remote device
returned: always
type: str
ansible_net_hostname:
description: The configured hostname of the device
returned: always
type: string
ansible_net_image:
description: Indicates the active image for the device
returned: always
type: string
# hardware
ansible_net_memfree_mb:
description: The available free memory on the remote device in MB
returned: when hardware is configured
type: int
# config
ansible_net_config:
description: The current active config from the device
returned: when config is configured
type: str
# interfaces
ansible_net_all_ipv4_addresses:
description: All IPv4 addresses configured on the device
returned: when interfaces is configured
type: list
ansible_net_all_ipv6_addresses:
description: All IPv6 addresses configured on the device
returned: when interfaces is configured
type: list
ansible_net_interfaces:
description: A hash of all interfaces running on the system.
This gives information on description, mac address, mtu, speed,
duplex and operstatus
returned: when interfaces is configured
type: dict
ansible_net_neighbors:
description: The list of LLDP neighbors from the remote device
returned: when interfaces is configured
type: dict
'''
import re
from ansible.module_utils.network.enos.enos import run_commands, enos_argument_spec, check_args
from ansible.module_utils._text import to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six import iteritems
from ansible.module_utils.six.moves import zip
class FactsBase(object):
COMMANDS = list()
def __init__(self, module):
self.module = module
self.facts = dict()
self.responses = None
self.PERSISTENT_COMMAND_TIMEOUT = 60
def populate(self):
self.responses = run_commands(self.module, self.COMMANDS,
check_rc=False)
def run(self, cmd):
return run_commands(self.module, cmd, check_rc=False)
class Default(FactsBase):
COMMANDS = ['show version', 'show run']
def populate(self):
super(Default, self).populate()
data = self.responses[0]
data_run = self.responses[1]
if data:
self.facts['version'] = self.parse_version(data)
self.facts['serialnum'] = self.parse_serialnum(data)
self.facts['model'] = self.parse_model(data)
self.facts['image'] = self.parse_image(data)
if data_run:
self.facts['hostname'] = self.parse_hostname(data_run)
def parse_version(self, data):
match = re.search(r'^Software Version (.*?) ', data, re.M | re.I)
if match:
return match.group(1)
def parse_hostname(self, data_run):
for line in data_run.split('\n'):
line = line.strip()
match = re.match(r'hostname (.*?)', line, re.M | re.I)
if match:
hosts = line.split()
hostname = hosts[1].strip('\"')
return hostname
return "NA"
def parse_model(self, data):
match = re.search(r'^Lenovo RackSwitch (\S+)', data, re.M | re.I)
if match:
return match.group(1)
def parse_image(self, data):
match = re.search(r'(.*) image1(.*)', data, re.M | re.I)
if match:
return "Image1"
else:
return "Image2"
def parse_serialnum(self, data):
match = re.search(r'^Switch Serial No: (\S+)', data, re.M | re.I)
if match:
return match.group(1)
class Hardware(FactsBase):
COMMANDS = [
'show system memory'
]
def populate(self):
super(Hardware, self).populate()
data = self.run(['show system memory'])
data = to_text(data, errors='surrogate_or_strict').strip()
data = data.replace(r"\n", "\n")
if data:
self.facts['memtotal_mb'] = self.parse_memtotal(data)
self.facts['memfree_mb'] = self.parse_memfree(data)
def parse_memtotal(self, data):
match = re.search(r'^MemTotal:\s*(.*) kB', data, re.M | re.I)
if match:
return int(match.group(1)) / 1024
def parse_memfree(self, data):
match = re.search(r'^MemFree:\s*(.*) kB', data, re.M | re.I)
if match:
return int(match.group(1)) / 1024
class Config(FactsBase):
COMMANDS = ['show running-config']
def populate(self):
super(Config, self).populate()
data = self.responses[0]
if data:
self.facts['config'] = data
class Interfaces(FactsBase):
COMMANDS = ['show interface status']
def populate(self):
super(Interfaces, self).populate()
self.facts['all_ipv4_addresses'] = list()
self.facts['all_ipv6_addresses'] = list()
data1 = self.run(['show interface status'])
data1 = to_text(data1, errors='surrogate_or_strict').strip()
data1 = data1.replace(r"\n", "\n")
data2 = self.run(['show lldp port'])
data2 = to_text(data2, errors='surrogate_or_strict').strip()
data2 = data2.replace(r"\n", "\n")
lines1 = None
lines2 = None
if data1:
lines1 = self.parse_interfaces(data1)
if data2:
lines2 = self.parse_interfaces(data2)
if lines1 is not None and lines2 is not None:
self.facts['interfaces'] = self.populate_interfaces(lines1, lines2)
data3 = self.run(['show lldp remote-device port'])
data3 = to_text(data3, errors='surrogate_or_strict').strip()
data3 = data3.replace(r"\n", "\n")
lines3 = None
if data3:
lines3 = self.parse_neighbors(data3)
if lines3 is not None:
self.facts['neighbors'] = self.populate_neighbors(lines3)
data4 = self.run(['show interface ip'])
data4 = data4[0].split('\n')
lines4 = None
if data4:
lines4 = self.parse_ipaddresses(data4)
ipv4_interfaces = self.set_ipv4_interfaces(lines4)
self.facts['all_ipv4_addresses'] = ipv4_interfaces
ipv6_interfaces = self.set_ipv6_interfaces(lines4)
self.facts['all_ipv6_addresses'] = ipv6_interfaces
def parse_ipaddresses(self, data4):
parsed = list()
for line in data4:
if len(line) == 0:
continue
else:
line = line.strip()
if len(line) == 0:
continue
match = re.search(r'IP4', line, re.M | re.I)
if match:
key = match.group()
parsed.append(line)
match = re.search(r'IP6', line, re.M | re.I)
if match:
key = match.group()
parsed.append(line)
return parsed
def set_ipv4_interfaces(self, line4):
ipv4_addresses = list()
for line in line4:
ipv4Split = line.split()
if ipv4Split[1] == "IP4":
ipv4_addresses.append(ipv4Split[2])
return ipv4_addresses
def set_ipv6_interfaces(self, line4):
ipv6_addresses = list()
for line in line4:
ipv6Split = line.split()
if ipv6Split[1] == "IP6":
ipv6_addresses.append(ipv6Split[2])
return ipv6_addresses
def populate_neighbors(self, lines3):
neighbors = dict()
for line in lines3:
neighborSplit = line.split("|")
innerData = dict()
innerData['Remote Chassis ID'] = neighborSplit[2].strip()
innerData['Remote Port'] = neighborSplit[3].strip()
sysName = neighborSplit[4].strip()
if sysName is not None:
innerData['Remote System Name'] = neighborSplit[4].strip()
else:
innerData['Remote System Name'] = "NA"
neighbors[neighborSplit[0].strip()] = innerData
return neighbors
def populate_interfaces(self, lines1, lines2):
interfaces = dict()
for line1, line2 in zip(lines1, lines2):
line = line1 + " " + line2
intfSplit = line.split()
innerData = dict()
innerData['description'] = intfSplit[6].strip()
innerData['macaddress'] = intfSplit[8].strip()
innerData['mtu'] = intfSplit[9].strip()
innerData['speed'] = intfSplit[1].strip()
innerData['duplex'] = intfSplit[2].strip()
innerData['operstatus'] = intfSplit[5].strip()
if("up" not in intfSplit[5].strip()) and ("down" not in intfSplit[5].strip()):
innerData['description'] = intfSplit[7].strip()
innerData['macaddress'] = intfSplit[9].strip()
innerData['mtu'] = intfSplit[10].strip()
innerData['operstatus'] = intfSplit[6].strip()
interfaces[intfSplit[0].strip()] = innerData
return interfaces
def parse_neighbors(self, neighbors):
parsed = list()
for line in neighbors.split('\n'):
if len(line) == 0:
continue
else:
line = line.strip()
match = re.match(r'^([0-9]+)', line)
if match:
key = match.group(1)
parsed.append(line)
match = re.match(r'^(INT+)', line)
if match:
key = match.group(1)
parsed.append(line)
match = re.match(r'^(EXT+)', line)
if match:
key = match.group(1)
parsed.append(line)
match = re.match(r'^(MGT+)', line)
if match:
key = match.group(1)
parsed.append(line)
return parsed
def parse_interfaces(self, data):
parsed = list()
for line in data.split('\n'):
if len(line) == 0:
continue
else:
line = line.strip()
match = re.match(r'^([0-9]+)', line)
if match:
key = match.group(1)
parsed.append(line)
match = re.match(r'^(INT+)', line)
if match:
key = match.group(1)
parsed.append(line)
match = re.match(r'^(EXT+)', line)
if match:
key = match.group(1)
parsed.append(line)
match = re.match(r'^(MGT+)', line)
if match:
key = match.group(1)
parsed.append(line)
return parsed
FACT_SUBSETS = dict(
default=Default,
hardware=Hardware,
interfaces=Interfaces,
config=Config,
)
VALID_SUBSETS = frozenset(FACT_SUBSETS.keys())
PERSISTENT_COMMAND_TIMEOUT = 60
def main():
"""main entry point for module execution
"""
argument_spec = dict(
gather_subset=dict(default=['!config'], type='list')
)
argument_spec.update(enos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)
gather_subset = module.params['gather_subset']
runable_subsets = set()
exclude_subsets = set()
for subset in gather_subset:
if subset == 'all':
runable_subsets.update(VALID_SUBSETS)
continue
if subset.startswith('!'):
subset = subset[1:]
if subset == 'all':
exclude_subsets.update(VALID_SUBSETS)
continue
exclude = True
else:
exclude = False
if subset not in VALID_SUBSETS:
module.fail_json(msg='Bad subset')
if exclude:
exclude_subsets.add(subset)
else:
runable_subsets.add(subset)
if not runable_subsets:
runable_subsets.update(VALID_SUBSETS)
runable_subsets.difference_update(exclude_subsets)
runable_subsets.add('default')
facts = dict()
facts['gather_subset'] = list(runable_subsets)
instances = list()
for key in runable_subsets:
instances.append(FACT_SUBSETS[key](module))
for inst in instances:
inst.populate()
facts.update(inst.facts)
ansible_facts = dict()
for key, value in iteritems(facts):
key = 'ansible_net_%s' % key
ansible_facts[key] = value
warnings = list()
check_args(module, warnings)
module.exit_json(ansible_facts=ansible_facts, warnings=warnings)
if __name__ == '__main__':
main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
"""
Test view handler for rerun (and eventually create)
"""
from django.test.client import RequestFactory
from opaque_keys.edx.keys import CourseKey
from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase
from xmodule.modulestore.tests.factories import CourseFactory
from student.roles import CourseInstructorRole, CourseStaffRole
from student.tests.factories import UserFactory
from contentstore.tests.utils import AjaxEnabledTestClient, parse_json
from datetime import datetime
from xmodule.course_module import CourseFields
class TestCourseListing(ModuleStoreTestCase):
"""
Unit tests for getting the list of courses for a logged in user
"""
def setUp(self):
"""
Add a user and a course
"""
super(TestCourseListing, self).setUp()
# create and log in a staff user.
# create and log in a non-staff user
self.user = UserFactory()
self.factory = RequestFactory()
self.client = AjaxEnabledTestClient()
self.client.login(username=self.user.username, password='test')
source_course = CourseFactory.create(
org='origin',
number='the_beginning',
run='first',
display_name='the one and only',
start=datetime.utcnow()
)
self.source_course_key = source_course.id
for role in [CourseInstructorRole, CourseStaffRole]:
role(self.source_course_key).add_users(self.user)
def tearDown(self):
"""
Reverse the setup
"""
self.client.logout()
ModuleStoreTestCase.tearDown(self)
def test_rerun(self):
"""
Just testing the functionality the view handler adds over the tasks tested in test_clone_course
"""
response = self.client.ajax_post('/course/', {
'source_course_key': unicode(self.source_course_key),
'org': self.source_course_key.org, 'course': self.source_course_key.course, 'run': 'copy',
'display_name': 'not the same old name',
})
self.assertEqual(response.status_code, 200)
data = parse_json(response)
dest_course_key = CourseKey.from_string(data['destination_course_key'])
self.assertEqual(dest_course_key.run, 'copy')
dest_course = self.store.get_course(dest_course_key)
self.assertEqual(dest_course.start, CourseFields.start.default)
|
unknown
|
codeparrot/codeparrot-clean
| ||
from test.test_support import verbose, verify
import sys
import new
class Eggs:
def get_yolks(self):
return self.yolks
print 'new.module()'
m = new.module('Spam')
if verbose:
print m
m.Eggs = Eggs
sys.modules['Spam'] = m
import Spam
def get_more_yolks(self):
return self.yolks + 3
print 'new.classobj()'
C = new.classobj('Spam', (Spam.Eggs,), {'get_more_yolks': get_more_yolks})
if verbose:
print C
print 'new.instance()'
c = new.instance(C, {'yolks': 3})
if verbose:
print c
o = new.instance(C)
verify(o.__dict__ == {},
"new __dict__ should be empty")
del o
o = new.instance(C, None)
verify(o.__dict__ == {},
"new __dict__ should be empty")
del o
def break_yolks(self):
self.yolks = self.yolks - 2
print 'new.instancemethod()'
im = new.instancemethod(break_yolks, c, C)
if verbose:
print im
verify(c.get_yolks() == 3 and c.get_more_yolks() == 6,
'Broken call of hand-crafted class instance')
im()
verify(c.get_yolks() == 1 and c.get_more_yolks() == 4,
'Broken call of hand-crafted instance method')
# It's unclear what the semantics should be for a code object compiled at
# module scope, but bound and run in a function. In CPython, `c' is global
# (by accident?) while in Jython, `c' is local. The intent of the test
# clearly is to make `c' global, so let's be explicit about it.
codestr = '''
global c
a = 1
b = 2
c = a + b
'''
ccode = compile(codestr, '<string>', 'exec')
# Jython doesn't have a __builtins__, so use a portable alternative
import __builtin__
g = {'c': 0, '__builtins__': __builtin__}
# this test could be more robust
print 'new.function()'
func = new.function(ccode, g)
if verbose:
print func
func()
verify(g['c'] == 3,
'Could not create a proper function object')
# test the various extended flavors of function.new
def f(x):
def g(y):
return x + y
return g
g = f(4)
new.function(f.func_code, {}, "blah")
g2 = new.function(g.func_code, {}, "blah", (2,), g.func_closure)
verify(g2() == 6)
g3 = new.function(g.func_code, {}, "blah", None, g.func_closure)
verify(g3(5) == 9)
def test_closure(func, closure, exc):
try:
new.function(func.func_code, {}, "", None, closure)
except exc:
pass
else:
print "corrupt closure accepted"
test_closure(g, None, TypeError) # invalid closure
test_closure(g, (1,), TypeError) # non-cell in closure
test_closure(g, (1, 1), ValueError) # closure is wrong size
test_closure(f, g.func_closure, ValueError) # no closure needed
print 'new.code()'
# bogus test of new.code()
# Note: Jython will never have new.code()
if hasattr(new, 'code'):
d = new.code(3, 3, 3, 3, codestr, (), (), (),
"<string>", "<name>", 1, "", (), ())
# test backwards-compatibility version with no freevars or cellvars
d = new.code(3, 3, 3, 3, codestr, (), (), (),
"<string>", "<name>", 1, "")
if verbose:
print d
|
unknown
|
codeparrot/codeparrot-clean
| ||
import inspect
import os
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
from django.utils.importlib import import_module
DEFAULT_DB_ALIAS = 'default'
# Define some exceptions that mirror the PEP249 interface.
# We will rethrow any backend-specific errors using these
# common wrappers
class DatabaseError(Exception):
pass
class IntegrityError(DatabaseError):
pass
def load_backend(backend_name):
try:
module = import_module('.base', 'django.db.backends.%s' % backend_name)
import warnings
warnings.warn(
"Short names for DATABASE_ENGINE are deprecated; prepend with 'django.db.backends.'",
DeprecationWarning
)
return module
except ImportError, e:
# Look for a fully qualified database backend name
try:
return import_module('.base', backend_name)
except ImportError, e_user:
# The database backend wasn't found. Display a helpful error message
# listing all possible (built-in) database backends.
backend_dir = os.path.join(os.path.dirname(__file__), 'backends')
try:
available_backends = [f for f in os.listdir(backend_dir)
if os.path.isdir(os.path.join(backend_dir, f))
and not f.startswith('.')]
except EnvironmentError:
available_backends = []
available_backends.sort()
if backend_name not in available_backends:
error_msg = ("%r isn't an available database backend. \n" +
"Try using django.db.backends.XXX, where XXX is one of:\n %s\n" +
"Error was: %s") % \
(backend_name, ", ".join(map(repr, available_backends)), e_user)
raise ImproperlyConfigured(error_msg)
else:
raise # If there's some other error, this must be an error in Django itself.
class ConnectionDoesNotExist(Exception):
pass
class ConnectionHandler(object):
def __init__(self, databases):
self.databases = databases
self._connections = {}
def ensure_defaults(self, alias):
"""
Puts the defaults into the settings dictionary for a given connection
where no settings is provided.
"""
try:
conn = self.databases[alias]
except KeyError:
raise ConnectionDoesNotExist("The connection %s doesn't exist" % alias)
conn.setdefault('ENGINE', 'django.db.backends.dummy')
if conn['ENGINE'] == 'django.db.backends.' or not conn['ENGINE']:
conn['ENGINE'] = 'django.db.backends.dummy'
conn.setdefault('OPTIONS', {})
conn.setdefault('TEST_CHARSET', None)
conn.setdefault('TEST_COLLATION', None)
conn.setdefault('TEST_NAME', None)
conn.setdefault('TEST_MIRROR', None)
conn.setdefault('TIME_ZONE', settings.TIME_ZONE)
for setting in ('NAME', 'USER', 'PASSWORD', 'HOST', 'PORT'):
conn.setdefault(setting, '')
def __getitem__(self, alias):
if alias in self._connections:
return self._connections[alias]
self.ensure_defaults(alias)
db = self.databases[alias]
backend = load_backend(db['ENGINE'])
conn = backend.DatabaseWrapper(db, alias)
self._connections[alias] = conn
return conn
def __iter__(self):
return iter(self.databases)
def all(self):
return [self[alias] for alias in self]
class ConnectionRouter(object):
def __init__(self, routers):
self.routers = []
for r in routers:
if isinstance(r, basestring):
try:
module_name, klass_name = r.rsplit('.', 1)
module = import_module(module_name)
except ImportError, e:
raise ImproperlyConfigured('Error importing database router %s: "%s"' % (klass_name, e))
try:
router_class = getattr(module, klass_name)
except AttributeError:
raise ImproperlyConfigured('Module "%s" does not define a database router name "%s"' % (module, klass_name))
else:
router = router_class()
else:
router = r
self.routers.append(router)
def _router_func(action):
def _route_db(self, model, **hints):
chosen_db = None
for router in self.routers:
try:
method = getattr(router, action)
except AttributeError:
# If the router doesn't have a method, skip to the next one.
pass
else:
chosen_db = method(model, **hints)
if chosen_db:
return chosen_db
try:
return hints['instance']._state.db or DEFAULT_DB_ALIAS
except KeyError:
return DEFAULT_DB_ALIAS
return _route_db
db_for_read = _router_func('db_for_read')
db_for_write = _router_func('db_for_write')
def allow_relation(self, obj1, obj2, **hints):
for router in self.routers:
try:
method = router.allow_relation
except AttributeError:
# If the router doesn't have a method, skip to the next one.
pass
else:
allow = method(obj1, obj2, **hints)
if allow is not None:
return allow
return obj1._state.db == obj2._state.db
def allow_syncdb(self, db, model):
for router in self.routers:
try:
method = router.allow_syncdb
except AttributeError:
# If the router doesn't have a method, skip to the next one.
pass
else:
allow = method(db, model)
if allow is not None:
return allow
return True
|
unknown
|
codeparrot/codeparrot-clean
| ||
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2017 Google
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# ----------------------------------------------------------------------------
#
# *** AUTO GENERATED CODE *** AUTO GENERATED CODE ***
#
# ----------------------------------------------------------------------------
#
# This file is automatically generated by Magic Modules and manual
# changes will be clobbered when the file is regenerated.
#
# Please read more about how to change this file at
# https://www.github.com/GoogleCloudPlatform/magic-modules
#
# ----------------------------------------------------------------------------
from __future__ import absolute_import, division, print_function
__metaclass__ = type
################################################################################
# Documentation
################################################################################
ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ["preview"], 'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gcp_compute_interconnect_attachment
description:
- Represents an InterconnectAttachment (VLAN attachment) resource. For more information,
see Creating VLAN Attachments.
short_description: Creates a GCP InterconnectAttachment
version_added: 2.8
author: Google Inc. (@googlecloudplatform)
requirements:
- python >= 2.6
- requests >= 2.18.4
- google-auth >= 1.3.0
options:
state:
description:
- Whether the given object should exist in GCP
choices:
- present
- absent
default: present
interconnect:
description:
- URL of the underlying Interconnect object that this attachment's traffic will
traverse through.
required: true
description:
description:
- An optional description of this resource.
required: false
router:
description:
- URL of the cloud router to be used for dynamic routing. This router must be
in the same region as this InterconnectAttachment. The InterconnectAttachment
will automatically connect the Interconnect to the network & region within which
the Cloud Router is configured.
- 'This field represents a link to a Router resource in GCP. It can be specified
in two ways. First, you can place in the selfLink of the resource here as a
string Alternatively, you can add `register: name-of-resource` to a gcp_compute_router
task and then set this router field to "{{ name-of-resource }}"'
required: true
name:
description:
- Name of the resource. Provided by the client when the resource is created. The
name must be 1-63 characters long, and comply with RFC1035. Specifically, the
name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`
which means the first character must be a lowercase letter, and all following
characters must be a dash, lowercase letter, or digit, except the last character,
which cannot be a dash.
required: true
candidate_subnets:
description:
- Up to 16 candidate prefixes that can be used to restrict the allocation of cloudRouterIpAddress
and customerRouterIpAddress for this attachment.
- All prefixes must be within link-local address space (169.254.0.0/16) and must
be /29 or shorter (/28, /27, etc). Google will attempt to select an unused /29
from the supplied candidate prefix(es). The request will fail if all possible
/29s are in use on Google's edge. If not supplied, Google will randomly select
an unused /29 from all of link-local space.
required: false
vlan_tag8021q:
description:
- The IEEE 802.1Q VLAN tag for this attachment, in the range 2-4094.
required: false
region:
description:
- Region where the regional interconnect attachment resides.
required: true
extends_documentation_fragment: gcp
'''
EXAMPLES = '''
- name: create a interconnect attachment
gcp_compute_interconnect_attachment:
name: "test_object"
region: us-central1
project: "test_project"
auth_kind: "serviceaccount"
interconnect: https://googleapis.com/compute/v1/projects/test_project/global/interconnects/...
router: https://googleapis.com/compute/v1/projects/test_project/regions/us-central1/routers/...
service_account_file: "/tmp/auth.pem"
state: present
register: disk
'''
RETURN = '''
cloudRouterIpAddress:
description:
- IPv4 address + prefix length to be configured on Cloud Router Interface for this
interconnect attachment.
returned: success
type: str
customerRouterIpAddress:
description:
- IPv4 address + prefix length to be configured on the customer router subinterface
for this interconnect attachment.
returned: success
type: str
interconnect:
description:
- URL of the underlying Interconnect object that this attachment's traffic will
traverse through.
returned: success
type: str
description:
description:
- An optional description of this resource.
returned: success
type: str
privateInterconnectInfo:
description:
- Information specific to an InterconnectAttachment. This property is populated
if the interconnect that this is attached to is of type DEDICATED.
returned: success
type: complex
contains:
tag8021q:
description:
- 802.1q encapsulation tag to be used for traffic between Google and the customer,
going to and from this network and region.
returned: success
type: int
googleReferenceId:
description:
- Google reference ID, to be used when raising support tickets with Google or otherwise
to debug backend connectivity issues.
returned: success
type: str
router:
description:
- URL of the cloud router to be used for dynamic routing. This router must be in
the same region as this InterconnectAttachment. The InterconnectAttachment will
automatically connect the Interconnect to the network & region within which the
Cloud Router is configured.
returned: success
type: str
creationTimestamp:
description:
- Creation timestamp in RFC3339 text format.
returned: success
type: str
id:
description:
- The unique identifier for the resource. This identifier is defined by the server.
returned: success
type: str
name:
description:
- Name of the resource. Provided by the client when the resource is created. The
name must be 1-63 characters long, and comply with RFC1035. Specifically, the
name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`
which means the first character must be a lowercase letter, and all following
characters must be a dash, lowercase letter, or digit, except the last character,
which cannot be a dash.
returned: success
type: str
candidateSubnets:
description:
- Up to 16 candidate prefixes that can be used to restrict the allocation of cloudRouterIpAddress
and customerRouterIpAddress for this attachment.
- All prefixes must be within link-local address space (169.254.0.0/16) and must
be /29 or shorter (/28, /27, etc). Google will attempt to select an unused /29
from the supplied candidate prefix(es). The request will fail if all possible
/29s are in use on Google's edge. If not supplied, Google will randomly select
an unused /29 from all of link-local space.
returned: success
type: list
vlanTag8021q:
description:
- The IEEE 802.1Q VLAN tag for this attachment, in the range 2-4094.
returned: success
type: int
region:
description:
- Region where the regional interconnect attachment resides.
returned: success
type: str
'''
################################################################################
# Imports
################################################################################
from ansible.module_utils.gcp_utils import navigate_hash, GcpSession, GcpModule, GcpRequest, remove_nones_from_dict, replace_resource_dict
import json
import re
import time
################################################################################
# Main
################################################################################
def main():
"""Main function"""
module = GcpModule(
argument_spec=dict(
state=dict(default='present', choices=['present', 'absent'], type='str'),
interconnect=dict(required=True, type='str'),
description=dict(type='str'),
router=dict(required=True),
name=dict(required=True, type='str'),
candidate_subnets=dict(type='list', elements='str'),
vlan_tag8021q=dict(type='int'),
region=dict(required=True, type='str'),
)
)
if not module.params['scopes']:
module.params['scopes'] = ['https://www.googleapis.com/auth/compute']
state = module.params['state']
kind = 'compute#interconnectAttachment'
fetch = fetch_resource(module, self_link(module), kind)
changed = False
if fetch:
if state == 'present':
if is_different(module, fetch):
update(module, self_link(module), kind)
fetch = fetch_resource(module, self_link(module), kind)
changed = True
else:
delete(module, self_link(module), kind)
fetch = {}
changed = True
else:
if state == 'present':
fetch = create(module, collection(module), kind)
changed = True
else:
fetch = {}
fetch.update({'changed': changed})
module.exit_json(**fetch)
def create(module, link, kind):
auth = GcpSession(module, 'compute')
return wait_for_operation(module, auth.post(link, resource_to_request(module)))
def update(module, link, kind):
module.fail_json(msg="InterconnectAttachment cannot be edited")
def delete(module, link, kind):
auth = GcpSession(module, 'compute')
return wait_for_operation(module, auth.delete(link))
def resource_to_request(module):
request = {
u'kind': 'compute#interconnectAttachment',
u'interconnect': module.params.get('interconnect'),
u'description': module.params.get('description'),
u'router': replace_resource_dict(module.params.get(u'router', {}), 'selfLink'),
u'name': module.params.get('name'),
u'candidateSubnets': module.params.get('candidate_subnets'),
u'vlanTag8021q': module.params.get('vlan_tag8021q'),
}
return_vals = {}
for k, v in request.items():
if v or v is False:
return_vals[k] = v
return return_vals
def fetch_resource(module, link, kind, allow_not_found=True):
auth = GcpSession(module, 'compute')
return return_if_object(module, auth.get(link), kind, allow_not_found)
def self_link(module):
return "https://www.googleapis.com/compute/v1/projects/{project}/regions/{region}/interconnectAttachments/{name}".format(**module.params)
def collection(module):
return "https://www.googleapis.com/compute/v1/projects/{project}/regions/{region}/interconnectAttachments".format(**module.params)
def return_if_object(module, response, kind, allow_not_found=False):
# If not found, return nothing.
if allow_not_found and response.status_code == 404:
return None
# If no content, return nothing.
if response.status_code == 204:
return None
try:
module.raise_for_status(response)
result = response.json()
except getattr(json.decoder, 'JSONDecodeError', ValueError):
module.fail_json(msg="Invalid JSON response with error: %s" % response.text)
if navigate_hash(result, ['error', 'errors']):
module.fail_json(msg=navigate_hash(result, ['error', 'errors']))
return result
def is_different(module, response):
request = resource_to_request(module)
response = response_to_hash(module, response)
# Remove all output-only from response.
response_vals = {}
for k, v in response.items():
if k in request:
response_vals[k] = v
request_vals = {}
for k, v in request.items():
if k in response:
request_vals[k] = v
return GcpRequest(request_vals) != GcpRequest(response_vals)
# Remove unnecessary properties from the response.
# This is for doing comparisons with Ansible's current parameters.
def response_to_hash(module, response):
return {
u'cloudRouterIpAddress': response.get(u'cloudRouterIpAddress'),
u'customerRouterIpAddress': response.get(u'customerRouterIpAddress'),
u'interconnect': response.get(u'interconnect'),
u'description': response.get(u'description'),
u'privateInterconnectInfo': InterconnectAttachmentPrivateinterconnectinfo(response.get(u'privateInterconnectInfo', {}), module).from_response(),
u'googleReferenceId': response.get(u'googleReferenceId'),
u'router': response.get(u'router'),
u'creationTimestamp': response.get(u'creationTimestamp'),
u'id': response.get(u'id'),
u'name': response.get(u'name'),
u'candidateSubnets': response.get(u'candidateSubnets'),
u'vlanTag8021q': response.get(u'vlanTag8021q'),
}
def region_selflink(name, params):
if name is None:
return
url = r"https://www.googleapis.com/compute/v1/projects/.*/regions/[a-z1-9\-]*"
if not re.match(url, name):
name = "https://www.googleapis.com/compute/v1/projects/{project}/regions/%s".format(**params) % name
return name
def async_op_url(module, extra_data=None):
if extra_data is None:
extra_data = {}
url = "https://www.googleapis.com/compute/v1/projects/{project}/regions/{region}/operations/{op_id}"
combined = extra_data.copy()
combined.update(module.params)
return url.format(**combined)
def wait_for_operation(module, response):
op_result = return_if_object(module, response, 'compute#operation')
if op_result is None:
return {}
status = navigate_hash(op_result, ['status'])
wait_done = wait_for_completion(status, op_result, module)
return fetch_resource(module, navigate_hash(wait_done, ['targetLink']), 'compute#interconnectAttachment')
def wait_for_completion(status, op_result, module):
op_id = navigate_hash(op_result, ['name'])
op_uri = async_op_url(module, {'op_id': op_id})
while status != 'DONE':
raise_if_errors(op_result, ['error', 'errors'], module)
time.sleep(1.0)
op_result = fetch_resource(module, op_uri, 'compute#operation')
status = navigate_hash(op_result, ['status'])
return op_result
def raise_if_errors(response, err_path, module):
errors = navigate_hash(response, err_path)
if errors is not None:
module.fail_json(msg=errors)
class InterconnectAttachmentPrivateinterconnectinfo(object):
def __init__(self, request, module):
self.module = module
if request:
self.request = request
else:
self.request = {}
def to_request(self):
return remove_nones_from_dict({u'tag8021q': self.request.get('tag8021q')})
def from_response(self):
return remove_nones_from_dict({u'tag8021q': self.request.get(u'tag8021q')})
if __name__ == '__main__':
main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
// SPDX-License-Identifier: AGPL-3.0-only
// Code generated by openapi-gen. DO NOT EDIT.
package v2alpha1
import (
common "k8s.io/kube-openapi/pkg/common"
spec "k8s.io/kube-openapi/pkg/validation/spec"
)
func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
return map[string]common.OpenAPIDefinition{
AnnotationActions{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_AnnotationActions(ref),
AnnotationPermission{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_AnnotationPermission(ref),
Dashboard{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_Dashboard(ref),
DashboardAccess{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAccess(ref),
DashboardAction{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAction(ref),
DashboardActionVariable{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardActionVariable(ref),
DashboardAdHocFilterWithLabels{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAdHocFilterWithLabels(ref),
DashboardAdhocVariableKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAdhocVariableKind(ref),
DashboardAdhocVariableSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAdhocVariableSpec(ref),
DashboardAnnotationEventFieldMapping{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAnnotationEventFieldMapping(ref),
DashboardAnnotationPanelFilter{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAnnotationPanelFilter(ref),
DashboardAnnotationQueryKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAnnotationQueryKind(ref),
DashboardAnnotationQuerySpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAnnotationQuerySpec(ref),
DashboardAutoGridLayoutItemKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAutoGridLayoutItemKind(ref),
DashboardAutoGridLayoutItemSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAutoGridLayoutItemSpec(ref),
DashboardAutoGridLayoutKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAutoGridLayoutKind(ref),
DashboardAutoGridLayoutSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAutoGridLayoutSpec(ref),
DashboardAutoGridRepeatOptions{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardAutoGridRepeatOptions(ref),
"github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v2alpha1.DashboardClient": schema_pkg_apis_dashboard_v2alpha1_DashboardClient(ref),
DashboardConditionalRenderingDataKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingDataKind(ref),
DashboardConditionalRenderingDataSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingDataSpec(ref),
DashboardConditionalRenderingGroupKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingGroupKind(ref),
DashboardConditionalRenderingGroupSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingGroupSpec(ref),
DashboardConditionalRenderingTimeRangeSizeKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingTimeRangeSizeKind(ref),
DashboardConditionalRenderingTimeRangeSizeSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingTimeRangeSizeSpec(ref),
DashboardConditionalRenderingVariableKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingVariableKind(ref),
DashboardConditionalRenderingVariableKindOrConditionalRenderingDataKindOrConditionalRenderingTimeRangeSizeKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingVariableKindOrConditionalRenderingDataKindOrConditionalRenderingTimeRangeSizeKind(ref),
DashboardConditionalRenderingVariableSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingVariableSpec(ref),
DashboardConstantVariableKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardConstantVariableKind(ref),
DashboardConstantVariableSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardConstantVariableSpec(ref),
DashboardConversionStatus{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardConversionStatus(ref),
DashboardCustomVariableKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardCustomVariableKind(ref),
DashboardCustomVariableSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardCustomVariableSpec(ref),
DashboardDashboardLink{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardDashboardLink(ref),
DashboardDataLink{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardDataLink(ref),
DashboardDataQueryKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardDataQueryKind(ref),
DashboardDataSourceRef{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardDataSourceRef(ref),
DashboardDataTransformerConfig{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardDataTransformerConfig(ref),
DashboardDatasourceVariableKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardDatasourceVariableKind(ref),
DashboardDatasourceVariableSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardDatasourceVariableSpec(ref),
DashboardDynamicConfigValue{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardDynamicConfigValue(ref),
DashboardElementReference{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardElementReference(ref),
DashboardFetchOptions{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardFetchOptions(ref),
DashboardFieldColor{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardFieldColor(ref),
DashboardFieldConfig{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardFieldConfig(ref),
DashboardFieldConfigSource{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardFieldConfigSource(ref),
DashboardGridLayoutItemKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardGridLayoutItemKind(ref),
DashboardGridLayoutItemSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardGridLayoutItemSpec(ref),
DashboardGridLayoutKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardGridLayoutKind(ref),
DashboardGridLayoutKindOrAutoGridLayoutKindOrTabsLayoutKindOrRowsLayoutKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardGridLayoutKindOrAutoGridLayoutKindOrTabsLayoutKindOrRowsLayoutKind(ref),
DashboardGridLayoutKindOrRowsLayoutKindOrAutoGridLayoutKindOrTabsLayoutKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardGridLayoutKindOrRowsLayoutKindOrAutoGridLayoutKindOrTabsLayoutKind(ref),
DashboardGridLayoutSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardGridLayoutSpec(ref),
DashboardGroupByVariableKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardGroupByVariableKind(ref),
DashboardGroupByVariableSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardGroupByVariableSpec(ref),
DashboardInfinityOptions{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardInfinityOptions(ref),
DashboardIntervalVariableKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardIntervalVariableKind(ref),
DashboardIntervalVariableSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardIntervalVariableSpec(ref),
"github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v2alpha1.DashboardJSONCodec": schema_pkg_apis_dashboard_v2alpha1_DashboardJSONCodec(ref),
DashboardLibraryPanelKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardLibraryPanelKind(ref),
DashboardLibraryPanelKindSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardLibraryPanelKindSpec(ref),
DashboardLibraryPanelRef{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardLibraryPanelRef(ref),
DashboardList{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardList(ref),
DashboardMatcherConfig{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardMatcherConfig(ref),
DashboardMetricFindValue{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardMetricFindValue(ref),
DashboardPanelKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardPanelKind(ref),
DashboardPanelKindOrLibraryPanelKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardPanelKindOrLibraryPanelKind(ref),
DashboardPanelQueryKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardPanelQueryKind(ref),
DashboardPanelQuerySpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardPanelQuerySpec(ref),
DashboardPanelSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardPanelSpec(ref),
DashboardQueryGroupKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardQueryGroupKind(ref),
DashboardQueryGroupSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardQueryGroupSpec(ref),
DashboardQueryOptionsSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardQueryOptionsSpec(ref),
DashboardQueryVariableKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardQueryVariableKind(ref),
DashboardQueryVariableKindOrTextVariableKindOrConstantVariableKindOrDatasourceVariableKindOrIntervalVariableKindOrCustomVariableKindOrGroupByVariableKindOrAdhocVariableKindOrSwitchVariableKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardQueryVariableKindOrTextVariableKindOrConstantVariableKindOrDatasourceVariableKindOrIntervalVariableKindOrCustomVariableKindOrGroupByVariableKindOrAdhocVariableKindOrSwitchVariableKind(ref),
DashboardQueryVariableSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardQueryVariableSpec(ref),
DashboardRangeMap{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardRangeMap(ref),
DashboardRegexMap{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardRegexMap(ref),
DashboardRepeatOptions{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardRepeatOptions(ref),
DashboardRowRepeatOptions{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardRowRepeatOptions(ref),
DashboardRowsLayoutKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardRowsLayoutKind(ref),
DashboardRowsLayoutRowKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardRowsLayoutRowKind(ref),
DashboardRowsLayoutRowSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardRowsLayoutRowSpec(ref),
DashboardRowsLayoutSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardRowsLayoutSpec(ref),
DashboardSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardSpec(ref),
DashboardSpecialValueMap{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardSpecialValueMap(ref),
DashboardStatus{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardStatus(ref),
DashboardStringOrArrayOfString{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardStringOrArrayOfString(ref),
DashboardStringOrFloat64{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardStringOrFloat64(ref),
DashboardSwitchVariableKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardSwitchVariableKind(ref),
DashboardSwitchVariableSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardSwitchVariableSpec(ref),
DashboardTabRepeatOptions{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardTabRepeatOptions(ref),
DashboardTabsLayoutKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardTabsLayoutKind(ref),
DashboardTabsLayoutSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardTabsLayoutSpec(ref),
DashboardTabsLayoutTabKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardTabsLayoutTabKind(ref),
DashboardTabsLayoutTabSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardTabsLayoutTabSpec(ref),
DashboardTextVariableKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardTextVariableKind(ref),
DashboardTextVariableSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardTextVariableSpec(ref),
DashboardThreshold{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardThreshold(ref),
DashboardThresholdsConfig{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardThresholdsConfig(ref),
DashboardTimeRangeOption{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardTimeRangeOption(ref),
DashboardTimeSettingsSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardTimeSettingsSpec(ref),
DashboardTransformationKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardTransformationKind(ref),
DashboardV2alpha1ActionStyle{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardV2alpha1ActionStyle(ref),
DashboardV2alpha1FieldConfigSourceOverrides{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardV2alpha1FieldConfigSourceOverrides(ref),
DashboardV2alpha1RangeMapOptions{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardV2alpha1RangeMapOptions(ref),
DashboardV2alpha1RegexMapOptions{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardV2alpha1RegexMapOptions(ref),
DashboardV2alpha1SpecialValueMapOptions{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardV2alpha1SpecialValueMapOptions(ref),
DashboardValueMap{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardValueMap(ref),
DashboardValueMapOrRangeMapOrRegexMapOrSpecialValueMap{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardValueMapOrRangeMapOrRegexMapOrSpecialValueMap(ref),
DashboardValueMappingResult{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardValueMappingResult(ref),
DashboardVariableOption{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardVariableOption(ref),
DashboardVizConfigKind{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardVizConfigKind(ref),
DashboardVizConfigSpec{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardVizConfigSpec(ref),
DashboardWithAccessInfo{}.OpenAPIModelName(): schema_pkg_apis_dashboard_v2alpha1_DashboardWithAccessInfo(ref),
}
}
func schema_pkg_apis_dashboard_v2alpha1_AnnotationActions(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"canAdd": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"canEdit": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"canDelete": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
},
Required: []string{"canAdd", "canEdit", "canDelete"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_AnnotationPermission(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"dashboard": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(AnnotationActions{}.OpenAPIModelName()),
},
},
"organization": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(AnnotationActions{}.OpenAPIModelName()),
},
},
},
Required: []string{"dashboard", "organization"},
},
},
Dependencies: []string{
AnnotationActions{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_Dashboard(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Description: "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
Type: []string{"string"},
Format: "",
},
},
"apiVersion": {
SchemaProps: spec.SchemaProps{
Description: "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources",
Type: []string{"string"},
Format: "",
},
},
"metadata": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref("io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta"),
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Description: "Spec is the spec of the Dashboard",
Default: map[string]interface{}{},
Ref: ref(DashboardSpec{}.OpenAPIModelName()),
},
},
"status": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardStatus{}.OpenAPIModelName()),
},
},
},
Required: []string{"metadata", "spec", "status"},
},
},
Dependencies: []string{
DashboardSpec{}.OpenAPIModelName(), DashboardStatus{}.OpenAPIModelName(), "io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta"},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAccess(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"slug": {
SchemaProps: spec.SchemaProps{
Description: "Metadata fields",
Type: []string{"string"},
Format: "",
},
},
"url": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"isPublic": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"canSave": {
SchemaProps: spec.SchemaProps{
Description: "The permissions part",
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"canEdit": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"canAdmin": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"canStar": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"canDelete": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"annotationsPermissions": {
SchemaProps: spec.SchemaProps{
Ref: ref(AnnotationPermission{}.OpenAPIModelName()),
},
},
},
Required: []string{"isPublic", "canSave", "canEdit", "canAdmin", "canStar", "canDelete", "annotationsPermissions"},
},
},
Dependencies: []string{
AnnotationPermission{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAction(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"type": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"title": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"fetch": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardFetchOptions{}.OpenAPIModelName()),
},
},
"infinity": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardInfinityOptions{}.OpenAPIModelName()),
},
},
"confirmation": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"oneClick": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
Format: "",
},
},
"variables": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardActionVariable{}.OpenAPIModelName()),
},
},
},
},
},
"style": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardV2alpha1ActionStyle{}.OpenAPIModelName()),
},
},
},
Required: []string{"type", "title"},
},
},
Dependencies: []string{
DashboardActionVariable{}.OpenAPIModelName(), DashboardFetchOptions{}.OpenAPIModelName(), DashboardInfinityOptions{}.OpenAPIModelName(), DashboardV2alpha1ActionStyle{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardActionVariable(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"key": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"type": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"key", "name", "type"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAdHocFilterWithLabels(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Define the AdHocFilterWithLabels type",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"key": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"operator": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"value": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"values": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
"keyLabel": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"valueLabels": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
"forceEdit": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
Format: "",
},
},
"origin": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"condition": {
SchemaProps: spec.SchemaProps{
Description: "@deprecated",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"key", "operator", "value"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAdhocVariableKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Adhoc variable kind",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardAdhocVariableSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardAdhocVariableSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAdhocVariableSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Adhoc variable specification",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"datasource": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardDataSourceRef{}.OpenAPIModelName()),
},
},
"baseFilters": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardAdHocFilterWithLabels{}.OpenAPIModelName()),
},
},
},
},
},
"filters": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardAdHocFilterWithLabels{}.OpenAPIModelName()),
},
},
},
},
},
"defaultKeys": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardMetricFindValue{}.OpenAPIModelName()),
},
},
},
},
},
"label": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"hide": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"skipUrlSync": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"description": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"allowCustomValue": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
},
Required: []string{"name", "baseFilters", "filters", "defaultKeys", "hide", "skipUrlSync", "allowCustomValue"},
},
},
Dependencies: []string{
DashboardAdHocFilterWithLabels{}.OpenAPIModelName(), DashboardDataSourceRef{}.OpenAPIModelName(), DashboardMetricFindValue{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAnnotationEventFieldMapping(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Annotation event field mapping. Defines how to map a data frame field to an annotation event field.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"source": {
SchemaProps: spec.SchemaProps{
Description: "Source type for the field value",
Type: []string{"string"},
Format: "",
},
},
"value": {
SchemaProps: spec.SchemaProps{
Description: "Constant value to use when source is \"text\"",
Type: []string{"string"},
Format: "",
},
},
"regex": {
SchemaProps: spec.SchemaProps{
Description: "Regular expression to apply to the field value",
Type: []string{"string"},
Format: "",
},
},
},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAnnotationPanelFilter(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"exclude": {
SchemaProps: spec.SchemaProps{
Description: "Should the specified panels be included or excluded",
Type: []string{"boolean"},
Format: "",
},
},
"ids": {
SchemaProps: spec.SchemaProps{
Description: "Panel IDs that should be included or excluded",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
},
},
},
},
Required: []string{"ids"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAnnotationQueryKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardAnnotationQuerySpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardAnnotationQuerySpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAnnotationQuerySpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"datasource": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardDataSourceRef{}.OpenAPIModelName()),
},
},
"query": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardDataQueryKind{}.OpenAPIModelName()),
},
},
"enable": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"hide": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"iconColor": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"builtIn": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
Format: "",
},
},
"filter": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardAnnotationPanelFilter{}.OpenAPIModelName()),
},
},
"mappings": {
SchemaProps: spec.SchemaProps{
Description: "Mappings define how to convert data frame fields to annotation event fields.",
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardAnnotationEventFieldMapping{}.OpenAPIModelName()),
},
},
},
},
},
"legacyOptions": {
SchemaProps: spec.SchemaProps{
Description: "Catch-all field for datasource-specific properties",
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Format: "",
},
},
},
},
},
},
Required: []string{"enable", "hide", "iconColor", "name"},
},
},
Dependencies: []string{
DashboardAnnotationEventFieldMapping{}.OpenAPIModelName(), DashboardAnnotationPanelFilter{}.OpenAPIModelName(), DashboardDataQueryKind{}.OpenAPIModelName(), DashboardDataSourceRef{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAutoGridLayoutItemKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardAutoGridLayoutItemSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardAutoGridLayoutItemSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAutoGridLayoutItemSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"element": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardElementReference{}.OpenAPIModelName()),
},
},
"repeat": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardAutoGridRepeatOptions{}.OpenAPIModelName()),
},
},
"conditionalRendering": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardConditionalRenderingGroupKind{}.OpenAPIModelName()),
},
},
},
Required: []string{"element"},
},
},
Dependencies: []string{
DashboardAutoGridRepeatOptions{}.OpenAPIModelName(), DashboardConditionalRenderingGroupKind{}.OpenAPIModelName(), DashboardElementReference{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAutoGridLayoutKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardAutoGridLayoutSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardAutoGridLayoutSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAutoGridLayoutSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"maxColumnCount": {
SchemaProps: spec.SchemaProps{
Type: []string{"number"},
Format: "double",
},
},
"columnWidthMode": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"columnWidth": {
SchemaProps: spec.SchemaProps{
Type: []string{"number"},
Format: "double",
},
},
"rowHeightMode": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"rowHeight": {
SchemaProps: spec.SchemaProps{
Type: []string{"number"},
Format: "double",
},
},
"fillScreen": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
Format: "",
},
},
"items": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardAutoGridLayoutItemKind{}.OpenAPIModelName()),
},
},
},
},
},
},
Required: []string{"columnWidthMode", "rowHeightMode", "items"},
},
},
Dependencies: []string{
DashboardAutoGridLayoutItemKind{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardAutoGridRepeatOptions(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"mode": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"value": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"mode", "value"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardClient(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"client": {
SchemaProps: spec.SchemaProps{
Ref: ref("github.com/grafana/grafana-app-sdk/resource.TypedClient[T,L]"),
},
},
},
Required: []string{"client"},
},
},
Dependencies: []string{
"github.com/grafana/grafana-app-sdk/resource.TypedClient[T,L]"},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingDataKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardConditionalRenderingDataSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardConditionalRenderingDataSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingDataSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"value": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
},
Required: []string{"value"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingGroupKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardConditionalRenderingGroupSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardConditionalRenderingGroupSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingGroupSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"visibility": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"condition": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"items": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardConditionalRenderingVariableKindOrConditionalRenderingDataKindOrConditionalRenderingTimeRangeSizeKind{}.OpenAPIModelName()),
},
},
},
},
},
},
Required: []string{"visibility", "condition", "items"},
},
},
Dependencies: []string{
DashboardConditionalRenderingVariableKindOrConditionalRenderingDataKindOrConditionalRenderingTimeRangeSizeKind{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingTimeRangeSizeKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardConditionalRenderingTimeRangeSizeSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardConditionalRenderingTimeRangeSizeSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingTimeRangeSizeSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"value": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"value"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingVariableKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardConditionalRenderingVariableSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardConditionalRenderingVariableSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingVariableKindOrConditionalRenderingDataKindOrConditionalRenderingTimeRangeSizeKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"ConditionalRenderingVariableKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardConditionalRenderingVariableKind{}.OpenAPIModelName()),
},
},
"ConditionalRenderingDataKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardConditionalRenderingDataKind{}.OpenAPIModelName()),
},
},
"ConditionalRenderingTimeRangeSizeKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardConditionalRenderingTimeRangeSizeKind{}.OpenAPIModelName()),
},
},
},
},
},
Dependencies: []string{
DashboardConditionalRenderingDataKind{}.OpenAPIModelName(), DashboardConditionalRenderingTimeRangeSizeKind{}.OpenAPIModelName(), DashboardConditionalRenderingVariableKind{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardConditionalRenderingVariableSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"variable": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"operator": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"value": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"variable", "operator", "value"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardConstantVariableKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Constant variable kind",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardConstantVariableSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardConstantVariableSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardConstantVariableSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Constant variable specification",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"query": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"current": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
"label": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"hide": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"skipUrlSync": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"description": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"name", "query", "current", "hide", "skipUrlSync"},
},
},
Dependencies: []string{
DashboardVariableOption{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardConversionStatus(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "ConversionStatus is the status of the conversion of the dashboard.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"failed": {
SchemaProps: spec.SchemaProps{
Description: "Whether from another version has failed. If true, means that the dashboard is not valid, and the caller should instead fetch the stored version.",
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"error": {
SchemaProps: spec.SchemaProps{
Description: "The error message from the conversion. Empty if the conversion has not failed.",
Type: []string{"string"},
Format: "",
},
},
"storedVersion": {
SchemaProps: spec.SchemaProps{
Description: "The version which was stored when the dashboard was created / updated. Fetching this version should always succeed.",
Type: []string{"string"},
Format: "",
},
},
"source": {
SchemaProps: spec.SchemaProps{
Description: "The original value map[string]any",
Type: []string{"object"},
Format: "",
},
},
},
Required: []string{"failed"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardCustomVariableKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Custom variable kind",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardCustomVariableSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardCustomVariableSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardCustomVariableSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Custom variable specification",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"query": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"current": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
"options": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
},
},
},
"multi": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"includeAll": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"allValue": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"label": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"hide": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"skipUrlSync": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"description": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"allowCustomValue": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"valuesFormat": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"name", "query", "current", "options", "multi", "includeAll", "hide", "skipUrlSync", "allowCustomValue"},
},
},
Dependencies: []string{
DashboardVariableOption{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardDashboardLink(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Links with references to other dashboards or external resources",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"title": {
SchemaProps: spec.SchemaProps{
Description: "Title to display with the link",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"type": {
SchemaProps: spec.SchemaProps{
Description: "Link type. Accepted values are dashboards (to refer to another dashboard) and link (to refer to an external resource) FIXME: The type is generated as `type: DashboardLinkType | dashboardLinkType.Link;` but it should be `type: DashboardLinkType`",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"icon": {
SchemaProps: spec.SchemaProps{
Description: "Icon name to be displayed with the link",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"tooltip": {
SchemaProps: spec.SchemaProps{
Description: "Tooltip to display when the user hovers their mouse over it",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"url": {
SchemaProps: spec.SchemaProps{
Description: "Link URL. Only required/valid if the type is link",
Type: []string{"string"},
Format: "",
},
},
"tags": {
SchemaProps: spec.SchemaProps{
Description: "List of tags to limit the linked dashboards. If empty, all dashboards will be displayed. Only valid if the type is dashboards",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
"asDropdown": {
SchemaProps: spec.SchemaProps{
Description: "If true, all dashboards links will be displayed in a dropdown. If false, all dashboards links will be displayed side by side. Only valid if the type is dashboards",
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"targetBlank": {
SchemaProps: spec.SchemaProps{
Description: "If true, the link will be opened in a new tab",
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"includeVars": {
SchemaProps: spec.SchemaProps{
Description: "If true, includes current template variables values in the link as query params",
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"keepTime": {
SchemaProps: spec.SchemaProps{
Description: "If true, includes current time range in the link as query params",
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"placement": {
SchemaProps: spec.SchemaProps{
Description: "Placement can be used to display the link somewhere else on the dashboard other than above the visualisations.",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"title", "type", "icon", "tooltip", "tags", "asDropdown", "targetBlank", "includeVars", "keepTime"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardDataLink(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"title": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"url": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"targetBlank": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
Format: "",
},
},
},
Required: []string{"title", "url"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardDataQueryKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Description: "The kind of a DataQueryKind is the datasource type",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Format: "",
},
},
},
},
},
},
Required: []string{"kind", "spec"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardDataSourceRef(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"type": {
SchemaProps: spec.SchemaProps{
Description: "The plugin type-id",
Type: []string{"string"},
Format: "",
},
},
"uid": {
SchemaProps: spec.SchemaProps{
Description: "Specific datasource instance",
Type: []string{"string"},
Format: "",
},
},
},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardDataTransformerConfig(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Transformations allow to manipulate data returned by a query before the system applies a visualization. Using transformations you can: rename fields, join time series data, perform mathematical operations across queries, use the output of one transformation as the input to another transformation, etc.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"id": {
SchemaProps: spec.SchemaProps{
Description: "Unique identifier of transformer",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"disabled": {
SchemaProps: spec.SchemaProps{
Description: "Disabled transformations are skipped",
Type: []string{"boolean"},
Format: "",
},
},
"filter": {
SchemaProps: spec.SchemaProps{
Description: "Optional frame matcher. When missing it will be applied to all results",
Ref: ref(DashboardMatcherConfig{}.OpenAPIModelName()),
},
},
"topic": {
SchemaProps: spec.SchemaProps{
Description: "Where to pull DataFrames from as input to transformation",
Type: []string{"string"},
Format: "",
},
},
"options": {
SchemaProps: spec.SchemaProps{
Description: "Options to be passed to the transformer Valid options depend on the transformer id",
Type: []string{"object"},
Format: "",
},
},
},
Required: []string{"id", "options"},
},
},
Dependencies: []string{
DashboardMatcherConfig{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardDatasourceVariableKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Datasource variable kind",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardDatasourceVariableSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardDatasourceVariableSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardDatasourceVariableSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Datasource variable specification",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"pluginId": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"refresh": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"regex": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"current": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
"options": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
},
},
},
"multi": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"includeAll": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"allValue": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"label": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"hide": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"skipUrlSync": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"description": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"allowCustomValue": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
},
Required: []string{"name", "pluginId", "refresh", "regex", "current", "options", "multi", "includeAll", "hide", "skipUrlSync", "allowCustomValue"},
},
},
Dependencies: []string{
DashboardVariableOption{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardDynamicConfigValue(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"id": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"value": {
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Format: "",
},
},
},
Required: []string{"id"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardElementReference(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"kind", "name"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardFetchOptions(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"method": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"url": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"body": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"queryParams": {
SchemaProps: spec.SchemaProps{
Description: "These are 2D arrays of strings, each representing a key-value pair We are defining them this way because we can't generate a go struct that that would have exactly two strings in each sub-array",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
},
},
},
"headers": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
},
},
},
},
Required: []string{"method", "url"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardFieldColor(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Map a field to a color.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"mode": {
SchemaProps: spec.SchemaProps{
Description: "The main color scheme mode.",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"fixedColor": {
SchemaProps: spec.SchemaProps{
Description: "The fixed color value for fixed or shades color modes.",
Type: []string{"string"},
Format: "",
},
},
"seriesBy": {
SchemaProps: spec.SchemaProps{
Description: "Some visualizations need to know how to assign a series color from by value color schemes.",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"mode"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardFieldConfig(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "The data model used in Grafana, namely the data frame, is a columnar-oriented table structure that unifies both time series and table query results. Each column within this structure is called a field. A field can represent a single time series or table column. Field options allow you to change how the data is displayed in your visualizations.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"displayName": {
SchemaProps: spec.SchemaProps{
Description: "The display value for this field. This supports template variables blank is auto",
Type: []string{"string"},
Format: "",
},
},
"displayNameFromDS": {
SchemaProps: spec.SchemaProps{
Description: "This can be used by data sources that return and explicit naming structure for values and labels When this property is configured, this value is used rather than the default naming strategy.",
Type: []string{"string"},
Format: "",
},
},
"description": {
SchemaProps: spec.SchemaProps{
Description: "Human readable field metadata",
Type: []string{"string"},
Format: "",
},
},
"path": {
SchemaProps: spec.SchemaProps{
Description: "An explicit path to the field in the datasource. When the frame meta includes a path, This will default to `${frame.meta.path}/${field.name}\n\nWhen defined, this value can be used as an identifier within the datasource scope, and may be used to update the results",
Type: []string{"string"},
Format: "",
},
},
"writeable": {
SchemaProps: spec.SchemaProps{
Description: "True if data source can write a value to the path. Auth/authz are supported separately",
Type: []string{"boolean"},
Format: "",
},
},
"filterable": {
SchemaProps: spec.SchemaProps{
Description: "True if data source field supports ad-hoc filters",
Type: []string{"boolean"},
Format: "",
},
},
"unit": {
SchemaProps: spec.SchemaProps{
Description: "Unit a field should use. The unit you select is applied to all fields except time. You can use the units ID available in Grafana or a custom unit. Available units in Grafana: https://github.com/grafana/grafana/blob/main/packages/grafana-data/src/valueFormats/categories.ts As custom unit, you can use the following formats: `suffix:<suffix>` for custom unit that should go after value. `prefix:<prefix>` for custom unit that should go before value. `time:<format>` For custom date time formats type for example `time:YYYY-MM-DD`. `si:<base scale><unit characters>` for custom SI units. For example: `si: mF`. This one is a bit more advanced as you can specify both a unit and the source data scale. So if your source data is represented as milli (thousands of) something prefix the unit with that SI scale character. `count:<unit>` for a custom count unit. `currency:<unit>` for custom a currency unit.",
Type: []string{"string"},
Format: "",
},
},
"decimals": {
SchemaProps: spec.SchemaProps{
Description: "Specify the number of decimals Grafana includes in the rendered value. If you leave this field blank, Grafana automatically truncates the number of decimals based on the value. For example 1.1234 will display as 1.12 and 100.456 will display as 100. To display all decimals, set the unit to `String`.",
Type: []string{"number"},
Format: "double",
},
},
"min": {
SchemaProps: spec.SchemaProps{
Description: "The minimum value used in percentage threshold calculations. Leave blank for auto calculation based on all series and fields.",
Type: []string{"number"},
Format: "double",
},
},
"max": {
SchemaProps: spec.SchemaProps{
Description: "The maximum value used in percentage threshold calculations. Leave blank for auto calculation based on all series and fields.",
Type: []string{"number"},
Format: "double",
},
},
"mappings": {
SchemaProps: spec.SchemaProps{
Description: "Convert input values into a display string",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardValueMapOrRangeMapOrRegexMapOrSpecialValueMap{}.OpenAPIModelName()),
},
},
},
},
},
"thresholds": {
SchemaProps: spec.SchemaProps{
Description: "Map numeric values to states",
Ref: ref(DashboardThresholdsConfig{}.OpenAPIModelName()),
},
},
"color": {
SchemaProps: spec.SchemaProps{
Description: "Panel color configuration",
Ref: ref(DashboardFieldColor{}.OpenAPIModelName()),
},
},
"links": {
SchemaProps: spec.SchemaProps{
Description: "The behavior when clicking on a result",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Format: "",
},
},
},
},
},
"actions": {
SchemaProps: spec.SchemaProps{
Description: "Define interactive HTTP requests that can be triggered from data visualizations.",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardAction{}.OpenAPIModelName()),
},
},
},
},
},
"noValue": {
SchemaProps: spec.SchemaProps{
Description: "Alternative to empty string",
Type: []string{"string"},
Format: "",
},
},
"custom": {
SchemaProps: spec.SchemaProps{
Description: "custom is specified by the FieldConfig field in panel plugin schemas.",
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Format: "",
},
},
},
},
},
"fieldMinMax": {
SchemaProps: spec.SchemaProps{
Description: "Calculate min max per field",
Type: []string{"boolean"},
Format: "",
},
},
"nullValueMode": {
SchemaProps: spec.SchemaProps{
Description: "How null values should be handled when calculating field stats \"null\" - Include null values, \"connected\" - Ignore nulls, \"null as zero\" - Treat nulls as zero",
Type: []string{"string"},
Format: "",
},
},
},
},
},
Dependencies: []string{
DashboardAction{}.OpenAPIModelName(), DashboardFieldColor{}.OpenAPIModelName(), DashboardThresholdsConfig{}.OpenAPIModelName(), DashboardValueMapOrRangeMapOrRegexMapOrSpecialValueMap{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardFieldConfigSource(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "The data model used in Grafana, namely the data frame, is a columnar-oriented table structure that unifies both time series and table query results. Each column within this structure is called a field. A field can represent a single time series or table column. Field options allow you to change how the data is displayed in your visualizations.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"defaults": {
SchemaProps: spec.SchemaProps{
Description: "Defaults are the options applied to all fields.",
Default: map[string]interface{}{},
Ref: ref(DashboardFieldConfig{}.OpenAPIModelName()),
},
},
"overrides": {
SchemaProps: spec.SchemaProps{
Description: "Overrides are the options applied to specific fields overriding the defaults.",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardV2alpha1FieldConfigSourceOverrides{}.OpenAPIModelName()),
},
},
},
},
},
},
Required: []string{"defaults", "overrides"},
},
},
Dependencies: []string{
DashboardFieldConfig{}.OpenAPIModelName(), DashboardV2alpha1FieldConfigSourceOverrides{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardGridLayoutItemKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardGridLayoutItemSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardGridLayoutItemSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardGridLayoutItemSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"x": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
"y": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
"width": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
"height": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
"element": {
SchemaProps: spec.SchemaProps{
Description: "reference to a PanelKind from dashboard.spec.elements Expressed as JSON Schema reference",
Default: map[string]interface{}{},
Ref: ref(DashboardElementReference{}.OpenAPIModelName()),
},
},
"repeat": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardRepeatOptions{}.OpenAPIModelName()),
},
},
},
Required: []string{"x", "y", "width", "height", "element"},
},
},
Dependencies: []string{
DashboardElementReference{}.OpenAPIModelName(), DashboardRepeatOptions{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardGridLayoutKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardGridLayoutSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardGridLayoutSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardGridLayoutKindOrAutoGridLayoutKindOrTabsLayoutKindOrRowsLayoutKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"GridLayoutKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardGridLayoutKind{}.OpenAPIModelName()),
},
},
"AutoGridLayoutKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardAutoGridLayoutKind{}.OpenAPIModelName()),
},
},
"TabsLayoutKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardTabsLayoutKind{}.OpenAPIModelName()),
},
},
"RowsLayoutKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardRowsLayoutKind{}.OpenAPIModelName()),
},
},
},
},
},
Dependencies: []string{
DashboardAutoGridLayoutKind{}.OpenAPIModelName(), DashboardGridLayoutKind{}.OpenAPIModelName(), DashboardRowsLayoutKind{}.OpenAPIModelName(), DashboardTabsLayoutKind{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardGridLayoutKindOrRowsLayoutKindOrAutoGridLayoutKindOrTabsLayoutKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"GridLayoutKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardGridLayoutKind{}.OpenAPIModelName()),
},
},
"RowsLayoutKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardRowsLayoutKind{}.OpenAPIModelName()),
},
},
"AutoGridLayoutKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardAutoGridLayoutKind{}.OpenAPIModelName()),
},
},
"TabsLayoutKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardTabsLayoutKind{}.OpenAPIModelName()),
},
},
},
},
},
Dependencies: []string{
DashboardAutoGridLayoutKind{}.OpenAPIModelName(), DashboardGridLayoutKind{}.OpenAPIModelName(), DashboardRowsLayoutKind{}.OpenAPIModelName(), DashboardTabsLayoutKind{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardGridLayoutSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"items": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardGridLayoutItemKind{}.OpenAPIModelName()),
},
},
},
},
},
},
Required: []string{"items"},
},
},
Dependencies: []string{
DashboardGridLayoutItemKind{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardGroupByVariableKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Group variable kind",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardGroupByVariableSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardGroupByVariableSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardGroupByVariableSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "GroupBy variable specification",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"datasource": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardDataSourceRef{}.OpenAPIModelName()),
},
},
"defaultValue": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
"current": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
"options": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
},
},
},
"multi": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"label": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"hide": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"skipUrlSync": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"description": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"name", "current", "options", "multi", "hide", "skipUrlSync"},
},
},
Dependencies: []string{
DashboardDataSourceRef{}.OpenAPIModelName(), DashboardVariableOption{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardInfinityOptions(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"method": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"url": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"body": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"queryParams": {
SchemaProps: spec.SchemaProps{
Description: "These are 2D arrays of strings, each representing a key-value pair We are defining them this way because we can't generate a go struct that that would have exactly two strings in each sub-array",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
},
},
},
"datasourceUid": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"headers": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
},
},
},
},
Required: []string{"method", "url", "datasourceUid"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardIntervalVariableKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Interval variable kind",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardIntervalVariableSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardIntervalVariableSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardIntervalVariableSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Interval variable specification",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"query": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"current": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
"options": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
},
},
},
"auto": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"auto_min": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"auto_count": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
"refresh": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"label": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"hide": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"skipUrlSync": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"description": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"name", "query", "current", "options", "auto", "auto_min", "auto_count", "refresh", "hide", "skipUrlSync"},
},
},
Dependencies: []string{
DashboardVariableOption{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardJSONCodec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "DashboardJSONCodec is an implementation of resource.Codec for kubernetes JSON encoding",
Type: []string{"object"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardLibraryPanelKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardLibraryPanelKindSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardLibraryPanelKindSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardLibraryPanelKindSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"id": {
SchemaProps: spec.SchemaProps{
Description: "Panel ID for the library panel in the dashboard",
Default: 0,
Type: []string{"number"},
Format: "double",
},
},
"title": {
SchemaProps: spec.SchemaProps{
Description: "Title for the library panel in the dashboard",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"libraryPanel": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardLibraryPanelRef{}.OpenAPIModelName()),
},
},
},
Required: []string{"id", "title", "libraryPanel"},
},
},
Dependencies: []string{
DashboardLibraryPanelRef{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardLibraryPanelRef(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "A library panel is a reusable panel that you can use in any dashboard. When you make a change to a library panel, that change propagates to all instances of where the panel is used. Library panels streamline reuse of panels across multiple dashboards.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"name": {
SchemaProps: spec.SchemaProps{
Description: "Library panel name",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"uid": {
SchemaProps: spec.SchemaProps{
Description: "Library panel uid",
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"name", "uid"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardList(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Description: "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
Type: []string{"string"},
Format: "",
},
},
"apiVersion": {
SchemaProps: spec.SchemaProps{
Description: "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources",
Type: []string{"string"},
Format: "",
},
},
"metadata": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref("io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta"),
},
},
"items": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(Dashboard{}.OpenAPIModelName()),
},
},
},
},
},
},
Required: []string{"metadata", "items"},
},
},
Dependencies: []string{
Dashboard{}.OpenAPIModelName(), "io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta"},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardMatcherConfig(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Matcher is a predicate configuration. Based on the config a set of field(s) or values is filtered in order to apply override / transformation. It comes with in id ( to resolve implementation from registry) and a configuration that’s specific to a particular matcher type.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"id": {
SchemaProps: spec.SchemaProps{
Description: "The matcher id. This is used to find the matcher implementation from registry.",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"options": {
SchemaProps: spec.SchemaProps{
Description: "The matcher options. This is specific to the matcher implementation.",
Type: []string{"object"},
Format: "",
},
},
},
Required: []string{"id"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardMetricFindValue(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Define the MetricFindValue type",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"text": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"value": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardStringOrFloat64{}.OpenAPIModelName()),
},
},
"group": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"expandable": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
Format: "",
},
},
},
Required: []string{"text"},
},
},
Dependencies: []string{
DashboardStringOrFloat64{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardPanelKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardPanelSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardPanelSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardPanelKindOrLibraryPanelKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"PanelKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardPanelKind{}.OpenAPIModelName()),
},
},
"LibraryPanelKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardLibraryPanelKind{}.OpenAPIModelName()),
},
},
},
},
},
Dependencies: []string{
DashboardLibraryPanelKind{}.OpenAPIModelName(), DashboardPanelKind{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardPanelQueryKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardPanelQuerySpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardPanelQuerySpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardPanelQuerySpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"query": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardDataQueryKind{}.OpenAPIModelName()),
},
},
"datasource": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardDataSourceRef{}.OpenAPIModelName()),
},
},
"refId": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"hidden": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
},
Required: []string{"query", "refId", "hidden"},
},
},
Dependencies: []string{
DashboardDataQueryKind{}.OpenAPIModelName(), DashboardDataSourceRef{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardPanelSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"id": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"number"},
Format: "double",
},
},
"title": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"description": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"links": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardDataLink{}.OpenAPIModelName()),
},
},
},
},
},
"data": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardQueryGroupKind{}.OpenAPIModelName()),
},
},
"vizConfig": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVizConfigKind{}.OpenAPIModelName()),
},
},
"transparent": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
Format: "",
},
},
},
Required: []string{"id", "title", "description", "links", "data", "vizConfig"},
},
},
Dependencies: []string{
DashboardDataLink{}.OpenAPIModelName(), DashboardQueryGroupKind{}.OpenAPIModelName(), DashboardVizConfigKind{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardQueryGroupKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardQueryGroupSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardQueryGroupSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardQueryGroupSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"queries": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardPanelQueryKind{}.OpenAPIModelName()),
},
},
},
},
},
"transformations": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardTransformationKind{}.OpenAPIModelName()),
},
},
},
},
},
"queryOptions": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardQueryOptionsSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"queries", "transformations", "queryOptions"},
},
},
Dependencies: []string{
DashboardPanelQueryKind{}.OpenAPIModelName(), DashboardQueryOptionsSpec{}.OpenAPIModelName(), DashboardTransformationKind{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardQueryOptionsSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"timeFrom": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"maxDataPoints": {
SchemaProps: spec.SchemaProps{
Type: []string{"integer"},
Format: "int64",
},
},
"timeShift": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"queryCachingTTL": {
SchemaProps: spec.SchemaProps{
Type: []string{"integer"},
Format: "int64",
},
},
"interval": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"cacheTimeout": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"hideTimeOverride": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
Format: "",
},
},
},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardQueryVariableKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Query variable kind",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardQueryVariableSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardQueryVariableSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardQueryVariableKindOrTextVariableKindOrConstantVariableKindOrDatasourceVariableKindOrIntervalVariableKindOrCustomVariableKindOrGroupByVariableKindOrAdhocVariableKindOrSwitchVariableKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"QueryVariableKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardQueryVariableKind{}.OpenAPIModelName()),
},
},
"TextVariableKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardTextVariableKind{}.OpenAPIModelName()),
},
},
"ConstantVariableKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardConstantVariableKind{}.OpenAPIModelName()),
},
},
"DatasourceVariableKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardDatasourceVariableKind{}.OpenAPIModelName()),
},
},
"IntervalVariableKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardIntervalVariableKind{}.OpenAPIModelName()),
},
},
"CustomVariableKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardCustomVariableKind{}.OpenAPIModelName()),
},
},
"GroupByVariableKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardGroupByVariableKind{}.OpenAPIModelName()),
},
},
"AdhocVariableKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardAdhocVariableKind{}.OpenAPIModelName()),
},
},
"SwitchVariableKind": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardSwitchVariableKind{}.OpenAPIModelName()),
},
},
},
},
},
Dependencies: []string{
DashboardAdhocVariableKind{}.OpenAPIModelName(), DashboardConstantVariableKind{}.OpenAPIModelName(), DashboardCustomVariableKind{}.OpenAPIModelName(), DashboardDatasourceVariableKind{}.OpenAPIModelName(), DashboardGroupByVariableKind{}.OpenAPIModelName(), DashboardIntervalVariableKind{}.OpenAPIModelName(), DashboardQueryVariableKind{}.OpenAPIModelName(), DashboardSwitchVariableKind{}.OpenAPIModelName(), DashboardTextVariableKind{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardQueryVariableSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Query variable specification",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"current": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
"label": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"hide": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"refresh": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"skipUrlSync": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"description": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"datasource": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardDataSourceRef{}.OpenAPIModelName()),
},
},
"query": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardDataQueryKind{}.OpenAPIModelName()),
},
},
"regex": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"regexApplyTo": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"sort": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"definition": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"options": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
},
},
},
"multi": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"includeAll": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"allValue": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"placeholder": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"allowCustomValue": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"staticOptions": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
},
},
},
"staticOptionsOrder": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"name", "current", "hide", "refresh", "skipUrlSync", "query", "regex", "sort", "options", "multi", "includeAll", "allowCustomValue"},
},
},
Dependencies: []string{
DashboardDataQueryKind{}.OpenAPIModelName(), DashboardDataSourceRef{}.OpenAPIModelName(), DashboardVariableOption{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardRangeMap(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Maps numerical ranges to a display text and color. For example, if a value is within a certain range, you can configure a range value mapping to display Low or High rather than the number.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"type": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"options": {
SchemaProps: spec.SchemaProps{
Description: "Range to match against and the result to apply when the value is within the range",
Default: map[string]interface{}{},
Ref: ref(DashboardV2alpha1RangeMapOptions{}.OpenAPIModelName()),
},
},
},
Required: []string{"type", "options"},
},
},
Dependencies: []string{
DashboardV2alpha1RangeMapOptions{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardRegexMap(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Maps regular expressions to replacement text and a color. For example, if a value is www.example.com, you can configure a regex value mapping so that Grafana displays www and truncates the domain.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"type": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"options": {
SchemaProps: spec.SchemaProps{
Description: "Regular expression to match against and the result to apply when the value matches the regex",
Default: map[string]interface{}{},
Ref: ref(DashboardV2alpha1RegexMapOptions{}.OpenAPIModelName()),
},
},
},
Required: []string{"type", "options"},
},
},
Dependencies: []string{
DashboardV2alpha1RegexMapOptions{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardRepeatOptions(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"mode": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"value": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"direction": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"maxPerRow": {
SchemaProps: spec.SchemaProps{
Type: []string{"integer"},
Format: "int64",
},
},
},
Required: []string{"mode", "value"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardRowRepeatOptions(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"mode": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"value": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"mode", "value"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardRowsLayoutKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardRowsLayoutSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardRowsLayoutSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardRowsLayoutRowKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardRowsLayoutRowSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardRowsLayoutRowSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardRowsLayoutRowSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"title": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"collapse": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
Format: "",
},
},
"hideHeader": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
Format: "",
},
},
"fillScreen": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
Format: "",
},
},
"conditionalRendering": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardConditionalRenderingGroupKind{}.OpenAPIModelName()),
},
},
"repeat": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardRowRepeatOptions{}.OpenAPIModelName()),
},
},
"layout": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardGridLayoutKindOrAutoGridLayoutKindOrTabsLayoutKindOrRowsLayoutKind{}.OpenAPIModelName()),
},
},
},
Required: []string{"layout"},
},
},
Dependencies: []string{
DashboardConditionalRenderingGroupKind{}.OpenAPIModelName(), DashboardGridLayoutKindOrAutoGridLayoutKindOrTabsLayoutKindOrRowsLayoutKind{}.OpenAPIModelName(), DashboardRowRepeatOptions{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardRowsLayoutSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"rows": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardRowsLayoutRowKind{}.OpenAPIModelName()),
},
},
},
},
},
},
Required: []string{"rows"},
},
},
Dependencies: []string{
DashboardRowsLayoutRowKind{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"annotations": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardAnnotationQueryKind{}.OpenAPIModelName()),
},
},
},
},
},
"cursorSync": {
SchemaProps: spec.SchemaProps{
Description: "Configuration of dashboard cursor sync behavior. \"Off\" for no shared crosshair or tooltip (default). \"Crosshair\" for shared crosshair. \"Tooltip\" for shared crosshair AND shared tooltip.",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"description": {
SchemaProps: spec.SchemaProps{
Description: "Description of dashboard.",
Type: []string{"string"},
Format: "",
},
},
"editable": {
SchemaProps: spec.SchemaProps{
Description: "Whether a dashboard is editable or not.",
Type: []string{"boolean"},
Format: "",
},
},
"elements": {
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardPanelKindOrLibraryPanelKind{}.OpenAPIModelName()),
},
},
},
},
},
"layout": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardGridLayoutKindOrRowsLayoutKindOrAutoGridLayoutKindOrTabsLayoutKind{}.OpenAPIModelName()),
},
},
"links": {
SchemaProps: spec.SchemaProps{
Description: "Links with references to other dashboards or external websites.",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardDashboardLink{}.OpenAPIModelName()),
},
},
},
},
},
"liveNow": {
SchemaProps: spec.SchemaProps{
Description: "When set to true, the dashboard will redraw panels at an interval matching the pixel width. This will keep data \"moving left\" regardless of the query refresh rate. This setting helps avoid dashboards presenting stale live data.",
Type: []string{"boolean"},
Format: "",
},
},
"preload": {
SchemaProps: spec.SchemaProps{
Description: "When set to true, the dashboard will load all panels in the dashboard when it's loaded.",
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"revision": {
SchemaProps: spec.SchemaProps{
Description: "Plugins only. The version of the dashboard installed together with the plugin. This is used to determine if the dashboard should be updated when the plugin is updated.",
Type: []string{"integer"},
Format: "int32",
},
},
"tags": {
SchemaProps: spec.SchemaProps{
Description: "Tags associated with dashboard.",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
"timeSettings": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardTimeSettingsSpec{}.OpenAPIModelName()),
},
},
"title": {
SchemaProps: spec.SchemaProps{
Description: "Title of dashboard.",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"variables": {
SchemaProps: spec.SchemaProps{
Description: "Configured template variables.",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardQueryVariableKindOrTextVariableKindOrConstantVariableKindOrDatasourceVariableKindOrIntervalVariableKindOrCustomVariableKindOrGroupByVariableKindOrAdhocVariableKindOrSwitchVariableKind{}.OpenAPIModelName()),
},
},
},
},
},
},
Required: []string{"annotations", "cursorSync", "elements", "layout", "links", "preload", "tags", "timeSettings", "title", "variables"},
},
},
Dependencies: []string{
DashboardAnnotationQueryKind{}.OpenAPIModelName(), DashboardDashboardLink{}.OpenAPIModelName(), DashboardGridLayoutKindOrRowsLayoutKindOrAutoGridLayoutKindOrTabsLayoutKind{}.OpenAPIModelName(), DashboardPanelKindOrLibraryPanelKind{}.OpenAPIModelName(), DashboardQueryVariableKindOrTextVariableKindOrConstantVariableKindOrDatasourceVariableKindOrIntervalVariableKindOrCustomVariableKindOrGroupByVariableKindOrAdhocVariableKindOrSwitchVariableKind{}.OpenAPIModelName(), DashboardTimeSettingsSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardSpecialValueMap(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Maps special values like Null, NaN (not a number), and boolean values like true and false to a display text and color. See SpecialValueMatch to see the list of special values. For example, you can configure a special value mapping so that null values appear as N/A.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"type": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"options": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardV2alpha1SpecialValueMapOptions{}.OpenAPIModelName()),
},
},
},
Required: []string{"type", "options"},
},
},
Dependencies: []string{
DashboardV2alpha1SpecialValueMapOptions{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardStatus(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"conversion": {
SchemaProps: spec.SchemaProps{
Description: "Optional conversion status.",
Ref: ref(DashboardConversionStatus{}.OpenAPIModelName()),
},
},
},
},
},
Dependencies: []string{
DashboardConversionStatus{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardStringOrArrayOfString(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"String": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"ArrayOfString": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardStringOrFloat64(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"String": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"Float64": {
SchemaProps: spec.SchemaProps{
Type: []string{"number"},
Format: "double",
},
},
},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardSwitchVariableKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardSwitchVariableSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardSwitchVariableSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardSwitchVariableSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Switch variable specification",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"current": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"enabledValue": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"disabledValue": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"label": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"hide": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"skipUrlSync": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"description": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"name", "current", "enabledValue", "disabledValue", "hide", "skipUrlSync"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardTabRepeatOptions(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"mode": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"value": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"mode", "value"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardTabsLayoutKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardTabsLayoutSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardTabsLayoutSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardTabsLayoutSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"tabs": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardTabsLayoutTabKind{}.OpenAPIModelName()),
},
},
},
},
},
},
Required: []string{"tabs"},
},
},
Dependencies: []string{
DashboardTabsLayoutTabKind{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardTabsLayoutTabKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardTabsLayoutTabSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardTabsLayoutTabSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardTabsLayoutTabSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"title": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"layout": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardGridLayoutKindOrRowsLayoutKindOrAutoGridLayoutKindOrTabsLayoutKind{}.OpenAPIModelName()),
},
},
"conditionalRendering": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardConditionalRenderingGroupKind{}.OpenAPIModelName()),
},
},
"repeat": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardTabRepeatOptions{}.OpenAPIModelName()),
},
},
},
Required: []string{"layout"},
},
},
Dependencies: []string{
DashboardConditionalRenderingGroupKind{}.OpenAPIModelName(), DashboardGridLayoutKindOrRowsLayoutKindOrAutoGridLayoutKindOrTabsLayoutKind{}.OpenAPIModelName(), DashboardTabRepeatOptions{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardTextVariableKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Text variable kind",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardTextVariableSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardTextVariableSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardTextVariableSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Text variable specification",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"current": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVariableOption{}.OpenAPIModelName()),
},
},
"query": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"label": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"hide": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"skipUrlSync": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"description": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"name", "current", "query", "hide", "skipUrlSync"},
},
},
Dependencies: []string{
DashboardVariableOption{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardThreshold(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"value": {
SchemaProps: spec.SchemaProps{
Description: "Value null means -Infinity",
Type: []string{"number"},
Format: "double",
},
},
"color": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"value", "color"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardThresholdsConfig(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"mode": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"steps": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardThreshold{}.OpenAPIModelName()),
},
},
},
},
},
},
Required: []string{"mode", "steps"},
},
},
Dependencies: []string{
DashboardThreshold{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardTimeRangeOption(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"display": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"from": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"to": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"display", "from", "to"},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardTimeSettingsSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Time configuration It defines the default time config for the time picker, the refresh picker for the specific dashboard.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"timezone": {
SchemaProps: spec.SchemaProps{
Description: "Timezone of dashboard. Accepted values are IANA TZDB zone ID or \"browser\" or \"utc\".",
Type: []string{"string"},
Format: "",
},
},
"from": {
SchemaProps: spec.SchemaProps{
Description: "Start time range for dashboard. Accepted values are relative time strings like \"now-6h\" or absolute time strings like \"2020-07-10T08:00:00.000Z\".",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"to": {
SchemaProps: spec.SchemaProps{
Description: "End time range for dashboard. Accepted values are relative time strings like \"now-6h\" or absolute time strings like \"2020-07-10T08:00:00.000Z\".",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"autoRefresh": {
SchemaProps: spec.SchemaProps{
Description: "Refresh rate of dashboard. Represented via interval string, e.g. \"5s\", \"1m\", \"1h\", \"1d\". v1: refresh",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"autoRefreshIntervals": {
SchemaProps: spec.SchemaProps{
Description: "Interval options available in the refresh picker dropdown. v1: timepicker.refresh_intervals",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
"quickRanges": {
SchemaProps: spec.SchemaProps{
Description: "Selectable options available in the time picker dropdown. Has no effect on provisioned dashboard. v1: timepicker.quick_ranges , not exposed in the UI",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardTimeRangeOption{}.OpenAPIModelName()),
},
},
},
},
},
"hideTimepicker": {
SchemaProps: spec.SchemaProps{
Description: "Whether timepicker is visible or not. v1: timepicker.hidden",
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"weekStart": {
SchemaProps: spec.SchemaProps{
Description: "Day when the week starts. Expressed by the name of the day in lowercase, e.g. \"monday\".",
Type: []string{"string"},
Format: "",
},
},
"fiscalYearStartMonth": {
SchemaProps: spec.SchemaProps{
Description: "The month that the fiscal year starts on. 0 = January, 11 = December",
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
"nowDelay": {
SchemaProps: spec.SchemaProps{
Description: "Override the now time by entering a time delay. Use this option to accommodate known delays in data aggregation to avoid null values. v1: timepicker.nowDelay",
Type: []string{"string"},
Format: "",
},
},
},
Required: []string{"from", "to", "autoRefresh", "autoRefreshIntervals", "hideTimepicker", "fiscalYearStartMonth"},
},
},
Dependencies: []string{
DashboardTimeRangeOption{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardTransformationKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Description: "The kind of a TransformationKind is the transformation ID",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardDataTransformerConfig{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardDataTransformerConfig{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardV2alpha1ActionStyle(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"backgroundColor": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardV2alpha1FieldConfigSourceOverrides(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"__systemRef": {
SchemaProps: spec.SchemaProps{
Description: "Describes config override rules created when interacting with Grafana.",
Type: []string{"string"},
Format: "",
},
},
"matcher": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardMatcherConfig{}.OpenAPIModelName()),
},
},
"properties": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardDynamicConfigValue{}.OpenAPIModelName()),
},
},
},
},
},
},
Required: []string{"matcher", "properties"},
},
},
Dependencies: []string{
DashboardDynamicConfigValue{}.OpenAPIModelName(), DashboardMatcherConfig{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardV2alpha1RangeMapOptions(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"from": {
SchemaProps: spec.SchemaProps{
Description: "Min value of the range. It can be null which means -Infinity",
Type: []string{"number"},
Format: "double",
},
},
"to": {
SchemaProps: spec.SchemaProps{
Description: "Max value of the range. It can be null which means +Infinity",
Type: []string{"number"},
Format: "double",
},
},
"result": {
SchemaProps: spec.SchemaProps{
Description: "Config to apply when the value is within the range",
Default: map[string]interface{}{},
Ref: ref(DashboardValueMappingResult{}.OpenAPIModelName()),
},
},
},
Required: []string{"from", "to", "result"},
},
},
Dependencies: []string{
DashboardValueMappingResult{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardV2alpha1RegexMapOptions(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"pattern": {
SchemaProps: spec.SchemaProps{
Description: "Regular expression to match against",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"result": {
SchemaProps: spec.SchemaProps{
Description: "Config to apply when the value matches the regex",
Default: map[string]interface{}{},
Ref: ref(DashboardValueMappingResult{}.OpenAPIModelName()),
},
},
},
Required: []string{"pattern", "result"},
},
},
Dependencies: []string{
DashboardValueMappingResult{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardV2alpha1SpecialValueMapOptions(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"match": {
SchemaProps: spec.SchemaProps{
Description: "Special value to match against",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"result": {
SchemaProps: spec.SchemaProps{
Description: "Config to apply when the value matches the special value",
Default: map[string]interface{}{},
Ref: ref(DashboardValueMappingResult{}.OpenAPIModelName()),
},
},
},
Required: []string{"match", "result"},
},
},
Dependencies: []string{
DashboardValueMappingResult{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardValueMap(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Maps text values to a color or different display text and color. For example, you can configure a value mapping so that all instances of the value 10 appear as Perfection! rather than the number.",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"type": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"options": {
SchemaProps: spec.SchemaProps{
Description: "Map with <value_to_match>: ValueMappingResult. For example: { \"10\": { text: \"Perfection!\", color: \"green\" } }",
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardValueMappingResult{}.OpenAPIModelName()),
},
},
},
},
},
},
Required: []string{"type", "options"},
},
},
Dependencies: []string{
DashboardValueMappingResult{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardValueMapOrRangeMapOrRegexMapOrSpecialValueMap(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"ValueMap": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardValueMap{}.OpenAPIModelName()),
},
},
"RangeMap": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardRangeMap{}.OpenAPIModelName()),
},
},
"RegexMap": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardRegexMap{}.OpenAPIModelName()),
},
},
"SpecialValueMap": {
SchemaProps: spec.SchemaProps{
Ref: ref(DashboardSpecialValueMap{}.OpenAPIModelName()),
},
},
},
},
},
Dependencies: []string{
DashboardRangeMap{}.OpenAPIModelName(), DashboardRegexMap{}.OpenAPIModelName(), DashboardSpecialValueMap{}.OpenAPIModelName(), DashboardValueMap{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardValueMappingResult(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Result used as replacement with text and color when the value matches",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"text": {
SchemaProps: spec.SchemaProps{
Description: "Text to display when the value matches",
Type: []string{"string"},
Format: "",
},
},
"color": {
SchemaProps: spec.SchemaProps{
Description: "Text to use when the value matches",
Type: []string{"string"},
Format: "",
},
},
"icon": {
SchemaProps: spec.SchemaProps{
Description: "Icon to display when the value matches. Only specific visualizations.",
Type: []string{"string"},
Format: "",
},
},
"index": {
SchemaProps: spec.SchemaProps{
Description: "Position in the mapping array. Only used internally.",
Type: []string{"integer"},
Format: "int32",
},
},
},
},
},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardVariableOption(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Variable option specification",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"selected": {
SchemaProps: spec.SchemaProps{
Description: "Whether the option is selected or not",
Type: []string{"boolean"},
Format: "",
},
},
"text": {
SchemaProps: spec.SchemaProps{
Description: "Text to be displayed for the option",
Ref: ref(DashboardStringOrArrayOfString{}.OpenAPIModelName()),
},
},
"value": {
SchemaProps: spec.SchemaProps{
Description: "Value of the option",
Ref: ref(DashboardStringOrArrayOfString{}.OpenAPIModelName()),
},
},
"properties": {
SchemaProps: spec.SchemaProps{
Description: "Additional properties for multi-props variables",
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
},
Required: []string{"text", "value"},
},
},
Dependencies: []string{
DashboardStringOrArrayOfString{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardVizConfigKind(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Description: "The kind of a VizConfigKind is the plugin ID",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardVizConfigSpec{}.OpenAPIModelName()),
},
},
},
Required: []string{"kind", "spec"},
},
},
Dependencies: []string{
DashboardVizConfigSpec{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardVizConfigSpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "--- Kinds ---",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"pluginVersion": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"options": {
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Format: "",
},
},
},
},
},
"fieldConfig": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardFieldConfigSource{}.OpenAPIModelName()),
},
},
},
Required: []string{"pluginVersion", "options", "fieldConfig"},
},
},
Dependencies: []string{
DashboardFieldConfigSource{}.OpenAPIModelName()},
}
}
func schema_pkg_apis_dashboard_v2alpha1_DashboardWithAccessInfo(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "This is like the legacy DTO where access and metadata are all returned in a single call",
Type: []string{"object"},
Properties: map[string]spec.Schema{
"kind": {
SchemaProps: spec.SchemaProps{
Description: "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
Type: []string{"string"},
Format: "",
},
},
"apiVersion": {
SchemaProps: spec.SchemaProps{
Description: "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources",
Type: []string{"string"},
Format: "",
},
},
"metadata": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref("io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta"),
},
},
"spec": {
SchemaProps: spec.SchemaProps{
Description: "Spec is the spec of the Dashboard",
Default: map[string]interface{}{},
Ref: ref(DashboardSpec{}.OpenAPIModelName()),
},
},
"status": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardStatus{}.OpenAPIModelName()),
},
},
"access": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref(DashboardAccess{}.OpenAPIModelName()),
},
},
},
Required: []string{"metadata", "spec", "status", "access"},
},
},
Dependencies: []string{
DashboardAccess{}.OpenAPIModelName(), DashboardSpec{}.OpenAPIModelName(), DashboardStatus{}.OpenAPIModelName(), "io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta"},
}
}
|
go
|
github
|
https://github.com/grafana/grafana
|
apps/dashboard/pkg/apis/dashboard/v2alpha1/zz_generated.openapi.go
|
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000, Intel Corporation, all rights reserved.
// Copyright (C) 2013, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "precomp.hpp"
#include "rho.h"
#include <iostream>
#include "usac.hpp"
namespace cv
{
static inline double scaleFor(double x){
return (std::fabs(x) > std::numeric_limits<float>::epsilon()) ? 1./x : 1.;
}
static inline float scaleFor(float x){
return (std::fabs(x) > std::numeric_limits<float>::epsilon()) ? 1.f/x : 1.f;
}
/**
* This class estimates a homography \f$H\in \mathbb{R}^{3\times 3}\f$
* between \f$\mathbf{x} \in \mathbb{R}^3\f$ and
* \f$\mathbf{X} \in \mathbb{R}^3\f$ using DLT (direct linear transform)
* with algebraic distance.
*
* \f[
* \lambda \mathbf{x} = H \mathbf{X}
* \f]
* where \f$\lambda \in \mathbb{R} \f$.
*
*/
class HomographyEstimatorCallback CV_FINAL : public PointSetRegistrator::Callback
{
public:
bool checkSubset( InputArray _ms1, InputArray _ms2, int count ) const CV_OVERRIDE
{
Mat ms1 = _ms1.getMat(), ms2 = _ms2.getMat();
if( haveCollinearPoints(ms1, count) || haveCollinearPoints(ms2, count) )
return false;
// We check whether the minimal set of points for the homography estimation
// are geometrically consistent. We check if every 3 correspondences sets
// fulfills the constraint.
//
// The usefulness of this constraint is explained in the paper:
//
// "Speeding-up homography estimation in mobile devices"
// Journal of Real-Time Image Processing. 2013. DOI: 10.1007/s11554-012-0314-1
// Pablo Marquez-Neila, Javier Lopez-Alberca, Jose M. Buenaposada, Luis Baumela
if( count == 4 )
{
static const int tt[][3] = {{0, 1, 2}, {1, 2, 3}, {0, 2, 3}, {0, 1, 3}};
const Point2f* src = ms1.ptr<Point2f>();
const Point2f* dst = ms2.ptr<Point2f>();
int negative = 0;
for( int i = 0; i < 4; i++ )
{
const int* t = tt[i];
Matx33d A(src[t[0]].x, src[t[0]].y, 1., src[t[1]].x, src[t[1]].y, 1., src[t[2]].x, src[t[2]].y, 1.);
Matx33d B(dst[t[0]].x, dst[t[0]].y, 1., dst[t[1]].x, dst[t[1]].y, 1., dst[t[2]].x, dst[t[2]].y, 1.);
negative += determinant(A)*determinant(B) < 0;
}
if( negative != 0 && negative != 4 )
return false;
}
return true;
}
/**
* Normalization method:
* - $x$ and $y$ coordinates are normalized independently
* - first the coordinates are shifted so that the average coordinate is \f$(0,0)\f$
* - then the coordinates are scaled so that the average L1 norm is 1, i.e,
* the average L1 norm of the \f$x\f$ coordinates is 1 and the average
* L1 norm of the \f$y\f$ coordinates is also 1.
*
* @param _m1 source points containing (X,Y), depth is CV_32F with 1 column 2 channels or
* 2 columns 1 channel
* @param _m2 destination points containing (x,y), depth is CV_32F with 1 column 2 channels or
* 2 columns 1 channel
* @param _model CV_64FC1, 3x3, normalized, i.e., the last element is 1
*/
int runKernel( InputArray _m1, InputArray _m2, OutputArray _model ) const CV_OVERRIDE
{
Mat m1 = _m1.getMat(), m2 = _m2.getMat();
int i, count = m1.checkVector(2);
const Point2f* M = m1.ptr<Point2f>();
const Point2f* m = m2.ptr<Point2f>();
double LtL[9][9], W[9][1], V[9][9];
Mat _LtL( 9, 9, CV_64F, &LtL[0][0] );
Mat matW( 9, 1, CV_64F, W );
Mat matV( 9, 9, CV_64F, V );
Mat _H0( 3, 3, CV_64F, V[8] );
Mat _Htemp( 3, 3, CV_64F, V[7] );
Point2d cM(0,0), cm(0,0), sM(0,0), sm(0,0);
for( i = 0; i < count; i++ )
{
cm.x += m[i].x; cm.y += m[i].y;
cM.x += M[i].x; cM.y += M[i].y;
}
cm.x /= count;
cm.y /= count;
cM.x /= count;
cM.y /= count;
for( i = 0; i < count; i++ )
{
sm.x += fabs(m[i].x - cm.x);
sm.y += fabs(m[i].y - cm.y);
sM.x += fabs(M[i].x - cM.x);
sM.y += fabs(M[i].y - cM.y);
}
if( fabs(sm.x) < DBL_EPSILON || fabs(sm.y) < DBL_EPSILON ||
fabs(sM.x) < DBL_EPSILON || fabs(sM.y) < DBL_EPSILON )
return 0;
sm.x = count/sm.x; sm.y = count/sm.y;
sM.x = count/sM.x; sM.y = count/sM.y;
double invHnorm[9] = { 1./sm.x, 0, cm.x, 0, 1./sm.y, cm.y, 0, 0, 1 };
double Hnorm2[9] = { sM.x, 0, -cM.x*sM.x, 0, sM.y, -cM.y*sM.y, 0, 0, 1 };
Mat _invHnorm( 3, 3, CV_64FC1, invHnorm );
Mat _Hnorm2( 3, 3, CV_64FC1, Hnorm2 );
_LtL.setTo(Scalar::all(0));
for( i = 0; i < count; i++ )
{
double x = (m[i].x - cm.x)*sm.x, y = (m[i].y - cm.y)*sm.y;
double X = (M[i].x - cM.x)*sM.x, Y = (M[i].y - cM.y)*sM.y;
double Lx[] = { X, Y, 1, 0, 0, 0, -x*X, -x*Y, -x };
double Ly[] = { 0, 0, 0, X, Y, 1, -y*X, -y*Y, -y };
int j, k;
for( j = 0; j < 9; j++ )
for( k = j; k < 9; k++ )
LtL[j][k] += Lx[j]*Lx[k] + Ly[j]*Ly[k];
}
completeSymm( _LtL );
eigen( _LtL, matW, matV );
_Htemp = _invHnorm*_H0;
_H0 = _Htemp*_Hnorm2;
_H0.convertTo(_model, _H0.type(), scaleFor(_H0.at<double>(2,2)));
return 1;
}
/**
* Compute the reprojection error.
* m2 = H*m1
* @param _m1 depth CV_32F, 1-channel with 2 columns or 2-channel with 1 column
* @param _m2 depth CV_32F, 1-channel with 2 columns or 2-channel with 1 column
* @param _model CV_64FC1, 3x3
* @param _err output, CV_32FC1, square of the L2 norm
*/
void computeError( InputArray _m1, InputArray _m2, InputArray _model, OutputArray _err ) const CV_OVERRIDE
{
Mat m1 = _m1.getMat(), m2 = _m2.getMat(), model = _model.getMat();
int i, count = m1.checkVector(2);
const Point2f* M = m1.ptr<Point2f>();
const Point2f* m = m2.ptr<Point2f>();
const double* H = model.ptr<double>();
float Hf[] = { (float)H[0], (float)H[1], (float)H[2], (float)H[3], (float)H[4], (float)H[5], (float)H[6], (float)H[7], (float)H[8] };
_err.create(count, 1, CV_32F);
float* err = _err.getMat().ptr<float>();
for( i = 0; i < count; i++ )
{
float ww = 1.f/(Hf[6]*M[i].x + Hf[7]*M[i].y + Hf[8]);
float dx = (Hf[0]*M[i].x + Hf[1]*M[i].y + Hf[2])*ww - m[i].x;
float dy = (Hf[3]*M[i].x + Hf[4]*M[i].y + Hf[5])*ww - m[i].y;
err[i] = dx*dx + dy*dy;
}
}
};
class HomographyRefineCallback CV_FINAL : public LMSolver::Callback
{
public:
HomographyRefineCallback(InputArray _src, InputArray _dst)
{
src = _src.getMat();
dst = _dst.getMat();
}
bool compute(InputArray _param, OutputArray _err, OutputArray _Jac) const CV_OVERRIDE
{
int i, count = src.checkVector(2);
Mat param = _param.getMat();
_err.create(count*2, 1, CV_64F);
Mat err = _err.getMat(), J;
if( _Jac.needed())
{
_Jac.create(count*2, param.rows, CV_64F);
_Jac.setTo(0.);
J = _Jac.getMat();
CV_Assert( J.isContinuous() && J.cols == 9 );
}
const Point2f* M = src.ptr<Point2f>();
const Point2f* m = dst.ptr<Point2f>();
const double* h = param.ptr<double>();
double* errptr = err.ptr<double>();
double* Jptr = J.data ? J.ptr<double>() : 0;
for( i = 0; i < count; i++ )
{
double Mx = M[i].x, My = M[i].y;
double ww = h[6]*Mx + h[7]*My + h[8];
ww = fabs(ww) > DBL_EPSILON ? 1./ww : 0;
double xi = (h[0]*Mx + h[1]*My + h[2])*ww;
double yi = (h[3]*Mx + h[4]*My + h[5])*ww;
errptr[i*2] = xi - m[i].x;
errptr[i*2+1] = yi - m[i].y;
if( Jptr )
{
Jptr[0] = Mx*ww; Jptr[1] = My*ww; Jptr[2] = ww;
Jptr[6] = -Mx*ww*xi; Jptr[7] = -My*ww*xi; Jptr[8] = -ww*xi;
Jptr[12] = Mx*ww; Jptr[13] = My*ww; Jptr[14] = ww;
Jptr[15] = -Mx*ww*yi; Jptr[16] = -My*ww*yi; Jptr[17] = -ww*yi;
Jptr += 18;
}
}
return true;
}
Mat src, dst;
};
} // end namespace cv
namespace cv{
static bool createAndRunRHORegistrator(double confidence,
int maxIters,
double ransacReprojThreshold,
int npoints,
InputArray _src,
InputArray _dst,
OutputArray _H,
OutputArray _tempMask){
Mat src = _src.getMat();
Mat dst = _dst.getMat();
Mat tempMask;
bool result;
double beta = 0.35;/* 0.35 is a value that often works. */
/* Create temporary output matrix (RHO outputs a single-precision H only). */
Mat tmpH = Mat(3, 3, CV_32FC1);
/* Create output mask. */
tempMask = Mat(npoints, 1, CV_8U);
/**
* Make use of the RHO estimator API.
*
* This is where the math happens. A homography estimation context is
* initialized, used, then finalized.
*/
Ptr<RHO_HEST> p = rhoInit();
/**
* Optional. Ideally, the context would survive across calls to
* findHomography(), but no clean way appears to exit to do so. The price
* to pay is marginally more computational work than strictly needed.
*/
rhoEnsureCapacity(p, npoints, beta);
/**
* The critical call. All parameters are heavily documented in rho.h.
*
* Currently, NR (Non-Randomness criterion) and Final Refinement (with
* internal, optimized Levenberg-Marquardt method) are enabled. However,
* while refinement seems to correctly smooth jitter most of the time, when
* refinement fails it tends to make the estimate visually very much worse.
* It may be necessary to remove the refinement flags in a future commit if
* this behaviour is too problematic.
*/
result = !!rhoHest(p,
(const float*)src.data,
(const float*)dst.data,
(char*) tempMask.data,
(unsigned) npoints,
(float) ransacReprojThreshold,
(unsigned) maxIters,
(unsigned) maxIters,
confidence,
4U,
beta,
RHO_FLAG_ENABLE_NR | RHO_FLAG_ENABLE_FINAL_REFINEMENT,
NULL,
(float*)tmpH.data);
/* Convert float homography to double precision. */
tmpH.convertTo(_H, CV_64FC1);
/* Maps non-zero mask elements to 1, for the sake of the test case. */
for(int k=0;k<npoints;k++){
tempMask.data[k] = !!tempMask.data[k];
}
tempMask.copyTo(_tempMask);
return result;
}
}
cv::Mat cv::findHomography( InputArray _points1, InputArray _points2,
int method, double ransacReprojThreshold, OutputArray _mask,
const int maxIters, const double confidence)
{
CV_INSTRUMENT_REGION();
if (method >= USAC_DEFAULT && method <= USAC_MAGSAC)
return usac::findHomography(_points1, _points2, method, ransacReprojThreshold,
_mask, maxIters, confidence);
const double defaultRANSACReprojThreshold = 3;
bool result = false;
Mat points1 = _points1.getMat(), points2 = _points2.getMat();
Mat src, dst, H, tempMask;
int npoints = -1;
for( int i = 1; i <= 2; i++ )
{
Mat& p = i == 1 ? points1 : points2;
Mat& m = i == 1 ? src : dst;
npoints = p.checkVector(2, -1, false);
if( npoints < 0 )
{
npoints = p.checkVector(3, -1, false);
if( npoints < 0 )
CV_Error(Error::StsBadArg, "The input arrays should be 2D or 3D point sets");
if( npoints == 0 )
return Mat();
convertPointsFromHomogeneous(p, p);
}
// Need at least 4 point correspondences to calculate Homography
if( npoints < 4 )
CV_Error(Error::StsVecLengthErr , "The input arrays should have at least 4 corresponding point sets to calculate Homography");
p.reshape(2, npoints).convertTo(m, CV_32F);
}
CV_Assert( src.checkVector(2) == dst.checkVector(2) );
if( ransacReprojThreshold <= 0 )
ransacReprojThreshold = defaultRANSACReprojThreshold;
Ptr<PointSetRegistrator::Callback> cb = makePtr<HomographyEstimatorCallback>();
if( method == 0 || npoints == 4 )
{
tempMask = Mat::ones(npoints, 1, CV_8U);
result = cb->runKernel(src, dst, H) > 0;
}
else if( method == RANSAC )
result = createRANSACPointSetRegistrator(cb, 4, ransacReprojThreshold, confidence, maxIters)->run(src, dst, H, tempMask);
else if( method == LMEDS )
result = createLMeDSPointSetRegistrator(cb, 4, confidence, maxIters)->run(src, dst, H, tempMask);
else if( method == RHO )
result = createAndRunRHORegistrator(confidence, maxIters, ransacReprojThreshold, npoints, src, dst, H, tempMask);
else
CV_Error(Error::StsBadArg, "Unknown estimation method");
if( result && npoints > 4 && method != RHO)
{
// save the original points before compressing
const int npoints_input = npoints;
const Mat src_input = src.clone();
const Mat dst_input = dst.clone();
compressElems( src.ptr<Point2f>(), tempMask.ptr<uchar>(), 1, npoints );
npoints = compressElems( dst.ptr<Point2f>(), tempMask.ptr<uchar>(), 1, npoints );
if( npoints > 0 )
{
Mat src1 = src.rowRange(0, npoints);
Mat dst1 = dst.rowRange(0, npoints);
src = src1;
dst = dst1;
if( method == RANSAC || method == LMEDS )
cb->runKernel( src, dst, H );
Mat H8(9, 1, CV_64F, H.ptr<double>());
LMSolver::create(makePtr<HomographyRefineCallback>(src, dst), 10)->run(H8);
H.convertTo(H, H.type(), scaleFor(H.at<double>(2,2)));
// find new inliers
const float thr_sqr = static_cast<float>(ransacReprojThreshold * ransacReprojThreshold);
cv::Mat errors;
cb->computeError(src_input, dst_input, H, errors);
uchar* maskptr = tempMask.ptr<uchar>();
const float * const errors_ptr = errors.ptr<float>();
for (int i = 0; i < npoints_input; i++) {
maskptr[i] = static_cast<uchar>(errors_ptr[i] <= thr_sqr);
}
}
}
if( result )
{
if( _mask.needed() )
tempMask.copyTo(_mask);
}
else
{
H.release();
if(_mask.needed() ) {
tempMask = Mat::zeros(npoints >= 0 ? npoints : 0, 1, CV_8U);
tempMask.copyTo(_mask);
}
}
return H;
}
cv::Mat cv::findHomography( InputArray _points1, InputArray _points2,
OutputArray _mask, int method, double ransacReprojThreshold )
{
return cv::findHomography(_points1, _points2, method, ransacReprojThreshold, _mask);
}
cv::Mat cv::findHomography(InputArray srcPoints, InputArray dstPoints, OutputArray mask,
const UsacParams ¶ms) {
Ptr<usac::Model> model;
usac::setParameters(model, usac::EstimationMethod::HOMOGRAPHY, params, mask.needed());
Ptr<usac::RansacOutput> ransac_output;
if (usac::run(model, srcPoints, dstPoints,
ransac_output, noArray(), noArray(), noArray(), noArray())) {
usac::saveMask(mask, ransac_output->getInliersMask());
return ransac_output->getModel() / ransac_output->getModel().at<double>(2,2);
} else return Mat();
}
/* Estimation of Fundamental Matrix from point correspondences.
The original code has been written by Valery Mosyagin */
/* The algorithms (except for RANSAC) and the notation have been taken from
Zhengyou Zhang's research report
"Determining the Epipolar Geometry and its Uncertainty: A Review"
that can be found at http://www-sop.inria.fr/robotvis/personnel/zzhang/zzhang-eng.html */
/************************************** 7-point algorithm *******************************/
namespace cv
{
/**
* Compute the fundamental matrix using the 7-point algorithm.
*
* \f[
* (\mathrm{m2}_i,1)^T \mathrm{fmatrix} (\mathrm{m1}_i,1) = 0
* \f]
*
* @param _m1 Contain points in the reference view. Depth CV_32F with 2-channel
* 1 column or 1-channel 2 columns. It has 7 rows.
* @param _m2 Contain points in the other view. Depth CV_32F with 2-channel
* 1 column or 1-channel 2 columns. It has 7 rows.
* @param _fmatrix Output fundamental matrix (or matrices) of type CV_64FC1.
* The user is responsible for allocating the memory before calling
* this function.
* @return Number of fundamental matrices. Valid values are 1, 2 or 3.
* - 1, row 0 to row 2 in _fmatrix is a valid fundamental matrix
* - 2, row 3 to row 5 in _fmatrix is a valid fundamental matrix
* - 3, row 6 to row 8 in _fmatrix is a valid fundamental matrix
*
* Note that the computed fundamental matrix is normalized, i.e.,
* the last element \f$F_{33}\f$ is 1.
*/
static int run7Point( const Mat& _m1, const Mat& _m2, Mat& _fmatrix )
{
double a[7*9], w[7], u[9*9], v[9*9], c[4], r[3] = {0};
double* f1, *f2;
double t0, t1, t2;
Mat A( 7, 9, CV_64F, a );
Mat U( 7, 9, CV_64F, u );
Mat Vt( 9, 9, CV_64F, v );
Mat W( 7, 1, CV_64F, w );
Mat coeffs( 1, 4, CV_64F, c );
Mat roots( 1, 3, CV_64F, r );
const Point2f* m1 = _m1.ptr<Point2f>();
const Point2f* m2 = _m2.ptr<Point2f>();
double* fmatrix = _fmatrix.ptr<double>();
int i, k, n;
Point2d m1c(0, 0), m2c(0, 0);
double t, scale1 = 0, scale2 = 0;
const int count = 7;
// compute centers and average distances for each of the two point sets
for( i = 0; i < count; i++ )
{
m1c += Point2d(m1[i]);
m2c += Point2d(m2[i]);
}
// calculate the normalizing transformations for each of the point sets:
// after the transformation each set will have the mass center at the coordinate origin
// and the average distance from the origin will be ~sqrt(2).
t = 1./count;
m1c *= t;
m2c *= t;
for( i = 0; i < count; i++ )
{
scale1 += norm(Point2d(m1[i].x - m1c.x, m1[i].y - m1c.y));
scale2 += norm(Point2d(m2[i].x - m2c.x, m2[i].y - m2c.y));
}
scale1 *= t;
scale2 *= t;
if( scale1 < FLT_EPSILON || scale2 < FLT_EPSILON )
return 0;
scale1 = std::sqrt(2.)/scale1;
scale2 = std::sqrt(2.)/scale2;
// form a linear system: i-th row of A(=a) represents
// the equation: (m2[i], 1)'*F*(m1[i], 1) = 0
for( i = 0; i < 7; i++ )
{
double x0 = (m1[i].x - m1c.x)*scale1;
double y0 = (m1[i].y - m1c.y)*scale1;
double x1 = (m2[i].x - m2c.x)*scale2;
double y1 = (m2[i].y - m2c.y)*scale2;
a[i*9+0] = x1*x0;
a[i*9+1] = x1*y0;
a[i*9+2] = x1;
a[i*9+3] = y1*x0;
a[i*9+4] = y1*y0;
a[i*9+5] = y1;
a[i*9+6] = x0;
a[i*9+7] = y0;
a[i*9+8] = 1;
}
// A*(f11 f12 ... f33)' = 0 is singular (7 equations for 9 variables), so
// the solution is linear subspace of dimensionality 2.
// => use the last two singular vectors as a basis of the space
// (according to SVD properties)
SVDecomp( A, W, U, Vt, SVD::MODIFY_A + SVD::FULL_UV );
f1 = v + 7*9;
f2 = v + 8*9;
// f1, f2 is a basis => lambda*f1 + mu*f2 is an arbitrary fundamental matrix,
// as it is determined up to a scale, normalize lambda & mu (lambda + mu = 1),
// so f ~ lambda*f1 + (1 - lambda)*f2.
// use the additional constraint det(f) = det(lambda*f1 + (1-lambda)*f2) to find lambda.
// it will be a cubic equation.
// find c - polynomial coefficients.
for( i = 0; i < 9; i++ )
f1[i] -= f2[i];
t0 = f2[4]*f2[8] - f2[5]*f2[7];
t1 = f2[3]*f2[8] - f2[5]*f2[6];
t2 = f2[3]*f2[7] - f2[4]*f2[6];
c[3] = f2[0]*t0 - f2[1]*t1 + f2[2]*t2;
c[2] = f1[0]*t0 - f1[1]*t1 + f1[2]*t2 -
f1[3]*(f2[1]*f2[8] - f2[2]*f2[7]) +
f1[4]*(f2[0]*f2[8] - f2[2]*f2[6]) -
f1[5]*(f2[0]*f2[7] - f2[1]*f2[6]) +
f1[6]*(f2[1]*f2[5] - f2[2]*f2[4]) -
f1[7]*(f2[0]*f2[5] - f2[2]*f2[3]) +
f1[8]*(f2[0]*f2[4] - f2[1]*f2[3]);
t0 = f1[4]*f1[8] - f1[5]*f1[7];
t1 = f1[3]*f1[8] - f1[5]*f1[6];
t2 = f1[3]*f1[7] - f1[4]*f1[6];
c[1] = f2[0]*t0 - f2[1]*t1 + f2[2]*t2 -
f2[3]*(f1[1]*f1[8] - f1[2]*f1[7]) +
f2[4]*(f1[0]*f1[8] - f1[2]*f1[6]) -
f2[5]*(f1[0]*f1[7] - f1[1]*f1[6]) +
f2[6]*(f1[1]*f1[5] - f1[2]*f1[4]) -
f2[7]*(f1[0]*f1[5] - f1[2]*f1[3]) +
f2[8]*(f1[0]*f1[4] - f1[1]*f1[3]);
c[0] = f1[0]*t0 - f1[1]*t1 + f1[2]*t2;
// solve the cubic equation; there can be 1 to 3 roots ...
n = solveCubic( coeffs, roots );
if( n < 1 || n > 3 )
return n;
// transformation matrices
Matx33d T1( scale1, 0, -scale1*m1c.x, 0, scale1, -scale1*m1c.y, 0, 0, 1 );
Matx33d T2( scale2, 0, -scale2*m2c.x, 0, scale2, -scale2*m2c.y, 0, 0, 1 );
for( k = 0; k < n; k++, fmatrix += 9 )
{
// for each root form the fundamental matrix
double lambda = r[k], mu = 1.;
double s = f1[8]*r[k] + f2[8];
// normalize each matrix, so that F(3,3) (~fmatrix[8]) == 1
if( fabs(s) > DBL_EPSILON )
{
mu = 1./s;
lambda *= mu;
fmatrix[8] = 1.;
}
else
fmatrix[8] = 0.;
for( i = 0; i < 8; i++ )
fmatrix[i] = f1[i]*lambda + f2[i]*mu;
// de-normalize
Mat F(3, 3, CV_64F, fmatrix);
F = T2.t() * F * T1;
// make F(3,3) = 1
if(fabs(F.at<double>(8)) > FLT_EPSILON )
F *= 1. / F.at<double>(8);
}
return n;
}
/**
* Compute the fundamental matrix using the 8-point algorithm.
*
* \f[
* (\mathrm{m2}_i,1)^T \mathrm{fmatrix} (\mathrm{m1}_i,1) = 0
* \f]
*
* @param _m1 Contain points in the reference view. Depth CV_32F with 2-channel
* 1 column or 1-channel 2 columns. It has 8 rows.
* @param _m2 Contain points in the other view. Depth CV_32F with 2-channel
* 1 column or 1-channel 2 columns. It has 8 rows.
* @param _fmatrix Output fundamental matrix (or matrices) of type CV_64FC1.
* The user is responsible for allocating the memory before calling
* this function.
* @return 1 on success, 0 on failure.
*
* Note that the computed fundamental matrix is normalized, i.e.,
* the last element \f$F_{33}\f$ is 1.
*/
static int run8Point( const Mat& _m1, const Mat& _m2, Mat& _fmatrix )
{
Point2d m1c(0,0), m2c(0,0);
double t, scale1 = 0, scale2 = 0;
const Point2f* m1 = _m1.ptr<Point2f>();
const Point2f* m2 = _m2.ptr<Point2f>();
CV_Assert( (_m1.cols == 1 || _m1.rows == 1) && _m1.size() == _m2.size());
int i, count = _m1.checkVector(2);
// compute centers and average distances for each of the two point sets
for( i = 0; i < count; i++ )
{
m1c += Point2d(m1[i]);
m2c += Point2d(m2[i]);
}
// calculate the normalizing transformations for each of the point sets:
// after the transformation each set will have the mass center at the coordinate origin
// and the average distance from the origin will be ~sqrt(2).
t = 1./count;
m1c *= t;
m2c *= t;
for( i = 0; i < count; i++ )
{
scale1 += norm(Point2d(m1[i].x - m1c.x, m1[i].y - m1c.y));
scale2 += norm(Point2d(m2[i].x - m2c.x, m2[i].y - m2c.y));
}
scale1 *= t;
scale2 *= t;
if( scale1 < FLT_EPSILON || scale2 < FLT_EPSILON )
return 0;
scale1 = std::sqrt(2.)/scale1;
scale2 = std::sqrt(2.)/scale2;
Matx<double, 9, 9> A;
// form a linear system Ax=0: for each selected pair of points m1 & m2,
// the row of A(=a) represents the coefficients of equation: (m2, 1)'*F*(m1, 1) = 0
// to save computation time, we compute (At*A) instead of A and then solve (At*A)x=0.
for( i = 0; i < count; i++ )
{
double x1 = (m1[i].x - m1c.x)*scale1;
double y1 = (m1[i].y - m1c.y)*scale1;
double x2 = (m2[i].x - m2c.x)*scale2;
double y2 = (m2[i].y - m2c.y)*scale2;
Vec<double, 9> r( x2*x1, x2*y1, x2, y2*x1, y2*y1, y2, x1, y1, 1 );
A += r*r.t();
}
Vec<double, 9> W;
Matx<double, 9, 9> V;
eigen(A, W, V);
for( i = 0; i < 9; i++ )
{
if( fabs(W[i]) < DBL_EPSILON )
break;
}
if( i < 8 )
return 0;
Matx33d F0( V.val + 9*8 ); // take the last column of v as a solution of Af = 0
// make F0 singular (of rank 2) by decomposing it with SVD,
// zeroing the last diagonal element of W and then composing the matrices back.
Vec3d w;
Matx33d U;
Matx33d Vt;
SVD::compute( F0, w, U, Vt);
w[2] = 0.;
F0 = U*Matx33d::diag(w)*Vt;
// apply the transformation that is inverse
// to what we used to normalize the point coordinates
Matx33d T1( scale1, 0, -scale1*m1c.x, 0, scale1, -scale1*m1c.y, 0, 0, 1 );
Matx33d T2( scale2, 0, -scale2*m2c.x, 0, scale2, -scale2*m2c.y, 0, 0, 1 );
F0 = T2.t()*F0*T1;
// make F(3,3) = 1
if( fabs(F0(2,2)) > FLT_EPSILON )
F0 *= 1./F0(2,2);
Mat(F0).copyTo(_fmatrix);
return 1;
}
class FMEstimatorCallback CV_FINAL : public PointSetRegistrator::Callback
{
public:
bool checkSubset( InputArray _ms1, InputArray _ms2, int count ) const CV_OVERRIDE
{
Mat ms1 = _ms1.getMat(), ms2 = _ms2.getMat();
return !haveCollinearPoints(ms1, count) && !haveCollinearPoints(ms2, count);
}
int runKernel( InputArray _m1, InputArray _m2, OutputArray _model ) const CV_OVERRIDE
{
double f[9*3];
Mat m1 = _m1.getMat(), m2 = _m2.getMat();
int count = m1.checkVector(2);
Mat F(count == 7 ? 9 : 3, 3, CV_64F, f);
int n = count == 7 ? run7Point(m1, m2, F) : run8Point(m1, m2, F);
if( n <= 0 )
_model.release();
else
F.rowRange(0, n*3).copyTo(_model);
return n;
}
void computeError( InputArray _m1, InputArray _m2, InputArray _model, OutputArray _err ) const CV_OVERRIDE
{
Mat __m1 = _m1.getMat(), __m2 = _m2.getMat(), __model = _model.getMat();
int i, count = __m1.checkVector(2);
const Point2f* m1 = __m1.ptr<Point2f>();
const Point2f* m2 = __m2.ptr<Point2f>();
const double* F = __model.ptr<double>();
_err.create(count, 1, CV_32F);
float* err = _err.getMat().ptr<float>();
for( i = 0; i < count; i++ )
{
double a, b, c, d1, d2, s1, s2;
a = F[0]*m1[i].x + F[1]*m1[i].y + F[2];
b = F[3]*m1[i].x + F[4]*m1[i].y + F[5];
c = F[6]*m1[i].x + F[7]*m1[i].y + F[8];
s2 = 1./(a*a + b*b);
d2 = m2[i].x*a + m2[i].y*b + c;
a = F[0]*m2[i].x + F[3]*m2[i].y + F[6];
b = F[1]*m2[i].x + F[4]*m2[i].y + F[7];
c = F[2]*m2[i].x + F[5]*m2[i].y + F[8];
s1 = 1./(a*a + b*b);
d1 = m1[i].x*a + m1[i].y*b + c;
err[i] = (float)std::max(d1*d1*s1, d2*d2*s2);
}
}
};
}
cv::Mat cv::findFundamentalMat( InputArray _points1, InputArray _points2,
int method, double ransacReprojThreshold, double confidence,
int maxIters, OutputArray _mask )
{
CV_INSTRUMENT_REGION();
if (method >= USAC_DEFAULT && method <= USAC_MAGSAC)
return usac::findFundamentalMat(_points1, _points2, method,
ransacReprojThreshold, confidence, maxIters, _mask);
Mat points1 = _points1.getMat(), points2 = _points2.getMat();
Mat m1, m2, F;
int npoints = -1;
for( int i = 1; i <= 2; i++ )
{
Mat& p = i == 1 ? points1 : points2;
Mat& m = i == 1 ? m1 : m2;
npoints = p.checkVector(2, -1, false);
if( npoints < 0 )
{
npoints = p.checkVector(3, -1, false);
if( npoints < 0 )
CV_Error(Error::StsBadArg, "The input arrays should be 2D or 3D point sets");
if( npoints == 0 )
return Mat();
convertPointsFromHomogeneous(p, p);
}
p.reshape(2, npoints).convertTo(m, CV_32F);
}
CV_Assert( m1.checkVector(2) == m2.checkVector(2) );
if( npoints < 7 )
return Mat();
Ptr<PointSetRegistrator::Callback> cb = makePtr<FMEstimatorCallback>();
int result;
if( npoints == 7 || method == FM_8POINT )
{
result = cb->runKernel(m1, m2, F);
if( _mask.needed() )
{
_mask.create(npoints, 1, CV_8U, -1, true);
Mat mask = _mask.getMat();
CV_Assert( (mask.cols == 1 || mask.rows == 1) && (int)mask.total() == npoints );
mask.setTo(Scalar::all(1));
}
}
else
{
if( ransacReprojThreshold <= 0 )
ransacReprojThreshold = 3;
if( confidence < DBL_EPSILON || confidence > 1 - DBL_EPSILON )
confidence = 0.99;
if( (method & ~3) == FM_RANSAC && npoints >= 15 )
result = createRANSACPointSetRegistrator(cb, 7, ransacReprojThreshold, confidence, maxIters)->run(m1, m2, F, _mask);
else
result = createLMeDSPointSetRegistrator(cb, 7, confidence, maxIters)->run(m1, m2, F, _mask);
}
if( result <= 0 )
return Mat();
return F;
}
cv::Mat cv::findFundamentalMat( cv::InputArray points1, cv::InputArray points2,
int method, double ransacReprojThreshold, double confidence,
cv::OutputArray mask )
{
return cv::findFundamentalMat(points1, points2, method, ransacReprojThreshold, confidence, 1000, mask);
}
cv::Mat cv::findFundamentalMat( cv::InputArray points1, cv::InputArray points2, cv::OutputArray mask,
int method, double ransacReprojThreshold, double confidence )
{
return cv::findFundamentalMat(points1, points2, method, ransacReprojThreshold, confidence, 1000, mask);
}
cv::Mat cv::findFundamentalMat( InputArray points1, InputArray points2,
OutputArray mask, const UsacParams ¶ms) {
Ptr<usac::Model> model;
setParameters(model, usac::EstimationMethod::FUNDAMENTAL, params, mask.needed());
CV_Assert(model);
Ptr<usac::RansacOutput> ransac_output;
if (usac::run(model, points1, points2,
ransac_output, noArray(), noArray(), noArray(), noArray())) {
usac::saveMask(mask, ransac_output->getInliersMask());
return ransac_output->getModel();
} else return Mat();
}
void cv::computeCorrespondEpilines( InputArray _points, int whichImage,
InputArray _Fmat, OutputArray _lines )
{
CV_INSTRUMENT_REGION();
double f[9] = {0};
Mat tempF(3, 3, CV_64F, f);
Mat points = _points.getMat(), F = _Fmat.getMat();
if( !points.isContinuous() )
points = points.clone();
int npoints = points.checkVector(2);
if( npoints < 0 )
{
npoints = points.checkVector(3);
if( npoints < 0 )
CV_Error( Error::StsBadArg, "The input should be a 2D or 3D point set");
Mat temp;
convertPointsFromHomogeneous(points, temp);
points = temp;
}
int depth = points.depth();
CV_Assert( depth == CV_32F || depth == CV_32S || depth == CV_64F );
CV_Assert(F.size() == Size(3,3));
F.convertTo(tempF, CV_64F);
if( whichImage == 2 )
transpose(tempF, tempF);
int ltype = CV_MAKETYPE(MAX(depth, CV_32F), 3);
_lines.create(npoints, 1, ltype);
Mat lines = _lines.getMat();
if( !lines.isContinuous() )
{
_lines.release();
_lines.create(npoints, 1, ltype);
lines = _lines.getMat();
}
CV_Assert( lines.isContinuous());
if( depth == CV_32S || depth == CV_32F )
{
const Point* ptsi = points.ptr<Point>();
const Point2f* ptsf = points.ptr<Point2f>();
Point3f* dstf = lines.ptr<Point3f>();
for( int i = 0; i < npoints; i++ )
{
Point2f pt = depth == CV_32F ? ptsf[i] : Point2f((float)ptsi[i].x, (float)ptsi[i].y);
double a = f[0]*pt.x + f[1]*pt.y + f[2];
double b = f[3]*pt.x + f[4]*pt.y + f[5];
double c = f[6]*pt.x + f[7]*pt.y + f[8];
double nu = a*a + b*b;
nu = nu ? 1./std::sqrt(nu) : 1.;
a *= nu; b *= nu; c *= nu;
dstf[i] = Point3f((float)a, (float)b, (float)c);
}
}
else
{
const Point2d* ptsd = points.ptr<Point2d>();
Point3d* dstd = lines.ptr<Point3d>();
for( int i = 0; i < npoints; i++ )
{
Point2d pt = ptsd[i];
double a = f[0]*pt.x + f[1]*pt.y + f[2];
double b = f[3]*pt.x + f[4]*pt.y + f[5];
double c = f[6]*pt.x + f[7]*pt.y + f[8];
double nu = a*a + b*b;
nu = nu ? 1./std::sqrt(nu) : 1.;
a *= nu; b *= nu; c *= nu;
dstd[i] = Point3d(a, b, c);
}
}
}
void cv::convertPointsFromHomogeneous( InputArray _src, OutputArray _dst )
{
CV_INSTRUMENT_REGION();
Mat src = _src.getMat();
if( !src.isContinuous() )
src = src.clone();
int i, npoints = src.checkVector(3), depth = src.depth(), cn = 3;
if( npoints < 0 )
{
npoints = src.checkVector(4);
CV_Assert(npoints >= 0);
cn = 4;
}
CV_Assert( npoints >= 0 && (depth == CV_32S || depth == CV_32F || depth == CV_64F));
int dtype = CV_MAKETYPE(depth <= CV_32F ? CV_32F : CV_64F, cn-1);
_dst.create(npoints, 1, dtype);
Mat dst = _dst.getMat();
if( !dst.isContinuous() )
{
_dst.release();
_dst.create(npoints, 1, dtype);
dst = _dst.getMat();
}
CV_Assert( dst.isContinuous() );
if( depth == CV_32S )
{
if( cn == 3 )
{
const Point3i* sptr = src.ptr<Point3i>();
Point2f* dptr = dst.ptr<Point2f>();
for( i = 0; i < npoints; i++ )
{
float scale = sptr[i].z != 0 ? 1.f/sptr[i].z : 1.f;
dptr[i] = Point2f(sptr[i].x*scale, sptr[i].y*scale);
}
}
else
{
const Vec4i* sptr = src.ptr<Vec4i>();
Point3f* dptr = dst.ptr<Point3f>();
for( i = 0; i < npoints; i++ )
{
float scale = sptr[i][3] != 0 ? 1.f/sptr[i][3] : 1.f;
dptr[i] = Point3f(sptr[i][0]*scale, sptr[i][1]*scale, sptr[i][2]*scale);
}
}
}
else if( depth == CV_32F )
{
if( cn == 3 )
{
const Point3f* sptr = src.ptr<Point3f>();
Point2f* dptr = dst.ptr<Point2f>();
for( i = 0; i < npoints; i++ )
{
float scale = scaleFor(sptr[i].z);
dptr[i] = Point2f(sptr[i].x*scale, sptr[i].y*scale);
}
}
else
{
const Vec4f* sptr = src.ptr<Vec4f>();
Point3f* dptr = dst.ptr<Point3f>();
for( i = 0; i < npoints; i++ )
{
float scale = scaleFor(sptr[i][3]);
dptr[i] = Point3f(sptr[i][0]*scale, sptr[i][1]*scale, sptr[i][2]*scale);
}
}
}
else if( depth == CV_64F )
{
if( cn == 3 )
{
const Point3d* sptr = src.ptr<Point3d>();
Point2d* dptr = dst.ptr<Point2d>();
for( i = 0; i < npoints; i++ )
{
double scale = scaleFor(sptr[i].z);
dptr[i] = Point2d(sptr[i].x*scale, sptr[i].y*scale);
}
}
else
{
const Vec4d* sptr = src.ptr<Vec4d>();
Point3d* dptr = dst.ptr<Point3d>();
for( i = 0; i < npoints; i++ )
{
double scale = scaleFor(sptr[i][3]);
dptr[i] = Point3d(sptr[i][0]*scale, sptr[i][1]*scale, sptr[i][2]*scale);
}
}
}
else
CV_Error(Error::StsUnsupportedFormat, "");
}
void cv::convertPointsToHomogeneous( InputArray _src, OutputArray _dst )
{
CV_INSTRUMENT_REGION();
Mat src = _src.getMat();
if( !src.isContinuous() )
src = src.clone();
int i, npoints = src.checkVector(2), depth = src.depth(), cn = 2;
if( npoints < 0 )
{
npoints = src.checkVector(3);
CV_Assert(npoints >= 0);
cn = 3;
}
CV_Assert( npoints >= 0 && (depth == CV_32S || depth == CV_32F || depth == CV_64F));
int dtype = CV_MAKETYPE(depth, cn+1);
_dst.create(npoints, 1, dtype);
Mat dst = _dst.getMat();
if( !dst.isContinuous() )
{
_dst.release();
_dst.create(npoints, 1, dtype);
dst = _dst.getMat();
}
CV_Assert( dst.isContinuous() );
if( depth == CV_32S )
{
if( cn == 2 )
{
const Point2i* sptr = src.ptr<Point2i>();
Point3i* dptr = dst.ptr<Point3i>();
for( i = 0; i < npoints; i++ )
dptr[i] = Point3i(sptr[i].x, sptr[i].y, 1);
}
else
{
const Point3i* sptr = src.ptr<Point3i>();
Vec4i* dptr = dst.ptr<Vec4i>();
for( i = 0; i < npoints; i++ )
dptr[i] = Vec4i(sptr[i].x, sptr[i].y, sptr[i].z, 1);
}
}
else if( depth == CV_32F )
{
if( cn == 2 )
{
const Point2f* sptr = src.ptr<Point2f>();
Point3f* dptr = dst.ptr<Point3f>();
for( i = 0; i < npoints; i++ )
dptr[i] = Point3f(sptr[i].x, sptr[i].y, 1.f);
}
else
{
const Point3f* sptr = src.ptr<Point3f>();
Vec4f* dptr = dst.ptr<Vec4f>();
for( i = 0; i < npoints; i++ )
dptr[i] = Vec4f(sptr[i].x, sptr[i].y, sptr[i].z, 1.f);
}
}
else if( depth == CV_64F )
{
if( cn == 2 )
{
const Point2d* sptr = src.ptr<Point2d>();
Point3d* dptr = dst.ptr<Point3d>();
for( i = 0; i < npoints; i++ )
dptr[i] = Point3d(sptr[i].x, sptr[i].y, 1.);
}
else
{
const Point3d* sptr = src.ptr<Point3d>();
Vec4d* dptr = dst.ptr<Vec4d>();
for( i = 0; i < npoints; i++ )
dptr[i] = Vec4d(sptr[i].x, sptr[i].y, sptr[i].z, 1.);
}
}
else
CV_Error(Error::StsUnsupportedFormat, "");
}
void cv::convertPointsHomogeneous( InputArray _src, OutputArray _dst )
{
CV_INSTRUMENT_REGION();
int stype = _src.type(), dtype = _dst.type();
CV_Assert( _dst.fixedType() );
if( CV_MAT_CN(stype) > CV_MAT_CN(dtype) )
convertPointsFromHomogeneous(_src, _dst);
else
convertPointsToHomogeneous(_src, _dst);
}
double cv::sampsonDistance(InputArray _pt1, InputArray _pt2, InputArray _F)
{
CV_INSTRUMENT_REGION();
CV_Assert(_pt1.depth() == CV_64F && _pt2.depth() == CV_64F && _F.depth() == CV_64F);
CV_DbgAssert(_pt1.rows() == 3 && _F.size() == Size(3, 3) && _pt1.rows() == _pt2.rows());
Mat pt1(_pt1.getMat());
Mat pt2(_pt2.getMat());
Mat F(_F.getMat());
Vec3d F_pt1 = *F.ptr<Matx33d>() * *pt1.ptr<Vec3d>();
Vec3d Ft_pt2 = F.ptr<Matx33d>()->t() * *pt2.ptr<Vec3d>();
double v = pt2.ptr<Vec3d>()->dot(F_pt1);
// square
Ft_pt2 = Ft_pt2.mul(Ft_pt2);
F_pt1 = F_pt1.mul(F_pt1);
return v*v / (F_pt1[0] + F_pt1[1] + Ft_pt2[0] + Ft_pt2[1]);
}
/* End of file. */
|
cpp
|
github
|
https://github.com/opencv/opencv
|
modules/calib3d/src/fundam.cpp
|
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""The deferred instance delete extension."""
import webob
from nova.api.openstack import common
from nova.api.openstack import extensions
from nova.api.openstack import wsgi
from nova import compute
from nova import exception
ALIAS = 'os-deferred-delete'
authorize = extensions.extension_authorizer('compute',
'v3:' + ALIAS)
class DeferredDeleteController(wsgi.Controller):
def __init__(self, *args, **kwargs):
super(DeferredDeleteController, self).__init__(*args, **kwargs)
self.compute_api = compute.API()
@wsgi.response(202)
@extensions.expected_errors((404, 409, 403))
@wsgi.action('restore')
def _restore(self, req, id, body):
"""Restore a previously deleted instance."""
context = req.environ["nova.context"]
authorize(context)
instance = common.get_instance(self.compute_api, context, id,
want_objects=True)
try:
self.compute_api.restore(context, instance)
except exception.QuotaError as error:
raise webob.exc.HTTPForbidden(explanation=error.format_message())
except exception.InstanceInvalidState as state_error:
common.raise_http_conflict_for_instance_invalid_state(state_error,
'restore', id)
@wsgi.response(202)
@extensions.expected_errors((404, 409))
@wsgi.action('forceDelete')
def _force_delete(self, req, id, body):
"""Force delete of instance before deferred cleanup."""
context = req.environ["nova.context"]
authorize(context)
instance = common.get_instance(self.compute_api, context, id,
want_objects=True)
try:
self.compute_api.force_delete(context, instance)
except exception.InstanceIsLocked as e:
raise webob.exc.HTTPConflict(explanation=e.format_message())
class DeferredDelete(extensions.V3APIExtensionBase):
"""Instance deferred delete."""
name = "DeferredDelete"
alias = "os-deferred-delete"
version = 1
def get_controller_extensions(self):
controller = DeferredDeleteController()
extension = extensions.ControllerExtension(self, 'servers', controller)
return [extension]
def get_resources(self):
return []
|
unknown
|
codeparrot/codeparrot-clean
| ||
#!/usr/bin/env python
#
# docmaker.py
#
# Convert source code markup to HTML documentation.
#
# Copyright 2002, 2004, 2008, 2013, 2014 by
# David Turner.
#
# This file is part of the FreeType project, and may only be used,
# modified, and distributed under the terms of the FreeType project
# license, LICENSE.TXT. By continuing to use, modify, or distribute
# this file you indicate that you have read the license and
# understand and accept it fully.
#
# This program is a re-write of the original DocMaker tool used to generate
# the API Reference of the FreeType font rendering engine by converting
# in-source comments into structured HTML.
#
# This new version is capable of outputting XML data as well as accepting
# more liberal formatting options. It also uses regular expression matching
# and substitution to speed up operation significantly.
#
from sources import *
from content import *
from utils import *
from formatter import *
from tohtml import *
import utils
import sys, os, time, string, glob, getopt
def usage():
print "\nDocMaker Usage information\n"
print " docmaker [options] file1 [file2 ...]\n"
print "using the following options:\n"
print " -h : print this page"
print " -t : set project title, as in '-t \"My Project\"'"
print " -o : set output directory, as in '-o mydir'"
print " -p : set documentation prefix, as in '-p ft2'"
print ""
print " --title : same as -t, as in '--title=\"My Project\"'"
print " --output : same as -o, as in '--output=mydir'"
print " --prefix : same as -p, as in '--prefix=ft2'"
def main( argv ):
"""Main program loop."""
global output_dir
try:
opts, args = getopt.getopt( sys.argv[1:],
"ht:o:p:",
["help", "title=", "output=", "prefix="] )
except getopt.GetoptError:
usage()
sys.exit( 2 )
if args == []:
usage()
sys.exit( 1 )
# process options
project_title = "Project"
project_prefix = None
output_dir = None
for opt in opts:
if opt[0] in ( "-h", "--help" ):
usage()
sys.exit( 0 )
if opt[0] in ( "-t", "--title" ):
project_title = opt[1]
if opt[0] in ( "-o", "--output" ):
utils.output_dir = opt[1]
if opt[0] in ( "-p", "--prefix" ):
project_prefix = opt[1]
check_output()
# create context and processor
source_processor = SourceProcessor()
content_processor = ContentProcessor()
# retrieve the list of files to process
file_list = make_file_list( args )
for filename in file_list:
source_processor.parse_file( filename )
content_processor.parse_sources( source_processor )
# process sections
content_processor.finish()
formatter = HtmlFormatter( content_processor,
project_title,
project_prefix )
formatter.toc_dump()
formatter.index_dump()
formatter.section_dump_all()
# if called from the command line
if __name__ == '__main__':
main( sys.argv )
# eof
|
unknown
|
codeparrot/codeparrot-clean
| ||
import os
import sys
import pygame
from common.constants import *
from client.constants import *
from common import boundint
from common.util.rect import Rect
class FX(object):
def __init__(self, inPos, inFacing, inType):
self.preciseLoc = inPos
self.facingRight = inFacing
self.frames = []
self.setType(inType)
self.frameNum = 0
self.subframeNum = 0
self.removeFlag = False
self.setImage()
def update(self):
self.subframeNum += 1
if self.subframeNum >= self.frames[self.frameNum].length:
self.subframeNum = 0
self.frameNum += 1
if self.frameNum == len(self.frames):
self.removeFlag = True
else:
self.setImage()
self.rect.topleft = self.getRectPos()
def draw(self, screen, inOffset):
screen.blit(self.image, add_points(self.rect.topleft, inOffset))
def setImage(self):
f = self.frames[self.frameNum]
inImage = f.image
o = f.offset
size = inImage.get_size()
if self.facingRight:
offset = o
self.image = inImage
else:
offset = (-o[0] + size[0], o[1])
self.image = pygame.transform.flip(inImage, True, False)
self.offset = offset
self.rect = Rect(self.getRectPos(), size)
def getRectPos(self):
return ( int(self.preciseLoc[0]) - self.offset[0],
int(self.preciseLoc[1]) - self.offset[1] )
def setType(self, t):
if t == 'pow':
f = [ [0, 3],
[1, 1],
[2, 1],
[3, 1] ]
elif t == 'side':
f = [ [7, 3],
[4, 1],
[5, 1],
[6 , 1] ]
elif t == 'block':
f = [ [8, 3],
[9, 1],
[10, 1],
[11, 1] ]
elif t == 'grab':
f = [ [12, 4],
[13, 3]]
elif t == 'dust':
f = [ [14, 3],
[15, 2],
[16, 2],
[17, 2],
[18, 2] ]
elif t == 'shockwave':
f = [ [19, 2],
[20, 1],
[21, 1],
[22, 1],
[23, 1] ]
elif t == 'airelementshockwave':
f = [ [24, 2],
[25, 1],
[26, 1],
[27, 1],
[28, 1] ]
elif t == 'runicexplosion':
f = [ [29, 2],
[30, 1],
[31, 2],
[32, 2],
[33, 2] ]
elif t == 'runicflame1':
f = [ [35, 3],
[36, 1],
[37, 1],
[38, 1] ]
elif t == 'runicflame2':
f = [ [39, 3],
[40, 2],
[41, 1]]
elif t == 'runicflame3':
f = [ [42, 3],
[43, 2],
[44, 1]]
elif t == 'fullshockwave':
f = [ [45, 3],
[46, 2],
[47, 2],
[48, 2],
[49, 2],
[50, 2] ]
for i in f:
self.frames.append(Frame(FX_IMAGES[i[0]][0],
FX_IMAGES[i[0]][1], i[1]))
class Frame(object):
def __init__(self, inImage, inOffset, inLength):
self.image = inImage
self.offset = inOffset
self.length = inLength
|
unknown
|
codeparrot/codeparrot-clean
| ||
import numpy as np
import tensorflow as tf
import h5py
import time
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import sys
# Functions for initializing neural nets parameters
def init_weight_variable(shape, nameIn):
initial = tf.truncated_normal(shape, stddev=0.1, dtype=tf.float32)
return tf.Variable(initial, name=nameIn)
def init_bias_variable(shape, nameIn):
initial = tf.constant(0.1, shape=shape, dtype=tf.float32)
return tf.Variable(initial, name=nameIn)
def conv2d(x, W):
return tf.nn.conv2d(x, W, [1, 1, 1, 1], 'VALID')
def loadData(filepath):
'''
Load and return four variables from the file with path filepath
X_train: input data for training
y_train: labels for X_train
X_val: input data for validation
y_val: labels for X_val
'''
print('==> Experiment 2l')
print('==> Loading data from {}'.format(filepath))
# benchmark
t_start = time.time()
# reading data
f = h5py.File(filepath)
X_train = np.array(f.get('trainingFeatures'))
y_train = np.array(f.get('trainingLabels'))
X_val = np.array(f.get('validationFeatures'))
y_val = np.array(f.get('validationLabels'))
t_end = time.time()
print('--Time elapsed for loading data: {t:.2f} \
seconds'.format(t = t_end - t_start))
del f
print('-- Number of training samples: {}'.format(X_train.shape[0]))
print('-- Number of validation samples: {}'.format(X_val.shape[0]))
print('Shape of X_train: %s'%str(X_train.shape))
print('Shape of y_train: %s'%str(y_train.shape))
print('Shape of X_val: %s'%str(X_val.shape))
print('Shape of y_val: %s'%str(y_val.shape))
return [X_train, y_train, X_val, y_val]
#self, X_train, y_train, X_val, y_val, num_freq, filter_row, filter_col, k1, k2, learningRate, pooling_strategy):
# set up property that makes it only be set once
# we'll use this to avoid adding tensors to the graph multiple times
import functools
def lazy_property(function):
attribute = '_cache_' + function.__name__
@property
@functools.wraps(function)
def decorator(self):
if not hasattr(self, attribute):
setattr(self, attribute, function(self))
return getattr(self, attribute)
return decorator
class Model:
def __init__(self, num_freq, X_train, y_train, X_val, y_val, filter_row, filter_col, k1, learningRate, debug):
'''
Initializer for the model
'''
# store the data
self.X_train, self.y_train, self.X_val, self.y_val = X_train, y_train, X_val, y_val
# store the parameters sent to init that define our model
self.num_freq, self.filter_row, self.filter_col, self.k1, self.learningRate, self.debug = num_freq, filter_row, filter_col, k1, learningRate, debug
# find num_training_vec, total_features, num_frames, num_classes, and l from the shape of the data
# and store them
self.storeParamsFromData()
# Set-up and store the input and output placeholders
x = tf.placeholder(tf.float32, [None, self.total_features])
y_ = tf.placeholder(tf.float32, [None, self.num_classes])
self.x = x
self.y_ = y_
# Setup and store tensor that performs the one-hot encoding
y_train_OHEnc = tf.one_hot(self.y_train.copy(), self.num_classes)
y_val_OHEnc = tf.one_hot(self.y_val.copy(), self.num_classes)
self.y_train_OHEnc = y_train_OHEnc
self.y_val_OHEnc = y_val_OHEnc
# create each lazy_property
# each lazy_property will add tensors to the graph
self.y_conv
self.cross_entropy
self.train_step
self.accuracy
# properties for use in debugging
if self.debug:
self.grads_and_vars
# print to the user that the network has been set up, along with its properties
print("Setting up Single Conv Layer Neural net with %g x %g filters, k1 = %g, learningRate = %g"%(filter_row, filter_col, k1, learningRate))
def storeParamsFromData(self):
'''
Calculate and store parameters from the raw data
total_features: The number of CQT coefficients total (incldues all context frames)
num_training_vec: The number of training examples in your dataset
num_frames: The number of context frames in each training example (total_features / num_freq)
num_classes: The number of songs we're distinguishing between in our output
l: The length of our second convolutional kernel - for now, its equal to num_frames
'''
# Neural-network model set-up
# calculating some values which will be nice as we set up the model
num_training_vec, total_features = self.X_train.shape
num_frames = int(total_features / self.num_freq)
print('-- Num frames: {}'.format(num_frames))
num_classes = int(max(self.y_train.max(), self.y_val.max()) + 1)
l = num_frames
# store what will be helpful later
self.total_features = total_features
self.num_training_vec = num_training_vec
self.num_frames = num_frames
self.num_classes = num_classes
self.l = l
@lazy_property
def y_conv(self):
# reshape the input into the form of a spectrograph
x_image = tf.reshape(self.x, [-1, self.num_freq, self.num_frames, 1])
x_image = tf.identity(x_image, name="x_image")
# first convolutional layer parameters
W_conv1 = init_weight_variable([self.filter_row, self.filter_col, 1, self.k1], "W_conv1")
b_conv1 = init_bias_variable([self.k1], "b_conv1")
# tensor that computes the output of the first convolutional layer
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_conv1 = tf.identity(h_conv1, name="h_conv_1")
# flatten out the output of the first convolutional layer to pass to the softmax layer
h_conv1_flat = tf.reshape(h_conv1, [-1, (self.num_freq - self.filter_row + 1) * (self.num_frames - self.filter_col + 1) * self.k1])
h_conv1_flat = tf.identity(h_conv1_flat, name="h_conv1_flat")
# softmax layer parameters
W_sm = init_weight_variable([(self.num_freq - self.filter_row + 1) * (self.num_frames - self.filter_col + 1) * self.k1, self.num_classes], "W_sm")
b_sm = init_bias_variable([self.num_classes], "b_sm")
# the output of the layer - un-normalized and without a non-linearity
# since cross_entropy_with_logits takes care of that
y_conv = tf.matmul(h_conv1_flat, W_sm) + b_sm
y_conv = tf.identity(y_conv, name="y_conv")
return y_conv # would want to softmax it to get an actual prediction
@lazy_property
def cross_entropy(self):
'''
Create a tensor that computes the cross entropy cost
Use the placeholder y_ as the labels, with input y_conv
Note that softmax_cross_entropy_with_logits takes care of normalizing
y_conv to make it a probability distribution
This tensor can be accessed using: self.cross_entropy
'''
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=self.y_, logits=self.y_conv))
cross_entropy = tf.identity(cross_entropy, name="cross_entropy")
return cross_entropy
@lazy_property
def optimizer(self):
'''
Create a tensor that represents the optimizer. This tensor can
be accessed using: self.optimizer
'''
optimizer = tf.train.AdamOptimizer(learning_rate = self.learningRate)
return optimizer
@lazy_property
def train_step(self):
'''
Creates a tensor that represents a single training step. This tensor
can be passed a feed_dict that has x and y_, and it will compute the gradients
and perform a single step.
This tensor can be accessed using: self.train_step
'''
return self.optimizer.minimize(self.cross_entropy)
@lazy_property
def accuracy(self):
'''
Create a tensor that computes the accuracy, using the placeholder y_ as the labeled data
and y_conv for the predictions of the network.
This tensor can be accessed using: self.accuracy
'''
correct_prediction = tf.equal(tf.argmax(self.y_conv, 1), tf.argmax(self.y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
return accuracy
'''
Properties that we'll use for debugging
'''
@lazy_property
def grads_and_vars(self):
grads_and_vars = self.optimizer.compute_gradients(self.cross_entropy, tf.trainable_variables())
return grads_and_vars
def train(self, batch_size, num_epochs, print_freq, debug_out='debug.txt'):
'''
Train the Network on the data that will have been loaded when the NN is initialized
Trained on: self.X_train, and a OH encoding of self.y_train
Trains with batch_size batches for num_epochs epochs
Debugging info is written to debug.txt (can add params to have more places to write out
to)
'''
# Starting an interactive session and initializing the parameters
#sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# replace it with the one-hot encoded one --- should I replace?
y_trainOH = sess.run(self.y_train_OHEnc)[:, 0, :]
y_valOH = sess.run(self.y_val_OHEnc)[:, 0, :]
# lists to record accuracy at several points during training
train_acc_list = []
val_acc_list = []
train_acc_on_batch_list = []
# lists to record the error at several points during training
train_err_list = []
val_err_list = []
train_err_on_batch_list = []
# track which epochs you record data during
epoch_numbers = []
# record the start time
t_start = time.time()
for epoch in range(num_epochs):
epochStart = time.time()
# train by systematically pulling batches of batch_size from
# the training set and taking a training step on each batch
for i in range(0, self.num_training_vec, batch_size):
batch_end_point = min(i + batch_size, self.num_training_vec)
train_batch_data = self.X_train[i : batch_end_point]
train_batch_label = y_trainOH[i : batch_end_point]
self.train_step.run(feed_dict={self.x: train_batch_data, self.y_: train_batch_label})
epochEnd = time.time()
# print and record data now that we've trained on our full training set
if (epoch + 1) % print_freq == 0:
# timing for the measurements of cost and accuracy
evaluationStart = time.time()
# compute training (on the most recent batch and the full data set)
# and validation cost and accuracy, then print them and add them to the list
# we start with accuracy:
train_acc = self.evalByBatch(self.accuracy, X_train, y_trainOH, 5000)
train_acc_list.append(train_acc)
val_acc = self.evalByBatch(self.accuracy, X_val, y_valOH, 5000)
val_acc_list.append(val_acc)
# Now we compute the error on each set:
train_err = self.evalByBatch(self.cross_entropy, X_train, y_trainOH, 5000)
train_err_list.append(train_err)
val_err = self.evalByBatch(self.cross_entropy, X_val, y_valOH, 5000)
val_err_list.append(val_err)
# keep track of which epochs we have data for
epoch_numbers += [epoch]
# this marks the end of our evaluation
evaluationEnd = time.time()
# print a summary of our NN at this epoch
print("epoch: %d, time (train, evaluation): (%g, %g), t acc, v acc, t cost, v cost: %.5f, %.5f, %.5f, %.5f"%(epoch+1, epochEnd - epochStart, evaluationEnd - evaluationStart, train_acc, val_acc, train_err, val_err))
# debugging print outs
if self.debug:
# print out step / current value ratio for each parameter in our network
# based on training data from the most recent batch
# to the file with name debug_out
self.debug_WriteGradAndVar(train_batch_data, train_batch_label, epoch, debug_out)
# record the total time spent training the neural network
t_end = time.time()
print('--Time elapsed for training for %g epochs: %g'%(num_epochs, t_end - t_start))
# return the lists of logged data
return [train_acc_list, val_acc_list, train_err_list, val_err_list, epoch_numbers]
def evalByBatch(self, toEval, x, y_, batchSize):
weightedAvg = 0.0
for i in range(0, len(x), batchSize):
batch_end_point = min(i + batchSize, len(x))
batch_data = x[i : batch_end_point]
batch_label = y_[i : batch_end_point]
curAmount = toEval.eval(feed_dict={self.x: batch_data, self.y_: batch_label})
# weight by the length of the batch and keep adding on
weightedAvg = weightedAvg + curAmount * float(batch_end_point - i) / len(x)
return weightedAvg
def debug_WriteGradAndVar(self, xDebug, yDebug, epoch, debug_out):
'''
Helper function that prints the ratio of the training step that would be taken
on input data and labels xDebug and yDebug to the magnitude of each parameter
in the network. This gives us a sense of how much each parameter is changing.
Inputs:
xDebug: input data to calculate the gradient from
yDebug: labels for the input data
epoch: the number of the epoch (to print out to the file)
debug_out: the file to write to - if it doesn't exist it will be created
'''
file_object = open(debug_out, 'a+')
# record which epoch this is
file_object.write("Epoch: %d\n"%(epoch))
# find the current learning rate - this will be used with the gradient to find the step size
curLearningRate = self.optimizer._lr
# print each gradient and the variables they are associated with
# the gradients are stored in tuples, where the first element is a tensor
# that computes the gradient, and the second is the parameter that gradient
# is associated with
for gv in self.grads_and_vars:
curGrads = gv[0].eval(feed_dict={self.x: xDebug, self.y_: yDebug})
curSteps = curGrads * curLearningRate # scale down the graident by the learning rate
curVars = gv[1].eval()
# How much, compared to the magnitude of the weight, are we stepping
stepToVarRatio = np.absolute(np.divide(curSteps, curVars))
# print the name of the variable, then all the step ratios (step amount / current value)
# these values will have been averaged across the training examples
curName = gv[1].name
file_object.write("Variable: " + curName + "\n")
for index, step in np.ndenumerate(stepToVarRatio):
file_object.write(str(index) + ": " + str(step) + "\n")
# print summary statistics for this layer
maxVal = np.amax(stepToVarRatio)
thirdQuartile = np.percentile(stepToVarRatio, 75)
mean = np.mean(stepToVarRatio)
median = np.median(stepToVarRatio)
firstQuartile = np.percentile(stepToVarRatio, 25)
minVal = np.amin(stepToVarRatio)
file_object.write("Statistics: (%g, %g, %g, %g, %g, %g)\n"%(minVal, firstQuartile, median, mean, thirdQuartile, maxVal))
file_object.write("---------------------------------------\n")
# close the file
file_object.close()
def makeTrainingPlots(epochs, paramValues, trainingMetricLists, validationMetricLists, paramName, metricName, titles, filenames):
'''
Plots of the given training and validation metrics versus epoch number. One plot per list
in trainingMetricLists and validationMetricLists. Assume there will be the same number of sublists
in both those parameters. Titles will hold a list of strings that will be used for the titles
of the graphs. The last title will be for the plot with all the validation curves. Filenames is a list of filenames to save your plots to
Input:
epochs: a list of the epochs on which data was taken - assume all of them took
data at the same epoch numbers
paramValues: the values of the param that we were varying (to label the curves in our validation plot)
trainingMetricLists: a list of lists, where each list represents some metric on the progress of training throughout training
validationMetricLists: a list of lists, where each list represents some metric on the progress of training throughout training
paramName: name of the parameter you're varying (e.g. learningRate or kernel height)
metricName: the name of the metric (e.g. accuracy, or cross-entropy error), to be used on the y-axis
titles: titles for the graph (will include info on the params used).
*The last title will be for the validation plot
filename: the filenames to write the graphs to (will include info on the params used)
* the last filename will be for the validation plot
Output:
Write a png file for each list in trainingMetricLists/validationMetricLists with the desired plot
'''
# figure with all the validation curves
validationFig = plt.figure(figsize=(7, 4))
validationPlot = validationFig.add_subplot(111)
# go through each setup and make a plot for each
for i in range(len(trainingMetricLists)):
# pull out the list we're concerned with
trainingMetric = trainingMetricLists[i]
validationMetric = validationMetricLists[i]
# make the figure, add plots, axis lables, a title, and legend
fig = plt.figure(figsize=(7, 4))
myPlot = fig.add_subplot(111)
myPlot.plot(epochs, trainingMetric, '.', label="Training")
myPlot.plot(epochs, validationMetric, '.', label="Validation")
myPlot.set_xlabel("Epoch Number")
myPlot.set_ylabel(metricName)
myPlot.set_title(titles[i])
myPlot.legend(loc="best", frameon=False)
# Write the figure
fig.savefig(filenames[i])
# update the figure with all the validation curves
validationPlot.plot(epochs, validationMetric, '.', label=(paramName + " = " + str(paramValues[i])))
# finish labeling + write the validation plot
validationPlot.set_xlabel("Epoch Number")
validationPlot.set_ylabel(metricName)
validationPlot.set_title(titles[-1])
validationPlot.legend(loc="best", frameon=False)
validationFig.savefig(filenames[-1])
def makeBestResultPlot(paramValues, trainingMetricLists, validationMetricLists, bestFunction, paramName, metricName, title, filename):
'''
Plot the "best" value of the training and validation metric against the param that led to it
Best is assumed to be the largest value of the metric
Input:
trainingMetricLists: a list of lists, where each list represents some metric on the progress of training throughout training
validationMetricLists: a list of lists, where each list represents some metric on the progress of training throughout training
bestFunction: function that takes in a list (of values of the metric) and returns the "best" one. Often min or max will suffice
paramName: name of the parameter you varied in this experiment (e.g. height of kernel)
metricName: name of the metric you're using (e.g. cross-entropy error)
title: the title of the graph (will include info on the params used)
filename: the filename to write the graph to (will include info on the params used)
Output:
Write a png file with the desired plot
Is there a way to call the other one to do this? if didn't assume epoch number then yes - oh well
'''
bestTrainingMetrics = [bestFunction(curList) for curList in trainingMetricLists]
bestValidationMetrics = [bestFunction(curList) for curList in validationMetricLists]
# make the figure, add plots, axis lables, a title, and legend
fig = plt.figure(figsize=(7, 4))
myPlot = fig.add_subplot(111)
myPlot.plot(paramValues, bestTrainingMetrics, '.', label="Training")
myPlot.plot(paramValues, bestValidationMetrics, '.', label="Validation")
myPlot.set_xlabel(paramName)
myPlot.set_ylabel(metricName)
myPlot.set_title(title)
myPlot.legend(loc="best", frameon=False)
# Write the figure
fig.savefig(filename)
def makeEndResultPlot(paramValues, trainingMetricLists, validationMetricLists, paramName, metricName, title, filename):
'''
Plot the final value of the training and validation metric against the param that led to it
Input:
trainingMetricLists: a list of lists, where each list represents some metric on the progress of training throughout training
validationMetricLists: a list of lists, where each list represents some metric on the progress of training throughout training
paramName:
metricName:
title: the title of the graph (will include info on the params used)
filename: the filename to write the graph to (will include info on the params used)
Output:
Write a png file with the desired plot
Is there a way to call the other one to do this? if didn't assume epoch number then yes - oh well
'''
finalTrainingMetrics = [curList[-1] for curList in trainingMetricLists]
finalValidationMetrics = [curList[-1] for curList in validationMetricLists]
# make the figure, add plots, axis lables, a title, and legend
fig = plt.figure(figsize=(7, 4))
myPlot = fig.add_subplot(111)
myPlot.plot(paramValues, finalTrainingMetrics, label="Training")
myPlot.plot(paramValues, finalValidationMetrics, label="Validation")
myPlot.set_xlabel(paramName)
myPlot.set_ylabel(metricName)
myPlot.set_title(title)
myPlot.legend(loc="best", frameon=False)
# Write the figure
fig.savefig(filename)
'''
Our main, with 121x1 convolutional layer.
'''
# read in command line parameters
try:
filterColsString = sys.argv[1]
# map it from a string into a list of ints
filterColsIn = map(int, filterColsString.strip('[]').split(','))
# read in k1 as well
k1sString = sys.argv[2]
k1sIn = map(int, k1sString.strip('[]').split(','))
# read in the learning rates
learningRatesString = sys.argv[3]
learningRatesIn = map(float, learningRatesString.strip('[]').split(','))
# read in the number of epochs
numEpochs = int(sys.argv[4])
finalPlotName = sys.argv[5]
except Exception, e:
print('-- {}'.format(e))
# filepath to the data you want to laod
filepath = '/pylon2/ci560sp/cstrong/exp3/exp3_taylorswift_d15_1s_C1C8.mat'
# define the configurations we're going to be looking at
# in this exp: just change the number of rows in a vertical kernel
filterCols = filterColsIn
filterRows = [1] * len(filterColsIn)
k1s = k1sIn
learningRates = learningRatesIn
# set training parameters
batchSize = 1000
print_freq = 1
# make lists to store data
train_acc_lists = []
val_acc_lists = []
train_err_lists = []
val_err_lists = []
epoch_number_lists = []
# load data
[X_train, y_train, X_val, y_val] = loadData(filepath)
# loop through the setups and make a model each time
for i in range(len(filterRows)):
# create the model - this will create the TF graph as well as load the data
m = Model(169, X_train, y_train, X_val, y_val, filterRows[i], filterCols[i], k1s[i], learningRates[i], False)
# actually train the model (on the data it already loaded)
[train_acc_list, val_acc_list, train_err_list, val_err_list, epoch_numbers] = m.train(1000, numEpochs, print_freq)
# store the new data
train_acc_lists.append(train_acc_list)
val_acc_lists.append(val_acc_list)
train_err_lists.append(train_err_list)
val_err_lists.append(val_err_list)
epoch_number_lists.append(epoch_numbers)
del m # clear out the model to avoid huge buildup of memory
# print what you have so far in case it crashes
print("So far after %g models we have:"%(i+1))
print("Filter Rows: %s"%(filterRows))
print("Filter Cols: %s"%(filterCols))
print("K1s: %s"%(k1s))
print("Learning Rates: %s"%(learningRates))
print("Train acc list: %s"%(str(train_acc_lists)))
print("Val acc list: %s"%(str(val_acc_lists)))
print("Train err list: %s"%(str(train_err_lists)))
print("Val err list: %s"%(str(val_err_lists)))
print("Epoch number lists: %s"%(str(epoch_number_lists)))
# printing
print("Filter Rows: %s"%(filterRows))
print("Filter Cols: %s"%(filterCols))
print("K1s: %s"%(k1s))
print("Learning Rates: %s"%(learningRates))
print("Train acc list: %s"%(str(train_acc_lists)))
print("Val acc list: %s"%(str(val_acc_lists)))
print("Train err list: %s"%(str(train_err_lists)))
print("Val err list: %s"%(str(val_err_lists)))
print("Epoch number lists: %s"%(str(epoch_number_lists)))
# plotting
trainingPlotTitles = ['Single Layer CNN with %gx%g kernels, k1=%g and LR=%g'%(filterRows[i], filterCols[i], k1s[i], learningRates[i]) for i in range(len(filterRows))]
trainingPlotTitles.append('Exp 3c_4, Validation Cross-Entropy Cost vs. Epoch')
trainingPlotFiles = ['exp3c_4_training_%gx%g_k1=%g_LR=%f_%gEpochs.png'%(filterRows[i], filterCols[i], k1s[i], learningRates[i], numEpochs) for i in range(len(filterRows))]
trainingPlotFiles.append('exp3c_4_validationCurves_%gEpochs'%(numEpochs))
makeTrainingPlots(epoch_number_lists[0], k1s, train_err_lists, val_err_lists, "k1", "Cross Entropy Cost", trainingPlotTitles, trainingPlotFiles)
makeBestResultPlot(k1s, train_err_lists, val_err_lists, min, "k1", "Cross Entropy Cost", 'Cost vs. k1', finalPlotName)
|
unknown
|
codeparrot/codeparrot-clean
| ||
import theano
import theano.tensor as T
from lasagne import init
from lasagne.layers import Conv2DLayer, TransposedConv2DLayer, ConcatLayer, SliceLayer, DropoutLayer
from base import Model
from ..lasagne_extensions.layers import (SampleLayer, MultinomialLogDensityLayer,
GaussianLogDensityLayer, StandardNormalLogDensityLayer, BernoulliLogDensityLayer,
InputLayer, DenseLayer, DimshuffleLayer, ElemwiseSumLayer, ReshapeLayer,
NonlinearityLayer, BatchNormLayer, get_all_params, get_output)
from ..lasagne_extensions.objectives import categorical_crossentropy, categorical_accuracy
from ..lasagne_extensions.nonlinearities import rectify, softplus, sigmoid, softmax
from ..lasagne_extensions.updates import total_norm_constraint
from ..lasagne_extensions.updates import adam
from parmesan.distributions import log_normal
from theano.tensor.shared_randomstreams import RandomStreams
import numpy as np
class ConvSDGMSSL(Model):
"""
The :class:'SDGMSSL' class represents the implementation of the model described in the
Auxiliary Generative Models article on Arxiv.org.
"""
def __init__(self, input_size, n_a, n_z, n_y, qa_hid, qz_hid, qy_hid, px_hid, pa_hid, nonlinearity=rectify,
n_mi_features=0, dropout_prob=0.0, px_nonlinearity=None, x_dist='bernoulli', batchnorm=False, seed=1234,
conv_output_size=512):
super(ConvSDGMSSL, self).__init__(input_size**2, qz_hid + px_hid, n_a + n_z, nonlinearity)
self.x_dist = x_dist
self.n_y = n_y
self.input_size = input_size
self.n_mi_features = n_mi_features
self.n_a = n_a
self.n_z = n_z
self.batchnorm = batchnorm
self._srng = RandomStreams(seed)
# Decide Glorot initializaiton of weights.
init_w = 1e-3
hid_w = ""
if nonlinearity == rectify or nonlinearity == softplus:
hid_w = "relu"
# Define symbolic variables for theano functions.
self.sym_beta = T.scalar('beta') # scaling constant beta
self.sym_x_l = T.matrix('x') # labeled inputs
self.sym_t_l = T.matrix('t') # labeled targets
self.sym_x_u = T.matrix('x') # unlabeled inputs
self.sym_bs_l = T.iscalar('bs_l') # number of labeled data
self.sym_samples = T.iscalar('samples') # MC samples
self.sym_z = T.matrix('z') # latent variable z
self.sym_a = T.matrix('a') # auxiliary variable a
# Assist methods for collecting the layers
def dense_layer(layer_in, n, dist_w=init.GlorotNormal, dist_b=init.Normal):
dense = DenseLayer(layer_in, n, dist_w(hid_w), dist_b(init_w), None)
if batchnorm:
dense = BatchNormLayer(dense)
if dropout_prob != 0.0:
dense = DropoutLayer(dense, dropout_prob)
return NonlinearityLayer(dense, self.transf)
def stochastic_layer(layer_in, n, samples, nonlin=None):
mu = DenseLayer(layer_in, n, init.Normal(init_w), init.Normal(init_w), nonlin)
logvar = DenseLayer(layer_in, n, init.Normal(init_w), init.Normal(init_w), nonlin)
return SampleLayer(mu, logvar, eq_samples=samples, iw_samples=1), mu, logvar
# Return the number of elements in a tensor, ignoring the first (batch size)
# axis
def num_elems(tensor):
try:
num_elems = 1
for val in tensor.output_shape[1:]:
num_elems *= val
return num_elems
except:
return -2
#
# Functions that define the convolutional and deconvolutional sections of the
# networks (they are opposites of one another)
#
def conv_net(input_layer):
if self.n_mi_features != 0:
conv_input = SliceLayer(input_layer, indices=slice(0,input_layer.shape[1] - self.n_mi_features))
mi_input = SliceLayer(input_layer, indices=slice(input_layer.shape[1]-self.n_mi_features, None))
else:
conv_input = input_layer
mi_input = None
conv_input = ReshapeLayer(conv_input, (-1, 1, self.input_size, self.input_size))
conv_layer_output_shapes = []
output = Conv2DLayer(conv_input, 64, 5, stride=2, pad='same')
conv_layer_output_shapes.append(output.output_shape[2])
output = Conv2DLayer(output, 128, 5, stride=2, pad='same')
conv_layer_output_shapes.append(output.output_shape[2])
output = ReshapeLayer(output, (-1, num_elems(output)))
if mi_input is not None:
output = ConcatLayer([output, mi_input], axis=1)
output = BatchNormLayer(DenseLayer(output, conv_output_size))
return output, conv_layer_output_shapes
def deconv_net(input_layer, conv_layer_output_shapes):
output = BatchNormLayer(DenseLayer(input_layer, 128*7*7 + self.n_mi_features))
if self.n_mi_features != 0:
deconv_input = SliceLayer(output, indices=slice(0,128*7*7))
mi_features = SliceLayer(output, indices=slice(128*7*7, 128*7*7 + self.n_mi_features))
else:
deconv_input = output
mi_features = None
output = ReshapeLayer(deconv_input, (-1, 128, 7, 7))
output = TransposedConv2DLayer(output, 64, 5, stride=2, crop='same', output_size=conv_layer_output_shapes[0])
output = TransposedConv2DLayer(output, 1, 5, stride=2, crop='same', output_size=self.input_size, nonlinearity=sigmoid)
output = ReshapeLayer(output, (-1, self.input_size**2))
if mi_features is not None:
output = ConcatLayer([output, mi_features], axis=1)
return output
# Input layers
l_x_in = InputLayer((None, self.input_size**2 + self.n_mi_features))
l_y_in = InputLayer((None, n_y))
# Reshape x to be square 2d array so that can keep using previous implementation of the
# integration over y (see build_model)
############################################################################
# Auxiliary q(a|x) #
############################################################################
# Two convolutional layers. Can add batch norm or change nonlinearity to lrelu
l_qa_x, conv_layer_output_shapes = conv_net(l_x_in)
# Add mutual information features
if len(qa_hid) > 1:
for hid in qy_hid[1:]:
l_qy_xa = dense_layer(l_qa_x, hid)
l_qa_x, l_qa_x_mu, l_qa_x_logvar = stochastic_layer(l_qa_x, n_a, self.sym_samples)
############################################################################
############################################################################
# Classifier q(y|a,x) #
############################################################################
# Dense layers for input a
l_qa_to_qy = dense_layer(l_qa_x, conv_output_size)
l_qa_to_qy = ReshapeLayer(l_qa_to_qy, (-1, self.sym_samples, 1, conv_output_size))
# Convolutional layers for input x
l_x_to_qy, _ = conv_net(l_x_in)
l_x_to_qy = DimshuffleLayer(l_x_to_qy, (0, 'x', 'x', 1))
# Combine layers from x and a
l_qy_xa = ReshapeLayer(ElemwiseSumLayer([l_qa_to_qy, l_x_to_qy]), (-1, conv_output_size))
if batchnorm:
l_qy_xa = BatchNormLayer(l_qy_xa)
if len(qy_hid) > 1:
for hid in qy_hid[1:]:
l_qy_xa = dense_layer(l_qy_xa, hid)
l_qy_xa = DenseLayer(l_qy_xa, n_y, init.GlorotNormal(), init.Normal(init_w), softmax)
#############################################################################
############################################################################
# Recognition q(z|x,a,y) #
############################################################################
# Dense layers for a
l_qa_to_qz = DenseLayer(l_qa_x, conv_output_size, init.GlorotNormal(hid_w), init.Normal(init_w), None)
l_qa_to_qz = ReshapeLayer(l_qa_to_qz, (-1, self.sym_samples, 1, conv_output_size))
# Convolutional layers for x
l_x_to_qz, _ = conv_net(l_x_in)
l_x_to_qz = DimshuffleLayer(l_x_to_qz, (0, 'x', 'x', 1))
# Dense layers for y
l_y_to_qz = DenseLayer(l_y_in, conv_output_size, init.GlorotNormal(hid_w), init.Normal(init_w), None)
l_y_to_qz = DimshuffleLayer(l_y_to_qz, (0, 'x', 'x', 1))
# Combine layers from a, x, and y
l_qz_axy = ReshapeLayer(ElemwiseSumLayer([l_qa_to_qz, l_x_to_qz, l_y_to_qz]), (-1, conv_output_size))
if batchnorm:
l_qz_axy = BatchNormLayer(l_qz_axy)
if len(qz_hid) > 1:
for hid in pa_hid[1:]:
l_qz_axy = dense_layer(l_qz_axy, hid)
l_qz_axy, l_qz_axy_mu, l_qz_axy_logvar = stochastic_layer(l_qz_axy, n_z, 1)
############################################################################
############################################################################
# Generative p(a|z,y) #
############################################################################
l_y_to_pa = DenseLayer(l_y_in, pa_hid[0], init.GlorotNormal(hid_w), init.Normal(init_w), None)
l_y_to_pa = DimshuffleLayer(l_y_to_pa, (0, 'x', 'x', 1))
l_qz_to_pa = DenseLayer(l_qz_axy, pa_hid[0], init.GlorotNormal(hid_w), init.Normal(init_w), None)
l_qz_to_pa = ReshapeLayer(l_qz_to_pa, (-1, self.sym_samples, 1, pa_hid[0]))
l_pa_zy = ReshapeLayer(ElemwiseSumLayer([l_qz_to_pa, l_y_to_pa]), [-1, pa_hid[0]])
if batchnorm:
l_pa_zy = BatchNormLayer(l_pa_zy)
l_pa_zy = NonlinearityLayer(l_pa_zy, self.transf)
if len(pa_hid) > 1:
for hid in pa_hid[1:]:
l_pa_zy = dense_layer(l_pa_zy, hid)
l_pa_zy, l_pa_zy_mu, l_pa_zy_logvar = stochastic_layer(l_pa_zy, n_a, 1)
############################################################################
############################################################################
# Generative p(x|a,z,y) #
############################################################################
# Pass a,y,z through dense layers
l_qa_to_px = DenseLayer(l_qa_x, conv_output_size, init.GlorotNormal(hid_w), init.Normal(init_w), None)
l_qa_to_px = ReshapeLayer(l_qa_to_px, (-1, self.sym_samples, 1, conv_output_size))
l_y_to_px = DenseLayer(l_y_in, conv_output_size, init.GlorotNormal(hid_w), init.Normal(init_w), None)
l_y_to_px = DimshuffleLayer(l_y_to_px, (0, 'x', 'x', 1))
l_qz_to_px = DenseLayer(l_qz_axy, conv_output_size, init.GlorotNormal(hid_w), init.Normal(init_w), None)
l_qz_to_px = ReshapeLayer(l_qz_to_px, (-1, self.sym_samples, 1, conv_output_size))
# Combine the results
l_px_azy = ReshapeLayer(ElemwiseSumLayer([l_qa_to_px, l_qz_to_px, l_y_to_px]), [-1, conv_output_size])
#if batchnorm:
# l_px_azy = BatchNormLayer(l_px_azy)
l_px_azy = NonlinearityLayer(l_px_azy, self.transf)
# Generate x using transposed convolutional layers
l_px_azy = deconv_net(l_px_azy, conv_layer_output_shapes)
l_px_azy = ReshapeLayer(l_px_azy, (-1, self.input_size**2 + self.n_mi_features))
if x_dist == 'bernoulli':
l_px_azy = DenseLayer(l_px_azy, self.input_size**2 + self.n_mi_features, init.GlorotNormal(), init.Normal(init_w), sigmoid)
elif x_dist == 'multinomial':
l_px_azy = DenseLayer(l_px_azy, self.input_size**2 + self.n_mi_features, init.GlorotNormal(), init.Normal(init_w), softmax)
elif x_dist == 'gaussian':
l_px_azy, l_px_zy_mu, l_px_zy_logvar = stochastic_layer(l_px_azy, self.input_size**2 + self.n_mi_features, 1, px_nonlinearity)
############################################################################
# Reshape all the model layers to have the same size
self.l_x_in = l_x_in
self.l_y_in = l_y_in
self.l_a_in = l_qa_x
# Output of the auxiliary network q(a|x)
self.l_qa = ReshapeLayer(l_qa_x, (-1, self.sym_samples, 1, n_a))
self.l_qa_mu = DimshuffleLayer(l_qa_x_mu, (0, 'x', 'x', 1))
self.l_qa_logvar = DimshuffleLayer(l_qa_x_logvar, (0, 'x', 'x', 1))
# Output of the recognition network q(z|x,a,y)
self.l_qz = ReshapeLayer(l_qz_axy, (-1, self.sym_samples, 1, n_z))
self.l_qz_mu = ReshapeLayer(l_qz_axy_mu, (-1, self.sym_samples, 1, n_z))
self.l_qz_logvar = ReshapeLayer(l_qz_axy_logvar, (-1, self.sym_samples, 1, n_z))
# Output of the classifier network q(y|a,x)
self.l_qy = ReshapeLayer(l_qy_xa, (-1, self.sym_samples, 1, n_y))
# Output of the generative network p(a|z,y)
self.l_pa = ReshapeLayer(l_pa_zy, (-1, self.sym_samples, 1, n_a))
self.l_pa_mu = ReshapeLayer(l_pa_zy_mu, (-1, self.sym_samples, 1, n_a))
self.l_pa_logvar = ReshapeLayer(l_pa_zy_logvar, (-1, self.sym_samples, 1, n_a))
# Output of the generative network p(x|a,z,y)
self.l_px = ReshapeLayer(l_px_azy, (-1, self.sym_samples, 1, self.input_size**2 + self.n_mi_features))
self.l_px_mu = ReshapeLayer(l_px_zy_mu, (-1, self.sym_samples, 1, self.input_size**2 + self.n_mi_features)) if x_dist == "gaussian" else None
self.l_px_logvar = ReshapeLayer(l_px_zy_logvar,
(-1, self.sym_samples, 1, self.input_size**2 + self.n_mi_features)) if x_dist == "gaussian" else None
# Predefined functions
# Classifier
inputs = [self.sym_x_l, self.sym_samples]
outputs = get_output(self.l_qy, self.sym_x_l, deterministic=True).mean(axis=(1, 2))
self.f_qy = theano.function(inputs, outputs)
# Auxiliary
inputs = [self.sym_x_l, self.sym_samples]
outputs = get_output(self.l_qa, self.sym_x_l, deterministic=True).mean(axis=(1, 2))
self.f_qa = theano.function(inputs, outputs)
#
inputs = {l_qz_axy: self.sym_z, l_y_in: self.sym_t_l}
outputs = get_output(self.l_pa, inputs, deterministic=True)
self.f_pa = theano.function([self.sym_z, self.sym_t_l, self.sym_samples], outputs)
inputs = {l_qa_x: self.sym_a, l_qz_axy: self.sym_z, l_y_in: self.sym_t_l}
outputs = get_output(self.l_px, inputs, deterministic=True)
self.f_px = theano.function([self.sym_a, self.sym_z, self.sym_t_l, self.sym_samples], outputs)
# Define model parameters
self.model_params = get_all_params([self.l_qy, self.l_pa, self.l_px])
self.trainable_model_params = get_all_params([self.l_qy, self.l_pa, self.l_px], trainable=True)
def build_model(self, train_set_unlabeled, train_set_labeled, test_set, validation_set=None):
"""
Build the auxiliary deep generative model from the initialized hyperparameters.
Define the lower bound term and compile it into a training function.
:param train_set_unlabeled: Unlabeled train set containing variables x, t.
:param train_set_labeled: Unlabeled train set containing variables x, t.
:param test_set: Test set containing variables x, t.
:param validation_set: Validation set containing variables x, t.
:return: train, test, validation function and dicts of arguments.
"""
super(ConvSDGMSSL, self).build_model(train_set_unlabeled, test_set, validation_set)
sh_train_x_l = theano.shared(np.asarray(train_set_labeled[0], dtype=theano.config.floatX), borrow=True)
sh_train_t_l = theano.shared(np.asarray(train_set_labeled[1], dtype=theano.config.floatX), borrow=True)
n = self.sh_train_x.shape[0].astype(theano.config.floatX) # no. of data points
n_l = sh_train_x_l.shape[0].astype(theano.config.floatX) # no. of labeled data points
# Define the layers for the density estimation used in the lower bound.
l_log_qa = GaussianLogDensityLayer(self.l_qa, self.l_qa_mu, self.l_qa_logvar)
l_log_qz = GaussianLogDensityLayer(self.l_qz, self.l_qz_mu, self.l_qz_logvar)
l_log_qy = MultinomialLogDensityLayer(self.l_qy, self.l_y_in, eps=1e-8)
l_log_pz = StandardNormalLogDensityLayer(self.l_qz)
l_log_pa = GaussianLogDensityLayer(self.l_qa, self.l_pa_mu, self.l_pa_logvar)
if self.x_dist == 'bernoulli':
l_log_px = BernoulliLogDensityLayer(self.l_px, self.l_x_in)
elif self.x_dist == 'multinomial':
l_log_px = MultinomialLogDensityLayer(self.l_px, setlf.l_x_in)
elif self.x_dist == 'gaussian':
l_log_px = GaussianLogDensityLayer(self.l_x_in, self.l_px_mu, self.l_px_logvar)
def lower_bound(log_pa, log_qa, log_pz, log_qz, log_py, log_px):
lb = log_px + log_py + log_pz + log_pa - log_qa - log_qz
return lb
# Lower bound for labeled data
out_layers = [l_log_pa, l_log_pz, l_log_qa, l_log_qz, l_log_px, l_log_qy]
inputs = {self.l_x_in: self.sym_x_l, self.l_y_in: self.sym_t_l}
out = get_output(out_layers, inputs, batch_norm_update_averages=False, batch_norm_use_averages=False)
log_pa_l, log_pz_l, log_qa_x_l, log_qz_axy_l, log_px_zy_l, log_qy_ax_l = out
# Prior p(y) expecting that all classes are evenly distributed
py_l = softmax(T.zeros((self.sym_x_l.shape[0], self.n_y)))
log_py_l = -categorical_crossentropy(py_l, self.sym_t_l).reshape((-1, 1)).dimshuffle((0, 'x', 'x', 1))
lb_l = lower_bound(log_pa_l, log_qa_x_l, log_pz_l, log_qz_axy_l, log_py_l, log_px_zy_l)
lb_l = lb_l.mean(axis=(1, 2)) # Mean over the sampling dimensions
log_qy_ax_l *= (self.sym_beta * (n / n_l)) # Scale the supervised cross entropy with the alpha constant
lb_l -= log_qy_ax_l.mean(axis=(1, 2)) # Collect the lower bound term and mean over sampling dimensions
# Lower bound for unlabeled data
bs_u = self.sym_x_u.shape[0]
# For the integrating out approach, we repeat the input matrix x, and construct a target (bs * n_y) x n_y
# Example of input and target matrix for a 3 class problem and batch_size=2. 2D tensors of the form
# x_repeat t_repeat
# [[x[0,0], x[0,1], ..., x[0,n_x]] [[1, 0, 0]
# [x[1,0], x[1,1], ..., x[1,n_x]] [1, 0, 0]
# [x[0,0], x[0,1], ..., x[0,n_x]] [0, 1, 0]
# [x[1,0], x[1,1], ..., x[1,n_x]] [0, 1, 0]
# [x[0,0], x[0,1], ..., x[0,n_x]] [0, 0, 1]
# [x[1,0], x[1,1], ..., x[1,n_x]]] [0, 0, 1]]
t_eye = T.eye(self.n_y, k=0)
t_u = t_eye.reshape((self.n_y, 1, self.n_y)).repeat(bs_u, axis=1).reshape((-1, self.n_y))
x_u = self.sym_x_u.reshape((1, bs_u, self.input_size**2 + self.n_mi_features)).repeat(self.n_y, axis=0).reshape((-1, self.input_size**2 + self.n_mi_features))
# Since the expectation of var a is outside the integration we calculate E_q(a|x) first
a_x_u = get_output(self.l_qa, self.sym_x_u, batch_norm_update_averages=True, batch_norm_use_averages=False)
a_x_u_rep = a_x_u.reshape((1, bs_u * self.sym_samples, self.n_a)).repeat(self.n_y, axis=0).reshape(
(-1, self.n_a))
out_layers = [l_log_pa, l_log_pz, l_log_qa, l_log_qz, l_log_px]
inputs = {self.l_x_in: x_u, self.l_y_in: t_u, self.l_a_in: a_x_u_rep}
out = get_output(out_layers, inputs, batch_norm_update_averages=False, batch_norm_use_averages=False)
log_pa_u, log_pz_u, log_qa_x_u, log_qz_axy_u, log_px_zy_u = out
################################################################
################################################################
# Prior p(y) expecting that all classes are evenly distributed #
################################################################
################## is this appropriate? ##################
################################################################
################################################################
py_u = softmax(T.zeros((bs_u * self.n_y, self.n_y)))
log_py_u = -categorical_crossentropy(py_u, t_u).reshape((-1, 1)).dimshuffle((0, 'x', 'x', 1))
lb_u = lower_bound(log_pa_u, log_qa_x_u, log_pz_u, log_qz_axy_u, log_py_u, log_px_zy_u)
lb_u = lb_u.reshape((self.n_y, 1, 1, bs_u)).transpose(3, 1, 2, 0).mean(axis=(1, 2))
inputs = {self.l_x_in: self.sym_x_u, self.l_a_in: a_x_u.reshape((-1, self.n_a))}
y_u = get_output(self.l_qy, inputs, batch_norm_update_averages=True, batch_norm_use_averages=False).mean(
axis=(1, 2))
y_u += 1e-8 # Ensure that we get no NANs when calculating the entropy
y_u /= T.sum(y_u, axis=1, keepdims=True)
lb_u = (y_u * (lb_u - T.log(y_u))).sum(axis=1)
if self.batchnorm:
# TODO: implement the BN layer correctly.
inputs = {self.l_x_in: self.sym_x_u, self.l_y_in: y_u, self.l_a_in: a_x_u}
get_output(out_layers, inputs, weighting=None, batch_norm_update_averages=True,
batch_norm_use_averages=False)
# Regularizing with weight priors p(theta|N(0,1)), collecting and clipping gradients
weight_priors = 0.0
for p in self.trainable_model_params:
if 'W' not in str(p):
continue
weight_priors += log_normal(p, 0, 1).sum()
# Collect the lower bound and scale it with the weight priors.
elbo = ((lb_l.mean() + lb_u.mean()) * n + weight_priors) / -n
lb_labeled = -lb_l.mean()
lb_unlabeled = -lb_u.mean()
grads_collect = T.grad(elbo, self.trainable_model_params)
params_collect = self.trainable_model_params
sym_beta1 = T.scalar('beta1')
sym_beta2 = T.scalar('beta2')
clip_grad, max_norm = 1, 5
mgrads = total_norm_constraint(grads_collect, max_norm=max_norm)
mgrads = [T.clip(g, -clip_grad, clip_grad) for g in mgrads]
updates = adam(mgrads, params_collect, self.sym_lr, sym_beta1, sym_beta2)
# Training function
indices = self._srng.choice(size=[self.sym_bs_l], a=sh_train_x_l.shape[0], replace=False)
x_batch_l = sh_train_x_l[indices] # Change these to symbolic variables and generate them in training loop
t_batch_l = sh_train_t_l[indices]
x_batch_u = self.sh_train_x[self.batch_slice]
if self.x_dist == 'bernoulli': # Sample bernoulli input.
x_batch_u = self._srng.binomial(size=x_batch_u.shape, n=1, p=x_batch_u, dtype=theano.config.floatX)
x_batch_l = self._srng.binomial(size=x_batch_l.shape, n=1, p=x_batch_l, dtype=theano.config.floatX)
givens = {self.sym_x_l: x_batch_l,
self.sym_x_u: x_batch_u,
self.sym_t_l: t_batch_l}
inputs = [self.sym_index, self.sym_batchsize, self.sym_bs_l, self.sym_beta,
self.sym_lr, sym_beta1, sym_beta2, self.sym_samples]
outputs = [elbo, lb_labeled, lb_unlabeled]
f_train = theano.function(inputs=inputs, outputs=outputs, givens=givens, updates=updates)
# Default training args. Note that these can be changed during or prior to training.
self.train_args['inputs']['batchsize_unlabeled'] = 100
self.train_args['inputs']['batchsize_labeled'] = 100
self.train_args['inputs']['beta'] = 0.1
self.train_args['inputs']['learningrate'] = 3e-4
self.train_args['inputs']['beta1'] = 0.9
self.train_args['inputs']['beta2'] = 0.999
self.train_args['inputs']['samples'] = 1
self.train_args['outputs']['lb'] = '%0.4f'
self.train_args['outputs']['lb-labeled'] = '%0.4f'
self.train_args['outputs']['lb-unlabeled'] = '%0.4f'
# Validation and test function
y = get_output(self.l_qy, self.sym_x_l, deterministic=True).mean(axis=(1, 2))
class_err = (1. - categorical_accuracy(y, self.sym_t_l).mean()) * 100
givens = {self.sym_x_l: self.sh_test_x,
self.sym_t_l: self.sh_test_t}
f_test = theano.function(inputs=[self.sym_samples], outputs=[class_err], givens=givens)
# Test args. Note that these can be changed during or prior to training.
self.test_args['inputs']['samples'] = 1
self.test_args['outputs']['test'] = '%0.2f%%'
f_validate = None
if validation_set is not None:
givens = {self.sym_x_l: self.sh_valid_x,
self.sym_t_l: self.sh_valid_t}
f_validate = theano.function(inputs=[self.sym_samples], outputs=[class_err], givens=givens)
# Default validation args. Note that these can be changed during or prior to training.
self.validate_args['inputs']['samples'] = 1
self.validate_args['outputs']['validation'] = '%0.2f%%'
return f_train, f_test, f_validate, self.train_args, self.test_args, self.validate_args
def get_output(self, x, samples=1):
return self.f_qy(x, samples)
def model_info(self):
qa_shapes = self.get_model_shape(get_all_params(self.l_qa))
qy_shapes = self.get_model_shape(get_all_params(self.l_qy))[len(qa_shapes) - 1:]
qz_shapes = self.get_model_shape(get_all_params(self.l_qz))[len(qa_shapes) - 1:]
px_shapes = self.get_model_shape(get_all_params(self.l_px))[(len(qz_shapes) - 1) + (len(qa_shapes) - 1):]
pa_shapes = self.get_model_shape(get_all_params(self.l_pa))[(len(qz_shapes) - 1) + (len(qa_shapes) - 1):]
s = ""
s += 'batch norm: %s.\n' % (str(self.batchnorm))
s += 'x distribution: %s.\n' % (str(self.x_dist))
s += 'model q(a|x): %s.\n' % str(qa_shapes)[1:-1]
s += 'model q(z|a,x,y): %s.\n' % str(qz_shapes)[1:-1]
s += 'model q(y|a,x): %s.\n' % str(qy_shapes)[1:-1]
s += 'model p(x|a,z,y): %s.\n' % str(px_shapes)[1:-1]
s += 'model p(a|z,y): %s.' % str(pa_shapes)[1:-1]
return s
|
unknown
|
codeparrot/codeparrot-clean
| ||
import sys
import threading
import pytest
from pydev_tests_python.debugger_unittest import IS_PY36_OR_GREATER, IS_CPYTHON
from pydev_tests_python.debug_constants import TEST_CYTHON
pytestmark = pytest.mark.skipif(not IS_PY36_OR_GREATER or not IS_CPYTHON or not TEST_CYTHON, reason='Requires CPython >= 3.6')
def get_foo_frame():
frame = sys._getframe()
return frame
class CheckClass(object):
def collect_info(self):
from _pydevd_frame_eval import pydevd_frame_evaluator
thread_info = pydevd_frame_evaluator.get_thread_info_py()
self.thread_info = thread_info
@pytest.mark.parametrize('_times', range(2))
def test_thread_info(_times):
obj = CheckClass()
obj.collect_info()
assert obj.thread_info.additional_info is not None
assert not obj.thread_info.is_pydevd_thread
thread_info = obj.thread_info
obj.collect_info()
assert obj.thread_info is thread_info
obj = CheckClass()
t = threading.Thread(target=obj.collect_info)
t.is_pydev_daemon_thread = True
t.start()
t.join()
assert obj.thread_info.additional_info is None
assert obj.thread_info.is_pydevd_thread
def method():
pass
@pytest.fixture
def _custom_global_dbg():
from _pydevd_bundle.pydevd_constants import GlobalDebuggerHolder
from pydevd import PyDB
curr = GlobalDebuggerHolder.global_dbg
PyDB() # Will make itself current
yield
GlobalDebuggerHolder.global_dbg = curr
@pytest.mark.parametrize('_times', range(2))
def test_func_code_info(_times, _custom_global_dbg):
from _pydevd_frame_eval import pydevd_frame_evaluator
# Must be called before get_func_code_info_py to initialize the _code_extra_index.
pydevd_frame_evaluator.get_thread_info_py()
func_info = pydevd_frame_evaluator.get_func_code_info_py(method.__code__)
assert func_info.co_filename is method.__code__.co_filename
func_info2 = pydevd_frame_evaluator.get_func_code_info_py(method.__code__)
assert func_info is func_info2
some_func = eval('lambda:0')
func_info3 = pydevd_frame_evaluator.get_func_code_info_py(some_func.__code__)
del some_func
del func_info3
some_func = eval('lambda:0')
pydevd_frame_evaluator.get_func_code_info_py(some_func.__code__)
func_info = pydevd_frame_evaluator.get_func_code_info_py(some_func.__code__)
assert pydevd_frame_evaluator.get_func_code_info_py(some_func.__code__) is func_info
|
unknown
|
codeparrot/codeparrot-clean
| ||
#!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'}
DOCUMENTATION = '''
---
module: fmgr_secprof_ips
version_added: "2.8"
notes:
- Full Documentation at U(https://ftnt-ansible-docs.readthedocs.io/en/latest/).
author:
- Luke Weighall (@lweighall)
- Andrew Welsh (@Ghilli3)
- Jim Huber (@p4r4n0y1ng)
short_description: Managing IPS security profiles in FortiManager
description:
- Managing IPS security profiles in FortiManager
options:
adom:
description:
- The ADOM the configuration should belong to.
required: false
default: root
mode:
description:
- Sets one of three modes for managing the object.
- Allows use of soft-adds instead of overwriting existing values
choices: ['add', 'set', 'delete', 'update']
required: false
default: add
replacemsg_group:
description:
- Replacement message group.
required: false
name:
description:
- Sensor name.
required: false
extended_log:
description:
- Enable/disable extended logging.
required: false
choices:
- disable
- enable
comment:
description:
- Comment.
required: false
block_malicious_url:
description:
- Enable/disable malicious URL blocking.
required: false
choices:
- disable
- enable
entries:
description:
- EXPERTS ONLY! KNOWLEDGE OF FMGR JSON API IS REQUIRED!
- List of multiple child objects to be added. Expects a list of dictionaries.
- Dictionaries must use FortiManager API parameters, not the ansible ones listed below.
- If submitted, all other prefixed sub-parameters ARE IGNORED.
- This object is MUTUALLY EXCLUSIVE with its options.
- We expect that you know what you are doing with these list parameters, and are leveraging the JSON API Guide.
- WHEN IN DOUBT, USE THE SUB OPTIONS BELOW INSTEAD TO CREATE OBJECTS WITH MULTIPLE TASKS
required: false
entries_action:
description:
- Action taken with traffic in which signatures are detected.
required: false
choices:
- pass
- block
- reset
- default
entries_application:
description:
- Applications to be protected. set application ? lists available applications. all includes
all applications. other includes all unlisted applications.
required: false
entries_location:
description:
- Protect client or server traffic.
required: false
entries_log:
description:
- Enable/disable logging of signatures included in filter.
required: false
choices:
- disable
- enable
entries_log_attack_context:
description:
- Enable/disable logging of attack context| URL buffer, header buffer, body buffer, packet buffer.
required: false
choices:
- disable
- enable
entries_log_packet:
description:
- Enable/disable packet logging. Enable to save the packet that triggers the filter. You can
download the packets in pcap format for diagnostic use.
required: false
choices:
- disable
- enable
entries_os:
description:
- Operating systems to be protected. all includes all operating systems. other includes all
unlisted operating systems.
required: false
entries_protocol:
description:
- Protocols to be examined. set protocol ? lists available protocols. all includes all protocols.
other includes all unlisted protocols.
required: false
entries_quarantine:
description:
- Quarantine method.
required: false
choices:
- none
- attacker
entries_quarantine_expiry:
description:
- Duration of quarantine.
required: false
entries_quarantine_log:
description:
- Enable/disable quarantine logging.
required: false
choices:
- disable
- enable
entries_rate_count:
description:
- Count of the rate.
required: false
entries_rate_duration:
description:
- Duration (sec) of the rate.
required: false
entries_rate_mode:
description:
- Rate limit mode.
required: false
choices:
- periodical
- continuous
entries_rate_track:
description:
- Track the packet protocol field.
required: false
choices:
- none
- src-ip
- dest-ip
- dhcp-client-mac
- dns-domain
entries_rule:
description:
- Identifies the predefined or custom IPS signatures to add to the sensor.
required: false
entries_severity:
description:
- Relative severity of the signature, from info to critical. Log messages generated by the signature
include the severity.
required: false
entries_status:
description:
- Status of the signatures included in filter. default enables the filter and only use filters
with default status of enable. Filters with default status of disable will not be used.
required: false
choices:
- disable
- enable
- default
entries_exempt_ip_dst_ip:
description:
- Destination IP address and netmask.
required: false
entries_exempt_ip_src_ip:
description:
- Source IP address and netmask.
required: false
filter:
description:
- EXPERTS ONLY! KNOWLEDGE OF FMGR JSON API IS REQUIRED!
- List of multiple child objects to be added. Expects a list of dictionaries.
- Dictionaries must use FortiManager API parameters, not the ansible ones listed below.
- If submitted, all other prefixed sub-parameters ARE IGNORED.
- This object is MUTUALLY EXCLUSIVE with its options.
- We expect that you know what you are doing with these list parameters, and are leveraging the JSON API Guide.
- WHEN IN DOUBT, USE THE SUB OPTIONS BELOW INSTEAD TO CREATE OBJECTS WITH MULTIPLE TASKS
required: false
filter_action:
description:
- Action of selected rules.
required: false
choices:
- pass
- block
- default
- reset
filter_application:
description:
- Vulnerable application filter.
required: false
filter_location:
description:
- Vulnerability location filter.
required: false
filter_log:
description:
- Enable/disable logging of selected rules.
required: false
choices:
- disable
- enable
filter_log_packet:
description:
- Enable/disable packet logging of selected rules.
required: false
choices:
- disable
- enable
filter_name:
description:
- Filter name.
required: false
filter_os:
description:
- Vulnerable OS filter.
required: false
filter_protocol:
description:
- Vulnerable protocol filter.
required: false
filter_quarantine:
description:
- Quarantine IP or interface.
required: false
choices:
- none
- attacker
filter_quarantine_expiry:
description:
- Duration of quarantine in minute.
required: false
filter_quarantine_log:
description:
- Enable/disable logging of selected quarantine.
required: false
choices:
- disable
- enable
filter_severity:
description:
- Vulnerability severity filter.
required: false
filter_status:
description:
- Selected rules status.
required: false
choices:
- disable
- enable
- default
override:
description:
- EXPERTS ONLY! KNOWLEDGE OF FMGR JSON API IS REQUIRED!
- List of multiple child objects to be added. Expects a list of dictionaries.
- Dictionaries must use FortiManager API parameters, not the ansible ones listed below.
- If submitted, all other prefixed sub-parameters ARE IGNORED.
- This object is MUTUALLY EXCLUSIVE with its options.
- We expect that you know what you are doing with these list parameters, and are leveraging the JSON API Guide.
- WHEN IN DOUBT, USE THE SUB OPTIONS BELOW INSTEAD TO CREATE OBJECTS WITH MULTIPLE TASKS
required: false
override_action:
description:
- Action of override rule.
required: false
choices:
- pass
- block
- reset
override_log:
description:
- Enable/disable logging.
required: false
choices:
- disable
- enable
override_log_packet:
description:
- Enable/disable packet logging.
required: false
choices:
- disable
- enable
override_quarantine:
description:
- Quarantine IP or interface.
required: false
choices:
- none
- attacker
override_quarantine_expiry:
description:
- Duration of quarantine in minute.
required: false
override_quarantine_log:
description:
- Enable/disable logging of selected quarantine.
required: false
choices:
- disable
- enable
override_rule_id:
description:
- Override rule ID.
required: false
override_status:
description:
- Enable/disable status of override rule.
required: false
choices:
- disable
- enable
override_exempt_ip_dst_ip:
description:
- Destination IP address and netmask.
required: false
override_exempt_ip_src_ip:
description:
- Source IP address and netmask.
required: false
'''
EXAMPLES = '''
- name: DELETE Profile
fmgr_secprof_ips:
name: "Ansible_IPS_Profile"
comment: "Created by Ansible Module TEST"
mode: "delete"
- name: CREATE Profile
fmgr_secprof_ips:
name: "Ansible_IPS_Profile"
comment: "Created by Ansible Module TEST"
mode: "set"
block_malicious_url: "enable"
entries: [{severity: "high", action: "block", log-packet: "enable"}, {severity: "medium", action: "pass"}]
'''
RETURN = """
api_result:
description: full API response, includes status code and message
returned: always
type: str
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.connection import Connection
from ansible.module_utils.network.fortimanager.fortimanager import FortiManagerHandler
from ansible.module_utils.network.fortimanager.common import FMGBaseException
from ansible.module_utils.network.fortimanager.common import FMGRCommon
from ansible.module_utils.network.fortimanager.common import DEFAULT_RESULT_OBJ
from ansible.module_utils.network.fortimanager.common import FAIL_SOCKET_MSG
from ansible.module_utils.network.fortimanager.common import prepare_dict
from ansible.module_utils.network.fortimanager.common import scrub_dict
###############
# START METHODS
###############
def fmgr_ips_sensor_modify(fmgr, paramgram):
"""
:param fmgr: The fmgr object instance from fortimanager.py
:type fmgr: class object
:param paramgram: The formatted dictionary of options to process
:type paramgram: dict
:return: The response from the FortiManager
:rtype: dict
"""
mode = paramgram["mode"]
adom = paramgram["adom"]
# INIT A BASIC OBJECTS
response = DEFAULT_RESULT_OBJ
url = ""
datagram = {}
# EVAL THE MODE PARAMETER FOR SET OR ADD
if mode in ['set', 'add', 'update']:
url = '/pm/config/adom/{adom}/obj/ips/sensor'.format(adom=adom)
datagram = scrub_dict(prepare_dict(paramgram))
# EVAL THE MODE PARAMETER FOR DELETE
elif mode == "delete":
# SET THE CORRECT URL FOR DELETE
url = '/pm/config/adom/{adom}/obj/ips/sensor/{name}'.format(
adom=adom, name=paramgram["name"])
datagram = {}
response = fmgr.process_request(url, datagram, paramgram["mode"])
return response
#############
# END METHODS
#############
def main():
argument_spec = dict(
adom=dict(type="str", default="root"),
mode=dict(choices=["add", "set", "delete", "update"],
type="str", default="add"),
replacemsg_group=dict(required=False, type="str"),
name=dict(required=False, type="str"),
extended_log=dict(required=False, type="str",
choices=["disable", "enable"]),
comment=dict(required=False, type="str"),
block_malicious_url=dict(required=False, type="str", choices=[
"disable", "enable"]),
entries=dict(required=False, type="list"),
entries_action=dict(required=False, type="str", choices=[
"pass", "block", "reset", "default"]),
entries_application=dict(required=False, type="str"),
entries_location=dict(required=False, type="str"),
entries_log=dict(required=False, type="str",
choices=["disable", "enable"]),
entries_log_attack_context=dict(
required=False, type="str", choices=["disable", "enable"]),
entries_log_packet=dict(required=False, type="str", choices=[
"disable", "enable"]),
entries_os=dict(required=False, type="str"),
entries_protocol=dict(required=False, type="str"),
entries_quarantine=dict(required=False, type="str", choices=[
"none", "attacker"]),
entries_quarantine_expiry=dict(required=False, type="str"),
entries_quarantine_log=dict(
required=False, type="str", choices=["disable", "enable"]),
entries_rate_count=dict(required=False, type="int"),
entries_rate_duration=dict(required=False, type="int"),
entries_rate_mode=dict(required=False, type="str", choices=[
"periodical", "continuous"]),
entries_rate_track=dict(required=False, type="str",
choices=["none", "src-ip", "dest-ip", "dhcp-client-mac", "dns-domain"]),
entries_rule=dict(required=False, type="str"),
entries_severity=dict(required=False, type="str"),
entries_status=dict(required=False, type="str", choices=[
"disable", "enable", "default"]),
entries_exempt_ip_dst_ip=dict(required=False, type="str"),
entries_exempt_ip_src_ip=dict(required=False, type="str"),
filter=dict(required=False, type="list"),
filter_action=dict(required=False, type="str", choices=[
"pass", "block", "default", "reset"]),
filter_application=dict(required=False, type="str"),
filter_location=dict(required=False, type="str"),
filter_log=dict(required=False, type="str",
choices=["disable", "enable"]),
filter_log_packet=dict(required=False, type="str",
choices=["disable", "enable"]),
filter_name=dict(required=False, type="str"),
filter_os=dict(required=False, type="str"),
filter_protocol=dict(required=False, type="str"),
filter_quarantine=dict(required=False, type="str",
choices=["none", "attacker"]),
filter_quarantine_expiry=dict(required=False, type="int"),
filter_quarantine_log=dict(required=False, type="str", choices=[
"disable", "enable"]),
filter_severity=dict(required=False, type="str"),
filter_status=dict(required=False, type="str", choices=[
"disable", "enable", "default"]),
override=dict(required=False, type="list"),
override_action=dict(required=False, type="str",
choices=["pass", "block", "reset"]),
override_log=dict(required=False, type="str",
choices=["disable", "enable"]),
override_log_packet=dict(required=False, type="str", choices=[
"disable", "enable"]),
override_quarantine=dict(required=False, type="str", choices=[
"none", "attacker"]),
override_quarantine_expiry=dict(required=False, type="int"),
override_quarantine_log=dict(
required=False, type="str", choices=["disable", "enable"]),
override_rule_id=dict(required=False, type="str"),
override_status=dict(required=False, type="str",
choices=["disable", "enable"]),
override_exempt_ip_dst_ip=dict(required=False, type="str"),
override_exempt_ip_src_ip=dict(required=False, type="str"),
)
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False, )
# MODULE PARAMGRAM
paramgram = {
"mode": module.params["mode"],
"adom": module.params["adom"],
"replacemsg-group": module.params["replacemsg_group"],
"name": module.params["name"],
"extended-log": module.params["extended_log"],
"comment": module.params["comment"],
"block-malicious-url": module.params["block_malicious_url"],
"entries": {
"action": module.params["entries_action"],
"application": module.params["entries_application"],
"location": module.params["entries_location"],
"log": module.params["entries_log"],
"log-attack-context": module.params["entries_log_attack_context"],
"log-packet": module.params["entries_log_packet"],
"os": module.params["entries_os"],
"protocol": module.params["entries_protocol"],
"quarantine": module.params["entries_quarantine"],
"quarantine-expiry": module.params["entries_quarantine_expiry"],
"quarantine-log": module.params["entries_quarantine_log"],
"rate-count": module.params["entries_rate_count"],
"rate-duration": module.params["entries_rate_duration"],
"rate-mode": module.params["entries_rate_mode"],
"rate-track": module.params["entries_rate_track"],
"rule": module.params["entries_rule"],
"severity": module.params["entries_severity"],
"status": module.params["entries_status"],
"exempt-ip": {
"dst-ip": module.params["entries_exempt_ip_dst_ip"],
"src-ip": module.params["entries_exempt_ip_src_ip"],
},
},
"filter": {
"action": module.params["filter_action"],
"application": module.params["filter_application"],
"location": module.params["filter_location"],
"log": module.params["filter_log"],
"log-packet": module.params["filter_log_packet"],
"name": module.params["filter_name"],
"os": module.params["filter_os"],
"protocol": module.params["filter_protocol"],
"quarantine": module.params["filter_quarantine"],
"quarantine-expiry": module.params["filter_quarantine_expiry"],
"quarantine-log": module.params["filter_quarantine_log"],
"severity": module.params["filter_severity"],
"status": module.params["filter_status"],
},
"override": {
"action": module.params["override_action"],
"log": module.params["override_log"],
"log-packet": module.params["override_log_packet"],
"quarantine": module.params["override_quarantine"],
"quarantine-expiry": module.params["override_quarantine_expiry"],
"quarantine-log": module.params["override_quarantine_log"],
"rule-id": module.params["override_rule_id"],
"status": module.params["override_status"],
"exempt-ip": {
"dst-ip": module.params["override_exempt_ip_dst_ip"],
"src-ip": module.params["override_exempt_ip_src_ip"],
}
}
}
module.paramgram = paramgram
fmgr = None
if module._socket_path:
connection = Connection(module._socket_path)
fmgr = FortiManagerHandler(connection, module)
fmgr.tools = FMGRCommon()
else:
module.fail_json(**FAIL_SOCKET_MSG)
list_overrides = ['entries', 'filter', 'override']
paramgram = fmgr.tools.paramgram_child_list_override(list_overrides=list_overrides,
paramgram=paramgram, module=module)
results = DEFAULT_RESULT_OBJ
try:
results = fmgr_ips_sensor_modify(fmgr, paramgram)
fmgr.govern_response(module=module, results=results,
ansible_facts=fmgr.construct_ansible_facts(results, module.params, paramgram))
except Exception as err:
raise FMGBaseException(err)
return module.exit_json(**results[1])
if __name__ == "__main__":
main()
|
unknown
|
codeparrot/codeparrot-clean
| ||
# coding=utf-8
#
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import mock
from testtools import matchers
from ironic.common import exception
from ironic import objects
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils
from ironic.tests.unit.objects import utils as obj_utils
class TestPortObject(base.DbTestCase):
def setUp(self):
super(TestPortObject, self).setUp()
self.fake_port = utils.get_test_port()
def test_get_by_id(self):
port_id = self.fake_port['id']
with mock.patch.object(self.dbapi, 'get_port_by_id',
autospec=True) as mock_get_port:
mock_get_port.return_value = self.fake_port
port = objects.Port.get(self.context, port_id)
mock_get_port.assert_called_once_with(port_id)
self.assertEqual(self.context, port._context)
def test_get_by_uuid(self):
uuid = self.fake_port['uuid']
with mock.patch.object(self.dbapi, 'get_port_by_uuid',
autospec=True) as mock_get_port:
mock_get_port.return_value = self.fake_port
port = objects.Port.get(self.context, uuid)
mock_get_port.assert_called_once_with(uuid)
self.assertEqual(self.context, port._context)
def test_get_by_address(self):
address = self.fake_port['address']
with mock.patch.object(self.dbapi, 'get_port_by_address',
autospec=True) as mock_get_port:
mock_get_port.return_value = self.fake_port
port = objects.Port.get(self.context, address)
mock_get_port.assert_called_once_with(address)
self.assertEqual(self.context, port._context)
def test_get_bad_id_and_uuid_and_address(self):
self.assertRaises(exception.InvalidIdentity,
objects.Port.get, self.context, 'not-a-uuid')
def test_save(self):
uuid = self.fake_port['uuid']
address = "b2:54:00:cf:2d:40"
test_time = datetime.datetime(2000, 1, 1, 0, 0)
with mock.patch.object(self.dbapi, 'get_port_by_uuid',
autospec=True) as mock_get_port:
mock_get_port.return_value = self.fake_port
with mock.patch.object(self.dbapi, 'update_port',
autospec=True) as mock_update_port:
mock_update_port.return_value = (
utils.get_test_port(address=address, updated_at=test_time))
p = objects.Port.get_by_uuid(self.context, uuid)
p.address = address
p.save()
mock_get_port.assert_called_once_with(uuid)
mock_update_port.assert_called_once_with(
uuid, {'address': "b2:54:00:cf:2d:40"})
self.assertEqual(self.context, p._context)
res_updated_at = (p.updated_at).replace(tzinfo=None)
self.assertEqual(test_time, res_updated_at)
def test_refresh(self):
uuid = self.fake_port['uuid']
returns = [self.fake_port,
utils.get_test_port(address="c3:54:00:cf:2d:40")]
expected = [mock.call(uuid), mock.call(uuid)]
with mock.patch.object(self.dbapi, 'get_port_by_uuid',
side_effect=returns,
autospec=True) as mock_get_port:
p = objects.Port.get_by_uuid(self.context, uuid)
self.assertEqual("52:54:00:cf:2d:31", p.address)
p.refresh()
self.assertEqual("c3:54:00:cf:2d:40", p.address)
self.assertEqual(expected, mock_get_port.call_args_list)
self.assertEqual(self.context, p._context)
def test_save_after_refresh(self):
# Ensure that it's possible to do object.save() after object.refresh()
address = "b2:54:00:cf:2d:40"
db_node = utils.create_test_node()
db_port = utils.create_test_port(node_id=db_node.id)
p = objects.Port.get_by_uuid(self.context, db_port.uuid)
p_copy = objects.Port.get_by_uuid(self.context, db_port.uuid)
p.address = address
p.save()
p_copy.refresh()
p_copy.address = 'aa:bb:cc:dd:ee:ff'
# Ensure this passes and an exception is not generated
p_copy.save()
def test_list(self):
with mock.patch.object(self.dbapi, 'get_port_list',
autospec=True) as mock_get_list:
mock_get_list.return_value = [self.fake_port]
ports = objects.Port.list(self.context)
self.assertThat(ports, matchers.HasLength(1))
self.assertIsInstance(ports[0], objects.Port)
self.assertEqual(self.context, ports[0]._context)
def test_payload_schemas(self):
"""Assert that the port's Payload SCHEMAs have the expected properties.
A payload's SCHEMA should:
1. Have each of its keys in the payload's fields
2. Have each member of the schema match with a corresponding field
in the Port object
"""
payloads = obj_utils.get_payloads_with_schemas(objects.port)
for payload in payloads:
for schema_key in payload.SCHEMA:
self.assertIn(schema_key, payload.fields,
"for %s, schema key %s is not in fields"
% (payload, schema_key))
port_key = payload.SCHEMA[schema_key][1]
self.assertIn(port_key, objects.Port.fields,
"for %s, schema key %s has invalid port field %s"
% (payload, schema_key, port_key))
|
unknown
|
codeparrot/codeparrot-clean
| ||
# Copyright (C) 2014 Red Hat, Inc., Bryn M. Reeves <bmr@redhat.com>
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
from sos.plugins import Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin
class ActiveMq(Plugin, DebianPlugin):
"""ActiveMQ message broker
"""
plugin_name = 'activemq'
profiles = ('openshift',)
packages = ('activemq', 'activemq-core')
files = ('/var/log/activemq',)
def setup(self):
if self.get_option("all_logs"):
self.add_copy_spec(list(self.files))
else:
self.add_copy_spec([
"/var/log/activemq/activemq.log",
"/var/log/activemq/wrapper.log"
])
def postproc(self):
# activemq.xml contains credentials in this form:
# <authenticationUser ... password="changeme" ... />
self.do_file_sub(
'/etc/activemq/activemq.xml',
r'(\s*password=")[^"]*(".*)',
r"\1******\2"
)
class RedHatActiveMq(ActiveMq, RedHatPlugin):
def setup(self):
super(RedHatActiveMq, self).setup()
self.add_copy_spec([
'/etc/sysconfig/activemq',
'/etc/activemq/activemq.xml'
])
class UbuntuActiveMq(ActiveMq, UbuntuPlugin):
def setup(self):
super(UbuntuActiveMq, self).setup()
self.add_copy_spec([
'/etc/activemq',
'/etc/default/activemq'
])
|
unknown
|
codeparrot/codeparrot-clean
| ||
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the "Elastic License
* 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
* Public License v 1"; you may not use this file except in compliance with, at
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/
package org.elasticsearch.gradle.internal.info;
import org.gradle.api.JavaVersion;
import org.gradle.api.provider.Provider;
import java.io.File;
public class RuntimeJava {
private final Provider<File> javahome;
private final Provider<JavaVersion> javaVersion;
private final boolean explicitlySet;
private final String preReleaseType;
private final Provider<String> vendorDetails;
private final Integer buildNumber;
RuntimeJava(Provider<File> javahome, Provider<JavaVersion> javaVersion, Provider<String> vendorDetails, boolean explicitlySet) {
this(javahome, javaVersion, vendorDetails, explicitlySet, null, null);
}
RuntimeJava(
Provider<File> javahome,
Provider<JavaVersion> javaVersion,
Provider<String> vendorDetails,
boolean explicitlySet,
String preReleaseType,
Integer buildNumber
) {
this.javahome = javahome;
this.javaVersion = javaVersion;
this.vendorDetails = vendorDetails;
this.explicitlySet = explicitlySet;
this.preReleaseType = preReleaseType;
this.buildNumber = buildNumber;
}
public Provider<File> getJavahome() {
return javahome;
}
public boolean isPreRelease() {
return preReleaseType != null;
}
public Provider<JavaVersion> getJavaVersion() {
return javaVersion;
}
public Provider<String> getVendorDetails() {
return vendorDetails;
}
public boolean isExplicitlySet() {
return explicitlySet;
}
public String getPreReleaseType() {
return preReleaseType;
}
public Integer getBuildNumber() {
return buildNumber;
}
}
|
java
|
github
|
https://github.com/elastic/elasticsearch
|
build-tools-internal/src/main/java/org/elasticsearch/gradle/internal/info/RuntimeJava.java
|
# -*- coding: utf-8 -*-
#
# Copyright: (c) 2017, F5 Networks Inc.
# GNU General Public License v3.0 (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import json
import sys
from nose.plugins.skip import SkipTest
if sys.version_info < (2, 7):
raise SkipTest("F5 Ansible modules require Python >= 2.7")
from ansible.module_utils.basic import AnsibleModule
try:
from library.modules.bigip_tunnel import ApiParameters
from library.modules.bigip_tunnel import ModuleParameters
from library.modules.bigip_tunnel import ModuleManager
from library.modules.bigip_tunnel import ArgumentSpec
# In Ansible 2.8, Ansible changed import paths.
from test.units.compat import unittest
from test.units.compat.mock import Mock
from test.units.compat.mock import patch
from test.units.modules.utils import set_module_args
except ImportError:
try:
from ansible.modules.network.f5.bigip_tunnel import ApiParameters
from ansible.modules.network.f5.bigip_tunnel import ModuleParameters
from ansible.modules.network.f5.bigip_tunnel import ModuleManager
from ansible.modules.network.f5.bigip_tunnel import ArgumentSpec
# Ansible 2.8 imports
from units.compat import unittest
from units.compat.mock import Mock
from units.compat.mock import patch
from units.modules.utils import set_module_args
except ImportError:
raise SkipTest("F5 Ansible modules require the f5-sdk Python library")
fixture_path = os.path.join(os.path.dirname(__file__), 'fixtures')
fixture_data = {}
def load_fixture(name):
path = os.path.join(fixture_path, name)
if path in fixture_data:
return fixture_data[path]
with open(path) as f:
data = f.read()
try:
data = json.loads(data)
except Exception:
pass
fixture_data[path] = data
return data
class TestParameters(unittest.TestCase):
def test_module_parameters(self):
args = dict(
name='foo',
profile='ipip',
)
p = ModuleParameters(params=args)
assert p.name == 'foo'
assert p.profile == '/Common/ipip'
def test_api_parameters(self):
args = load_fixture('load_net_tunnel_1.json')
p = ApiParameters(params=args)
assert p.name == 'tunnel1'
@patch('ansible.module_utils.f5_utils.AnsibleF5Client._get_mgmt_root',
return_value=True)
class TestManager(unittest.TestCase):
def setUp(self):
self.spec = ArgumentSpec()
def test_create(self, *args):
set_module_args(dict(
name='foo',
profile='ipip',
local_address='2.2.2.2.',
server='localhost',
password='password',
user='admin'
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.create_on_device = Mock(return_value=True)
mm.exists = Mock(return_value=False)
results = mm.exec_module()
assert results['changed'] is True
|
unknown
|
codeparrot/codeparrot-clean
| ||
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Dumps all pwndbg-specific configuration points.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import argparse
import pwndbg.commands
import pwndbg.config
from pwndbg.color import light_yellow
from pwndbg.color import ljust_colored
from pwndbg.color import strip
from pwndbg.color.message import hint
def print_row(name, value, default, docstring, ljust_optname, ljust_value, empty_space=6):
name = ljust_colored(name, ljust_optname + empty_space)
defval = extend_value_with_default(value, default)
defval = ljust_colored(defval, ljust_value + empty_space)
result = ' '.join((name, defval, docstring))
print(result)
return result
def extend_value_with_default(value, default):
if strip(value) != strip(default):
return '%s (%s)' % (value, default)
return value
def get_config_parameters(scope, filter_pattern):
values = [v for k, v in pwndbg.config.__dict__.items()
if isinstance(v, pwndbg.config.Parameter) and v.scope == scope]
if filter_pattern:
filter_pattern = filter_pattern.lower()
values = [v for v in values if filter_pattern in v.optname.lower() or filter_pattern in v.docstring.lower()]
return values
parser = argparse.ArgumentParser(description='Shows pwndbg-specific config. The list can be filtered.')
parser.add_argument('filter_pattern', type=str, nargs='?', default=None,
help='Filter to apply to config parameters names/descriptions')
@pwndbg.commands.ArgparsedCommand(parser)
def config(filter_pattern):
values = get_config_parameters('config', filter_pattern)
if not values:
print(hint('No config parameter found with filter "{}"'.format(filter_pattern)))
return
longest_optname = max(map(len, [v.optname for v in values]))
longest_value = max(map(len, [extend_value_with_default(repr(v.value), repr(v.default)) for v in values]))
header = print_row('Name', 'Value', 'Def', 'Documentation', longest_optname, longest_value)
print('-' * (len(header)))
for v in sorted(values):
print_row(v.optname, repr(v.value), repr(v.default), v.docstring, longest_optname, longest_value)
print(hint('You can set config variable with `set <config-var> <value>`'))
print(hint('You can generate configuration file using `configfile` '
'- then put it in your .gdbinit after initializing pwndbg'))
configfile_parser = argparse.ArgumentParser(description='Generates a configuration file for the current Pwndbg options')
configfile_parser.add_argument('--show-all', action='store_true', help='Force displaying of all configs.')
@pwndbg.commands.ArgparsedCommand(configfile_parser)
def configfile(show_all=False):
configfile_print_scope('config', show_all)
themefile_parser = argparse.ArgumentParser(
description='Generates a configuration file for the current Pwndbg theme options'
)
themefile_parser.add_argument('--show-all', action='store_true', help='Force displaying of all theme options.')
@pwndbg.commands.ArgparsedCommand(themefile_parser)
def themefile(show_all=False):
configfile_print_scope('theme', show_all)
def configfile_print_scope(scope, show_all=False):
params = pwndbg.config.get_params(scope)
if not show_all:
params = list(filter(lambda p: p.is_changed, params))
if params:
if not show_all:
print(hint('Showing only changed values:'))
for p in params:
print('# %s: %s' % (p.optname, p.docstring))
print('# default: %s' % p.native_default)
print('set %s %s' % (p.optname, p.native_value))
print()
else:
print(hint('No changed values. To see current values use `%s`.' % scope))
|
unknown
|
codeparrot/codeparrot-clean
| ||
# -*- coding: utf-8 -*-
"""
Tests that quoting specifications are properly handled
during parsing for all of the parsers defined in parsers.py
"""
import csv
import pandas.util.testing as tm
from pandas import DataFrame
from pandas.compat import PY3, StringIO, u
class QuotingTests(object):
def test_bad_quote_char(self):
data = '1,2,3'
# Python 2.x: "...must be an 1-character..."
# Python 3.x: "...must be a 1-character..."
msg = '"quotechar" must be a(n)? 1-character string'
tm.assert_raises_regex(TypeError, msg, self.read_csv,
StringIO(data), quotechar='foo')
msg = 'quotechar must be set if quoting enabled'
tm.assert_raises_regex(TypeError, msg, self.read_csv,
StringIO(data), quotechar=None,
quoting=csv.QUOTE_MINIMAL)
msg = '"quotechar" must be string, not int'
tm.assert_raises_regex(TypeError, msg, self.read_csv,
StringIO(data), quotechar=2)
def test_bad_quoting(self):
data = '1,2,3'
msg = '"quoting" must be an integer'
tm.assert_raises_regex(TypeError, msg, self.read_csv,
StringIO(data), quoting='foo')
# quoting must in the range [0, 3]
msg = 'bad "quoting" value'
tm.assert_raises_regex(TypeError, msg, self.read_csv,
StringIO(data), quoting=5)
def test_quote_char_basic(self):
data = 'a,b,c\n1,2,"cat"'
expected = DataFrame([[1, 2, 'cat']],
columns=['a', 'b', 'c'])
result = self.read_csv(StringIO(data), quotechar='"')
tm.assert_frame_equal(result, expected)
def test_quote_char_various(self):
data = 'a,b,c\n1,2,"cat"'
expected = DataFrame([[1, 2, 'cat']],
columns=['a', 'b', 'c'])
quote_chars = ['~', '*', '%', '$', '@', 'P']
for quote_char in quote_chars:
new_data = data.replace('"', quote_char)
result = self.read_csv(StringIO(new_data), quotechar=quote_char)
tm.assert_frame_equal(result, expected)
def test_null_quote_char(self):
data = 'a,b,c\n1,2,3'
# sanity checks
msg = 'quotechar must be set if quoting enabled'
tm.assert_raises_regex(TypeError, msg, self.read_csv,
StringIO(data), quotechar=None,
quoting=csv.QUOTE_MINIMAL)
tm.assert_raises_regex(TypeError, msg, self.read_csv,
StringIO(data), quotechar='',
quoting=csv.QUOTE_MINIMAL)
# no errors should be raised if quoting is None
expected = DataFrame([[1, 2, 3]],
columns=['a', 'b', 'c'])
result = self.read_csv(StringIO(data), quotechar=None,
quoting=csv.QUOTE_NONE)
tm.assert_frame_equal(result, expected)
result = self.read_csv(StringIO(data), quotechar='',
quoting=csv.QUOTE_NONE)
tm.assert_frame_equal(result, expected)
def test_quoting_various(self):
data = '1,2,"foo"'
cols = ['a', 'b', 'c']
# QUOTE_MINIMAL and QUOTE_ALL apply only to
# the CSV writer, so they should have no
# special effect for the CSV reader
expected = DataFrame([[1, 2, 'foo']], columns=cols)
# test default (afterwards, arguments are all explicit)
result = self.read_csv(StringIO(data), names=cols)
tm.assert_frame_equal(result, expected)
result = self.read_csv(StringIO(data), quotechar='"',
quoting=csv.QUOTE_MINIMAL, names=cols)
tm.assert_frame_equal(result, expected)
result = self.read_csv(StringIO(data), quotechar='"',
quoting=csv.QUOTE_ALL, names=cols)
tm.assert_frame_equal(result, expected)
# QUOTE_NONE tells the reader to do no special handling
# of quote characters and leave them alone
expected = DataFrame([[1, 2, '"foo"']], columns=cols)
result = self.read_csv(StringIO(data), quotechar='"',
quoting=csv.QUOTE_NONE, names=cols)
tm.assert_frame_equal(result, expected)
# QUOTE_NONNUMERIC tells the reader to cast
# all non-quoted fields to float
expected = DataFrame([[1.0, 2.0, 'foo']], columns=cols)
result = self.read_csv(StringIO(data), quotechar='"',
quoting=csv.QUOTE_NONNUMERIC,
names=cols)
tm.assert_frame_equal(result, expected)
def test_double_quote(self):
data = 'a,b\n3,"4 "" 5"'
expected = DataFrame([[3, '4 " 5']],
columns=['a', 'b'])
result = self.read_csv(StringIO(data), quotechar='"',
doublequote=True)
tm.assert_frame_equal(result, expected)
expected = DataFrame([[3, '4 " 5"']],
columns=['a', 'b'])
result = self.read_csv(StringIO(data), quotechar='"',
doublequote=False)
tm.assert_frame_equal(result, expected)
def test_quotechar_unicode(self):
# See gh-14477
data = 'a\n1'
expected = DataFrame({'a': [1]})
result = self.read_csv(StringIO(data), quotechar=u('"'))
tm.assert_frame_equal(result, expected)
# Compared to Python 3.x, Python 2.x does not handle unicode well.
if PY3:
result = self.read_csv(StringIO(data), quotechar=u('\u0001'))
tm.assert_frame_equal(result, expected)
|
unknown
|
codeparrot/codeparrot-clean
| ||
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.obs;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
import org.apache.hadoop.fs.contract.AbstractFSContract;
/**
* Seek test cases on obs file system.
*/
public class TestOBSContractSeek extends AbstractContractSeekTest {
@Override
protected AbstractFSContract createContract(final Configuration conf) {
return new OBSContract(conf);
}
}
|
java
|
github
|
https://github.com/apache/hadoop
|
hadoop-cloud-storage-project/hadoop-huaweicloud/src/test/java/org/apache/hadoop/fs/obs/TestOBSContractSeek.java
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.