#!/usr/bin/env python
# CMCE PDUs
from scapy.packet import Packet, bind_layers
from scapy.fields import BitField, BitEnumField, ConditionalField
from .cmce import D_SDS_DATA
# Table 446: Text message transfer SDU contents
class SDS_TRANSFER(Packet):
name = 'SDS-Transfer'
fields_desc = [
BitField('delivery_report_request', 0, 2),
BitField('short_form_report', 0, 1),
BitField('storage', 0, 1),
BitField('message_ref', 0, 8),
ConditionalField(BitField('validity_period', 0, 5), lambda pkt: pkt.storage == 1),
ConditionalField(BitField('fwd_addr_type', 0, 5), lambda pkt: pkt.storage == 1),
        # FIXME: more conditional fields
]
# Table 428: PDU layout
class SDS_TL_PDU(Packet):
name = 'SDS-TL'
fields_desc = [
BitEnumField('message_type', 0, 4, {
0: 'SDS-TRANSFER'
}),
]
# Table 439: Protocol identifier information element contents
# FIXME : SDS-TL is only used for protocols >=128 (except 134...)
bind_layers(D_SDS_DATA, SDS_TL_PDU, proto=130)
bind_layers(SDS_TL_PDU, SDS_TRANSFER, message_type=0)
import ed25519
from python_sha3.python_sha3 import *
import base64
import hashlib
from binascii import hexlify, unhexlify
class Account:
def __init__(self, hexPrivKey, network='mainnet'):
self.hexPrivKey = hexPrivKey
self.network = network
self._calculateKeyPair()
self._calculateAddress()
def _calculateKeyPair(self):
self.sk = unhexlify(self.hexPrivKey)[::-1]
self.pk = ed25519.publickey_hash_unsafe(self.sk, sha3_512)
self.hexPublicKey = hexlify(self.pk)
def _calculateAddress(self):
pubkey = self.pk
s = sha3_256()
s.update(pubkey)
sha3_pubkey = s.digest()
h = hashlib.new('ripemd160')
h.update(sha3_pubkey)
ripe = h.digest()
if self.network == 'testnet':
version = "\x98" + ripe
else:
version = "\x68" + ripe
s2 = sha3_256()
s2.update(version)
checksum = s2.digest()[0:4]
self.address = base64.b32encode(version + checksum)
def getHexPublicKey(self):
return self.hexPublicKey
def getHexPrivateKey(self):
return self.hexPrivKey
def getAddress(self):
return self.address
def sign(self, binMessage):
signature = ed25519.signature_hash_unsafe(binMessage, self.sk, self.pk, sha3_512)
# print ' sig:', hexlify(signature)
return signature
def verify(self, hexedMessage):
        pass
---
title: Stores
---
<!-- - how to use
- how to write
- TODO should the details for the store methods belong to the reference section? -->
A _store_ is an object that allows reactive access to a value via a simple _store contract_. The [`svelte/store` module](../svelte-store) contains minimal store implementations which fulfil this contract.
Any time you have a reference to a store, you can access its value inside a component by prefixing it with the `$` character. This causes Svelte to declare the prefixed variable, subscribe to the store at component initialisation and unsubscribe when appropriate.
Assignments to `$`-prefixed variables require that the variable be a writable store, and will result in a call to the store's `.set` method.
Note that the store must be declared at the top level of the component — not inside an `if` block or a function, for example.
Local variables (that do not represent store values) must _not_ have a `$` prefix.
```svelte
<script>
import { writable } from 'svelte/store';
const count = writable(0);
console.log($count); // logs 0
count.set(1);
console.log($count); // logs 1
$count = 2;
console.log($count); // logs 2
</script>
```
## When to use stores
Prior to Svelte 5, stores were the go-to solution for creating cross-component reactive states or extracting logic. With runes, these use cases have greatly diminished.
- when extracting logic, it's better to take advantage of runes' universal reactivity: You can use runes outside the top level of components and even place them into JavaScript or TypeScript files (using a `.svelte.js` or `.svelte.ts` file ending)
- when creating shared state, you can create a `$state` object containing the values you need and then manipulate said state
```ts
/// file: state.svelte.js
export const userState = $state({
name: 'name',
/* ... */
});
```
```svelte
<!--- file: App.svelte --->
<script>
import { userState } from './state.svelte.js';
</script>
<p>User name: {userState.name}</p>
<button onclick={() => {
userState.name = 'new name';
}}>
change name
</button>
```
Stores are still a good solution when you have complex asynchronous data streams or it's important to have more manual control over updating values or listening to changes. If you're familiar with RxJS and want to reuse that knowledge, the `$` also comes in handy for you.
## svelte/store
The `svelte/store` module contains minimal store implementations which fulfil the store contract. It provides methods for creating stores that you can update from the outside, stores you can only update from the inside, and for combining and deriving stores.
### `writable`
Function that creates a store which has values that can be set from 'outside' components. It gets created as an object with additional `set` and `update` methods.
`set` is a method that takes one argument which is the value to be set. The store value gets set to the value of the argument if the store value is not already equal to it.
`update` is a method that takes one argument which is a callback. The callback takes the existing store value as its argument and returns the new value to be set to the store.
```js
/// file: store.js
import { writable } from 'svelte/store';
const count = writable(0);
count.subscribe((value) => {
console.log(value);
}); // logs '0'
count.set(1); // logs '1'
count.update((n) => n + 1); // logs '2'
```
If a function is passed as the second argument, it will be called when the number of subscribers goes from zero to one (but not from one to two, etc). That function will be passed a `set` function which changes the value of the store, and an `update` function which works like the `update` method on the store, taking a callback to calculate the store's new value from its old value. It must return a `stop` function that is called when the subscriber count goes from one to zero.
```js
/// file: store.js
import { writable } from 'svelte/store';
const count = writable(0, () => {
console.log('got a subscriber');
return () => console.log('no more subscribers');
});
count.set(1); // does nothing
const unsubscribe = count.subscribe((value) => {
console.log(value);
}); // logs 'got a subscriber', then '1'
unsubscribe(); // logs 'no more subscribers'
```
Note that the value of a `writable` is lost when it is destroyed, for example when the page is refreshed. However, you can write your own logic to sync the value to for example the `localStorage`.
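As a sketch of such syncing logic (the `persisted` helper and its storage key are hypothetical, not part of `svelte/store`), a writable-style store that mirrors its value into `localStorage` could look like this:

```js
// Hypothetical helper: a writable-style store persisted to localStorage.
// Assumes a browser environment; the try/catch keeps it usable where
// localStorage is unavailable (it then falls back to in-memory state only).
function persisted(key, initial) {
	const subscribers = new Set();
	let value = initial;
	try {
		const stored = localStorage.getItem(key);
		if (stored !== null) value = JSON.parse(stored);
	} catch (e) {}
	return {
		subscribe(fn) {
			fn(value); // store contract: call synchronously with the current value
			subscribers.add(fn);
			return () => subscribers.delete(fn);
		},
		set(newValue) {
			value = newValue;
			try {
				localStorage.setItem(key, JSON.stringify(newValue));
			} catch (e) {}
			subscribers.forEach((fn) => fn(value));
		},
		update(fn) {
			this.set(fn(value));
		}
	};
}
```

Because it fulfils the store contract, such a store can be used with the `$` prefix in components like any other.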
### `readable`
Creates a store whose value cannot be set from 'outside'. The first argument is the store's initial value, and the second argument to `readable` is the same as the second argument to `writable`.
```ts
import { readable } from 'svelte/store';
const time = readable(new Date(), (set) => {
set(new Date());
const interval = setInterval(() => {
set(new Date());
}, 1000);
return () => clearInterval(interval);
});
const ticktock = readable('tick', (set, update) => {
const interval = setInterval(() => {
update((sound) => (sound === 'tick' ? 'tock' : 'tick'));
}, 1000);
return () => clearInterval(interval);
});
```
### `derived`
Derives a store from one or more other stores. The callback runs initially when the first subscriber subscribes and then whenever the store dependencies change.
In the simplest version, `derived` takes a single store, and the callback returns a derived value.
```ts
// @filename: ambient.d.ts
import { type Writable } from 'svelte/store';
declare global {
const a: Writable<number>;
}
export {};
// @filename: index.ts
// ---cut---
import { derived } from 'svelte/store';
const doubled = derived(a, ($a) => $a * 2);
```
The callback can set a value asynchronously by accepting a second argument, `set`, and an optional third argument, `update`, calling either or both of them when appropriate.
In this case, you can also pass a third argument to `derived` — the initial value of the derived store before `set` or `update` is first called. If no initial value is specified, the store's initial value will be `undefined`.
```ts
// @filename: ambient.d.ts
import { type Writable } from 'svelte/store';
declare global {
const a: Writable<number>;
}
export {};
// @filename: index.ts
// @errors: 18046 2769 7006
// ---cut---
import { derived } from 'svelte/store';
const delayed = derived(
a,
($a, set) => {
setTimeout(() => set($a), 1000);
},
2000
);
const delayedIncrement = derived(a, ($a, set, update) => {
set($a);
setTimeout(() => update((x) => x + 1), 1000);
// every time $a produces a value, this produces two
// values, $a immediately and then $a + 1 a second later
});
```
If you return a function from the callback, it will be called when a) the callback runs again, or b) the last subscriber unsubscribes.
```ts
// @filename: ambient.d.ts
import { type Writable } from 'svelte/store';
declare global {
const frequency: Writable<number>;
}
export {};
// @filename: index.ts
// ---cut---
import { derived } from 'svelte/store';
const tick = derived(
frequency,
($frequency, set) => {
const interval = setInterval(() => {
set(Date.now());
}, 1000 / $frequency);
return () => {
clearInterval(interval);
};
},
2000
);
```
In both cases, an array of arguments can be passed as the first argument instead of a single store.
```ts
// @filename: ambient.d.ts
import { type Writable } from 'svelte/store';
declare global {
const a: Writable<number>;
const b: Writable<number>;
}
export {};
// @filename: index.ts
// ---cut---
import { derived } from 'svelte/store';
const summed = derived([a, b], ([$a, $b]) => $a + $b);
const delayed = derived([a, b], ([$a, $b], set) => {
setTimeout(() => set($a + $b), 1000);
});
```
### `readonly`
This simple helper function makes a store readonly. You can still subscribe to the changes from the original one using this new readable store.
```js
import { readonly, writable } from 'svelte/store';
const writableStore = writable(1);
const readableStore = readonly(writableStore);
readableStore.subscribe(console.log);
writableStore.set(2); // console: 2
// @errors: 2339
readableStore.set(2); // ERROR
```
### `get`
Generally, you should read the value of a store by subscribing to it and using the value as it changes over time. Occasionally, you may need to retrieve the value of a store to which you're not subscribed. `get` allows you to do so.
> [!NOTE] This works by creating a subscription, reading the value, then unsubscribing. It's therefore not recommended in hot code paths.
```ts
// @filename: ambient.d.ts
import { type Writable } from 'svelte/store';
declare global {
const store: Writable<string>;
}
export {};
// @filename: index.ts
// ---cut---
import { get } from 'svelte/store';
const value = get(store);
```
## Store contract
```ts
// @noErrors
store = { subscribe: (subscription: (value: any) => void) => (() => void), set?: (value: any) => void }
```
You can create your own stores without relying on [`svelte/store`](../svelte-store), by implementing the _store contract_:
1. A store must contain a `.subscribe` method, which must accept as its argument a subscription function. This subscription function must be immediately and synchronously called with the store's current value upon calling `.subscribe`. All of a store's active subscription functions must later be synchronously called whenever the store's value changes.
2. The `.subscribe` method must return an unsubscribe function. Calling an unsubscribe function must stop its subscription, and its corresponding subscription function must not be called again by the store.
3. A store may _optionally_ contain a `.set` method, which must accept as its argument a new value for the store, and which synchronously calls all of the store's active subscription functions. Such a store is called a _writable store_.
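As an illustrative sketch (not an official implementation), a hand-rolled writable store satisfying all three rules might look like:

```js
// A minimal custom store implementing the contract directly.
function createStore(value) {
	const subscribers = new Set();
	return {
		// Rule 1: call the subscription immediately and synchronously with the
		// current value, and again whenever the value changes.
		subscribe(subscription) {
			subscription(value);
			subscribers.add(subscription);
			// Rule 2: return an unsubscribe function.
			return () => subscribers.delete(subscription);
		},
		// Rule 3: the optional `set` makes this a writable store.
		set(newValue) {
			value = newValue;
			subscribers.forEach((subscription) => subscription(value));
		}
	};
}
```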
For interoperability with RxJS Observables, the `.subscribe` method is also allowed to return an object with an `.unsubscribe` method, rather than return the unsubscription function directly. Note however that unless `.subscribe` synchronously calls the subscription (which is not required by the Observable spec), Svelte will see the value of the store as `undefined` until it does.
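For example (a sketch using a hypothetical RxJS-like value holder, not actual RxJS), a `subscribe` that returns an object with `.unsubscribe` still works, as long as the subscription is invoked synchronously:

```js
// Hypothetical BehaviorSubject-like object: calls subscribers synchronously
// with the current value, and returns an RxJS-style subscription object.
function behaviorSubjectLike(initial) {
	let value = initial;
	const subscribers = new Set();
	return {
		subscribe(fn) {
			fn(value); // synchronous initial call — required by Svelte, not by the Observable spec
			subscribers.add(fn);
			return { unsubscribe: () => subscribers.delete(fn) }; // object form is also accepted
		},
		next(nextValue) {
			value = nextValue;
			subscribers.forEach((fn) => fn(value));
		}
	};
}
```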
import pymysql
import re
import string
import logging, coloredlogs
from modules.config import *
class Database:
def __init__(self, host, user, password, name, autocommit):
self.host = host
self.user = user
self.password = password
self.name = name
self.autocommit = autocommit
self.db = self.database_connection()
def database_connection(self):
logging.info("Connecting to database..")
connection = pymysql.connect(
host = self.host,
user = self.user,
password = self.password,
db = self.name,
autocommit = self.autocommit,
charset = 'utf8'
)
db = connection.cursor()
return db
def db_close(self):
return self.db.close()
def db_format(self, data):
        format_data = re.findall(r'[+-]?\d+(?:\.\d+)?', str(data))
return format_data[0]
def db_tuple_to_string(self, data):
return ''.join(data)
def db_add_user(self, user):
self.db.execute("INSERT ignore into points VALUES ('{}', 0)".format(user))
def db_add_points_user(self, user, points):
self.db.execute("UPDATE points set points = points + {} where user_id = '{}'".format(points, user))
def db_minus_points_user(self, user, points):
self.db.execute("UPDATE points set points = points - {} where user_id = '{}'".format(points, user))
def db_check_user_exists(self, user):
points = self.db.execute("SELECT user_id FROM points where user_id = '{}'".format(user))
check_user_exists = self.db.fetchone()
if(check_user_exists == None):
return False
else:
return True
def bttv_parse(self, username):
output = username.replace('@', '')
return output
def db_get_points_user(self, user, self_user):
user = self.bttv_parse(user)
points = self.db_get_user_points_int(user)
if(points < 0):
output = "{}, you have {} {} BabyRage".format(user, points, CURRENCY)
else:
output = "{}, you have {} {}".format(user, points, CURRENCY)
if(user != self_user):
output = "{} has {} {}".format(user, points, CURRENCY)
return output
def db_get_user_points_int(self, user):
points = self.db.execute("SELECT points from points where user_id = '{}'".format(user))
get_points = self.db.fetchone()
#if(get_points == None):
# self.db_add_user(user)
# points = self.db.execute("SELECT points from points where user_id = '{}'".format(user))
# get_points = self.db.fetchone()
return int(self.db_format(get_points))
def db_get_user_total(self):
self.db.execute("SELECT COUNT(*) AS user_id FROM points")
user_total = self.db.fetchone()
format_user_total = self.db_format(str(user_total))
return(str(format_user_total))
def db_get_user_rank(self, user):
user = self.bttv_parse(user)
#try:
self.db.execute("SELECT 1 + (SELECT count(*) FROM points a WHERE a.points > b.points ) AS rank FROM points b WHERE user_id = '{}' ORDER BY rank LIMIT 1".format(user))
ranking = self.db.fetchone()
format_ranking = self.db_format(ranking)
format_points = self.db_get_user_points_int(user)
total_users = self.db_get_user_total()
output = "{} is rank {} out of {}, with {} {}!".format(user, str(format_ranking), total_users, format_points, CURRENCY)
return output
#except Exception:
# self.db_add_user(user)
# points = self.db_get_user_points_int(user)
# return "{} is the lowest rank, with {} {} FeelsBadMan".format(user, points, CURRENCY)
def db_get_follower(self):
self.db.execute("SELECT follower FROM latest_follower")
follower = self.db.fetchall()
follower_parsed = self.db_tuple_to_string(follower[0])
return follower_parsed
def db_new_follower(self, follower):
self.db.execute("UPDATE latest_follower SET follower = '{}'".format(follower))
logging.info("New follower added to DB")
# DUEL QUERIES
def db_add_duel(self, user, opponent, amount):
self.db.execute("INSERT INTO duels VALUES ('{}', '{}', {}, NOW())".format(user, opponent, amount))
def db_check_duel_exists_user(self, user):
check_duel = self.db.execute("SELECT user FROM duels WHERE user = '{}'".format(user))
check_duel_user = self.db.fetchone()
if(check_duel_user == None):
return False
else:
return True
def db_check_duel_exists_opponent(self, user):
check_duel = self.db.execute("SELECT opponent FROM duels WHERE opponent = '{}'".format(user))
check_duel_user = self.db.fetchone()
if(check_duel_user == None):
return False
else:
return True
def db_get_duel_user_from_opponent(self, user):
self.db.execute("SELECT user FROM duels WHERE opponent = '{}'".format(user))
opponent = self.db.fetchall()
opponent_parsed = self.db_tuple_to_string(opponent[0])
return opponent_parsed
def db_get_duel_opponent_from_user(self, user):
self.db.execute("SELECT opponent FROM duels WHERE user = '{}'".format(user))
opponent = self.db.fetchall()
opponent_parsed = self.db_tuple_to_string(opponent[0])
return opponent_parsed
def db_get_duel_amount(self, user):
self.db.execute("SELECT amount FROM duels WHERE opponent = '{}'".format(user))
get_amount = self.db.fetchone()
return int(self.db_format(get_amount))
def db_remove_duel(self, user):
self.db.execute("DELETE FROM duels WHERE opponent = '{}'".format(user))
def db_duel_expired(self):
val = int(DUEL_EXPIRE / 60)
self.db.execute("DELETE FROM duels WHERE time < (NOW() - INTERVAL {} MINUTE)".format(val))
### NEW COMMAND TESTING
def db_check_command_exists(self, command):
command = self.db.execute("SELECT content FROM commands WHERE command = '{}'".format(command))
check_command = self.db.fetchone()
if(check_command == None):
return False
else:
return True
    def db_add_command(self, command, content, user):
        if(self.db_check_command_exists(command)):
            return False
        else:
            # Parameterized query handles quoting safely (the previous
            # replace("'", "\'") was a no-op and str.format was injectable)
            self.db.execute("INSERT INTO commands VALUES (%s, %s)", (command, str(content)))
            return True
def db_edit_command(self, command, new_content, user):
if(self.db_check_command_exists(command)):
self.db.execute("UPDATE commands SET content='{}' WHERE command = '{}'".format(new_content, command))
return True
else:
return False
def db_delete_command(self, command, user):
if(self.db_check_command_exists(command)):
self.db.execute("DELETE FROM commands WHERE command = '{}'".format(command))
return True
else:
return False
def db_get_command(self, command, user):
self.db.execute("SELECT content FROM commands WHERE command = '{}'".format(command))
response = self.db.fetchone()
parsed_response = self.db_tuple_to_string(response[0])
if '{user}' in str(parsed_response):
parsed_response = parsed_response.replace('{user}', user)
return parsed_response
    def db_add_song_request(self, song_id, user):
        try:
            self.db.execute("INSERT INTO song_requests VALUES ('{}', '{}', NOW())".format(song_id, user))
            return True
        except Exception:
            return False
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
import lambdainst.models
from django.conf import settings
import datetime
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='GiftCode',
fields=[
('id', models.AutoField(serialize=False, primary_key=True, verbose_name='ID', auto_created=True)),
('code', models.CharField(default=lambdainst.models.random_gift_code, max_length=32)),
('time', models.DurationField(default=datetime.timedelta(30))),
('created', models.DateTimeField(null=True, auto_now_add=True)),
('single_use', models.BooleanField(default=True)),
('free_only', models.BooleanField(default=True)),
('available', models.BooleanField(default=True)),
('comment', models.TextField(blank=True)),
('created_by', models.ForeignKey(related_name='created_giftcode_set', null=True, blank=True, to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name_plural': 'Gift Codes',
'verbose_name': 'Gift Code',
},
),
migrations.CreateModel(
name='GiftCodeUser',
fields=[
('id', models.AutoField(serialize=False, primary_key=True, verbose_name='ID', auto_created=True)),
('date', models.DateTimeField(null=True, auto_now_add=True)),
('code', models.ForeignKey(to='lambdainst.GiftCode')),
('user', models.ForeignKey(to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name_plural': 'Gift Code Users',
'verbose_name': 'Gift Code User',
},
),
migrations.CreateModel(
name='VPNUser',
fields=[
('id', models.AutoField(serialize=False, primary_key=True, verbose_name='ID', auto_created=True)),
('notes', models.TextField(blank=True)),
('expiration', models.DateTimeField(null=True, blank=True)),
('last_expiry_notice', models.DateTimeField(null=True, blank=True)),
('notify_expiration', models.BooleanField(default=True)),
('trial_periods_given', models.IntegerField(default=0)),
('last_vpn_auth', models.DateTimeField(null=True, blank=True)),
('referrer_used', models.BooleanField(default=False)),
('referrer', models.ForeignKey(related_name='referrals', null=True, on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL)),
('user', models.OneToOneField(to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name_plural': 'VPN Users',
'verbose_name': 'VPN User',
},
),
migrations.AddField(
model_name='giftcode',
name='users',
field=models.ManyToManyField(through='lambdainst.GiftCodeUser', to=settings.AUTH_USER_MODEL),
),
    ]
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import inspect
import json
import os
import random
import re
import unittest
from dataclasses import fields, is_dataclass
from pathlib import Path
from textwrap import dedent
from typing import get_args
from huggingface_hub import (
AudioClassificationInput,
AutomaticSpeechRecognitionInput,
DepthEstimationInput,
ImageClassificationInput,
ImageSegmentationInput,
ObjectDetectionInput,
QuestionAnsweringInput,
VideoClassificationInput,
ZeroShotImageClassificationInput,
)
from transformers.models.auto.processing_auto import PROCESSOR_MAPPING_NAMES
from transformers.pipelines import (
AudioClassificationPipeline,
AutomaticSpeechRecognitionPipeline,
DepthEstimationPipeline,
ImageClassificationPipeline,
ImageSegmentationPipeline,
ObjectDetectionPipeline,
QuestionAnsweringPipeline,
VideoClassificationPipeline,
ZeroShotImageClassificationPipeline,
)
from transformers.testing_utils import (
is_pipeline_test,
require_av,
require_pytesseract,
require_timm,
require_torch,
require_vision,
)
from transformers.utils import direct_transformers_import, logging
from .pipelines.test_pipelines_any_to_any import AnyToAnyPipelineTests
from .pipelines.test_pipelines_audio_classification import AudioClassificationPipelineTests
from .pipelines.test_pipelines_automatic_speech_recognition import AutomaticSpeechRecognitionPipelineTests
from .pipelines.test_pipelines_depth_estimation import DepthEstimationPipelineTests
from .pipelines.test_pipelines_document_question_answering import DocumentQuestionAnsweringPipelineTests
from .pipelines.test_pipelines_feature_extraction import FeatureExtractionPipelineTests
from .pipelines.test_pipelines_fill_mask import FillMaskPipelineTests
from .pipelines.test_pipelines_image_classification import ImageClassificationPipelineTests
from .pipelines.test_pipelines_image_feature_extraction import ImageFeatureExtractionPipelineTests
from .pipelines.test_pipelines_image_segmentation import ImageSegmentationPipelineTests
from .pipelines.test_pipelines_image_text_to_text import ImageTextToTextPipelineTests
from .pipelines.test_pipelines_image_to_image import ImageToImagePipelineTests
from .pipelines.test_pipelines_mask_generation import MaskGenerationPipelineTests
from .pipelines.test_pipelines_object_detection import ObjectDetectionPipelineTests
from .pipelines.test_pipelines_question_answering import QAPipelineTests
from .pipelines.test_pipelines_table_question_answering import TQAPipelineTests
from .pipelines.test_pipelines_text_classification import TextClassificationPipelineTests
from .pipelines.test_pipelines_text_generation import TextGenerationPipelineTests
from .pipelines.test_pipelines_text_to_audio import TextToAudioPipelineTests
from .pipelines.test_pipelines_token_classification import TokenClassificationPipelineTests
from .pipelines.test_pipelines_video_classification import VideoClassificationPipelineTests
from .pipelines.test_pipelines_visual_question_answering import VisualQuestionAnsweringPipelineTests
from .pipelines.test_pipelines_zero_shot import ZeroShotClassificationPipelineTests
from .pipelines.test_pipelines_zero_shot_audio_classification import ZeroShotAudioClassificationPipelineTests
from .pipelines.test_pipelines_zero_shot_image_classification import ZeroShotImageClassificationPipelineTests
from .pipelines.test_pipelines_zero_shot_object_detection import ZeroShotObjectDetectionPipelineTests
pipeline_test_mapping = {
"audio-classification": {"test": AudioClassificationPipelineTests},
"automatic-speech-recognition": {"test": AutomaticSpeechRecognitionPipelineTests},
"depth-estimation": {"test": DepthEstimationPipelineTests},
"document-question-answering": {"test": DocumentQuestionAnsweringPipelineTests},
"feature-extraction": {"test": FeatureExtractionPipelineTests},
"fill-mask": {"test": FillMaskPipelineTests},
"image-classification": {"test": ImageClassificationPipelineTests},
"image-feature-extraction": {"test": ImageFeatureExtractionPipelineTests},
"image-segmentation": {"test": ImageSegmentationPipelineTests},
"image-text-to-text": {"test": ImageTextToTextPipelineTests},
"image-to-image": {"test": ImageToImagePipelineTests},
"mask-generation": {"test": MaskGenerationPipelineTests},
"any-to-any": {"test": AnyToAnyPipelineTests},
"object-detection": {"test": ObjectDetectionPipelineTests},
"question-answering": {"test": QAPipelineTests},
"table-question-answering": {"test": TQAPipelineTests},
"text-classification": {"test": TextClassificationPipelineTests},
"text-generation": {"test": TextGenerationPipelineTests},
"text-to-audio": {"test": TextToAudioPipelineTests},
"token-classification": {"test": TokenClassificationPipelineTests},
"video-classification": {"test": VideoClassificationPipelineTests},
"visual-question-answering": {"test": VisualQuestionAnsweringPipelineTests},
"zero-shot": {"test": ZeroShotClassificationPipelineTests},
"zero-shot-audio-classification": {"test": ZeroShotAudioClassificationPipelineTests},
"zero-shot-image-classification": {"test": ZeroShotImageClassificationPipelineTests},
"zero-shot-object-detection": {"test": ZeroShotObjectDetectionPipelineTests},
}
task_to_pipeline_and_spec_mapping = {
# Adding a task to this list will cause its pipeline input signature to be checked against the corresponding
# task spec in the HF Hub
"audio-classification": (AudioClassificationPipeline, AudioClassificationInput),
"automatic-speech-recognition": (AutomaticSpeechRecognitionPipeline, AutomaticSpeechRecognitionInput),
"depth-estimation": (DepthEstimationPipeline, DepthEstimationInput),
"image-classification": (ImageClassificationPipeline, ImageClassificationInput),
"image-segmentation": (ImageSegmentationPipeline, ImageSegmentationInput),
"object-detection": (ObjectDetectionPipeline, ObjectDetectionInput),
"question-answering": (QuestionAnsweringPipeline, QuestionAnsweringInput),
"video-classification": (VideoClassificationPipeline, VideoClassificationInput),
"zero-shot-image-classification": (ZeroShotImageClassificationPipeline, ZeroShotImageClassificationInput),
}
for task_info in pipeline_test_mapping.values():
test = task_info["test"]
task_info["mapping"] = {
"pt": getattr(test, "model_mapping", None),
}
# The default value `hf-internal-testing` is for running the pipeline testing against the tiny models on the Hub.
# For debugging purpose, we can specify a local path which is the `output_path` argument of a previous run of
# `utils/create_dummy_models.py`.
TRANSFORMERS_TINY_MODEL_PATH = os.environ.get("TRANSFORMERS_TINY_MODEL_PATH", "hf-internal-testing")
if TRANSFORMERS_TINY_MODEL_PATH == "hf-internal-testing":
TINY_MODEL_SUMMARY_FILE_PATH = os.path.join(Path(__file__).parent.parent, "tests/utils/tiny_model_summary.json")
else:
TINY_MODEL_SUMMARY_FILE_PATH = os.path.join(TRANSFORMERS_TINY_MODEL_PATH, "reports", "tiny_model_summary.json")
with open(TINY_MODEL_SUMMARY_FILE_PATH) as fp:
tiny_model_summary = json.load(fp)
PATH_TO_TRANSFORMERS = os.path.join(Path(__file__).parent.parent, "src/transformers")
# Dynamically import the Transformers module to grab the attribute classes of the processor form their names.
transformers_module = direct_transformers_import(PATH_TO_TRANSFORMERS)
logger = logging.get_logger(__name__)
class PipelineTesterMixin:
model_tester = None
pipeline_model_mapping = None
def run_task_tests(self, task, dtype="float32"):
"""Run pipeline tests for a specific `task`
Args:
task (`str`):
A task name. This should be a key in the mapping `pipeline_test_mapping`.
dtype (`str`, `optional`, defaults to `'float32'`):
The torch dtype to use for the model. Can be used for FP16/other precision inference.
"""
if task not in self.pipeline_model_mapping:
self.skipTest(
f"{self.__class__.__name__}::test_pipeline_{task.replace('-', '_')}_{dtype} is skipped: `{task}` is not in "
f"`self.pipeline_model_mapping` for `{self.__class__.__name__}`."
)
model_architectures = self.pipeline_model_mapping[task]
if not isinstance(model_architectures, tuple):
model_architectures = (model_architectures,)
        # We are going to run tests for multiple model architectures; some of them might be skipped.
        # This flag tracks whether at least one model was tested or all were skipped.
at_least_one_model_is_tested = False
for model_architecture in model_architectures:
model_arch_name = model_architecture.__name__
model_type = model_architecture.config_class.model_type
if model_arch_name not in tiny_model_summary:
continue
tokenizer_names = tiny_model_summary[model_arch_name]["tokenizer_classes"]
# Sort image processors and feature extractors from tiny-models json file
image_processor_names = []
feature_extractor_names = []
processor_classes = tiny_model_summary[model_arch_name]["processor_classes"]
for cls_name in processor_classes:
if "ImageProcessor" in cls_name:
image_processor_names.append(cls_name)
elif "FeatureExtractor" in cls_name:
feature_extractor_names.append(cls_name)
# Processor classes are not in tiny models JSON file, so extract them from the mapping
# processors are mapped to instance, e.g. "XxxProcessor"
processor_names = PROCESSOR_MAPPING_NAMES.get(model_type, None)
if not isinstance(processor_names, (list, tuple)):
processor_names = [processor_names]
commit = None
if model_arch_name in tiny_model_summary and "sha" in tiny_model_summary[model_arch_name]:
commit = tiny_model_summary[model_arch_name]["sha"]
repo_name = f"tiny-random-{model_arch_name}"
if TRANSFORMERS_TINY_MODEL_PATH != "hf-internal-testing":
repo_name = model_arch_name
self.run_model_pipeline_tests(
task,
repo_name,
model_architecture,
tokenizer_names=tokenizer_names,
image_processor_names=image_processor_names,
feature_extractor_names=feature_extractor_names,
processor_names=processor_names,
commit=commit,
dtype=dtype,
)
at_least_one_model_is_tested = True
if task in task_to_pipeline_and_spec_mapping:
pipeline, hub_spec = task_to_pipeline_and_spec_mapping[task]
compare_pipeline_args_to_hub_spec(pipeline, hub_spec)
if not at_least_one_model_is_tested:
self.skipTest(
f"{self.__class__.__name__}::test_pipeline_{task.replace('-', '_')}_{dtype} is skipped: Could not find any "
f"model architecture in the tiny models JSON file for `{task}`."
)
def run_model_pipeline_tests(
self,
task,
repo_name,
model_architecture,
tokenizer_names,
image_processor_names,
feature_extractor_names,
processor_names,
commit,
dtype="float32",
):
"""Run pipeline tests for a specific `task` with the give model class and tokenizer/processor class names
Args:
task (`str`):
A task name. This should be a key in the mapping `pipeline_test_mapping`.
repo_name (`str`):
A model repository id on the Hub.
model_architecture (`type`):
A subclass of `PreTrainedModel`.
tokenizer_names (`list[str]`):
A list of names of a subclasses of `PreTrainedTokenizerFast` or `PreTrainedTokenizer`.
image_processor_names (`list[str]`):
A list of names of subclasses of `BaseImageProcessor`.
feature_extractor_names (`list[str]`):
A list of names of subclasses of `FeatureExtractionMixin`.
processor_names (`list[str]`):
A list of names of subclasses of `ProcessorMixin`.
commit (`str`):
The commit hash of the model repository on the Hub.
dtype (`str`, `optional`, defaults to `'float32'`):
The torch dtype to use for the model. Can be used for FP16/other precision inference.
"""
# Get an instance of the corresponding class `XXXPipelineTests` in order to use `get_test_pipeline` and
# `run_pipeline_test`.
pipeline_test_class_name = pipeline_test_mapping[task]["test"].__name__
# If no image processor or feature extractor is found, we still need to test the pipeline with None
# otherwise for any empty list we might skip all the tests
tokenizer_names = tokenizer_names or [None]
image_processor_names = image_processor_names or [None]
feature_extractor_names = feature_extractor_names or [None]
processor_names = processor_names or [None]
test_cases = [
{
"tokenizer_name": tokenizer_name,
"image_processor_name": image_processor_name,
"feature_extractor_name": feature_extractor_name,
"processor_name": processor_name,
}
for tokenizer_name in tokenizer_names
for image_processor_name in image_processor_names
for feature_extractor_name in feature_extractor_names
for processor_name in processor_names
]
for test_case in test_cases:
tokenizer_name = test_case["tokenizer_name"]
image_processor_name = test_case["image_processor_name"]
feature_extractor_name = test_case["feature_extractor_name"]
processor_name = test_case["processor_name"]
do_skip_test_case = self.is_pipeline_test_to_skip(
pipeline_test_class_name,
model_architecture.config_class,
model_architecture,
tokenizer_name,
image_processor_name,
feature_extractor_name,
processor_name,
)
if do_skip_test_case:
logger.warning(
f"{self.__class__.__name__}::test_pipeline_{task.replace('-', '_')}_{dtype} is skipped: test is "
f"currently known to fail for: model `{model_architecture.__name__}` | tokenizer "
f"`{tokenizer_name}` | image processor `{image_processor_name}` | feature extractor {feature_extractor_name}."
)
continue
self.run_pipeline_test(
task,
repo_name,
model_architecture,
tokenizer_name=tokenizer_name,
image_processor_name=image_processor_name,
feature_extractor_name=feature_extractor_name,
processor_name=processor_name,
commit=commit,
dtype=dtype,
)
def run_pipeline_test(
self,
task,
repo_name,
model_architecture,
tokenizer_name,
image_processor_name,
feature_extractor_name,
processor_name,
commit,
dtype="float32",
):
"""Run pipeline tests for a specific `task` with the give model class and tokenizer/processor class name
The model will be loaded from a model repository on the Hub.
Args:
task (`str`):
A task name. This should be a key in the mapping `pipeline_test_mapping`.
repo_name (`str`):
A model repository id on the Hub.
model_architecture (`type`):
A subclass of `PreTrainedModel`.
tokenizer_name (`str`):
The name of a subclass of `PreTrainedTokenizerFast` or `PreTrainedTokenizer`.
image_processor_name (`str`):
The name of a subclass of `BaseImageProcessor`.
feature_extractor_name (`str`):
The name of a subclass of `FeatureExtractionMixin`.
processor_name (`str`):
The name of a subclass of `ProcessorMixin`.
commit (`str`):
The commit hash of the model repository on the Hub.
dtype (`str`, `optional`, defaults to `'float32'`):
The torch dtype to use for the model. Can be used for FP16/other precision inference.
"""
repo_id = f"{TRANSFORMERS_TINY_MODEL_PATH}/{repo_name}"
model_type = model_architecture.config_class.model_type
if TRANSFORMERS_TINY_MODEL_PATH != "hf-internal-testing":
repo_id = os.path.join(TRANSFORMERS_TINY_MODEL_PATH, model_type, repo_name)
# -------------------- Load model --------------------
# TODO: We should check whether a model file exists on the Hub repo instead.
try:
model = model_architecture.from_pretrained(repo_id, revision=commit, use_safetensors=True)
except Exception:
logger.warning(
f"{self.__class__.__name__}::test_pipeline_{task.replace('-', '_')}_{dtype} is skipped: Could not find or load "
f"the model from `{repo_id}` with `{model_architecture}`."
)
self.skipTest(f"Could not find or load the model from {repo_id} with {model_architecture}.")
# -------------------- Load tokenizer --------------------
tokenizer = None
if tokenizer_name is not None:
tokenizer_class = getattr(transformers_module, tokenizer_name)
tokenizer = tokenizer_class.from_pretrained(repo_id, revision=commit)
# -------------------- Load processors --------------------
processors = {}
for key, name in zip(
["image_processor", "feature_extractor", "processor"],
[image_processor_name, feature_extractor_name, processor_name],
):
if name is not None:
try:
# Can fail if some extra dependencies are not installed
processor_class = getattr(transformers_module, name)
processor = processor_class.from_pretrained(repo_id, revision=commit)
processors[key] = processor
except Exception:
logger.warning(
f"{self.__class__.__name__}::test_pipeline_{task.replace('-', '_')}_{dtype} is skipped: "
f"Could not load the {key} from `{repo_id}` with `{name}`."
)
self.skipTest(f"Could not load the {key} from {repo_id} with {name}.")
# ---------------------------------------------------------
# TODO: Maybe not upload such problematic tiny models to Hub.
if tokenizer is None and "image_processor" not in processors and "feature_extractor" not in processors:
logger.warning(
f"{self.__class__.__name__}::test_pipeline_{task.replace('-', '_')}_{dtype} is skipped: Could not find or load "
f"any tokenizer / image processor / feature extractor from `{repo_id}`."
)
self.skipTest(f"Could not find or load any tokenizer / processor from {repo_id}.")
pipeline_test_class_name = pipeline_test_mapping[task]["test"].__name__
if self.is_pipeline_test_to_skip_more(pipeline_test_class_name, model.config, model, tokenizer, **processors):
logger.warning(
f"{self.__class__.__name__}::test_pipeline_{task.replace('-', '_')}_{dtype} is skipped: test is "
f"currently known to fail for: model `{model_architecture.__name__}` | tokenizer "
f"`{tokenizer_name}` | image processor `{image_processor_name}` | feature extractor `{feature_extractor_name}`."
)
self.skipTest(
f"Test is known to fail for: model `{model_architecture.__name__}` | tokenizer `{tokenizer_name}` "
f"| image processor `{image_processor_name}` | feature extractor `{feature_extractor_name}`."
)
# validate
validate_test_components(model, tokenizer)
if hasattr(model, "eval"):
model = model.eval()
# Get an instance of the corresponding class `XXXPipelineTests` in order to use `get_test_pipeline` and
# `run_pipeline_test`.
task_test = pipeline_test_mapping[task]["test"]()
pipeline, examples = task_test.get_test_pipeline(model, tokenizer, **processors, dtype=dtype)
if pipeline is None:
# The test can disable itself, but it should be very marginal
# Concern: the Wav2Vec2ForCTC without-tokenizer test (a fast tokenizer doesn't exist)
logger.warning(
f"{self.__class__.__name__}::test_pipeline_{task.replace('-', '_')}_{dtype} is skipped: Could not get the "
"pipeline for testing."
)
self.skipTest(reason="Could not get the pipeline for testing.")
task_test.run_pipeline_test(pipeline, examples)
def run_batch_test(pipeline, examples):
# Need to copy because `Conversation` objects are stateful
if pipeline.tokenizer is not None and pipeline.tokenizer.pad_token_id is None:
return # No batching for this and it's OK
# 10 examples with batch size 4 means there needs to be an unfinished batch
# which is important for the unbatcher
def data(n):
for _ in range(n):
# Need to copy because Conversation object is mutated
yield copy.deepcopy(random.choice(examples))
out = []
for item in pipeline(data(10), batch_size=4):
out.append(item)
self.assertEqual(len(out), 10)
run_batch_test(pipeline, examples)
@is_pipeline_test
def test_pipeline_audio_classification(self):
self.run_task_tests(task="audio-classification")
@is_pipeline_test
@require_torch
def test_pipeline_audio_classification_fp16(self):
self.run_task_tests(task="audio-classification", dtype="float16")
@is_pipeline_test
def test_pipeline_automatic_speech_recognition(self):
self.run_task_tests(task="automatic-speech-recognition")
@is_pipeline_test
@require_torch
def test_pipeline_automatic_speech_recognition_fp16(self):
self.run_task_tests(task="automatic-speech-recognition", dtype="float16")
@is_pipeline_test
@require_vision
@require_timm
@require_torch
def test_pipeline_depth_estimation(self):
self.run_task_tests(task="depth-estimation")
@is_pipeline_test
@require_vision
@require_timm
@require_torch
def test_pipeline_depth_estimation_fp16(self):
self.run_task_tests(task="depth-estimation", dtype="float16")
@is_pipeline_test
@require_pytesseract
@require_torch
@require_vision
def test_pipeline_document_question_answering(self):
self.run_task_tests(task="document-question-answering")
@is_pipeline_test
@require_pytesseract
@require_torch
@require_vision
def test_pipeline_document_question_answering_fp16(self):
self.run_task_tests(task="document-question-answering", dtype="float16")
@is_pipeline_test
def test_pipeline_feature_extraction(self):
self.run_task_tests(task="feature-extraction")
@is_pipeline_test
@require_torch
def test_pipeline_feature_extraction_fp16(self):
self.run_task_tests(task="feature-extraction", dtype="float16")
@is_pipeline_test
def test_pipeline_fill_mask(self):
self.run_task_tests(task="fill-mask")
@is_pipeline_test
@require_torch
def test_pipeline_fill_mask_fp16(self):
self.run_task_tests(task="fill-mask", dtype="float16")
@is_pipeline_test
@require_torch
@require_vision
def test_pipeline_image_classification(self):
self.run_task_tests(task="image-classification")
@is_pipeline_test
@require_vision
@require_torch
def test_pipeline_image_classification_fp16(self):
self.run_task_tests(task="image-classification", dtype="float16")
@is_pipeline_test
@require_vision
@require_timm
@require_torch
def test_pipeline_image_segmentation(self):
self.run_task_tests(task="image-segmentation")
@is_pipeline_test
@require_vision
@require_timm
@require_torch
def test_pipeline_image_segmentation_fp16(self):
self.run_task_tests(task="image-segmentation", dtype="float16")
@is_pipeline_test
@require_vision
@require_torch
def test_pipeline_image_text_to_text(self):
self.run_task_tests(task="image-text-to-text")
@is_pipeline_test
@require_vision
@require_torch
def test_pipeline_image_text_to_text_fp16(self):
self.run_task_tests(task="image-text-to-text", dtype="float16")
@is_pipeline_test
@require_vision
@require_torch
def test_pipeline_any_to_any(self):
self.run_task_tests(task="any-to-any")
@is_pipeline_test
@require_vision
@require_torch
def test_pipeline_any_to_any_fp16(self):
self.run_task_tests(task="any-to-any", dtype="float16")
@is_pipeline_test
@require_timm
@require_vision
@require_torch
def test_pipeline_image_feature_extraction(self):
self.run_task_tests(task="image-feature-extraction")
@is_pipeline_test
@require_timm
@require_vision
@require_torch
def test_pipeline_image_feature_extraction_fp16(self):
self.run_task_tests(task="image-feature-extraction", dtype="float16")
@unittest.skip(reason="`run_pipeline_test` is currently not implemented.")
@is_pipeline_test
@require_vision
@require_torch
def test_pipeline_mask_generation(self):
self.run_task_tests(task="mask-generation")
@unittest.skip(reason="`run_pipeline_test` is currently not implemented.")
@is_pipeline_test
@require_vision
@require_torch
def test_pipeline_mask_generation_fp16(self):
self.run_task_tests(task="mask-generation", dtype="float16")
@is_pipeline_test
@require_vision
@require_timm
@require_torch
def test_pipeline_object_detection(self):
self.run_task_tests(task="object-detection")
@is_pipeline_test
@require_vision
@require_timm
@require_torch
def test_pipeline_object_detection_fp16(self):
self.run_task_tests(task="object-detection", dtype="float16")
@is_pipeline_test
def test_pipeline_question_answering(self):
self.run_task_tests(task="question-answering")
@is_pipeline_test
@require_torch
def test_pipeline_question_answering_fp16(self):
self.run_task_tests(task="question-answering", dtype="float16")
@is_pipeline_test
def test_pipeline_table_question_answering(self):
self.run_task_tests(task="table-question-answering")
@is_pipeline_test
@require_torch
def test_pipeline_table_question_answering_fp16(self):
self.run_task_tests(task="table-question-answering", dtype="float16")
@is_pipeline_test
def test_pipeline_text_classification(self):
self.run_task_tests(task="text-classification")
@is_pipeline_test
@require_torch
def test_pipeline_text_classification_fp16(self):
self.run_task_tests(task="text-classification", dtype="float16")
@is_pipeline_test
@require_torch
def test_pipeline_text_generation(self):
self.run_task_tests(task="text-generation")
@is_pipeline_test
@require_torch
def test_pipeline_text_generation_fp16(self):
self.run_task_tests(task="text-generation", dtype="float16")
@is_pipeline_test
@require_torch
def test_pipeline_text_to_audio(self):
self.run_task_tests(task="text-to-audio")
@is_pipeline_test
@require_torch
def test_pipeline_text_to_audio_fp16(self):
self.run_task_tests(task="text-to-audio", dtype="float16")
@is_pipeline_test
def test_pipeline_token_classification(self):
self.run_task_tests(task="token-classification")
@is_pipeline_test
@require_torch
def test_pipeline_token_classification_fp16(self):
self.run_task_tests(task="token-classification", dtype="float16")
@is_pipeline_test
@require_torch
@require_vision
@require_av
def test_pipeline_video_classification(self):
self.run_task_tests(task="video-classification")
@is_pipeline_test
@require_vision
@require_torch
@require_av
def test_pipeline_video_classification_fp16(self):
self.run_task_tests(task="video-classification", dtype="float16")
@is_pipeline_test
@require_torch
@require_vision
def test_pipeline_visual_question_answering(self):
self.run_task_tests(task="visual-question-answering")
@is_pipeline_test
@require_torch
@require_vision
def test_pipeline_visual_question_answering_fp16(self):
self.run_task_tests(task="visual-question-answering", dtype="float16")
@is_pipeline_test
def test_pipeline_zero_shot(self):
self.run_task_tests(task="zero-shot")
@is_pipeline_test
@require_torch
def test_pipeline_zero_shot_fp16(self):
self.run_task_tests(task="zero-shot", dtype="float16")
@is_pipeline_test
@require_torch
def test_pipeline_zero_shot_audio_classification(self):
self.run_task_tests(task="zero-shot-audio-classification")
@is_pipeline_test
@require_torch
def test_pipeline_zero_shot_audio_classification_fp16(self):
self.run_task_tests(task="zero-shot-audio-classification", dtype="float16")
@is_pipeline_test
@require_vision
def test_pipeline_zero_shot_image_classification(self):
self.run_task_tests(task="zero-shot-image-classification")
@is_pipeline_test
@require_vision
@require_torch
def test_pipeline_zero_shot_image_classification_fp16(self):
self.run_task_tests(task="zero-shot-image-classification", dtype="float16")
@is_pipeline_test
@require_vision
@require_torch
def test_pipeline_zero_shot_object_detection(self):
self.run_task_tests(task="zero-shot-object-detection")
@is_pipeline_test
@require_vision
@require_torch
def test_pipeline_zero_shot_object_detection_fp16(self):
self.run_task_tests(task="zero-shot-object-detection", dtype="float16")
# This contains the test cases to be skipped without model architecture being involved.
def is_pipeline_test_to_skip(
self,
pipeline_test_case_name,
config_class,
model_architecture,
tokenizer_name,
image_processor_name,
feature_extractor_name,
processor_name,
):
"""Skip some tests based on the classes or their names without the instantiated objects.
This is to avoid calling `from_pretrained` (so reducing the runtime) if we already know the tests will fail.
"""
# No fix is required for this case.
if (
pipeline_test_case_name == "DocumentQuestionAnsweringPipelineTests"
and tokenizer_name is not None
and not tokenizer_name.endswith("Fast")
):
# `DocumentQuestionAnsweringPipelineTests` requires a fast tokenizer.
return True
return False
def is_pipeline_test_to_skip_more(
self,
pipeline_test_case_name,
config,
model,
tokenizer,
image_processor=None,
feature_extractor=None,
processor=None,
): # noqa
"""Skip some more tests based on the information from the instantiated objects."""
# No fix is required for this case.
if (
pipeline_test_case_name == "QAPipelineTests"
and tokenizer is not None
and getattr(tokenizer, "pad_token", None) is None
and not tokenizer.__class__.__name__.endswith("Fast")
):
# `QAPipelineTests` doesn't work with a slow tokenizer that has no pad token.
return True
return False
def validate_test_components(model, tokenizer):
# TODO: Move this to tiny model creation script
# head-specific (within a model type) necessary changes to the config
# 1. for `BlenderbotForCausalLM`
if model.__class__.__name__ == "BlenderbotForCausalLM":
model.config.encoder_no_repeat_ngram_size = 0
# TODO: Change the tiny model creation script: don't create models with problematic tokenizers
# Avoid `IndexError` in embedding layers
CONFIG_WITHOUT_VOCAB_SIZE = ["CanineConfig"]
if tokenizer is not None:
# Removing `decoder=True` in `get_text_config` can lead to conflicting values e.g. in MusicGen
config_vocab_size = getattr(model.config.get_text_config(decoder=True), "vocab_size", None)
# For CLIP-like models
if config_vocab_size is None:
if hasattr(model.config, "text_encoder"):
config_vocab_size = getattr(model.config.text_config, "vocab_size", None)
if config_vocab_size is None and model.config.__class__.__name__ not in CONFIG_WITHOUT_VOCAB_SIZE:
raise ValueError(
"Could not determine `vocab_size` from model configuration while `tokenizer` is not `None`."
)
def get_arg_names_from_hub_spec(hub_spec, first_level=True):
# This util is used in pipeline tests, to verify that a pipeline's documented arguments
# match the Hub specification for that task
arg_names = []
for field in fields(hub_spec):
# Recurse into nested fields, but max one level
if is_dataclass(field.type):
arg_names.extend([field.name for field in fields(field.type)])
continue
# Next, catch nested fields that are part of a Union[], which is usually caused by Optional[]
for param_type in get_args(field.type):
if is_dataclass(param_type):
# Again, recurse into nested fields, but max one level
arg_names.extend([field.name for field in fields(param_type)])
break
else:
# Finally, this line triggers if it's not a nested field
arg_names.append(field.name)
return arg_names
def parse_args_from_docstring_by_indentation(docstring):
# This util is used in pipeline tests, to extract the argument names from a google-format docstring
# to compare them against the Hub specification for that task. It uses indentation levels as a primary
# source of truth, so these have to be correct!
docstring = dedent(docstring)
lines_by_indent = [
(len(line) - len(line.lstrip()), line.strip()) for line in docstring.split("\n") if line.strip()
]
args_lineno = None
args_indent = None
args_end = None
for lineno, (indent, line) in enumerate(lines_by_indent):
if line == "Args:":
args_lineno = lineno
args_indent = indent
continue
elif args_lineno is not None and indent == args_indent:
args_end = lineno
break
if args_lineno is None:
raise ValueError("No args block to parse!")
elif args_end is None:
args_block = lines_by_indent[args_lineno + 1 :]
else:
args_block = lines_by_indent[args_lineno + 1 : args_end]
outer_indent_level = min(line[0] for line in args_block)
outer_lines = [line for line in args_block if line[0] == outer_indent_level]
arg_names = [re.match(r"(\w+)\W", line[1]).group(1) for line in outer_lines]
return arg_names
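The indentation-based extraction above can be illustrated on a hypothetical google-style docstring; this condensed sketch mirrors the core steps (dedent, locate `Args:`, keep only the outermost-indent lines, regex out the names):

```python
import re
from textwrap import dedent

docstring = dedent("""\
    Do something.

    Args:
        text (str):
            The input text.
        top_k (int, optional):
            How many results to return.
    """)
# Pair each non-empty line with its indentation level.
lines = [(len(l) - len(l.lstrip()), l.strip()) for l in docstring.split("\n") if l.strip()]
args_start = next(i for i, (_, l) in enumerate(lines) if l == "Args:")
block = lines[args_start + 1:]
# Argument names sit at the shallowest indent inside the Args block.
outer = min(indent for indent, _ in block)
arg_names = [re.match(r"(\w+)\W", l).group(1) for indent, l in block if indent == outer]
```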
def compare_pipeline_args_to_hub_spec(pipeline_class, hub_spec):
"""
Compares the docstring of a pipeline class to the fields of the matching Hub input signature class to ensure that
they match. This guarantees that Transformers pipelines can be used in inference without needing to manually
refactor or rename inputs.
"""
ALLOWED_TRANSFORMERS_ONLY_ARGS = ["timeout"]
docstring = inspect.getdoc(pipeline_class.__call__).strip()
docstring_args = set(parse_args_from_docstring_by_indentation(docstring))
hub_args = set(get_arg_names_from_hub_spec(hub_spec))
# Special casing: We allow the name of this arg to differ
hub_generate_args = [
hub_arg for hub_arg in hub_args if hub_arg.startswith("generate") or hub_arg.startswith("generation")
]
docstring_generate_args = [
docstring_arg
for docstring_arg in docstring_args
if docstring_arg.startswith("generate") or docstring_arg.startswith("generation")
]
if (
len(hub_generate_args) == 1
and len(docstring_generate_args) == 1
and hub_generate_args != docstring_generate_args
):
hub_args.remove(hub_generate_args[0])
docstring_args.remove(docstring_generate_args[0])
# Special casing 2: We permit some transformers-only arguments that don't affect pipeline output
for arg in ALLOWED_TRANSFORMERS_ONLY_ARGS:
if arg in docstring_args and arg not in hub_args:
docstring_args.remove(arg)
if hub_args != docstring_args:
error = [f"{pipeline_class.__name__} differs from JS spec {hub_spec.__name__}"]
matching_args = hub_args & docstring_args
huggingface_hub_only = hub_args - docstring_args
transformers_only = docstring_args - hub_args
if matching_args:
error.append(f"Matching args: {matching_args}")
if huggingface_hub_only:
error.append(f"Huggingface Hub only: {huggingface_hub_only}")
if transformers_only:
error.append(f"Transformers only: {transformers_only}")
raise ValueError("\n".join(error)) | python | github | https://github.com/huggingface/transformers | tests/test_pipeline_mixin.py |
'''"Executable documentation" for the pickle module.
Extensive comments about the pickle protocols and pickle-machine opcodes
can be found here. Some functions meant for external use:
genops(pickle)
Generate all the opcodes in a pickle, as (opcode, arg, position) triples.
dis(pickle, out=None, memo=None, indentlevel=4)
Print a symbolic disassembly of a pickle.
'''
import codecs
import io
import pickle
import re
import sys
__all__ = ['dis', 'genops', 'optimize']
bytes_types = pickle.bytes_types
# Other ideas:
#
# - A pickle verifier: read a pickle and check it exhaustively for
# well-formedness. dis() does a lot of this already.
#
# - A protocol identifier: examine a pickle and return its protocol number
# (== the highest .proto attr value among all the opcodes in the pickle).
# dis() already prints this info at the end.
#
# - A pickle optimizer: for example, tuple-building code is sometimes more
# elaborate than necessary, catering for the possibility that the tuple
# is recursive. Or lots of times a PUT is generated that's never accessed
# by a later GET.
# "A pickle" is a program for a virtual pickle machine (PM, but more accurately
# called an unpickling machine). It's a sequence of opcodes, interpreted by the
# PM, building an arbitrarily complex Python object.
#
# For the most part, the PM is very simple: there are no looping, testing, or
# conditional instructions, no arithmetic and no function calls. Opcodes are
# executed once each, from first to last, until a STOP opcode is reached.
#
# The PM has two data areas, "the stack" and "the memo".
#
# Many opcodes push Python objects onto the stack; e.g., INT pushes a Python
# integer object on the stack, whose value is gotten from a decimal string
# literal immediately following the INT opcode in the pickle bytestream. Other
# opcodes take Python objects off the stack. The result of unpickling is
# whatever object is left on the stack when the final STOP opcode is executed.
#
# The memo is simply an array of objects, or it can be implemented as a dict
# mapping little integers to objects. The memo serves as the PM's "long term
# memory", and the little integers indexing the memo are akin to variable
# names. Some opcodes pop a stack object into the memo at a given index,
# and others push a memo object at a given index onto the stack again.
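# The stack-and-memo behavior can be watched directly with this module's
# genops() (assuming the module is importable as pickletools): at protocol 0,
# a shared sublist shows up as a PUT on first encounter and a GET on the second.

```python
import pickle
import pickletools

shared = [1, 2]
payload = pickle.dumps([shared, shared], protocol=0)
# Keep just the opcodes that touch the memo.
memo_ops = [opcode.name for opcode, arg, pos in pickletools.genops(payload)
            if "PUT" in opcode.name or "GET" in opcode.name]
```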
#
# At heart, that's all the PM has. Subtleties arise for these reasons:
#
# + Object identity. Objects can be arbitrarily complex, and subobjects
# may be shared (for example, the list [a, a] refers to the same object a
# twice). It can be vital that unpickling recreate an isomorphic object
# graph, faithfully reproducing sharing.
#
# + Recursive objects. For example, after "L = []; L.append(L)", L is a
# list, and L[0] is the same list. This is related to the object identity
# point, and some sequences of pickle opcodes are subtle in order to
# get the right result in all cases.
#
# + Things pickle doesn't know everything about. Examples of things pickle
# does know everything about are Python's builtin scalar and container
# types, like ints and tuples. They generally have opcodes dedicated to
# them. For things like module references and instances of user-defined
# classes, pickle's knowledge is limited. Historically, many enhancements
# have been made to the pickle protocol in order to do a better (faster,
# and/or more compact) job on those.
#
# + Backward compatibility and micro-optimization. As explained below,
# pickle opcodes never go away, not even when better ways to do a thing
# get invented. The repertoire of the PM just keeps growing over time.
# For example, protocol 0 had two opcodes for building Python integers (INT
# and LONG), protocol 1 added three more for more-efficient pickling of short
# integers, and protocol 2 added two more for more-efficient pickling of
# long integers (before protocol 2, the only ways to pickle a Python long
# took time quadratic in the number of digits, for both pickling and
# unpickling). "Opcode bloat" isn't so much a subtlety as a source of
# wearying complication.
#
#
# Pickle protocols:
#
# For compatibility, the meaning of a pickle opcode never changes. Instead new
# pickle opcodes get added, and each version's unpickler can handle all the
# pickle opcodes in all protocol versions to date. So old pickles continue to
# be readable forever. The pickler can generally be told to restrict itself to
# the subset of opcodes available under previous protocol versions too, so that
# users can create pickles under the current version readable by older
# versions. However, a pickle does not contain its version number embedded
# within it. If an older unpickler tries to read a pickle using a later
# protocol, the result is most likely an exception due to seeing an unknown (in
# the older unpickler) opcode.
#
# The original pickle used what's now called "protocol 0", and what was called
# "text mode" before Python 2.3. The entire pickle bytestream is made up of
# printable 7-bit ASCII characters, plus the newline character, in protocol 0.
# That's why it was called text mode. Protocol 0 is small and elegant, but
# sometimes painfully inefficient.
#
# The second major set of additions is now called "protocol 1", and was called
# "binary mode" before Python 2.3. This added many opcodes with arguments
# consisting of arbitrary bytes, including NUL bytes and unprintable "high bit"
# bytes. Binary mode pickles can be substantially smaller than equivalent
# text mode pickles, and sometimes faster too; e.g., BININT represents a 4-byte
# int as 4 bytes following the opcode, which is cheaper to unpickle than the
# (perhaps) 11-character decimal string attached to INT. Protocol 1 also added
# a number of opcodes that operate on many stack elements at once (like APPENDS
# and SETITEMS), and "shortcut" opcodes (like EMPTY_DICT and EMPTY_TUPLE).
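# The size difference is easy to measure; a rough comparison (values chosen
# arbitrarily):

```python
import pickle

value = [12345678] * 20
text_size = len(pickle.dumps(value, protocol=0))    # INT: decimal digits + newline per int
binary_size = len(pickle.dumps(value, protocol=1))  # BININT: opcode + 4 bytes per int
```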
#
# The third major set of additions came in Python 2.3, and is called "protocol
# 2". This added:
#
# - A better way to pickle instances of new-style classes (NEWOBJ).
#
# - A way for a pickle to identify its protocol (PROTO).
#
# - Time- and space- efficient pickling of long ints (LONG{1,4}).
#
# - Shortcuts for small tuples (TUPLE{1,2,3}).
#
# - Dedicated opcodes for bools (NEWTRUE, NEWFALSE).
#
# - The "extension registry", a vector of popular objects that can be pushed
# efficiently by index (EXT{1,2,4}). This is akin to the memo and GET, but
# the registry contents are predefined (there's nothing akin to the memo's
# PUT).
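# Several of these additions show up in even a tiny protocol-2 pickle
# (again assuming this module is importable as pickletools):

```python
import pickle
import pickletools

ops = [opcode.name for opcode, arg, pos in
       pickletools.genops(pickle.dumps((True, False), protocol=2))]
# PROTO leads, NEWTRUE/NEWFALSE encode the bools, TUPLE2 builds the pair.
```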
#
# Another independent change with Python 2.3 is the abandonment of any
# pretense that it might be safe to load pickles received from untrusted
# parties -- no sufficient security analysis has been done to guarantee
# this and there isn't a use case that warrants the expense of such an
# analysis.
#
# To this end, all tests for __safe_for_unpickling__ or for
# copyreg.safe_constructors are removed from the unpickling code.
# References to these variables in the descriptions below are to be seen
# as describing unpickling in Python 2.2 and before.
# Meta-rule: Descriptions are stored in instances of descriptor objects,
# with plain constructors. No meta-language is defined from which
# descriptors could be constructed. If you want, e.g., XML, write a little
# program to generate XML from the objects.
##############################################################################
# Some pickle opcodes have an argument, following the opcode in the
# bytestream. An argument is of a specific type, described by an instance
# of ArgumentDescriptor. These are not to be confused with arguments taken
# off the stack -- ArgumentDescriptor applies only to arguments embedded in
# the opcode stream, immediately following an opcode.
# Represents the number of bytes consumed by an argument delimited by the
# next newline character.
UP_TO_NEWLINE = -1
# Represents the number of bytes consumed by a two-argument opcode where
# the first argument gives the number of bytes in the second argument.
TAKEN_FROM_ARGUMENT1 = -2 # num bytes is 1-byte unsigned int
TAKEN_FROM_ARGUMENT4 = -3 # num bytes is 4-byte signed little-endian int
TAKEN_FROM_ARGUMENT4U = -4 # num bytes is 4-byte unsigned little-endian int
TAKEN_FROM_ARGUMENT8U = -5 # num bytes is 8-byte unsigned little-endian int
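# For instance, an opcode whose argument uses the TAKEN_FROM_ARGUMENT1
# convention (such as SHORT_BINBYTES) carries a 1-byte unsigned length
# followed by that many payload bytes.  A minimal hand-rolled reader for
# that shape (the real readers appear below):

```python
import io

def read_length_prefixed1(f):
    # First argument: one byte giving the payload length.
    n = f.read(1)
    if not n:
        raise ValueError("not enough data for the length byte")
    # Second argument: exactly n payload bytes.
    data = f.read(n[0])
    if len(data) != n[0]:
        raise ValueError("not enough data for the payload")
    return data

payload = read_length_prefixed1(io.BytesIO(b"\x05hello"))
```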
class ArgumentDescriptor(object):
__slots__ = (
# name of descriptor record, also a module global name; a string
'name',
# length of argument, in bytes; an int; UP_TO_NEWLINE and
# TAKEN_FROM_ARGUMENT{1,4,8} are negative values for variable-length
# cases
'n',
# a function taking a file-like object, reading this kind of argument
# from the object at the current position, advancing the current
# position by n bytes, and returning the value of the argument
'reader',
# human-readable docs for this arg descriptor; a string
'doc',
)
def __init__(self, name, n, reader, doc):
assert isinstance(name, str)
self.name = name
assert isinstance(n, int) and (n >= 0 or
n in (UP_TO_NEWLINE,
TAKEN_FROM_ARGUMENT1,
TAKEN_FROM_ARGUMENT4,
TAKEN_FROM_ARGUMENT4U,
TAKEN_FROM_ARGUMENT8U))
self.n = n
self.reader = reader
assert isinstance(doc, str)
self.doc = doc
from struct import unpack as _unpack
def read_uint1(f):
r"""
>>> import io
>>> read_uint1(io.BytesIO(b'\xff'))
255
"""
data = f.read(1)
if data:
return data[0]
raise ValueError("not enough data in stream to read uint1")
uint1 = ArgumentDescriptor(
name='uint1',
n=1,
reader=read_uint1,
doc="One-byte unsigned integer.")
def read_uint2(f):
r"""
>>> import io
>>> read_uint2(io.BytesIO(b'\xff\x00'))
255
>>> read_uint2(io.BytesIO(b'\xff\xff'))
65535
"""
data = f.read(2)
if len(data) == 2:
return _unpack("<H", data)[0]
raise ValueError("not enough data in stream to read uint2")
uint2 = ArgumentDescriptor(
name='uint2',
n=2,
reader=read_uint2,
doc="Two-byte unsigned integer, little-endian.")
def read_int4(f):
r"""
>>> import io
>>> read_int4(io.BytesIO(b'\xff\x00\x00\x00'))
255
>>> read_int4(io.BytesIO(b'\x00\x00\x00\x80')) == -(2**31)
True
"""
data = f.read(4)
if len(data) == 4:
return _unpack("<i", data)[0]
raise ValueError("not enough data in stream to read int4")
int4 = ArgumentDescriptor(
name='int4',
n=4,
reader=read_int4,
doc="Four-byte signed integer, little-endian, 2's complement.")
def read_uint4(f):
r"""
>>> import io
>>> read_uint4(io.BytesIO(b'\xff\x00\x00\x00'))
255
>>> read_uint4(io.BytesIO(b'\x00\x00\x00\x80')) == 2**31
True
"""
data = f.read(4)
if len(data) == 4:
return _unpack("<I", data)[0]
raise ValueError("not enough data in stream to read uint4")
uint4 = ArgumentDescriptor(
name='uint4',
n=4,
reader=read_uint4,
doc="Four-byte unsigned integer, little-endian.")
def read_uint8(f):
r"""
>>> import io
>>> read_uint8(io.BytesIO(b'\xff\x00\x00\x00\x00\x00\x00\x00'))
255
>>> read_uint8(io.BytesIO(b'\xff' * 8)) == 2**64-1
True
"""
data = f.read(8)
if len(data) == 8:
return _unpack("<Q", data)[0]
raise ValueError("not enough data in stream to read uint8")
uint8 = ArgumentDescriptor(
name='uint8',
n=8,
reader=read_uint8,
doc="Eight-byte unsigned integer, little-endian.")
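The fixed-width readers above all follow the same pattern: read exactly n bytes and unpack them with a little-endian struct format code. A minimal standalone sketch of that pattern (the helper name is illustrative, not part of this module):

```python
import io
import struct

def read_le_uint(f, width, fmt):
    # Read exactly `width` bytes and unpack them with struct code `fmt`,
    # mirroring read_uint1/read_uint2/read_uint4/read_uint8 above.
    data = f.read(width)
    if len(data) != width:
        raise ValueError("not enough data in stream")
    return struct.unpack(fmt, data)[0]

buf = io.BytesIO(b"\xff" b"\x01\x00" b"\x00\x00\x00\x80")
print(read_le_uint(buf, 1, "<B"))  # 255
print(read_le_uint(buf, 2, "<H"))  # 1
print(read_le_uint(buf, 4, "<I"))  # 2147483648 == 2**31
```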
def read_stringnl(f, decode=True, stripquotes=True, *, encoding='latin-1'):
r"""
>>> import io
>>> read_stringnl(io.BytesIO(b"'abcd'\nefg\n"))
'abcd'
>>> read_stringnl(io.BytesIO(b"\n"))
Traceback (most recent call last):
...
ValueError: no string quotes around b''
>>> read_stringnl(io.BytesIO(b"\n"), stripquotes=False)
''
>>> read_stringnl(io.BytesIO(b"''\n"))
''
>>> read_stringnl(io.BytesIO(b'"abcd"'))
Traceback (most recent call last):
...
ValueError: no newline found when trying to read stringnl
Embedded escapes are undone in the result.
>>> read_stringnl(io.BytesIO(br"'a\n\\b\x00c\td'" + b"\n'e'"))
'a\n\\b\x00c\td'
"""
data = f.readline()
if not data.endswith(b'\n'):
raise ValueError("no newline found when trying to read stringnl")
data = data[:-1] # lose the newline
if stripquotes:
for q in (b'"', b"'"):
if data.startswith(q):
if not data.endswith(q):
raise ValueError("string quote %r not found at both "
"ends of %r" % (q, data))
data = data[1:-1]
break
else:
raise ValueError("no string quotes around %r" % data)
if decode:
data = codecs.escape_decode(data)[0].decode(encoding)
return data
stringnl = ArgumentDescriptor(
name='stringnl',
n=UP_TO_NEWLINE,
reader=read_stringnl,
doc="""A newline-terminated string.
This is a repr-style string, with embedded escapes, and
bracketing quotes.
""")
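stringnl leans on codecs.escape_decode to undo the repr-style escapes. A small illustrative check of that helper (undocumented but long-stable in CPython; it returns a (bytes, consumed) pair):

```python
import codecs

# Undo repr-style escapes the way read_stringnl does after quote-stripping.
data = rb"a\n\\b\x00c"            # 11 raw bytes, escapes still encoded
decoded, consumed = codecs.escape_decode(data)
print(decoded)   # the newline, backslash and NUL byte are now literal
print(consumed)  # 11, the number of input bytes consumed
```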
def read_stringnl_noescape(f):
return read_stringnl(f, stripquotes=False, encoding='utf-8')
stringnl_noescape = ArgumentDescriptor(
name='stringnl_noescape',
n=UP_TO_NEWLINE,
reader=read_stringnl_noescape,
doc="""A newline-terminated string.
This is a str-style string, without embedded escapes,
or bracketing quotes. It should consist solely of
printable ASCII characters.
""")
def read_stringnl_noescape_pair(f):
r"""
>>> import io
>>> read_stringnl_noescape_pair(io.BytesIO(b"Queue\nEmpty\njunk"))
'Queue Empty'
"""
return "%s %s" % (read_stringnl_noescape(f), read_stringnl_noescape(f))
stringnl_noescape_pair = ArgumentDescriptor(
name='stringnl_noescape_pair',
n=UP_TO_NEWLINE,
reader=read_stringnl_noescape_pair,
doc="""A pair of newline-terminated strings.
These are str-style strings, without embedded
escapes, or bracketing quotes. They should
consist solely of printable ASCII characters.
The pair is returned as a single string, with
a single blank separating the two strings.
""")
def read_string1(f):
r"""
>>> import io
>>> read_string1(io.BytesIO(b"\x00"))
''
>>> read_string1(io.BytesIO(b"\x03abcdef"))
'abc'
"""
n = read_uint1(f)
assert n >= 0
data = f.read(n)
if len(data) == n:
return data.decode("latin-1")
raise ValueError("expected %d bytes in a string1, but only %d remain" %
(n, len(data)))
string1 = ArgumentDescriptor(
name="string1",
n=TAKEN_FROM_ARGUMENT1,
reader=read_string1,
doc="""A counted string.
The first argument is a 1-byte unsigned int giving the number
of bytes in the string, and the second argument is that many
bytes.
""")
def read_string4(f):
r"""
>>> import io
>>> read_string4(io.BytesIO(b"\x00\x00\x00\x00abc"))
''
>>> read_string4(io.BytesIO(b"\x03\x00\x00\x00abcdef"))
'abc'
>>> read_string4(io.BytesIO(b"\x00\x00\x00\x03abcdef"))
Traceback (most recent call last):
...
ValueError: expected 50331648 bytes in a string4, but only 6 remain
"""
n = read_int4(f)
if n < 0:
raise ValueError("string4 byte count < 0: %d" % n)
data = f.read(n)
if len(data) == n:
return data.decode("latin-1")
raise ValueError("expected %d bytes in a string4, but only %d remain" %
(n, len(data)))
string4 = ArgumentDescriptor(
name="string4",
n=TAKEN_FROM_ARGUMENT4,
reader=read_string4,
doc="""A counted string.
The first argument is a 4-byte little-endian signed int giving
the number of bytes in the string, and the second argument is
that many bytes.
""")
def read_bytes1(f):
r"""
>>> import io
>>> read_bytes1(io.BytesIO(b"\x00"))
b''
>>> read_bytes1(io.BytesIO(b"\x03abcdef"))
b'abc'
"""
n = read_uint1(f)
assert n >= 0
data = f.read(n)
if len(data) == n:
return data
raise ValueError("expected %d bytes in a bytes1, but only %d remain" %
(n, len(data)))
bytes1 = ArgumentDescriptor(
name="bytes1",
n=TAKEN_FROM_ARGUMENT1,
reader=read_bytes1,
doc="""A counted bytes string.
The first argument is a 1-byte unsigned int giving the number
of bytes, and the second argument is that many bytes.
""")
def read_bytes4(f):
r"""
>>> import io
>>> read_bytes4(io.BytesIO(b"\x00\x00\x00\x00abc"))
b''
>>> read_bytes4(io.BytesIO(b"\x03\x00\x00\x00abcdef"))
b'abc'
>>> read_bytes4(io.BytesIO(b"\x00\x00\x00\x03abcdef"))
Traceback (most recent call last):
...
ValueError: expected 50331648 bytes in a bytes4, but only 6 remain
"""
n = read_uint4(f)
assert n >= 0
if n > sys.maxsize:
raise ValueError("bytes4 byte count > sys.maxsize: %d" % n)
data = f.read(n)
if len(data) == n:
return data
raise ValueError("expected %d bytes in a bytes4, but only %d remain" %
(n, len(data)))
bytes4 = ArgumentDescriptor(
name="bytes4",
n=TAKEN_FROM_ARGUMENT4U,
reader=read_bytes4,
doc="""A counted bytes string.
The first argument is a 4-byte little-endian unsigned int giving
the number of bytes, and the second argument is that many bytes.
""")
def read_bytes8(f):
r"""
>>> import io, struct, sys
>>> read_bytes8(io.BytesIO(b"\x00\x00\x00\x00\x00\x00\x00\x00abc"))
b''
>>> read_bytes8(io.BytesIO(b"\x03\x00\x00\x00\x00\x00\x00\x00abcdef"))
b'abc'
>>> bigsize8 = struct.pack("<Q", sys.maxsize//3)
>>> read_bytes8(io.BytesIO(bigsize8 + b"abcdef")) #doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: expected ... bytes in a bytes8, but only 6 remain
"""
n = read_uint8(f)
assert n >= 0
if n > sys.maxsize:
raise ValueError("bytes8 byte count > sys.maxsize: %d" % n)
data = f.read(n)
if len(data) == n:
return data
raise ValueError("expected %d bytes in a bytes8, but only %d remain" %
(n, len(data)))
bytes8 = ArgumentDescriptor(
name="bytes8",
n=TAKEN_FROM_ARGUMENT8U,
reader=read_bytes8,
doc="""A counted bytes string.
The first argument is an 8-byte little-endian unsigned int giving
the number of bytes, and the second argument is that many bytes.
""")
def read_bytearray8(f):
r"""
>>> import io, struct, sys
>>> read_bytearray8(io.BytesIO(b"\x00\x00\x00\x00\x00\x00\x00\x00abc"))
bytearray(b'')
>>> read_bytearray8(io.BytesIO(b"\x03\x00\x00\x00\x00\x00\x00\x00abcdef"))
bytearray(b'abc')
>>> bigsize8 = struct.pack("<Q", sys.maxsize//3)
>>> read_bytearray8(io.BytesIO(bigsize8 + b"abcdef")) #doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: expected ... bytes in a bytearray8, but only 6 remain
"""
n = read_uint8(f)
assert n >= 0
if n > sys.maxsize:
raise ValueError("bytearray8 byte count > sys.maxsize: %d" % n)
data = f.read(n)
if len(data) == n:
return bytearray(data)
raise ValueError("expected %d bytes in a bytearray8, but only %d remain" %
(n, len(data)))
bytearray8 = ArgumentDescriptor(
name="bytearray8",
n=TAKEN_FROM_ARGUMENT8U,
reader=read_bytearray8,
doc="""A counted bytearray.
The first argument is an 8-byte little-endian unsigned int giving
the number of bytes, and the second argument is that many bytes.
""")
def read_unicodestringnl(f):
r"""
>>> import io
>>> read_unicodestringnl(io.BytesIO(b"abc\\uabcd\njunk")) == 'abc\uabcd'
True
"""
data = f.readline()
if not data.endswith(b'\n'):
raise ValueError("no newline found when trying to read "
"unicodestringnl")
data = data[:-1] # lose the newline
return str(data, 'raw-unicode-escape')
unicodestringnl = ArgumentDescriptor(
name='unicodestringnl',
n=UP_TO_NEWLINE,
reader=read_unicodestringnl,
doc="""A newline-terminated Unicode string.
This is raw-unicode-escape encoded, so consists of
printable ASCII characters, and may contain embedded
escape sequences.
""")
def read_unicodestring1(f):
r"""
>>> import io
>>> s = 'abcd\uabcd'
>>> enc = s.encode('utf-8')
>>> enc
b'abcd\xea\xaf\x8d'
>>> n = bytes([len(enc)]) # little-endian 1-byte length
>>> t = read_unicodestring1(io.BytesIO(n + enc + b'junk'))
>>> s == t
True
>>> read_unicodestring1(io.BytesIO(n + enc[:-1]))
Traceback (most recent call last):
...
ValueError: expected 7 bytes in a unicodestring1, but only 6 remain
"""
n = read_uint1(f)
assert n >= 0
data = f.read(n)
if len(data) == n:
return str(data, 'utf-8', 'surrogatepass')
raise ValueError("expected %d bytes in a unicodestring1, but only %d "
"remain" % (n, len(data)))
unicodestring1 = ArgumentDescriptor(
name="unicodestring1",
n=TAKEN_FROM_ARGUMENT1,
reader=read_unicodestring1,
doc="""A counted Unicode string.
                    The first argument is a 1-byte unsigned int
giving the number of bytes in the string, and the second
argument-- the UTF-8 encoding of the Unicode string --
contains that many bytes.
""")
def read_unicodestring4(f):
r"""
>>> import io
>>> s = 'abcd\uabcd'
>>> enc = s.encode('utf-8')
>>> enc
b'abcd\xea\xaf\x8d'
>>> n = bytes([len(enc), 0, 0, 0]) # little-endian 4-byte length
>>> t = read_unicodestring4(io.BytesIO(n + enc + b'junk'))
>>> s == t
True
>>> read_unicodestring4(io.BytesIO(n + enc[:-1]))
Traceback (most recent call last):
...
ValueError: expected 7 bytes in a unicodestring4, but only 6 remain
"""
n = read_uint4(f)
assert n >= 0
if n > sys.maxsize:
raise ValueError("unicodestring4 byte count > sys.maxsize: %d" % n)
data = f.read(n)
if len(data) == n:
return str(data, 'utf-8', 'surrogatepass')
raise ValueError("expected %d bytes in a unicodestring4, but only %d "
"remain" % (n, len(data)))
unicodestring4 = ArgumentDescriptor(
name="unicodestring4",
n=TAKEN_FROM_ARGUMENT4U,
reader=read_unicodestring4,
doc="""A counted Unicode string.
                    The first argument is a 4-byte little-endian unsigned int
giving the number of bytes in the string, and the second
argument-- the UTF-8 encoding of the Unicode string --
contains that many bytes.
""")
def read_unicodestring8(f):
r"""
>>> import io
>>> s = 'abcd\uabcd'
>>> enc = s.encode('utf-8')
>>> enc
b'abcd\xea\xaf\x8d'
>>> n = bytes([len(enc)]) + b'\0' * 7 # little-endian 8-byte length
>>> t = read_unicodestring8(io.BytesIO(n + enc + b'junk'))
>>> s == t
True
>>> read_unicodestring8(io.BytesIO(n + enc[:-1]))
Traceback (most recent call last):
...
ValueError: expected 7 bytes in a unicodestring8, but only 6 remain
"""
n = read_uint8(f)
assert n >= 0
if n > sys.maxsize:
raise ValueError("unicodestring8 byte count > sys.maxsize: %d" % n)
data = f.read(n)
if len(data) == n:
return str(data, 'utf-8', 'surrogatepass')
raise ValueError("expected %d bytes in a unicodestring8, but only %d "
"remain" % (n, len(data)))
unicodestring8 = ArgumentDescriptor(
name="unicodestring8",
n=TAKEN_FROM_ARGUMENT8U,
reader=read_unicodestring8,
doc="""A counted Unicode string.
                    The first argument is an 8-byte little-endian unsigned int
giving the number of bytes in the string, and the second
argument-- the UTF-8 encoding of the Unicode string --
contains that many bytes.
""")
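All three unicodestring readers decode with the 'surrogatepass' error handler, so lone surrogates written by the pickler survive where strict UTF-8 would raise. A minimal illustration:

```python
# A lone surrogate is invalid in strict UTF-8, but the pickle writers and
# these readers both use 'surrogatepass', so the value round-trips.
s = "abc\ud800"
enc = s.encode("utf-8", "surrogatepass")
print(str(enc, "utf-8", "surrogatepass") == s)  # True
try:
    enc.decode("utf-8")        # strict decoding rejects the surrogate
except UnicodeDecodeError:
    print("strict UTF-8 decode fails")
```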
def read_decimalnl_short(f):
r"""
>>> import io
>>> read_decimalnl_short(io.BytesIO(b"1234\n56"))
1234
>>> read_decimalnl_short(io.BytesIO(b"1234L\n56"))
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: b'1234L'
"""
s = read_stringnl(f, decode=False, stripquotes=False)
# There's a hack for True and False here.
if s == b"00":
return False
elif s == b"01":
return True
return int(s)
def read_decimalnl_long(f):
r"""
>>> import io
>>> read_decimalnl_long(io.BytesIO(b"1234L\n56"))
1234
>>> read_decimalnl_long(io.BytesIO(b"123456789012345678901234L\n6"))
123456789012345678901234
"""
s = read_stringnl(f, decode=False, stripquotes=False)
if s[-1:] == b'L':
s = s[:-1]
return int(s)
decimalnl_short = ArgumentDescriptor(
name='decimalnl_short',
n=UP_TO_NEWLINE,
reader=read_decimalnl_short,
doc="""A newline-terminated decimal integer literal.
This never has a trailing 'L', and the integer fit
in a short Python int on the box where the pickle
was written -- but there's no guarantee it will fit
in a short Python int on the box where the pickle
is read.
""")
decimalnl_long = ArgumentDescriptor(
name='decimalnl_long',
n=UP_TO_NEWLINE,
reader=read_decimalnl_long,
doc="""A newline-terminated decimal integer literal.
This has a trailing 'L', and can represent integers
of any size.
""")
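The 00/01 special case in read_decimalnl_short above is the protocol 0 bool hack described under the INT opcode: True and False are written as the literals 01 and 00. A quick check with the standard pickler (illustrative):

```python
import pickle

# Protocol 0 spells bools via INT: 'I01\n' for True, 'I00\n' for False,
# each followed by the STOP opcode '.'.
print(pickle.dumps(True, protocol=0))   # b'I01\n.'
print(pickle.dumps(False, protocol=0))  # b'I00\n.'
```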
def read_floatnl(f):
r"""
>>> import io
>>> read_floatnl(io.BytesIO(b"-1.25\n6"))
-1.25
"""
s = read_stringnl(f, decode=False, stripquotes=False)
return float(s)
floatnl = ArgumentDescriptor(
name='floatnl',
n=UP_TO_NEWLINE,
reader=read_floatnl,
doc="""A newline-terminated decimal floating literal.
In general this requires 17 significant digits for roundtrip
identity, and pickling then unpickling infinities, NaNs, and
minus zero doesn't work across boxes, or on some boxes even
on itself (e.g., Windows can't read the strings it produces
for infinities or NaNs).
""")
def read_float8(f):
r"""
>>> import io, struct
>>> raw = struct.pack(">d", -1.25)
>>> raw
b'\xbf\xf4\x00\x00\x00\x00\x00\x00'
>>> read_float8(io.BytesIO(raw + b"\n"))
-1.25
"""
data = f.read(8)
if len(data) == 8:
return _unpack(">d", data)[0]
raise ValueError("not enough data in stream to read float8")
float8 = ArgumentDescriptor(
name='float8',
n=8,
reader=read_float8,
doc="""An 8-byte binary representation of a float, big-endian.
The format is unique to Python, and shared with the struct
module (format string '>d') "in theory" (the struct and pickle
implementations don't share the code -- they should). It's
strongly related to the IEEE-754 double format, and, in normal
cases, is in fact identical to the big-endian 754 double format.
On other boxes the dynamic range is limited to that of a 754
double, and "add a half and chop" rounding is used to reduce
the precision to 53 bits. However, even on a 754 box,
infinities, NaNs, and minus zero may not be handled correctly
(may not survive roundtrip pickling intact).
""")
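float8 is byte-for-byte struct's big-endian double. A self-contained check (illustrative):

```python
import struct

# BINFLOAT's payload is the IEEE-754 double in big-endian order, i.e. '>d'.
raw = struct.pack(">d", -1.25)
print(raw)                           # b'\xbf\xf4\x00\x00\x00\x00\x00\x00'
print(struct.unpack(">d", raw)[0])   # -1.25
```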
# Protocol 2 formats
from pickle import decode_long
def read_long1(f):
r"""
>>> import io
>>> read_long1(io.BytesIO(b"\x00"))
0
>>> read_long1(io.BytesIO(b"\x02\xff\x00"))
255
>>> read_long1(io.BytesIO(b"\x02\xff\x7f"))
32767
>>> read_long1(io.BytesIO(b"\x02\x00\xff"))
-256
>>> read_long1(io.BytesIO(b"\x02\x00\x80"))
-32768
"""
n = read_uint1(f)
data = f.read(n)
if len(data) != n:
raise ValueError("not enough data in stream to read long1")
return decode_long(data)
long1 = ArgumentDescriptor(
name="long1",
n=TAKEN_FROM_ARGUMENT1,
reader=read_long1,
doc="""A binary long, little-endian, using 1-byte size.
This first reads one byte as an unsigned size, then reads that
many bytes and interprets them as a little-endian 2's-complement long.
    If the size is 0, that's taken as a shortcut for the int 0.
""")
def read_long4(f):
r"""
>>> import io
>>> read_long4(io.BytesIO(b"\x02\x00\x00\x00\xff\x00"))
255
>>> read_long4(io.BytesIO(b"\x02\x00\x00\x00\xff\x7f"))
32767
>>> read_long4(io.BytesIO(b"\x02\x00\x00\x00\x00\xff"))
-256
>>> read_long4(io.BytesIO(b"\x02\x00\x00\x00\x00\x80"))
-32768
    >>> read_long4(io.BytesIO(b"\x00\x00\x00\x00"))
0
"""
n = read_int4(f)
if n < 0:
raise ValueError("long4 byte count < 0: %d" % n)
data = f.read(n)
if len(data) != n:
raise ValueError("not enough data in stream to read long4")
return decode_long(data)
long4 = ArgumentDescriptor(
name="long4",
n=TAKEN_FROM_ARGUMENT4,
reader=read_long4,
doc="""A binary representation of a long, little-endian.
This first reads four bytes as a signed size (but requires the
size to be >= 0), then reads that many bytes and interprets them
as a little-endian 2's-complement long. If the size is 0, that's taken
as a shortcut for the int 0, although LONG1 should really be used
then instead (and in any case where # of bytes < 256).
""")
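Both LONG1 and LONG4 payloads are little-endian two's-complement, decoded by pickle.decode_long (imported above). A few illustrative values:

```python
from pickle import decode_long  # the same helper this module imports

# The payload is little-endian two's complement; an empty payload means 0.
print(decode_long(b""))          # 0
print(decode_long(b"\xff\x00"))  # 255  (0x00ff, sign bit clear)
print(decode_long(b"\x00\x80"))  # -32768  (0x8000, sign bit set)
```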
##############################################################################
# Object descriptors. The stack used by the pickle machine holds objects,
# and in the stack_before and stack_after attributes of OpcodeInfo
# descriptors we need names to describe the various types of objects that can
# appear on the stack.
class StackObject(object):
__slots__ = (
# name of descriptor record, for info only
'name',
# type of object, or tuple of type objects (meaning the object can
# be of any type in the tuple)
'obtype',
# human-readable docs for this kind of stack object; a string
'doc',
)
def __init__(self, name, obtype, doc):
assert isinstance(name, str)
self.name = name
assert isinstance(obtype, type) or isinstance(obtype, tuple)
if isinstance(obtype, tuple):
for contained in obtype:
assert isinstance(contained, type)
self.obtype = obtype
assert isinstance(doc, str)
self.doc = doc
def __repr__(self):
return self.name
pyint = pylong = StackObject(
name='int',
obtype=int,
doc="A Python integer object.")
pyinteger_or_bool = StackObject(
name='int_or_bool',
obtype=(int, bool),
doc="A Python integer or boolean object.")
pybool = StackObject(
name='bool',
obtype=bool,
doc="A Python boolean object.")
pyfloat = StackObject(
name='float',
obtype=float,
doc="A Python float object.")
pybytes_or_str = pystring = StackObject(
name='bytes_or_str',
obtype=(bytes, str),
doc="A Python bytes or (Unicode) string object.")
pybytes = StackObject(
name='bytes',
obtype=bytes,
doc="A Python bytes object.")
pybytearray = StackObject(
name='bytearray',
obtype=bytearray,
doc="A Python bytearray object.")
pyunicode = StackObject(
name='str',
obtype=str,
doc="A Python (Unicode) string object.")
pynone = StackObject(
name="None",
obtype=type(None),
doc="The Python None object.")
pytuple = StackObject(
name="tuple",
obtype=tuple,
doc="A Python tuple object.")
pylist = StackObject(
name="list",
obtype=list,
doc="A Python list object.")
pydict = StackObject(
name="dict",
obtype=dict,
doc="A Python dict object.")
pyset = StackObject(
name="set",
obtype=set,
doc="A Python set object.")
pyfrozenset = StackObject(
name="frozenset",
    obtype=frozenset,
doc="A Python frozenset object.")
pybuffer = StackObject(
name='buffer',
obtype=object,
doc="A Python buffer-like object.")
anyobject = StackObject(
name='any',
obtype=object,
doc="Any kind of object whatsoever.")
markobject = StackObject(
name="mark",
obtype=StackObject,
doc="""'The mark' is a unique object.
Opcodes that operate on a variable number of objects
generally don't embed the count of objects in the opcode,
or pull it off the stack. Instead the MARK opcode is used
to push a special marker object on the stack, and then
some other opcodes grab all the objects from the top of
the stack down to (but not including) the topmost marker
object.
""")
stackslice = StackObject(
name="stackslice",
obtype=StackObject,
doc="""An object representing a contiguous slice of the stack.
This is used in conjunction with markobject, to represent all
of the stack following the topmost markobject. For example,
the POP_MARK opcode changes the stack from
[..., markobject, stackslice]
to
[...]
                 No matter how many objects are on the stack after the topmost
markobject, POP_MARK gets rid of all of them (including the
topmost markobject too).
""")
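The markobject/stackslice convention can be sketched with a plain list standing in for the machine's stack; the names below are illustrative, not part of this module:

```python
# Model the unpickler's stack as a list, with a unique sentinel as the mark.
MARK = object()

def pop_to_mark(stack):
    # Return everything above the topmost mark and remove that slice along
    # with the mark itself -- the effect POP_MARK, LIST, TUPLE etc. rely on.
    idx = len(stack) - 1 - stack[::-1].index(MARK)
    items = stack[idx + 1:]
    del stack[idx:]
    return items

stack = ["spam", MARK, 1, 2, 3]
print(pop_to_mark(stack))  # [1, 2, 3] -- the stackslice
print(stack)               # ['spam']
```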
##############################################################################
# Descriptors for pickle opcodes.
class OpcodeInfo(object):
__slots__ = (
# symbolic name of opcode; a string
'name',
# the code used in a bytestream to represent the opcode; a
# one-character string
'code',
# If the opcode has an argument embedded in the byte string, an
# instance of ArgumentDescriptor specifying its type. Note that
# arg.reader(s) can be used to read and decode the argument from
# the bytestream s, and arg.doc documents the format of the raw
# argument bytes. If the opcode doesn't have an argument embedded
# in the bytestream, arg should be None.
'arg',
# what the stack looks like before this opcode runs; a list
'stack_before',
# what the stack looks like after this opcode runs; a list
'stack_after',
# the protocol number in which this opcode was introduced; an int
'proto',
# human-readable docs for this opcode; a string
'doc',
)
def __init__(self, name, code, arg,
stack_before, stack_after, proto, doc):
assert isinstance(name, str)
self.name = name
assert isinstance(code, str)
assert len(code) == 1
self.code = code
assert arg is None or isinstance(arg, ArgumentDescriptor)
self.arg = arg
assert isinstance(stack_before, list)
for x in stack_before:
assert isinstance(x, StackObject)
self.stack_before = stack_before
assert isinstance(stack_after, list)
for x in stack_after:
assert isinstance(x, StackObject)
self.stack_after = stack_after
assert isinstance(proto, int) and 0 <= proto <= pickle.HIGHEST_PROTOCOL
self.proto = proto
assert isinstance(doc, str)
self.doc = doc
I = OpcodeInfo
opcodes = [
# Ways to spell integers.
I(name='INT',
code='I',
arg=decimalnl_short,
stack_before=[],
stack_after=[pyinteger_or_bool],
proto=0,
doc="""Push an integer or bool.
The argument is a newline-terminated decimal literal string.
The intent may have been that this always fit in a short Python int,
but INT can be generated in pickles written on a 64-bit box that
require a Python long on a 32-bit box. The difference between this
and LONG then is that INT skips a trailing 'L', and produces a short
int whenever possible.
Another difference is due to that, when bool was introduced as a
distinct type in 2.3, builtin names True and False were also added to
2.2.2, mapping to ints 1 and 0. For compatibility in both directions,
True gets pickled as INT + "I01\\n", and False as INT + "I00\\n".
Leading zeroes are never produced for a genuine integer. The 2.3
(and later) unpicklers special-case these and return bool instead;
earlier unpicklers ignore the leading "0" and return the int.
"""),
I(name='BININT',
code='J',
arg=int4,
stack_before=[],
stack_after=[pyint],
proto=1,
doc="""Push a four-byte signed integer.
This handles the full range of Python (short) integers on a 32-bit
box, directly as binary bytes (1 for the opcode and 4 for the integer).
If the integer is non-negative and fits in 1 or 2 bytes, pickling via
BININT1 or BININT2 saves space.
"""),
I(name='BININT1',
code='K',
arg=uint1,
stack_before=[],
stack_after=[pyint],
proto=1,
doc="""Push a one-byte unsigned integer.
This is a space optimization for pickling very small non-negative ints,
in range(256).
"""),
I(name='BININT2',
code='M',
arg=uint2,
stack_before=[],
stack_after=[pyint],
proto=1,
doc="""Push a two-byte unsigned integer.
This is a space optimization for pickling small positive ints, in
range(256, 2**16). Integers in range(256) can also be pickled via
BININT2, but BININT1 instead saves a byte.
"""),
I(name='LONG',
code='L',
arg=decimalnl_long,
stack_before=[],
stack_after=[pyint],
proto=0,
doc="""Push a long integer.
The same as INT, except that the literal ends with 'L', and always
      unpickles to a Python long.  There doesn't seem to be a real purpose
      to the trailing 'L'.
Note that LONG takes time quadratic in the number of digits when
unpickling (this is simply due to the nature of decimal->binary
conversion). Proto 2 added linear-time (in C; still quadratic-time
in Python) LONG1 and LONG4 opcodes.
"""),
I(name="LONG1",
code='\x8a',
arg=long1,
stack_before=[],
stack_after=[pyint],
proto=2,
doc="""Long integer using one-byte length.
A more efficient encoding of a Python long; the long1 encoding
says it all."""),
I(name="LONG4",
code='\x8b',
arg=long4,
stack_before=[],
stack_after=[pyint],
proto=2,
doc="""Long integer using four-byte length.
A more efficient encoding of a Python long; the long4 encoding
says it all."""),
# Ways to spell strings (8-bit, not Unicode).
I(name='STRING',
code='S',
arg=stringnl,
stack_before=[],
stack_after=[pybytes_or_str],
proto=0,
doc="""Push a Python string object.
The argument is a repr-style string, with bracketing quote characters,
and perhaps embedded escapes. The argument extends until the next
newline character. These are usually decoded into a str instance
      using the encoding given to the Unpickler constructor, or the default,
      'ASCII'.  If the encoding given was 'bytes', however, they will be
      decoded as a bytes object instead.
"""),
I(name='BINSTRING',
code='T',
arg=string4,
stack_before=[],
stack_after=[pybytes_or_str],
proto=1,
doc="""Push a Python string object.
There are two arguments: the first is a 4-byte little-endian
signed int giving the number of bytes in the string, and the
second is that many bytes, which are taken literally as the string
content. These are usually decoded into a str instance using the
      encoding given to the Unpickler constructor, or the default,
      'ASCII'.  If the encoding given was 'bytes', however, they will be
      decoded as a bytes object instead.
"""),
I(name='SHORT_BINSTRING',
code='U',
arg=string1,
stack_before=[],
stack_after=[pybytes_or_str],
proto=1,
doc="""Push a Python string object.
There are two arguments: the first is a 1-byte unsigned int giving
the number of bytes in the string, and the second is that many
bytes, which are taken literally as the string content. These are
usually decoded into a str instance using the encoding given to
      the Unpickler constructor, or the default, 'ASCII'.  If the
      encoding given was 'bytes', however, they will be decoded as a
      bytes object instead.
"""),
# Bytes (protocol 3 and higher)
I(name='BINBYTES',
code='B',
arg=bytes4,
stack_before=[],
stack_after=[pybytes],
proto=3,
doc="""Push a Python bytes object.
There are two arguments: the first is a 4-byte little-endian unsigned int
giving the number of bytes, and the second is that many bytes, which are
taken literally as the bytes content.
"""),
I(name='SHORT_BINBYTES',
code='C',
arg=bytes1,
stack_before=[],
stack_after=[pybytes],
proto=3,
doc="""Push a Python bytes object.
There are two arguments: the first is a 1-byte unsigned int giving
the number of bytes, and the second is that many bytes, which are taken
      literally as the bytes content.
"""),
I(name='BINBYTES8',
code='\x8e',
arg=bytes8,
stack_before=[],
stack_after=[pybytes],
proto=4,
doc="""Push a Python bytes object.
There are two arguments: the first is an 8-byte unsigned int giving
      the number of bytes, and the second is that many bytes,
      which are taken literally as the bytes content.
"""),
# Bytearray (protocol 5 and higher)
I(name='BYTEARRAY8',
code='\x96',
arg=bytearray8,
stack_before=[],
stack_after=[pybytearray],
proto=5,
doc="""Push a Python bytearray object.
There are two arguments: the first is an 8-byte unsigned int giving
the number of bytes in the bytearray, and the second is that many bytes,
which are taken literally as the bytearray content.
"""),
# Out-of-band buffer (protocol 5 and higher)
I(name='NEXT_BUFFER',
code='\x97',
arg=None,
stack_before=[],
stack_after=[pybuffer],
proto=5,
doc="Push an out-of-band buffer object."),
I(name='READONLY_BUFFER',
code='\x98',
arg=None,
stack_before=[pybuffer],
stack_after=[pybuffer],
proto=5,
doc="Make an out-of-band buffer object read-only."),
# Ways to spell None.
I(name='NONE',
code='N',
arg=None,
stack_before=[],
stack_after=[pynone],
proto=0,
doc="Push None on the stack."),
# Ways to spell bools, starting with proto 2. See INT for how this was
# done before proto 2.
I(name='NEWTRUE',
code='\x88',
arg=None,
stack_before=[],
stack_after=[pybool],
proto=2,
doc="Push True onto the stack."),
I(name='NEWFALSE',
code='\x89',
arg=None,
stack_before=[],
stack_after=[pybool],
proto=2,
doc="Push False onto the stack."),
# Ways to spell Unicode strings.
I(name='UNICODE',
code='V',
arg=unicodestringnl,
stack_before=[],
stack_after=[pyunicode],
proto=0, # this may be pure-text, but it's a later addition
doc="""Push a Python Unicode string object.
The argument is a raw-unicode-escape encoding of a Unicode string,
and so may contain embedded escape sequences. The argument extends
until the next newline character.
"""),
I(name='SHORT_BINUNICODE',
code='\x8c',
arg=unicodestring1,
stack_before=[],
stack_after=[pyunicode],
proto=4,
doc="""Push a Python Unicode string object.
      There are two arguments: the first is a 1-byte unsigned int
giving the number of bytes in the string. The second is that many
bytes, and is the UTF-8 encoding of the Unicode string.
"""),
I(name='BINUNICODE',
code='X',
arg=unicodestring4,
stack_before=[],
stack_after=[pyunicode],
proto=1,
doc="""Push a Python Unicode string object.
There are two arguments: the first is a 4-byte little-endian unsigned int
giving the number of bytes in the string. The second is that many
bytes, and is the UTF-8 encoding of the Unicode string.
"""),
I(name='BINUNICODE8',
code='\x8d',
arg=unicodestring8,
stack_before=[],
stack_after=[pyunicode],
proto=4,
doc="""Push a Python Unicode string object.
      There are two arguments: the first is an 8-byte little-endian unsigned int
giving the number of bytes in the string. The second is that many
bytes, and is the UTF-8 encoding of the Unicode string.
"""),
# Ways to spell floats.
I(name='FLOAT',
code='F',
arg=floatnl,
stack_before=[],
stack_after=[pyfloat],
proto=0,
doc="""Newline-terminated decimal float literal.
The argument is repr(a_float), and in general requires 17 significant
digits for roundtrip conversion to be an identity (this is so for
IEEE-754 double precision values, which is what Python float maps to
on most boxes).
In general, FLOAT cannot be used to transport infinities, NaNs, or
minus zero across boxes (or even on a single box, if the platform C
library can't read the strings it produces for such things -- Windows
is like that), but may do less damage than BINFLOAT on boxes with
greater precision or dynamic range than IEEE-754 double.
"""),
I(name='BINFLOAT',
code='G',
arg=float8,
stack_before=[],
stack_after=[pyfloat],
proto=1,
doc="""Float stored in binary form, with 8 bytes of data.
This generally requires less than half the space of FLOAT encoding.
In general, BINFLOAT cannot be used to transport infinities, NaNs, or
minus zero, raises an exception if the exponent exceeds the range of
an IEEE-754 double, and retains no more than 53 bits of precision (if
there are more than that, "add a half and chop" rounding is used to
cut it back to 53 significant bits).
"""),
# Ways to build lists.
I(name='EMPTY_LIST',
code=']',
arg=None,
stack_before=[],
stack_after=[pylist],
proto=1,
doc="Push an empty list."),
I(name='APPEND',
code='a',
arg=None,
stack_before=[pylist, anyobject],
stack_after=[pylist],
proto=0,
doc="""Append an object to a list.
Stack before: ... pylist anyobject
Stack after: ... pylist+[anyobject]
although pylist is really extended in-place.
"""),
I(name='APPENDS',
code='e',
arg=None,
stack_before=[pylist, markobject, stackslice],
stack_after=[pylist],
proto=1,
doc="""Extend a list by a slice of stack objects.
Stack before: ... pylist markobject stackslice
Stack after: ... pylist+stackslice
although pylist is really extended in-place.
"""),
I(name='LIST',
code='l',
arg=None,
stack_before=[markobject, stackslice],
stack_after=[pylist],
proto=0,
doc="""Build a list out of the topmost stack slice, after markobject.
All the stack entries following the topmost markobject are placed into
a single Python list, which single list object replaces all of the
stack from the topmost markobject onward. For example,
Stack before: ... markobject 1 2 3 'abc'
Stack after: ... [1, 2, 3, 'abc']
"""),
# Ways to build tuples.
I(name='EMPTY_TUPLE',
code=')',
arg=None,
stack_before=[],
stack_after=[pytuple],
proto=1,
doc="Push an empty tuple."),
I(name='TUPLE',
code='t',
arg=None,
stack_before=[markobject, stackslice],
stack_after=[pytuple],
proto=0,
doc="""Build a tuple out of the topmost stack slice, after markobject.
All the stack entries following the topmost markobject are placed into
a single Python tuple, which single tuple object replaces all of the
stack from the topmost markobject onward. For example,
Stack before: ... markobject 1 2 3 'abc'
Stack after: ... (1, 2, 3, 'abc')
"""),
I(name='TUPLE1',
code='\x85',
arg=None,
stack_before=[anyobject],
stack_after=[pytuple],
proto=2,
doc="""Build a one-tuple out of the topmost item on the stack.
This code pops one value off the stack and pushes a tuple of
length 1 whose one item is that value back onto it. In other
words:
stack[-1] = tuple(stack[-1:])
"""),
I(name='TUPLE2',
code='\x86',
arg=None,
stack_before=[anyobject, anyobject],
stack_after=[pytuple],
proto=2,
doc="""Build a two-tuple out of the top two items on the stack.
This code pops two values off the stack and pushes a tuple of
length 2 whose items are those values back onto it. In other
words:
stack[-2:] = [tuple(stack[-2:])]
"""),
I(name='TUPLE3',
code='\x87',
arg=None,
stack_before=[anyobject, anyobject, anyobject],
stack_after=[pytuple],
proto=2,
doc="""Build a three-tuple out of the top three items on the stack.
This code pops three values off the stack and pushes a tuple of
length 3 whose items are those values back onto it. In other
words:
stack[-3:] = [tuple(stack[-3:])]
"""),
# Ways to build dicts.
I(name='EMPTY_DICT',
code='}',
arg=None,
stack_before=[],
stack_after=[pydict],
proto=1,
doc="Push an empty dict."),
I(name='DICT',
code='d',
arg=None,
stack_before=[markobject, stackslice],
stack_after=[pydict],
proto=0,
doc="""Build a dict out of the topmost stack slice, after markobject.
All the stack entries following the topmost markobject are placed into
a single Python dict, which single dict object replaces all of the
stack from the topmost markobject onward. The stack slice alternates
key, value, key, value, .... For example,
Stack before: ... markobject 1 2 3 'abc'
Stack after: ... {1: 2, 3: 'abc'}
"""),
I(name='SETITEM',
code='s',
arg=None,
stack_before=[pydict, anyobject, anyobject],
stack_after=[pydict],
proto=0,
doc="""Add a key+value pair to an existing dict.
Stack before: ... pydict key value
Stack after: ... pydict
where pydict has been modified via pydict[key] = value.
"""),
I(name='SETITEMS',
code='u',
arg=None,
stack_before=[pydict, markobject, stackslice],
stack_after=[pydict],
proto=1,
doc="""Add an arbitrary number of key+value pairs to an existing dict.
The slice of the stack following the topmost markobject is taken as
an alternating sequence of keys and values, added to the dict
immediately under the topmost markobject. Everything at and after the
topmost markobject is popped, leaving the mutated dict at the top
of the stack.
Stack before: ... pydict markobject key_1 value_1 ... key_n value_n
Stack after: ... pydict
where pydict has been modified via pydict[key_i] = value_i for i in
1, 2, ..., n, and in that order.
"""),
# Ways to build sets
I(name='EMPTY_SET',
code='\x8f',
arg=None,
stack_before=[],
stack_after=[pyset],
proto=4,
doc="Push an empty set."),
I(name='ADDITEMS',
code='\x90',
arg=None,
stack_before=[pyset, markobject, stackslice],
stack_after=[pyset],
proto=4,
doc="""Add an arbitrary number of items to an existing set.
The slice of the stack following the topmost markobject is taken as
a sequence of items, added to the set immediately under the topmost
markobject. Everything at and after the topmost markobject is popped,
leaving the mutated set at the top of the stack.
Stack before: ... pyset markobject item_1 ... item_n
Stack after: ... pyset
where pyset has been modified via pyset.add(item_i) for i in
1, 2, ..., n, and in that order.
"""),
# Way to build frozensets
I(name='FROZENSET',
code='\x91',
arg=None,
stack_before=[markobject, stackslice],
stack_after=[pyfrozenset],
proto=4,
doc="""Build a frozenset out of the topmost slice, after markobject.
All the stack entries following the topmost markobject are placed into
a single Python frozenset, which single frozenset object replaces all
of the stack from the topmost markobject onward. For example,
Stack before: ... markobject 1 2 3
Stack after: ... frozenset({1, 2, 3})
"""),
# Stack manipulation.
I(name='POP',
code='0',
arg=None,
stack_before=[anyobject],
stack_after=[],
proto=0,
doc="Discard the top stack item, shrinking the stack by one item."),
I(name='DUP',
code='2',
arg=None,
stack_before=[anyobject],
stack_after=[anyobject, anyobject],
proto=0,
doc="Push the top stack item onto the stack again, duplicating it."),
I(name='MARK',
code='(',
arg=None,
stack_before=[],
stack_after=[markobject],
proto=0,
doc="""Push markobject onto the stack.
markobject is a unique object, used by other opcodes to identify a
region of the stack containing a variable number of objects for them
to work on. See markobject.doc for more detail.
"""),
I(name='POP_MARK',
code='1',
arg=None,
stack_before=[markobject, stackslice],
stack_after=[],
proto=1,
doc="""Pop all the stack objects at and above the topmost markobject.
When an opcode using a variable number of stack objects is done,
POP_MARK is used to remove those objects, and to remove the markobject
that delimited their starting position on the stack.
"""),
# Memo manipulation. There are really only two operations (get and put),
# each in all-text, "short binary", and "long binary" flavors.
I(name='GET',
code='g',
arg=decimalnl_short,
stack_before=[],
stack_after=[anyobject],
proto=0,
doc="""Read an object from the memo and push it on the stack.
The index of the memo object to push is given by the newline-terminated
decimal string following. BINGET and LONG_BINGET are space-optimized
versions.
"""),
I(name='BINGET',
code='h',
arg=uint1,
stack_before=[],
stack_after=[anyobject],
proto=1,
doc="""Read an object from the memo and push it on the stack.
The index of the memo object to push is given by the 1-byte unsigned
integer following.
"""),
I(name='LONG_BINGET',
code='j',
arg=uint4,
stack_before=[],
stack_after=[anyobject],
proto=1,
doc="""Read an object from the memo and push it on the stack.
The index of the memo object to push is given by the 4-byte unsigned
little-endian integer following.
"""),
I(name='PUT',
code='p',
arg=decimalnl_short,
stack_before=[],
stack_after=[],
proto=0,
doc="""Store the stack top into the memo. The stack is not popped.
The index of the memo location to write into is given by the newline-
terminated decimal string following. BINPUT and LONG_BINPUT are
space-optimized versions.
"""),
I(name='BINPUT',
code='q',
arg=uint1,
stack_before=[],
stack_after=[],
proto=1,
doc="""Store the stack top into the memo. The stack is not popped.
The index of the memo location to write into is given by the 1-byte
unsigned integer following.
"""),
I(name='LONG_BINPUT',
code='r',
arg=uint4,
stack_before=[],
stack_after=[],
proto=1,
doc="""Store the stack top into the memo. The stack is not popped.
The index of the memo location to write into is given by the 4-byte
unsigned little-endian integer following.
"""),
I(name='MEMOIZE',
code='\x94',
arg=None,
stack_before=[anyobject],
stack_after=[anyobject],
proto=4,
doc="""Store the stack top into the memo. The stack is not popped.
The index of the memo location to write is the number of
elements currently present in the memo.
"""),
# Access the extension registry (predefined objects). Akin to the GET
# family.
I(name='EXT1',
code='\x82',
arg=uint1,
stack_before=[],
stack_after=[anyobject],
proto=2,
doc="""Extension code.
This code and the similar EXT2 and EXT4 allow using a registry
of popular objects that are pickled by name, typically classes.
It is envisioned that through a global negotiation and
registration process, third parties can set up a mapping between
ints and object names.
In order to guarantee pickle interchangeability, the extension
code registry ought to be global, although a range of codes may
be reserved for private use.
EXT1 has a 1-byte integer argument. This is used to index into the
extension registry, and the object at that index is pushed on the stack.
"""),
I(name='EXT2',
code='\x83',
arg=uint2,
stack_before=[],
stack_after=[anyobject],
proto=2,
doc="""Extension code.
See EXT1. EXT2 has a two-byte integer argument.
"""),
I(name='EXT4',
code='\x84',
arg=int4,
stack_before=[],
stack_after=[anyobject],
proto=2,
doc="""Extension code.
See EXT1. EXT4 has a four-byte integer argument.
"""),
# Push a class object, or module function, on the stack, via its module
# and name.
I(name='GLOBAL',
code='c',
arg=stringnl_noescape_pair,
stack_before=[],
stack_after=[anyobject],
proto=0,
doc="""Push a global object (module.attr) on the stack.
Two newline-terminated strings follow the GLOBAL opcode. The first is
taken as a module name, and the second as a class name. The class
object module.class is pushed on the stack. More accurately, the
object returned by self.find_class(module, class) is pushed on the
stack, so unpickling subclasses can override this form of lookup.
"""),
I(name='STACK_GLOBAL',
code='\x93',
arg=None,
stack_before=[pyunicode, pyunicode],
stack_after=[anyobject],
proto=4,
doc="""Push a global object (module.attr) on the stack.
"""),
# Ways to build objects of classes pickle doesn't know about directly
# (user-defined classes). I despair of documenting this accurately
# and comprehensibly -- you really have to read the pickle code to
# find all the special cases.
I(name='REDUCE',
code='R',
arg=None,
stack_before=[anyobject, anyobject],
stack_after=[anyobject],
proto=0,
doc="""Push an object built from a callable and an argument tuple.
The opcode is named to remind of the __reduce__() method.
Stack before: ... callable pytuple
Stack after: ... callable(*pytuple)
The callable and the argument tuple are the first two items returned
by a __reduce__ method. Applying the callable to the argtuple is
supposed to reproduce the original object, or at least get it started.
If the __reduce__ method returns a 3-tuple, the last component is an
argument to be passed to the object's __setstate__, and then the REDUCE
opcode is followed by code to create setstate's argument, and then a
BUILD opcode to apply __setstate__ to that argument.
If not isinstance(callable, type), REDUCE complains unless the
callable has been registered with the copyreg module's
safe_constructors dict, or the callable has a magic
'__safe_for_unpickling__' attribute with a true value. I'm not sure
why it does this, but I've sure seen this complaint often enough when
I didn't want to <wink>.
"""),
I(name='BUILD',
code='b',
arg=None,
stack_before=[anyobject, anyobject],
stack_after=[anyobject],
proto=0,
doc="""Finish building an object, via __setstate__ or dict update.
Stack before: ... anyobject argument
Stack after: ... anyobject
where anyobject may have been mutated, as follows:
If the object has a __setstate__ method,
anyobject.__setstate__(argument)
is called.
Else the argument must be a dict, the object must have a __dict__, and
the object is updated via
anyobject.__dict__.update(argument)
"""),
I(name='INST',
code='i',
arg=stringnl_noescape_pair,
stack_before=[markobject, stackslice],
stack_after=[anyobject],
proto=0,
doc="""Build a class instance.
This is the protocol 0 version of protocol 1's OBJ opcode.
INST is followed by two newline-terminated strings, giving a
module and class name, just as for the GLOBAL opcode (and see
GLOBAL for more details about that). self.find_class(module, name)
is used to get a class object.
In addition, all the objects on the stack following the topmost
markobject are gathered into a tuple and popped (along with the
topmost markobject), just as for the TUPLE opcode.
Now it gets complicated. If all of these are true:
+ The argtuple is empty (markobject was at the top of the stack
at the start).
+ The class object does not have a __getinitargs__ attribute.
then we want to create an old-style class instance without invoking
its __init__() method (pickle has waffled on this over the years; not
calling __init__() is current wisdom). In this case, an instance of
an old-style dummy class is created, and then we try to rebind its
__class__ attribute to the desired class object. If this succeeds,
the new instance object is pushed on the stack, and we're done.
Else (the argtuple is not empty, it's not an old-style class object,
or the class object does have a __getinitargs__ attribute), the code
first insists that the class object have a __safe_for_unpickling__
attribute. Unlike as for the __safe_for_unpickling__ check in REDUCE,
it doesn't matter whether this attribute has a true or false value, it
only matters whether it exists (XXX this is a bug). If
__safe_for_unpickling__ doesn't exist, UnpicklingError is raised.
Else (the class object does have a __safe_for_unpickling__ attr),
the class object obtained from INST's arguments is applied to the
argtuple obtained from the stack, and the resulting instance object
is pushed on the stack.
NOTE: checks for __safe_for_unpickling__ went away in Python 2.3.
NOTE: the distinction between old-style and new-style classes does
not make sense in Python 3.
"""),
I(name='OBJ',
code='o',
arg=None,
stack_before=[markobject, anyobject, stackslice],
stack_after=[anyobject],
proto=1,
doc="""Build a class instance.
This is the protocol 1 version of protocol 0's INST opcode, and is
very much like it. The major difference is that the class object
is taken off the stack, allowing it to be retrieved from the memo
repeatedly if several instances of the same class are created. This
can be much more efficient (in both time and space) than repeatedly
embedding the module and class names in INST opcodes.
Unlike INST, OBJ takes no arguments from the opcode stream. Instead
the class object is taken off the stack, immediately above the
topmost markobject:
Stack before: ... markobject classobject stackslice
Stack after: ... new_instance_object
As for INST, the remainder of the stack above the markobject is
gathered into an argument tuple, and then the logic seems identical,
except that no __safe_for_unpickling__ check is done (XXX this is
a bug). See INST for the gory details.
NOTE: In Python 2.3, INST and OBJ are identical except for how they
get the class object. That was always the intent; the implementations
had diverged for accidental reasons.
"""),
I(name='NEWOBJ',
code='\x81',
arg=None,
stack_before=[anyobject, anyobject],
stack_after=[anyobject],
proto=2,
doc="""Build an object instance.
The stack before should be thought of as containing a class
object followed by an argument tuple (the tuple being the stack
top). Call these cls and args. They are popped off the stack,
and the value returned by cls.__new__(cls, *args) is pushed back
onto the stack.
"""),
I(name='NEWOBJ_EX',
code='\x92',
arg=None,
stack_before=[anyobject, anyobject, anyobject],
stack_after=[anyobject],
proto=4,
doc="""Build an object instance.
The stack before should be thought of as containing a class
object followed by an argument tuple and by a keyword argument dict
(the dict being the stack top). Call these cls, args, and kwargs. They
are popped off the stack, and the value returned by
cls.__new__(cls, *args, **kwargs) is pushed back onto the stack.
"""),
# Machine control.
I(name='PROTO',
code='\x80',
arg=uint1,
stack_before=[],
stack_after=[],
proto=2,
doc="""Protocol version indicator.
For protocol 2 and above, a pickle must start with this opcode.
The argument is the protocol version, an int in range(2, 256).
"""),
I(name='STOP',
code='.',
arg=None,
stack_before=[anyobject],
stack_after=[],
proto=0,
doc="""Stop the unpickling machine.
Every pickle ends with this opcode. The object at the top of the stack
is popped, and that's the result of unpickling. The stack should be
empty then.
"""),
# Framing support.
I(name='FRAME',
code='\x95',
arg=uint8,
stack_before=[],
stack_after=[],
proto=4,
doc="""Indicate the beginning of a new frame.
The unpickler may use this opcode to safely prefetch data from its
underlying stream.
"""),
# Ways to deal with persistent IDs.
I(name='PERSID',
code='P',
arg=stringnl_noescape,
stack_before=[],
stack_after=[anyobject],
proto=0,
doc="""Push an object identified by a persistent ID.
The pickle module doesn't define what a persistent ID means. PERSID's
argument is a newline-terminated str-style (no embedded escapes, no
bracketing quote characters) string, which *is* "the persistent ID".
The unpickler passes this string to self.persistent_load(). Whatever
object that returns is pushed on the stack. There is no implementation
of persistent_load() in Python's unpickler: it must be supplied by an
unpickler subclass.
"""),
I(name='BINPERSID',
code='Q',
arg=None,
stack_before=[anyobject],
stack_after=[anyobject],
proto=1,
doc="""Push an object identified by a persistent ID.
Like PERSID, except the persistent ID is popped off the stack (instead
of being a string embedded in the opcode bytestream). The persistent
ID is passed to self.persistent_load(), and whatever object that
returns is pushed on the stack. See PERSID for more detail.
"""),
]
del I
# Verify uniqueness of .name and .code members.
name2i = {}
code2i = {}
for i, d in enumerate(opcodes):
if d.name in name2i:
raise ValueError("repeated name %r at indices %d and %d" %
(d.name, name2i[d.name], i))
if d.code in code2i:
raise ValueError("repeated code %r at indices %d and %d" %
(d.code, code2i[d.code], i))
name2i[d.name] = i
code2i[d.code] = i
del name2i, code2i, i, d
##############################################################################
# Build a code2op dict, mapping opcode characters to OpcodeInfo records.
# Also ensure we've got the same stuff as pickle.py, although the
# introspection here is dicey.
code2op = {}
for d in opcodes:
code2op[d.code] = d
del d
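# As an illustrative sketch (not part of the module proper), the code2op
# table built above can be queried by opcode character; for example, the
# record for the STOP opcode:
#
#   import pickletools
#   info = pickletools.code2op['.']
#   info.name   # 'STOP'
#   info.proto  # 0 -- STOP dates back to protocol 0

```python
# Sketch: look up the OpcodeInfo record for STOP by its character code.
# Assumes this module is importable as pickletools.
import pickletools

info = pickletools.code2op['.']
print(info.name, info.proto)  # STOP 0
```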
def assure_pickle_consistency(verbose=False):
copy = code2op.copy()
for name in pickle.__all__:
if not re.match("[A-Z][A-Z0-9_]+$", name):
if verbose:
print("skipping %r: it doesn't look like an opcode name" % name)
continue
picklecode = getattr(pickle, name)
if not isinstance(picklecode, bytes) or len(picklecode) != 1:
if verbose:
print(("skipping %r: value %r doesn't look like a pickle "
"code" % (name, picklecode)))
continue
picklecode = picklecode.decode("latin-1")
if picklecode in copy:
if verbose:
print("checking name %r w/ code %r for consistency" % (
name, picklecode))
d = copy[picklecode]
if d.name != name:
raise ValueError("for pickle code %r, pickle.py uses name %r "
"but we're using name %r" % (picklecode,
name,
d.name))
# Forget this one. Any left over in copy at the end are a problem
# of a different kind.
del copy[picklecode]
else:
raise ValueError("pickle.py appears to have a pickle opcode with "
"name %r and code %r, but we don't" %
(name, picklecode))
if copy:
msg = ["we appear to have pickle opcodes that pickle.py doesn't have:"]
for code, d in copy.items():
msg.append(" name %r with code %r" % (d.name, code))
raise ValueError("\n".join(msg))
assure_pickle_consistency()
del assure_pickle_consistency
##############################################################################
# A pickle opcode generator.
def _genops(data, yield_end_pos=False):
if isinstance(data, bytes_types):
data = io.BytesIO(data)
if hasattr(data, "tell"):
getpos = data.tell
else:
getpos = lambda: None
while True:
pos = getpos()
code = data.read(1)
opcode = code2op.get(code.decode("latin-1"))
if opcode is None:
if code == b"":
raise ValueError("pickle exhausted before seeing STOP")
else:
raise ValueError("at position %s, opcode %r unknown" % (
"<unknown>" if pos is None else pos,
code))
if opcode.arg is None:
arg = None
else:
arg = opcode.arg.reader(data)
if yield_end_pos:
yield opcode, arg, pos, getpos()
else:
yield opcode, arg, pos
if code == b'.':
assert opcode.name == 'STOP'
break
def genops(pickle):
"""Generate all the opcodes in a pickle.
'pickle' is a file-like object, or string, containing the pickle.
Each opcode in the pickle is generated, from the current pickle position,
stopping after a STOP opcode is delivered. A triple is generated for
each opcode:
opcode, arg, pos
opcode is an OpcodeInfo record, describing the current opcode.
If the opcode has an argument embedded in the pickle, arg is its decoded
value, as a Python object. If the opcode doesn't have an argument, arg
is None.
If the pickle has a tell() method, pos was the value of pickle.tell()
before reading the current opcode. If the pickle is a bytes object,
it's wrapped in a BytesIO object, and the latter's tell() result is
used. Else (the pickle doesn't have a tell(), and it's not obvious how
to query its current position) pos is None.
"""
return _genops(pickle)
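As a minimal sketch of the triples genops() yields (assuming the standard
pickle module), here is what a protocol-2 pickle of the int 42 contains:

```python
# Sketch: walk the opcodes of a tiny protocol-2 pickle.
import pickle
import pickletools

pkl = pickle.dumps(42, protocol=2)  # b'\x80\x02K*.'
names = [op.name for op, arg, pos in pickletools.genops(pkl)]
print(names)  # ['PROTO', 'BININT1', 'STOP']
```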
##############################################################################
# A pickle optimizer.
def optimize(p):
'Optimize a pickle string by removing unused PUT opcodes'
put = 'PUT'
get = 'GET'
oldids = set() # set of all PUT ids
newids = {} # ids used by a GET opcode, mapped to their new memo index
opcodes = [] # (op, idx) or (pos, end_pos)
proto = 0
protoheader = b''
for opcode, arg, pos, end_pos in _genops(p, yield_end_pos=True):
if 'PUT' in opcode.name:
oldids.add(arg)
opcodes.append((put, arg))
elif opcode.name == 'MEMOIZE':
idx = len(oldids)
oldids.add(idx)
opcodes.append((put, idx))
elif 'FRAME' in opcode.name:
pass
elif 'GET' in opcode.name:
if opcode.proto > proto:
proto = opcode.proto
newids[arg] = None
opcodes.append((get, arg))
elif opcode.name == 'PROTO':
if arg > proto:
proto = arg
if pos == 0:
protoheader = p[pos:end_pos]
else:
opcodes.append((pos, end_pos))
else:
opcodes.append((pos, end_pos))
del oldids
# Copy the opcodes except for PUTS without a corresponding GET
out = io.BytesIO()
# Write the PROTO header before any framing
out.write(protoheader)
pickler = pickle._Pickler(out, proto)
if proto >= 4:
pickler.framer.start_framing()
idx = 0
for op, arg in opcodes:
frameless = False
if op is put:
if arg not in newids:
continue
data = pickler.put(idx)
newids[arg] = idx
idx += 1
elif op is get:
data = pickler.get(newids[arg])
else:
data = p[op:arg]
frameless = len(data) > pickler.framer._FRAME_SIZE_TARGET
pickler.framer.commit_frame(force=frameless)
if frameless:
pickler.framer.file_write(data)
else:
pickler.write(data)
pickler.framer.end_framing()
return out.getvalue()
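A short sketch of optimize() in action: a protocol-2 list pickle carries a
BINPUT that no GET ever references, so the optimized pickle is strictly
shorter yet still loads to an equal object.

```python
# Sketch: optimize() drops memo PUTs that no GET refers to.
import pickle
import pickletools

original = pickle.dumps([1, 2, 3], protocol=2)
slimmed = pickletools.optimize(original)
assert pickle.loads(slimmed) == [1, 2, 3]
assert len(slimmed) < len(original)  # the unused BINPUT is gone
```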
##############################################################################
# A symbolic pickle disassembler.
def dis(pickle, out=None, memo=None, indentlevel=4, annotate=0):
"""Produce a symbolic disassembly of a pickle.
'pickle' is a file-like object, or string, containing a (at least one)
pickle. The pickle is disassembled from the current position, through
the first STOP opcode encountered.
Optional arg 'out' is a file-like object to which the disassembly is
printed. It defaults to sys.stdout.
Optional arg 'memo' is a Python dict, used as the pickle's memo. It
may be mutated by dis(), if the pickle contains PUT or BINPUT opcodes.
Passing the same memo object to another dis() call then allows disassembly
to proceed across multiple pickles that were all created by the same
pickler with the same memo. Ordinarily you don't need to worry about this.
Optional arg 'indentlevel' is the number of blanks by which to indent
a new MARK level. It defaults to 4.
Optional arg 'annotate' if nonzero instructs dis() to add short
description of the opcode on each line of disassembled output.
The value given to 'annotate' must be an integer and is used as a
hint for the column where annotation should start. The default
value is 0, meaning no annotations.
In addition to printing the disassembly, some sanity checks are made:
+ All embedded opcode arguments "make sense".
+ Explicit and implicit pop operations have enough items on the stack.
+ When an opcode implicitly refers to a markobject, a markobject is
actually on the stack.
+ A memo entry isn't referenced before it's defined.
+ The markobject isn't stored in the memo.
"""
# Most of the hair here is for sanity checks, but most of it is needed
# anyway to detect when a protocol 0 POP takes a MARK off the stack
# (which in turn is needed to indent MARK blocks correctly).
stack = [] # crude emulation of unpickler stack
if memo is None:
memo = {} # crude emulation of unpickler memo
maxproto = -1 # max protocol number seen
markstack = [] # bytecode positions of MARK opcodes
indentchunk = ' ' * indentlevel
errormsg = None
annocol = annotate # column hint for annotations
for opcode, arg, pos in genops(pickle):
if pos is not None:
print("%5d:" % pos, end=' ', file=out)
line = "%-4s %s%s" % (repr(opcode.code)[1:-1],
indentchunk * len(markstack),
opcode.name)
maxproto = max(maxproto, opcode.proto)
before = opcode.stack_before # don't mutate
after = opcode.stack_after # don't mutate
numtopop = len(before)
# See whether a MARK should be popped.
markmsg = None
if markobject in before or (opcode.name == "POP" and
stack and
stack[-1] is markobject):
assert markobject not in after
if __debug__:
if markobject in before:
assert before[-1] is stackslice
if markstack:
markpos = markstack.pop()
if markpos is None:
markmsg = "(MARK at unknown opcode offset)"
else:
markmsg = "(MARK at %d)" % markpos
# Pop everything at and after the topmost markobject.
while stack[-1] is not markobject:
stack.pop()
stack.pop()
# Stop later code from popping too much.
try:
numtopop = before.index(markobject)
except ValueError:
assert opcode.name == "POP"
numtopop = 0
else:
errormsg = "no MARK exists on stack"
# Check for correct memo usage.
if opcode.name in ("PUT", "BINPUT", "LONG_BINPUT", "MEMOIZE"):
if opcode.name == "MEMOIZE":
memo_idx = len(memo)
markmsg = "(as %d)" % memo_idx
else:
assert arg is not None
memo_idx = arg
if not stack:
errormsg = "stack is empty -- can't store into memo"
elif stack[-1] is markobject:
errormsg = "can't store markobject in the memo"
else:
memo[memo_idx] = stack[-1]
elif opcode.name in ("GET", "BINGET", "LONG_BINGET"):
if arg in memo:
assert len(after) == 1
after = [memo[arg]] # for better stack emulation
else:
errormsg = "memo key %r has never been stored into" % arg
if arg is not None or markmsg:
# make a mild effort to align arguments
line += ' ' * (10 - len(opcode.name))
if arg is not None:
if opcode.name in ("STRING", "BINSTRING", "SHORT_BINSTRING"):
line += ' ' + ascii(arg)
else:
line += ' ' + repr(arg)
if markmsg:
line += ' ' + markmsg
if annotate:
line += ' ' * (annocol - len(line))
# make a mild effort to align annotations
annocol = len(line)
if annocol > 50:
annocol = annotate
line += ' ' + opcode.doc.split('\n', 1)[0]
print(line, file=out)
if errormsg:
# Note that we delayed complaining until the offending opcode
# was printed.
raise ValueError(errormsg)
# Emulate the stack effects.
if len(stack) < numtopop:
raise ValueError("tries to pop %d items from stack with "
"only %d items" % (numtopop, len(stack)))
if numtopop:
del stack[-numtopop:]
if markobject in after:
assert markobject not in before
markstack.append(pos)
stack.extend(after)
print("highest protocol among opcodes =", maxproto, file=out)
if stack:
raise ValueError("stack not empty after STOP: %r" % stack)
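dis() writes to sys.stdout by default; a minimal sketch of capturing the
disassembly as text instead is to pass an io.StringIO as 'out':

```python
# Sketch: capture a disassembly instead of printing it.
import io
import pickle
import pickletools

buf = io.StringIO()
pickletools.dis(pickle.dumps(1, protocol=0), out=buf)
print(buf.getvalue())
# The captured text ends with "highest protocol among opcodes = 0".
```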
# For use in the doctest, simply as an example of a class to pickle.
class _Example:
def __init__(self, value):
self.value = value
_dis_test = r"""
>>> import pickle
>>> x = [1, 2, (3, 4), {b'abc': "def"}]
>>> pkl0 = pickle.dumps(x, 0)
>>> dis(pkl0)
0: ( MARK
1: l LIST (MARK at 0)
2: p PUT 0
5: I INT 1
8: a APPEND
9: I INT 2
12: a APPEND
13: ( MARK
14: I INT 3
17: I INT 4
20: t TUPLE (MARK at 13)
21: p PUT 1
24: a APPEND
25: ( MARK
26: d DICT (MARK at 25)
27: p PUT 2
30: c GLOBAL '_codecs encode'
46: p PUT 3
49: ( MARK
50: V UNICODE 'abc'
55: p PUT 4
58: V UNICODE 'latin1'
66: p PUT 5
69: t TUPLE (MARK at 49)
70: p PUT 6
73: R REDUCE
74: p PUT 7
77: V UNICODE 'def'
82: p PUT 8
85: s SETITEM
86: a APPEND
87: . STOP
highest protocol among opcodes = 0
Try again with a "binary" pickle.
>>> pkl1 = pickle.dumps(x, 1)
>>> dis(pkl1)
0: ] EMPTY_LIST
1: q BINPUT 0
3: ( MARK
4: K BININT1 1
6: K BININT1 2
8: ( MARK
9: K BININT1 3
11: K BININT1 4
13: t TUPLE (MARK at 8)
14: q BINPUT 1
16: } EMPTY_DICT
17: q BINPUT 2
19: c GLOBAL '_codecs encode'
35: q BINPUT 3
37: ( MARK
38: X BINUNICODE 'abc'
46: q BINPUT 4
48: X BINUNICODE 'latin1'
59: q BINPUT 5
61: t TUPLE (MARK at 37)
62: q BINPUT 6
64: R REDUCE
65: q BINPUT 7
67: X BINUNICODE 'def'
75: q BINPUT 8
77: s SETITEM
78: e APPENDS (MARK at 3)
79: . STOP
highest protocol among opcodes = 1
Exercise the INST/OBJ/BUILD family.
>>> import pickletools
>>> dis(pickle.dumps(pickletools.dis, 0))
0: c GLOBAL 'pickletools dis'
17: p PUT 0
20: . STOP
highest protocol among opcodes = 0
>>> from pickletools import _Example
>>> x = [_Example(42)] * 2
>>> dis(pickle.dumps(x, 0))
0: ( MARK
1: l LIST (MARK at 0)
2: p PUT 0
5: c GLOBAL 'copy_reg _reconstructor'
30: p PUT 1
33: ( MARK
34: c GLOBAL 'pickletools _Example'
56: p PUT 2
59: c GLOBAL '__builtin__ object'
79: p PUT 3
82: N NONE
83: t TUPLE (MARK at 33)
84: p PUT 4
87: R REDUCE
88: p PUT 5
91: ( MARK
92: d DICT (MARK at 91)
93: p PUT 6
96: V UNICODE 'value'
103: p PUT 7
106: I INT 42
110: s SETITEM
111: b BUILD
112: a APPEND
113: g GET 5
116: a APPEND
117: . STOP
highest protocol among opcodes = 0
>>> dis(pickle.dumps(x, 1))
0: ] EMPTY_LIST
1: q BINPUT 0
3: ( MARK
4: c GLOBAL 'copy_reg _reconstructor'
29: q BINPUT 1
31: ( MARK
32: c GLOBAL 'pickletools _Example'
54: q BINPUT 2
56: c GLOBAL '__builtin__ object'
76: q BINPUT 3
78: N NONE
79: t TUPLE (MARK at 31)
80: q BINPUT 4
82: R REDUCE
83: q BINPUT 5
85: } EMPTY_DICT
86: q BINPUT 6
88: X BINUNICODE 'value'
98: q BINPUT 7
100: K BININT1 42
102: s SETITEM
103: b BUILD
104: h BINGET 5
106: e APPENDS (MARK at 3)
107: . STOP
highest protocol among opcodes = 1
Try "the canonical" recursive-object test.
>>> L = []
>>> T = L,
>>> L.append(T)
>>> L[0] is T
True
>>> T[0] is L
True
>>> L[0][0] is L
True
>>> T[0][0] is T
True
>>> dis(pickle.dumps(L, 0))
0: ( MARK
1: l LIST (MARK at 0)
2: p PUT 0
5: ( MARK
6: g GET 0
9: t TUPLE (MARK at 5)
10: p PUT 1
13: a APPEND
14: . STOP
highest protocol among opcodes = 0
>>> dis(pickle.dumps(L, 1))
0: ] EMPTY_LIST
1: q BINPUT 0
3: ( MARK
4: h BINGET 0
6: t TUPLE (MARK at 3)
7: q BINPUT 1
9: a APPEND
10: . STOP
highest protocol among opcodes = 1
Note that, in the protocol 0 pickle of the recursive tuple, the disassembler
has to emulate the stack in order to realize that the POP opcode at 16 gets
rid of the MARK at 0.
>>> dis(pickle.dumps(T, 0))
0: ( MARK
1: ( MARK
2: l LIST (MARK at 1)
3: p PUT 0
6: ( MARK
7: g GET 0
10: t TUPLE (MARK at 6)
11: p PUT 1
14: a APPEND
15: 0 POP
16: 0 POP (MARK at 0)
17: g GET 1
20: . STOP
highest protocol among opcodes = 0
>>> dis(pickle.dumps(T, 1))
0: ( MARK
1: ] EMPTY_LIST
2: q BINPUT 0
4: ( MARK
5: h BINGET 0
7: t TUPLE (MARK at 4)
8: q BINPUT 1
10: a APPEND
11: 1 POP_MARK (MARK at 0)
12: h BINGET 1
14: . STOP
highest protocol among opcodes = 1
Try protocol 2.
>>> dis(pickle.dumps(L, 2))
0: \x80 PROTO 2
2: ] EMPTY_LIST
3: q BINPUT 0
5: h BINGET 0
7: \x85 TUPLE1
8: q BINPUT 1
10: a APPEND
11: . STOP
highest protocol among opcodes = 2
>>> dis(pickle.dumps(T, 2))
0: \x80 PROTO 2
2: ] EMPTY_LIST
3: q BINPUT 0
5: h BINGET 0
7: \x85 TUPLE1
8: q BINPUT 1
10: a APPEND
11: 0 POP
12: h BINGET 1
14: . STOP
highest protocol among opcodes = 2
Try protocol 3 with annotations:
>>> dis(pickle.dumps(T, 3), annotate=1)
0: \x80 PROTO 3 Protocol version indicator.
2: ] EMPTY_LIST Push an empty list.
3: q BINPUT 0 Store the stack top into the memo. The stack is not popped.
5: h BINGET 0 Read an object from the memo and push it on the stack.
7: \x85 TUPLE1 Build a one-tuple out of the topmost item on the stack.
8: q BINPUT 1 Store the stack top into the memo. The stack is not popped.
10: a APPEND Append an object to a list.
11: 0 POP Discard the top stack item, shrinking the stack by one item.
12: h BINGET 1 Read an object from the memo and push it on the stack.
14: . STOP Stop the unpickling machine.
highest protocol among opcodes = 2
"""
_memo_test = r"""
>>> import pickle
>>> import io
>>> f = io.BytesIO()
>>> p = pickle.Pickler(f, 2)
>>> x = [1, 2, 3]
>>> p.dump(x)
>>> p.dump(x)
>>> f.seek(0)
0
>>> memo = {}
>>> dis(f, memo=memo)
0: \x80 PROTO 2
2: ] EMPTY_LIST
3: q BINPUT 0
5: ( MARK
6: K BININT1 1
8: K BININT1 2
10: K BININT1 3
12: e APPENDS (MARK at 5)
13: . STOP
highest protocol among opcodes = 2
>>> dis(f, memo=memo)
14: \x80 PROTO 2
16: h BINGET 0
18: . STOP
highest protocol among opcodes = 2
"""
__test__ = {'disassembler_test': _dis_test,
'disassembler_memo_test': _memo_test,
}
def _main(args=None):
import argparse
parser = argparse.ArgumentParser(
description='disassemble one or more pickle files',
color=True,
)
parser.add_argument(
'pickle_file',
nargs='+', help='the pickle file')
parser.add_argument(
'-o', '--output',
help='the file where the output should be written')
parser.add_argument(
'-m', '--memo', action='store_true',
help='preserve memo between disassemblies')
parser.add_argument(
'-l', '--indentlevel', default=4, type=int,
help='the number of blanks by which to indent a new MARK level')
parser.add_argument(
'-a', '--annotate', action='store_true',
help='annotate each line with a short opcode description')
parser.add_argument(
'-p', '--preamble', default="==> {name} <==",
help='if more than one pickle file is specified, print this before'
' each disassembly')
args = parser.parse_args(args)
annotate = 30 if args.annotate else 0
memo = {} if args.memo else None
if args.output is None:
output = sys.stdout
else:
output = open(args.output, 'w')
try:
for arg in args.pickle_file:
if len(args.pickle_file) > 1:
name = '<stdin>' if arg == '-' else arg
preamble = args.preamble.format(name=name)
output.write(preamble + '\n')
if arg == '-':
dis(sys.stdin.buffer, output, memo, args.indentlevel, annotate)
else:
with open(arg, 'rb') as f:
dis(f, output, memo, args.indentlevel, annotate)
finally:
if output is not sys.stdout:
output.close()
if __name__ == "__main__":
_main() | python | github | https://github.com/python/cpython | Lib/pickletools.py |
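The doctests above all print the disassembly to stdout; when scripting, the `out` argument of `dis()` lets you capture the same listing into a buffer. A minimal example:

```python
import io
import pickle
import pickletools

# Capture the disassembly of a protocol-2 pickle into a string, in the same
# listing format shown in the doctests above.
buf = io.StringIO()
pickletools.dis(pickle.dumps([1, 2, 3], protocol=2), out=buf)
listing = buf.getvalue()

assert 'EMPTY_LIST' in listing          # the [] literal
assert 'STOP' in listing                # every pickle ends with STOP
assert 'highest protocol among opcodes' in listing  # dis prints a summary line
```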
# -*- coding: utf-8 -*-
#
# Copyright 2012-2015 Spotify AB
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
The abstract :py:class:`Task` class.
It is a central concept of Luigi and represents the state of the workflow.
See :doc:`/tasks` for an overview.
"""
try:
from itertools import imap as map
except ImportError:
pass
import logging
import traceback
import warnings
from luigi import six
from luigi import parameter
from luigi.task_register import Register, TaskClassException
Parameter = parameter.Parameter
logger = logging.getLogger('luigi-interface')
def namespace(namespace=None):
"""
Call to set namespace of tasks declared after the call.
If called without arguments or with ``None`` as the namespace, the namespace
is reset, which is recommended to do at the end of any file where the
namespace is set to avoid unintentionally setting namespace on tasks outside
of the scope of the current file.
The namespace of a Task can also be changed by specifying the property
``task_namespace``. This solution has the advantage that the namespace
doesn't have to be restored.
.. code-block:: python
class Task2(luigi.Task):
task_namespace = 'namespace2'
"""
Register._default_namespace = namespace
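The mechanism here is just a module-level default that the `Register` metaclass consults at class-creation time. A stdlib-only sketch of that pattern (the names `_default_ns`, `set_ns`, and `FakeRegister` are illustrative stand-ins, not Luigi's actual implementation):

```python
# Minimal sketch of the namespace mechanism: a metaclass stamps each new
# class with whatever default namespace was in effect when it was defined.
_default_ns = ''

def set_ns(ns=None):
    """Mimic luigi.namespace(): set (or reset) the default for later classes."""
    global _default_ns
    _default_ns = ns or ''

class FakeRegister(type):
    def __new__(mcs, name, bases, attrs):
        # An explicit task_namespace on the class wins over the default.
        attrs.setdefault('task_namespace', _default_ns)
        return super().__new__(mcs, name, bases, attrs)

set_ns('examples')

class TaskA(metaclass=FakeRegister):
    pass

set_ns()  # reset, as recommended at the end of a module

class TaskB(metaclass=FakeRegister):
    task_namespace = 'explicit'

assert TaskA.task_namespace == 'examples'
assert TaskB.task_namespace == 'explicit'
```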
def id_to_name_and_params(task_id):
# DEPRECATED
import luigi.tools.parse_task
return luigi.tools.parse_task.id_to_name_and_params(task_id)
class BulkCompleteNotImplementedError(NotImplementedError):
"""This is here to trick pylint.
pylint thinks anything raising NotImplementedError needs to be implemented
in any subclass. bulk_complete isn't like that. This tricks pylint into
thinking that the default implementation is a valid implementation and not
an abstract method."""
pass
@six.add_metaclass(Register)
class Task(object):
"""
This is the base class of all Luigi Tasks, the base unit of work in Luigi.
A Luigi Task describes a unit of work.
The key methods of a Task, which must be implemented in a subclass are:
* :py:meth:`run` - the computation done by this task.
* :py:meth:`requires` - the list of Tasks that this Task depends on.
* :py:meth:`output` - the output :py:class:`Target` that this Task creates.
Parameters to the Task should be declared as members of the class, e.g.::
.. code-block:: python
class MyTask(luigi.Task):
count = luigi.IntParameter()
Each Task exposes a constructor accepting all :py:class:`Parameter` (and
values) as kwargs. e.g. ``MyTask(count=10)`` would instantiate `MyTask`.
In addition to any declared properties and methods, there are a few
non-declared properties, which are created by the :py:class:`Register`
metaclass:
``Task.task_namespace``
optional string which is prepended to the task name for the sake of
scheduling. If it isn't overridden in a Task, whatever was last declared
using `luigi.namespace` will be used.
``Task._parameters``
list of ``(parameter_name, parameter)`` tuples for this task class
"""
_event_callbacks = {}
#: Priority of the task: the scheduler should favor available
#: tasks with higher priority values first.
#: See :ref:`Task.priority`
priority = 0
disabled = False
#: Resources used by the task. Should be formatted like {"scp": 1} to indicate that the
#: task requires 1 unit of the scp resource.
resources = {}
#: Number of seconds after which to time out the run function.
#: No timeout if set to 0.
#: Defaults to 0 or value in luigi.cfg
worker_timeout = None
@property
def owner_email(self):
'''
Override this to send out additional error emails to task owner, in addition to the one
defined in `core`.`error-email`. This should return a string or a list of strings. e.g.
'test@example.com' or ['test1@example.com', 'test2@example.com']
'''
return None
@property
def use_cmdline_section(self):
''' Property used by core config such as `--workers` etc.
These will be exposed without the class name as a prefix.'''
return True
@classmethod
def event_handler(cls, event):
"""
Decorator for adding event handlers.
"""
def wrapped(callback):
cls._event_callbacks.setdefault(cls, {}).setdefault(event, set()).add(callback)
return callback
return wrapped
def trigger_event(self, event, *args, **kwargs):
"""
Trigger that calls all of the specified events associated with this class.
"""
for event_class, event_callbacks in six.iteritems(self._event_callbacks):
if not isinstance(self, event_class):
continue
for callback in event_callbacks.get(event, []):
try:
# callbacks are protected
callback(*args, **kwargs)
except KeyboardInterrupt:
return
except BaseException:
logger.exception("Error in event callback for %r", event)
@property
def task_module(self):
''' Returns what Python module to import to get access to this class. '''
# TODO(erikbern): we should think about a language-agnostic mechanism
return self.__class__.__module__
@property
def task_family(self):
"""
Convenience method since a property on the metaclass isn't directly accessible through the class instances.
"""
return self.__class__.task_family
@classmethod
def get_params(cls):
"""
Returns all of the Parameters for this Task.
"""
# We want to do this here and not at class instantiation, or else there is no room to extend classes dynamically
params = []
for param_name in dir(cls):
param_obj = getattr(cls, param_name)
if not isinstance(param_obj, Parameter):
continue
params.append((param_name, param_obj))
# The order the parameters are created matters. See Parameter class
params.sort(key=lambda t: t[1].counter)
return params
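The `dir()` scan plus the sort on `counter` is what preserves declaration order even though `dir()` returns names alphabetically. A toy version of the same scheme, with `ToyParameter` standing in for `luigi.Parameter`:

```python
import itertools

# Stand-in for luigi.Parameter: a shared counter records declaration order.
class ToyParameter:
    _counter = itertools.count()
    def __init__(self):
        self.counter = next(ToyParameter._counter)

class ToyTask:
    beta = ToyParameter()
    alpha = ToyParameter()   # declared second, though it sorts first by name

def get_params(cls):
    # Same scheme as Task.get_params: scan dir(), keep Parameter instances,
    # then sort by creation counter so declaration order wins.
    params = [(n, getattr(cls, n)) for n in dir(cls)
              if isinstance(getattr(cls, n), ToyParameter)]
    params.sort(key=lambda t: t[1].counter)
    return params

assert [n for n, _ in get_params(ToyTask)] == ['beta', 'alpha']
```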
@classmethod
def get_param_values(cls, params, args, kwargs):
"""
Get the values of the parameters from the args and kwargs.
:param params: list of (param_name, Parameter).
:param args: positional arguments
:param kwargs: keyword arguments.
:returns: list of `(name, value)` tuples, one for each parameter.
"""
result = {}
params_dict = dict(params)
task_name = cls.task_family
# In case any exceptions are thrown, create a helpful description of how the Task was invoked
# TODO: should we detect non-reprable arguments? These will lead to mysterious errors
exc_desc = '%s[args=%s, kwargs=%s]' % (task_name, args, kwargs)
# Fill in the positional arguments
positional_params = [(n, p) for n, p in params if p.positional]
for i, arg in enumerate(args):
if i >= len(positional_params):
raise parameter.UnknownParameterException('%s: takes at most %d parameters (%d given)' % (exc_desc, len(positional_params), len(args)))
param_name, param_obj = positional_params[i]
result[param_name] = arg
# Then the optional arguments
for param_name, arg in six.iteritems(kwargs):
if param_name in result:
raise parameter.DuplicateParameterException('%s: parameter %s was already set as a positional parameter' % (exc_desc, param_name))
if param_name not in params_dict:
raise parameter.UnknownParameterException('%s: unknown parameter %s' % (exc_desc, param_name))
result[param_name] = arg
# Then use the defaults for anything not filled in
for param_name, param_obj in params:
if param_name not in result:
if not param_obj.has_task_value(task_name, param_name):
raise parameter.MissingParameterException("%s: requires the '%s' parameter to be set" % (exc_desc, param_name))
result[param_name] = param_obj.task_value(task_name, param_name)
def list_to_tuple(x):
""" Make tuples out of lists and sets to allow hashing """
if isinstance(x, list) or isinstance(x, set):
return tuple(x)
else:
return x
# Sort it by the correct order and make a list
return [(param_name, list_to_tuple(result[param_name])) for param_name, param_obj in params]
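The `list_to_tuple` conversion matters because parameter values end up in hashed task ids and memo keys, and lists and sets are unhashable. A small demonstration of the same helper:

```python
def list_to_tuple(x):
    # Same idea as the helper in get_param_values: lists/sets become tuples
    # so the resulting parameter values are hashable.
    if isinstance(x, (list, set)):
        return tuple(x)
    return x

# A raw list cannot be hashed...
try:
    hash([1, 2, 3])
    raised = False
except TypeError:
    raised = True
assert raised

# ...but its tuple form can, and other values pass through untouched.
assert hash(list_to_tuple([1, 2, 3])) == hash((1, 2, 3))
assert list_to_tuple('unchanged') == 'unchanged'
```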
def __init__(self, *args, **kwargs):
"""
Constructor to resolve values for all Parameters.
For example, the Task:
.. code-block:: python
class MyTask(luigi.Task):
count = luigi.IntParameter()
can be instantiated as ``MyTask(count=10)``.
"""
params = self.get_params()
param_values = self.get_param_values(params, args, kwargs)
# Set all values on class instance
for key, value in param_values:
setattr(self, key, value)
# Register args and kwargs as an attribute on the class. Might be useful
self.param_args = tuple(value for key, value in param_values)
self.param_kwargs = dict(param_values)
# Build up task id
task_id_parts = []
param_objs = dict(params)
for param_name, param_value in param_values:
if param_objs[param_name].significant:
task_id_parts.append('%s=%s' % (param_name, param_objs[param_name].serialize(param_value)))
self.task_id = '%s(%s)' % (self.task_family, ', '.join(task_id_parts))
self.__hash = hash(self.task_id)
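The constructor above serializes only *significant* parameters into `task_id`, so two tasks differing only in insignificant parameters compare as the same unit of scheduling. A stand-alone sketch of that id scheme (`build_task_id` and its arguments are illustrative, not Luigi API):

```python
# Sketch of the task_id format built in Task.__init__:
#   family(name1=value1, name2=value2, ...) over significant params only.
def build_task_id(task_family, param_values, significant):
    parts = ['%s=%s' % (name, value) for name, value in param_values
             if name in significant]
    return '%s(%s)' % (task_family, ', '.join(parts))

tid = build_task_id('MyTask',
                    [('count', '10'), ('debug', 'True')],
                    significant={'count'})
assert tid == 'MyTask(count=10)'   # 'debug' is insignificant, so omitted
```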
def initialized(self):
"""
Returns ``True`` if the Task is initialized and ``False`` otherwise.
"""
return hasattr(self, 'task_id')
@classmethod
def from_str_params(cls, params_str=None):
"""
Creates an instance from a str->str hash.
:param params_str: dict of param name -> value.
"""
if params_str is None:
params_str = {}
kwargs = {}
for param_name, param in cls.get_params():
if param_name in params_str:
value = param.parse_from_input(param_name, params_str[param_name])
kwargs[param_name] = value
return cls(**kwargs)
def to_str_params(self):
"""
Convert all parameters to a str->str hash.
"""
params_str = {}
params = dict(self.get_params())
for param_name, param_value in six.iteritems(self.param_kwargs):
params_str[param_name] = params[param_name].serialize(param_value)
return params_str
def clone(self, cls=None, **kwargs):
"""
Creates a new instance from an existing instance where some of the args have changed.
There are at least two scenarios where this is useful (see test/clone_test.py):
* remove a lot of boilerplate when you have recursive dependencies and lots of args
* there's task inheritance and some logic is on the base class
:param cls:
:param kwargs:
:return:
"""
k = self.param_kwargs.copy()
k.update(six.iteritems(kwargs))
if cls is None:
cls = self.__class__
new_k = {}
for param_name, param_class in cls.get_params():
if param_name in k:
new_k[param_name] = k[param_name]
return cls(**new_k)
def __hash__(self):
return self.__hash
def __repr__(self):
return self.task_id
def __eq__(self, other):
return self.__class__ == other.__class__ and self.param_args == other.param_args
def complete(self):
"""
If the task has any outputs, return ``True`` if all outputs exist.
Otherwise, return ``False``.
However, you may freely override this method with custom logic.
"""
outputs = flatten(self.output())
if len(outputs) == 0:
warnings.warn(
"Task %r without outputs has no custom complete() method" % self,
stacklevel=2
)
return False
return all(map(lambda output: output.exists(), outputs))
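The default completeness rule is simply "every output target exists". A toy illustration with a stand-in target class (`FakeTarget` and `is_complete` are illustrative, not Luigi API):

```python
# Toy version of the default complete() logic: a task is complete iff every
# output target reports exists() == True; no outputs means not complete.
class FakeTarget:
    def __init__(self, exists):
        self._exists = exists
    def exists(self):
        return self._exists

def is_complete(outputs):
    if not outputs:
        return False
    return all(o.exists() for o in outputs)

assert is_complete([FakeTarget(True), FakeTarget(True)]) is True
assert is_complete([FakeTarget(True), FakeTarget(False)]) is False
assert is_complete([]) is False
```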
@classmethod
def bulk_complete(cls, parameter_tuples):
"""
Returns those of parameter_tuples for which this Task is complete.
Override (with an efficient implementation) for efficient scheduling
with range tools. Keep the logic consistent with that of complete().
"""
raise BulkCompleteNotImplementedError()
def output(self):
"""
The output that this Task produces.
The output of the Task determines if the Task needs to be run--the task
is considered finished iff the outputs all exist. Subclasses should
override this method to return a single :py:class:`Target` or a list of
:py:class:`Target` instances.
Implementation note
If running multiple workers, the output must be a resource that is accessible
by all workers, such as a DFS or database. Otherwise, workers might compute
the same output since they don't see the work done by other workers.
See :ref:`Task.output`
"""
return [] # default impl
def requires(self):
"""
The Tasks that this Task depends on.
A Task will only run if all of the Tasks that it requires are completed.
If your Task does not require any other Tasks, then you don't need to
override this method. Otherwise, subclasses can override this method
to return a single Task, a list of Task instances, or a dict whose
values are Task instances.
See :ref:`Task.requires`
"""
return [] # default impl
def _requires(self):
"""
Override in "template" tasks which themselves are supposed to be
subclassed and thus have their requires() overridden (name preserved to
provide consistent end-user experience), yet need to introduce
(non-input) dependencies.
Must return an iterable which among others contains the _requires() of
the superclass.
"""
return flatten(self.requires()) # base impl
def process_resources(self):
"""
Override in "template" tasks which provide common resource functionality
but allow subclasses to specify additional resources while preserving
the name for consistent end-user experience.
"""
return self.resources # default impl
def input(self):
"""
Returns the outputs of the Tasks returned by :py:meth:`requires`
See :ref:`Task.input`
:return: a list of :py:class:`Target` objects which are specified as
outputs of all required Tasks.
"""
return getpaths(self.requires())
def deps(self):
"""
Internal method used by the scheduler.
Returns the flattened list of requires.
"""
# used by scheduler
return flatten(self._requires())
def run(self):
"""
The task run method, to be overridden in a subclass.
See :ref:`Task.run`
"""
pass # default impl
def on_failure(self, exception):
"""
Override for custom error handling.
This method gets called if an exception is raised in :py:meth:`run`.
Return value of this method is json encoded and sent to the scheduler as the `expl` argument. Its string representation will be used as the body of the error email sent out if any.
Default behavior is to return a string representation of the stack trace.
"""
traceback_string = traceback.format_exc()
return "Runtime error:\n%s" % traceback_string
def on_success(self):
"""
Override for doing custom completion handling for a larger class of tasks
This method gets called when :py:meth:`run` completes without raising any exceptions.
The returned value is json encoded and sent to the scheduler as the `expl` argument.
Default behavior is to send a None value."""
pass
class MixinNaiveBulkComplete(object):
"""
Enables a Task to be efficiently scheduled with e.g. range tools, by providing a bulk_complete implementation which checks completeness in a loop.
Applicable to tasks whose completeness checking is cheap.
This doesn't exploit output-location-specific APIs for a speed advantage, but it still removes redundant scheduler roundtrips.
"""
@classmethod
def bulk_complete(cls, parameter_tuples):
return [t for t in parameter_tuples if cls(t).complete()]
def externalize(task):
"""
Returns an externalized version of the Task.
See :py:class:`ExternalTask`.
"""
task.run = NotImplemented
return task
class ExternalTask(Task):
"""
Subclass for references to external dependencies.
An ExternalTask does not have a `run` implementation, which signifies to
the framework that this Task's :py:meth:`output` is generated outside of
Luigi.
"""
run = NotImplemented
class WrapperTask(Task):
"""
Use for tasks that only wrap other tasks and that by definition are done if all their requirements exist.
"""
def complete(self):
return all(r.complete() for r in flatten(self.requires()))
class Config(Task):
"""Used for configuration that's not specific to a certain task
TODO: let's refactor Task & Config so that it inherits from a common
ParamContainer base class
"""
pass
def getpaths(struct):
"""
Maps all Tasks in a structured data object to their .output().
"""
if isinstance(struct, Task):
return struct.output()
elif isinstance(struct, dict):
r = {}
for k, v in six.iteritems(struct):
r[k] = getpaths(v)
return r
else:
# Remaining case: assume struct is iterable...
try:
s = list(struct)
except TypeError:
raise Exception('Cannot map %s to Task/dict/list' % str(struct))
return [getpaths(r) for r in s]
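The recursion in `getpaths` walks an arbitrary structure of dicts and iterables, mapping every task it finds to its `output()`. A self-contained sketch of the same walk (`FakeTask` and `getpaths_sketch` are stand-ins, not Luigi API):

```python
# Stand-alone sketch of getpaths(): recurse through dicts/lists and map
# task-like objects to their output().
class FakeTask:
    def __init__(self, path):
        self.path = path
    def output(self):
        return self.path

def getpaths_sketch(struct):
    if isinstance(struct, FakeTask):
        return struct.output()
    if isinstance(struct, dict):
        return {k: getpaths_sketch(v) for k, v in struct.items()}
    if isinstance(struct, (list, tuple)):
        return [getpaths_sketch(v) for v in struct]
    raise TypeError('Cannot map %r to Task/dict/list' % (struct,))

deps = {'raw': FakeTask('/data/raw.csv'),
        'features': [FakeTask('/data/a.npy'), FakeTask('/data/b.npy')]}
assert getpaths_sketch(deps) == {'raw': '/data/raw.csv',
                                 'features': ['/data/a.npy', '/data/b.npy']}
```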
def flatten(struct):
"""
Creates a flat list of all items in structured output (dicts, lists, items):
.. code-block:: python
>>> sorted(flatten({'a': 'foo', 'b': 'bar'}))
['bar', 'foo']
>>> sorted(flatten(['foo', ['bar', 'troll']]))
['bar', 'foo', 'troll']
>>> flatten('foo')
['foo']
>>> flatten(42)
[42]
"""
if struct is None:
return []
flat = []
if isinstance(struct, dict):
for _, result in six.iteritems(struct):
flat += flatten(result)
return flat
if isinstance(struct, six.string_types):
return [struct]
try:
# if iterable
iterator = iter(struct)
except TypeError:
return [struct]
for result in iterator:
flat += flatten(result)
return flat
def flatten_output(task):
"""
Lists all output targets by recursively walking output-less (wrapper) tasks.
FIXME order consistently.
"""
r = flatten(task.output())
if not r:
for dep in flatten(task.requires()):
r += flatten_output(dep)
return r | unknown | codeparrot/codeparrot-clean | ||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from tempest_lib import decorators
from tempest import config
from tempest import exceptions
from tempest.openstack.common import log as logging
from tempest.scenario import manager
from tempest import test
CONF = config.CONF
LOG = logging.getLogger(__name__)
class CfnInitScenarioTest(manager.OrchestrationScenarioTest):
def setUp(self):
super(CfnInitScenarioTest, self).setUp()
if not CONF.orchestration.image_ref:
raise self.skipException("No image available to test")
self.client = self.orchestration_client
self.template_name = 'cfn_init_signal.yaml'
def assign_keypair(self):
self.stack_name = self._stack_rand_name()
if CONF.orchestration.keypair_name:
self.keypair = None
self.keypair_name = CONF.orchestration.keypair_name
else:
self.keypair = self.create_keypair()
self.keypair_name = self.keypair['name']
def launch_stack(self):
net = self._get_default_network()
self.parameters = {
'key_name': self.keypair_name,
'flavor': CONF.orchestration.instance_type,
'image': CONF.orchestration.image_ref,
'timeout': CONF.orchestration.build_timeout,
'network': net['id'],
}
# create the stack
self.template = self._load_template(__file__, self.template_name)
stack = self.client.create_stack(
name=self.stack_name,
template=self.template,
parameters=self.parameters)
stack = stack['stack']
self.stack = self.client.get_stack(stack['id'])
self.stack_identifier = '%s/%s' % (self.stack_name, self.stack['id'])
self.addCleanup(self.delete_wrapper,
self.orchestration_client.delete_stack,
self.stack_identifier)
def check_stack(self):
sid = self.stack_identifier
self.client.wait_for_resource_status(
sid, 'WaitHandle', 'CREATE_COMPLETE')
self.client.wait_for_resource_status(
sid, 'SmokeSecurityGroup', 'CREATE_COMPLETE')
self.client.wait_for_resource_status(
sid, 'SmokeKeys', 'CREATE_COMPLETE')
self.client.wait_for_resource_status(
sid, 'CfnUser', 'CREATE_COMPLETE')
self.client.wait_for_resource_status(
sid, 'SmokeServer', 'CREATE_COMPLETE')
server_resource = self.client.get_resource(sid, 'SmokeServer')
server_id = server_resource['physical_resource_id']
server = self.servers_client.get_server(server_id)
server_ip =\
server['addresses'][CONF.compute.network_for_ssh][0]['addr']
if not self.ping_ip_address(
server_ip, ping_timeout=CONF.orchestration.build_timeout):
self._log_console_output(servers=[server])
self.fail(
"(CfnInitScenarioTest:test_server_cfn_init) Timed out waiting "
"for %s to become reachable" % server_ip)
try:
self.client.wait_for_resource_status(
sid, 'WaitCondition', 'CREATE_COMPLETE')
except (exceptions.StackResourceBuildErrorException,
exceptions.TimeoutException) as e:
raise e
finally:
# attempt to log the server console regardless of whether the
# WaitCondition reached COMPLETE. This allows successful and failed
# cloud-init logs to be compared
self._log_console_output(servers=[server])
self.client.wait_for_stack_status(sid, 'CREATE_COMPLETE')
stack = self.client.get_stack(sid)
# This is an assert of great significance, as it means the following
# has happened:
# - cfn-init read the provided metadata and wrote out a file
# - a user was created and credentials written to the server
# - a cfn-signal was built which was signed with provided credentials
# - the wait condition was fulfilled and the stack has changed state
wait_status = json.loads(
self._stack_output(stack, 'WaitConditionStatus'))
self.assertEqual('smoke test complete', wait_status['smoke_status'])
if self.keypair:
# Check that the user can authenticate with the generated
# keypair
self.get_remote_client(server_ip, username='ec2-user',
log_console_of_servers=[server])
@test.attr(type='slow')
@decorators.skip_because(bug='1374175')
@test.services('orchestration', 'compute')
def test_server_cfn_init(self):
self.assign_keypair()
self.launch_stack()
self.check_stack() | unknown | codeparrot/codeparrot-clean | ||
"""
Data migration creation command
"""
from __future__ import print_function
import sys
import os
import re
from optparse import make_option
try:
set
except NameError:
from sets import Set as set
from django.core.management.base import BaseCommand
from django.core.management.color import no_style
from django.db import models
from django.conf import settings
from south.migration import Migrations
from south.exceptions import NoMigrations
from south.creator import freezer
class Command(BaseCommand):
option_list = BaseCommand.option_list + (
make_option('--freeze', action='append', dest='freeze_list', type='string',
help='Freeze the specified app(s). Provide an app name with each; use the option multiple times for multiple apps'),
make_option('--stdout', action='store_true', dest='stdout', default=False,
help='Print the migration to stdout instead of writing it to a file.'),
)
help = "Creates a new template data migration for the given app"
usage_str = "Usage: ./manage.py datamigration appname migrationname [--stdout] [--freeze appname]"
def handle(self, app=None, name="", freeze_list=None, stdout=False, verbosity=1, **options):
verbosity = int(verbosity)
# Any supposed lists that are None become empty lists
freeze_list = freeze_list or []
# --stdout means name = -
if stdout:
name = "-"
# Only allow valid names
if re.search('[^_\w]', name) and name != "-":
self.error("Migration names should contain only alphanumeric characters and underscores.")
# If not name, there's an error
if not name:
self.error("You must provide a name for this migration.\n" + self.usage_str)
if not app:
self.error("You must provide an app to create a migration for.\n" + self.usage_str)
# Ensure that verbosity is not a string (Python 3)
try:
verbosity = int(verbosity)
except ValueError:
self.error("Verbosity must be a number.\n" + self.usage_str)
# Get the Migrations for this app (creating the migrations dir if needed)
migrations = Migrations(app, force_creation=True, verbose_creation=verbosity > 0)
# See what filename is next in line. We assume they use numbers.
new_filename = migrations.next_filename(name)
# Work out which apps to freeze
apps_to_freeze = self.calc_frozen_apps(migrations, freeze_list)
# So, what's in this file, then?
file_contents = self.get_migration_template() % {
"frozen_models": freezer.freeze_apps_to_string(apps_to_freeze),
"complete_apps": apps_to_freeze and "complete_apps = [%s]" % (", ".join(map(repr, apps_to_freeze))) or ""
}
# - is a special name which means 'print to stdout'
if name == "-":
print(file_contents)
# Write the migration file if the name isn't -
else:
fp = open(os.path.join(migrations.migrations_dir(), new_filename), "w")
fp.write(file_contents)
fp.close()
print("Created %s." % new_filename, file=sys.stderr)
def calc_frozen_apps(self, migrations, freeze_list):
"""
Works out, from the current app, settings, and the command line options,
which apps should be frozen.
"""
apps_to_freeze = []
for to_freeze in freeze_list:
if "." in to_freeze:
self.error("You cannot freeze %r; you must provide an app label, like 'auth' or 'books'." % to_freeze)
# Make sure it's a real app
if not models.get_app(to_freeze):
self.error("You cannot freeze %r; it's not an installed app." % to_freeze)
# OK, it's fine
apps_to_freeze.append(to_freeze)
if getattr(settings, 'SOUTH_AUTO_FREEZE_APP', True):
apps_to_freeze.append(migrations.app_label())
return apps_to_freeze
def error(self, message, code=1):
"""
Prints the error, and exits with the given code.
"""
print(message, file=sys.stderr)
sys.exit(code)
def get_migration_template(self):
return MIGRATION_TEMPLATE
MIGRATION_TEMPLATE = """# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
class Migration(DataMigration):
def forwards(self, orm):
"Write your forwards methods here."
# Note: Don't use "from appname.models import ModelName".
# Use orm.ModelName to refer to models in this application,
# and orm['appname.ModelName'] for models in other applications.
def backwards(self, orm):
"Write your backwards methods here."
models = %(frozen_models)s
%(complete_apps)s
symmetrical = True
""" | unknown | codeparrot/codeparrot-clean | ||
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://help.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: github-actions
directory: /
schedule:
interval: weekly
- package-ecosystem: pip
directory: /.codespell
schedule:
interval: weekly | unknown | github | https://github.com/redis/redis | .github/dependabot.yml |
########
# Copyright (c) 2014 GigaSpaces Technologies Ltd. All rights reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
import os
import codecs
from setuptools import setup
here = os.path.abspath(os.path.dirname(__file__))
def read(*parts):
# intentionally *not* adding an encoding option to open
return codecs.open(os.path.join(here, *parts), 'r').read()
setup(
name='wagon',
version='0.10.1',
url='https://github.com/cloudify-cosmo/wagon',
author='Cloudify',
author_email='cosmo-admin@cloudify.co',
license='Apache 2.0',
platforms='All',
description='Creates Python Wheel based archives with dependencies',
long_description=read('README.rst'),
py_modules=['wagon'],
include_package_data=True,
zip_safe=False,
entry_points={'console_scripts': ['wagon = wagon:main']},
install_requires=[
"wheel",
],
extras_require={
'dist': ['distro>=0.6.0'],
'venv': ['virtualenv>=12.1'],
},
python_requires='>=2.6,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',
classifiers=[
'Development Status :: 4 - Beta',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Natural Language :: English',
'Environment :: Console',
'Intended Audience :: Developers',
'Intended Audience :: Information Technology',
'Intended Audience :: System Administrators',
'License :: OSI Approved :: Apache Software License',
'Operating System :: MacOS :: MacOS X',
'Operating System :: POSIX :: Linux',
'Operating System :: Microsoft',
'Topic :: Software Development :: Libraries :: Python Modules',
],
) | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/env python
#
# Python Serial Port Extension for Win32, Linux, BSD, Jython
# module for serial IO for POSIX compatible systems, like Linux
# see __init__.py
#
# (C) 2001-2010 Chris Liechti <cliechti@gmx.net>
# this is distributed under a free software license, see license.txt
#
# parts based on code from Grant B. Edwards <grante@visi.com>:
# ftp://ftp.visi.com/users/grante/python/PosixSerial.py
#
# references: http://www.easysw.com/~mike/serial/serial.html
import sys, os, fcntl, termios, struct, select, errno, time
from serial.serialutil import *
# Do check the Python version as some constants have moved.
if (sys.hexversion < 0x020100f0):
import TERMIOS
else:
TERMIOS = termios
if (sys.hexversion < 0x020200f0):
import FCNTL
else:
FCNTL = fcntl
# try to detect the OS so that a device can be selected...
# this code block should supply a device() and set_special_baudrate() function
# for the platform
plat = sys.platform.lower()
if plat[:5] == 'linux': # Linux (confirmed)
def device(port):
return '/dev/ttyS%d' % port
TCGETS2 = 0x802C542A
TCSETS2 = 0x402C542B
BOTHER = 0o010000
def set_special_baudrate(port, baudrate):
# right size is 44 on x86_64, allow for some growth
import array
buf = array.array('i', [0] * 64)
try:
# get serial_struct
FCNTL.ioctl(port.fd, TCGETS2, buf)
# set custom speed
buf[2] &= ~TERMIOS.CBAUD
buf[2] |= BOTHER
buf[9] = buf[10] = baudrate
# set serial_struct
res = FCNTL.ioctl(port.fd, TCSETS2, buf)
except IOError, e:
raise ValueError('Failed to set custom baud rate (%s): %s' % (baudrate, e))
baudrate_constants = {
0: 0000000, # hang up
50: 0000001,
75: 0000002,
110: 0000003,
134: 0000004,
150: 0000005,
200: 0000006,
300: 0000007,
600: 0000010,
1200: 0000011,
1800: 0000012,
2400: 0000013,
4800: 0000014,
9600: 0000015,
19200: 0000016,
38400: 0000017,
57600: 0010001,
115200: 0010002,
230400: 0010003,
460800: 0010004,
500000: 0010005,
576000: 0010006,
921600: 0010007,
1000000: 0010010,
1152000: 0010011,
1500000: 0010012,
2000000: 0010013,
2500000: 0010014,
3000000: 0010015,
3500000: 0010016,
4000000: 0010017
}
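The dictionary above feeds the three-step lookup that `_reconfigurePort()` performs later: first a `B<rate>` termios constant if one exists, then this table, and finally a custom rate programmed via `set_special_baudrate()`. That fallback order can be sketched in isolation (the helper name is invented for illustration):

```python
def select_baud(requested, termios_names, table):
    """Return (speed_constant, custom_rate); custom_rate is None when the
    rate is standard - mirroring the logic in _reconfigurePort()."""
    const = termios_names.get('B%s' % requested)
    if const is not None:
        return const, None          # a real B-constant exists
    if requested in table:
        return table[requested], None  # known non-standard constant
    # unknown rate: program B38400 and remember the real rate for the
    # later TCSETS2/BOTHER ioctl (set_special_baudrate)
    custom = int(requested)
    if custom < 0:
        raise ValueError('Invalid baud rate: %r' % requested)
    return termios_names['B38400'], custom
```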
elif plat == 'cygwin': # cygwin/win32 (confirmed)
def device(port):
return '/dev/com%d' % (port + 1)
def set_special_baudrate(port, baudrate):
raise ValueError("sorry don't know how to handle non standard baud rate on this platform")
baudrate_constants = {
128000: 0x01003,
256000: 0x01005,
500000: 0x01007,
576000: 0x01008,
921600: 0x01009,
1000000: 0x0100a,
1152000: 0x0100b,
1500000: 0x0100c,
2000000: 0x0100d,
2500000: 0x0100e,
3000000: 0x0100f
}
elif plat[:7] == 'openbsd': # OpenBSD
def device(port):
return '/dev/cua%02d' % port
def set_special_baudrate(port, baudrate):
raise ValueError("sorry don't know how to handle non standard baud rate on this platform")
baudrate_constants = {}
elif plat[:3] == 'bsd' or \
plat[:7] == 'freebsd':
def device(port):
return '/dev/cuad%d' % port
def set_special_baudrate(port, baudrate):
raise ValueError("sorry don't know how to handle non standard baud rate on this platform")
baudrate_constants = {}
elif plat[:6] == 'darwin': # OS X
version = os.uname()[2].split('.')
# Tiger or above can support arbitrary serial speeds
if int(version[0]) >= 8:
def set_special_baudrate(port, baudrate):
# use IOKit-specific call to set up high speeds
import array, fcntl
buf = array.array('i', [baudrate])
IOSSIOSPEED = 0x80045402 #_IOW('T', 2, speed_t)
fcntl.ioctl(port.fd, IOSSIOSPEED, buf, 1)
else: # version < 8
def set_special_baudrate(port, baudrate):
raise ValueError("baud rate not supported")
def device(port):
return '/dev/cuad%d' % port
baudrate_constants = {}
elif plat[:6] == 'netbsd': # NetBSD 1.6 testing by Erk
def device(port):
return '/dev/dty%02d' % port
def set_special_baudrate(port, baudrate):
raise ValueError("sorry don't know how to handle non standard baud rate on this platform")
baudrate_constants = {}
elif plat[:4] == 'irix': # IRIX (partially tested)
def device(port):
return '/dev/ttyf%d' % (port+1) #XXX different device names depending on flow control
def set_special_baudrate(port, baudrate):
raise ValueError("sorry don't know how to handle non standard baud rate on this platform")
baudrate_constants = {}
elif plat[:2] == 'hp': # HP-UX (not tested)
def device(port):
return '/dev/tty%dp0' % (port+1)
def set_special_baudrate(port, baudrate):
raise ValueError("sorry don't know how to handle non standard baud rate on this platform")
baudrate_constants = {}
elif plat[:5] == 'sunos': # Solaris/SunOS (confirmed)
def device(port):
return '/dev/tty%c' % (ord('a')+port)
def set_special_baudrate(port, baudrate):
raise ValueError("sorry don't know how to handle non standard baud rate on this platform")
baudrate_constants = {}
elif plat[:3] == 'aix': # AIX
def device(port):
return '/dev/tty%d' % (port)
def set_special_baudrate(port, baudrate):
raise ValueError("sorry don't know how to handle non standard baud rate on this platform")
baudrate_constants = {}
else:
# platform detection has failed...
sys.stderr.write("""\
don't know how to number ttys on this system.
! Use an explicit path (eg /dev/ttyS1) or send this information to
! the author of this module:
sys.platform = %r
os.name = %r
serialposix.py version = %s
also add the device name of the serial port and where the
counting starts for the first serial port.
e.g. 'first serial port: /dev/ttyS0'
and with a bit of luck you can get this module running...
""" % (sys.platform, os.name, VERSION))
# no exception, just continue with a brave attempt to build a device name
# even if the device name is not correct for the platform it has chances
# to work using a string with the real device name as port parameter.
def device(portnum):
    return '/dev/ttyS%d' % portnum
def set_special_baudrate(port, baudrate):
raise SerialException("sorry don't know how to handle non standard baud rate on this platform")
baudrate_constants = {}
#~ raise Exception, "this module does not run on this platform, sorry."
# what's up with "beos", ....
# they should work, just need to know the device names.
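The chain above is effectively a prefix-match dispatch from `sys.platform` to a device-name formatter, with a "brave attempt" default. The same selection can be expressed as data (the table below covers only a few of the branches above, as an illustration):

```python
# prefix of sys.platform -> formatter for the n-th serial device
DEVICE_TEMPLATES = [
    ('linux',  lambda n: '/dev/ttyS%d' % n),
    ('cygwin', lambda n: '/dev/com%d' % (n + 1)),
    ('sunos',  lambda n: '/dev/tty%c' % (ord('a') + n)),
]

def guess_device(platform, portnum):
    """Pick a device name by sys.platform prefix, falling back to the
    same default used in the else-branch above."""
    for prefix, fmt in DEVICE_TEMPLATES:
        if platform.startswith(prefix):
            return fmt(portnum)
    return '/dev/ttyS%d' % portnum
```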
# load some constants for later use.
# try to use values from TERMIOS, use defaults from linux otherwise
TIOCMGET = hasattr(TERMIOS, 'TIOCMGET') and TERMIOS.TIOCMGET or 0x5415
TIOCMBIS = hasattr(TERMIOS, 'TIOCMBIS') and TERMIOS.TIOCMBIS or 0x5416
TIOCMBIC = hasattr(TERMIOS, 'TIOCMBIC') and TERMIOS.TIOCMBIC or 0x5417
TIOCMSET = hasattr(TERMIOS, 'TIOCMSET') and TERMIOS.TIOCMSET or 0x5418
#TIOCM_LE = hasattr(TERMIOS, 'TIOCM_LE') and TERMIOS.TIOCM_LE or 0x001
TIOCM_DTR = hasattr(TERMIOS, 'TIOCM_DTR') and TERMIOS.TIOCM_DTR or 0x002
TIOCM_RTS = hasattr(TERMIOS, 'TIOCM_RTS') and TERMIOS.TIOCM_RTS or 0x004
#TIOCM_ST = hasattr(TERMIOS, 'TIOCM_ST') and TERMIOS.TIOCM_ST or 0x008
#TIOCM_SR = hasattr(TERMIOS, 'TIOCM_SR') and TERMIOS.TIOCM_SR or 0x010
TIOCM_CTS = hasattr(TERMIOS, 'TIOCM_CTS') and TERMIOS.TIOCM_CTS or 0x020
TIOCM_CAR = hasattr(TERMIOS, 'TIOCM_CAR') and TERMIOS.TIOCM_CAR or 0x040
TIOCM_RNG = hasattr(TERMIOS, 'TIOCM_RNG') and TERMIOS.TIOCM_RNG or 0x080
TIOCM_DSR = hasattr(TERMIOS, 'TIOCM_DSR') and TERMIOS.TIOCM_DSR or 0x100
TIOCM_CD = hasattr(TERMIOS, 'TIOCM_CD') and TERMIOS.TIOCM_CD or TIOCM_CAR
TIOCM_RI = hasattr(TERMIOS, 'TIOCM_RI') and TERMIOS.TIOCM_RI or TIOCM_RNG
#TIOCM_OUT1 = hasattr(TERMIOS, 'TIOCM_OUT1') and TERMIOS.TIOCM_OUT1 or 0x2000
#TIOCM_OUT2 = hasattr(TERMIOS, 'TIOCM_OUT2') and TERMIOS.TIOCM_OUT2 or 0x4000
if hasattr(TERMIOS, 'TIOCINQ'):
TIOCINQ = TERMIOS.TIOCINQ
else:
TIOCINQ = hasattr(TERMIOS, 'FIONREAD') and TERMIOS.FIONREAD or 0x541B
TIOCOUTQ = hasattr(TERMIOS, 'TIOCOUTQ') and TERMIOS.TIOCOUTQ or 0x5411
TIOCM_zero_str = struct.pack('I', 0)
TIOCM_RTS_str = struct.pack('I', TIOCM_RTS)
TIOCM_DTR_str = struct.pack('I', TIOCM_DTR)
TIOCSBRK = hasattr(TERMIOS, 'TIOCSBRK') and TERMIOS.TIOCSBRK or 0x5427
TIOCCBRK = hasattr(TERMIOS, 'TIOCCBRK') and TERMIOS.TIOCCBRK or 0x5428
class PosixSerial(SerialBase):
"""Serial port class POSIX implementation. Serial port configuration is
done with termios and fcntl. Runs on Linux and many other Un*x like
systems."""
def open(self):
"""Open port with current settings. This may throw a SerialException
if the port cannot be opened."""
if self._port is None:
raise SerialException("Port must be configured before it can be used.")
if self._isOpen:
raise SerialException("Port is already open.")
self.fd = None
# open
try:
self.fd = os.open(self.portstr, os.O_RDWR|os.O_NOCTTY|os.O_NONBLOCK)
except IOError, msg:
self.fd = None
raise SerialException(msg.errno, "could not open port %s: %s" % (self._port, msg))
#~ fcntl.fcntl(self.fd, FCNTL.F_SETFL, 0) # set blocking
try:
self._reconfigurePort()
except:
try:
os.close(self.fd)
except:
# ignore any exception when closing the port
# also to keep original exception that happened when setting up
pass
self.fd = None
raise
else:
self._isOpen = True
self.flushInput()
def _reconfigurePort(self):
"""Set communication parameters on opened port."""
if self.fd is None:
raise SerialException("Can only operate on a valid file descriptor")
custom_baud = None
vmin = vtime = 0 # timeout is done via select
if self._interCharTimeout is not None:
vmin = 1
vtime = int(self._interCharTimeout * 10)
try:
orig_attr = termios.tcgetattr(self.fd)
iflag, oflag, cflag, lflag, ispeed, ospeed, cc = orig_attr
except termios.error, msg: # if a port is nonexistent but has a /dev file, it'll fail here
raise SerialException("Could not configure port: %s" % msg)
# set up raw mode / no echo / binary
cflag |= (TERMIOS.CLOCAL|TERMIOS.CREAD)
lflag &= ~(TERMIOS.ICANON|TERMIOS.ECHO|TERMIOS.ECHOE|TERMIOS.ECHOK|TERMIOS.ECHONL|
TERMIOS.ISIG|TERMIOS.IEXTEN) #|TERMIOS.ECHOPRT
for flag in ('ECHOCTL', 'ECHOKE'): # netbsd workaround for Erk
if hasattr(TERMIOS, flag):
lflag &= ~getattr(TERMIOS, flag)
oflag &= ~(TERMIOS.OPOST)
iflag &= ~(TERMIOS.INLCR|TERMIOS.IGNCR|TERMIOS.ICRNL|TERMIOS.IGNBRK)
if hasattr(TERMIOS, 'IUCLC'):
iflag &= ~TERMIOS.IUCLC
if hasattr(TERMIOS, 'PARMRK'):
iflag &= ~TERMIOS.PARMRK
# setup baud rate
try:
ispeed = ospeed = getattr(TERMIOS, 'B%s' % (self._baudrate))
except AttributeError:
try:
ispeed = ospeed = baudrate_constants[self._baudrate]
except KeyError:
#~ raise ValueError('Invalid baud rate: %r' % self._baudrate)
# may need custom baud rate, it isn't in our list.
ispeed = ospeed = getattr(TERMIOS, 'B38400')
try:
custom_baud = int(self._baudrate) # store for later
except ValueError:
raise ValueError('Invalid baud rate: %r' % self._baudrate)
else:
if custom_baud < 0:
raise ValueError('Invalid baud rate: %r' % self._baudrate)
# setup char len
cflag &= ~TERMIOS.CSIZE
if self._bytesize == 8:
cflag |= TERMIOS.CS8
elif self._bytesize == 7:
cflag |= TERMIOS.CS7
elif self._bytesize == 6:
cflag |= TERMIOS.CS6
elif self._bytesize == 5:
cflag |= TERMIOS.CS5
else:
raise ValueError('Invalid char len: %r' % self._bytesize)
# setup stopbits
if self._stopbits == STOPBITS_ONE:
cflag &= ~(TERMIOS.CSTOPB)
elif self._stopbits == STOPBITS_ONE_POINT_FIVE:
cflag |= (TERMIOS.CSTOPB) # XXX same as TWO.. there is no POSIX support for 1.5
elif self._stopbits == STOPBITS_TWO:
cflag |= (TERMIOS.CSTOPB)
else:
raise ValueError('Invalid stop bit specification: %r' % self._stopbits)
# setup parity
iflag &= ~(TERMIOS.INPCK|TERMIOS.ISTRIP)
if self._parity == PARITY_NONE:
cflag &= ~(TERMIOS.PARENB|TERMIOS.PARODD)
elif self._parity == PARITY_EVEN:
cflag &= ~(TERMIOS.PARODD)
cflag |= (TERMIOS.PARENB)
elif self._parity == PARITY_ODD:
cflag |= (TERMIOS.PARENB|TERMIOS.PARODD)
else:
raise ValueError('Invalid parity: %r' % self._parity)
# setup flow control
# xonxoff
if hasattr(TERMIOS, 'IXANY'):
if self._xonxoff:
iflag |= (TERMIOS.IXON|TERMIOS.IXOFF) #|TERMIOS.IXANY)
else:
iflag &= ~(TERMIOS.IXON|TERMIOS.IXOFF|TERMIOS.IXANY)
else:
if self._xonxoff:
iflag |= (TERMIOS.IXON|TERMIOS.IXOFF)
else:
iflag &= ~(TERMIOS.IXON|TERMIOS.IXOFF)
# rtscts
if hasattr(TERMIOS, 'CRTSCTS'):
if self._rtscts:
cflag |= (TERMIOS.CRTSCTS)
else:
cflag &= ~(TERMIOS.CRTSCTS)
elif hasattr(TERMIOS, 'CNEW_RTSCTS'): # try it with alternate constant name
if self._rtscts:
cflag |= (TERMIOS.CNEW_RTSCTS)
else:
cflag &= ~(TERMIOS.CNEW_RTSCTS)
# XXX should there be a warning if setting up rtscts (and xonxoff etc) fails??
# buffer
# vmin "minimal number of characters to be read. 0 for non blocking"
if vmin < 0 or vmin > 255:
raise ValueError('Invalid vmin: %r ' % vmin)
cc[TERMIOS.VMIN] = vmin
# vtime
if vtime < 0 or vtime > 255:
raise ValueError('Invalid vtime: %r' % vtime)
cc[TERMIOS.VTIME] = vtime
# activate settings
if [iflag, oflag, cflag, lflag, ispeed, ospeed, cc] != orig_attr:
termios.tcsetattr(self.fd, TERMIOS.TCSANOW, [iflag, oflag, cflag, lflag, ispeed, ospeed, cc])
# apply custom baud rate, if any
if custom_baud is not None:
set_special_baudrate(self, custom_baud)
def close(self):
"""Close port"""
if self._isOpen:
if self.fd is not None:
os.close(self.fd)
self.fd = None
self._isOpen = False
def makeDeviceName(self, port):
return device(port)
# - - - - - - - - - - - - - - - - - - - - - - - -
def inWaiting(self):
"""Return the number of characters currently in the input buffer."""
#~ s = fcntl.ioctl(self.fd, TERMIOS.FIONREAD, TIOCM_zero_str)
s = fcntl.ioctl(self.fd, TIOCINQ, TIOCM_zero_str)
return struct.unpack('I',s)[0]
# select based implementation, proved to work on many systems
def read(self, size=1):
"""Read size bytes from the serial port. If a timeout is set it may
return less characters as requested. With no timeout it will block
until the requested number of bytes is read."""
if not self._isOpen: raise portNotOpenError
read = bytearray()
while len(read) < size:
try:
ready,_,_ = select.select([self.fd],[],[], self._timeout)
# If select was used with a timeout, and the timeout occurs, it
# returns with empty lists -> thus abort read operation.
# For timeout == 0 (non-blocking operation) also abort when there
# is nothing to read.
if not ready:
break # timeout
buf = os.read(self.fd, size-len(read))
# read should always return some data as select reported it was
# ready to read when we get to this point.
if not buf:
# Disconnected devices, at least on Linux, show the
# behavior that they are always ready to read immediately
# but reading returns nothing.
raise SerialException('device reports readiness to read but returned no data (device disconnected or multiple access on port?)')
read.extend(buf)
except select.error, e:
# ignore EAGAIN errors. all other errors are shown
# see also http://www.python.org/dev/peps/pep-3151/#select
if e[0] != errno.EAGAIN:
raise SerialException('read failed: %s' % (e,))
except OSError, e:
# ignore EAGAIN errors. all other errors are shown
if e.errno != errno.EAGAIN:
raise SerialException('read failed: %s' % (e,))
return bytes(read)
def write(self, data):
"""Output the given string over the serial port."""
if not self._isOpen: raise portNotOpenError
d = to_bytes(data)
tx_len = len(d)
if self._writeTimeout is not None and self._writeTimeout > 0:
timeout = time.time() + self._writeTimeout
else:
timeout = None
while tx_len > 0:
try:
n = os.write(self.fd, d)
if timeout:
# when timeout is set, use select to wait for being ready
# with the time left as timeout
timeleft = timeout - time.time()
if timeleft < 0:
raise writeTimeoutError
_, ready, _ = select.select([], [self.fd], [], timeleft)
if not ready:
raise writeTimeoutError
else:
# wait for write operation
_, ready, _ = select.select([], [self.fd], [], None)
if not ready:
raise SerialException('write failed (select)')
d = d[n:]
tx_len -= n
except OSError, v:
if v.errno != errno.EAGAIN:
raise SerialException('write failed: %s' % (v,))
return len(data)
def flush(self):
"""Flush of file like objects. In this case, wait until all data
is written."""
self.drainOutput()
def flushInput(self):
"""Clear input buffer, discarding all that is in the buffer."""
if not self._isOpen: raise portNotOpenError
termios.tcflush(self.fd, TERMIOS.TCIFLUSH)
def flushOutput(self):
"""Clear output buffer, aborting the current output and
discarding all that is in the buffer."""
if not self._isOpen: raise portNotOpenError
termios.tcflush(self.fd, TERMIOS.TCOFLUSH)
def sendBreak(self, duration=0.25):
"""Send break condition. Timed, returns to idle state after given duration."""
if not self._isOpen: raise portNotOpenError
termios.tcsendbreak(self.fd, int(duration/0.25))
def setBreak(self, level=1):
"""Set break: Controls TXD. When active, no transmitting is possible."""
if self.fd is None: raise portNotOpenError
if level:
fcntl.ioctl(self.fd, TIOCSBRK)
else:
fcntl.ioctl(self.fd, TIOCCBRK)
def setRTS(self, level=1):
"""Set terminal status line: Request To Send"""
if not self._isOpen: raise portNotOpenError
if level:
fcntl.ioctl(self.fd, TIOCMBIS, TIOCM_RTS_str)
else:
fcntl.ioctl(self.fd, TIOCMBIC, TIOCM_RTS_str)
def setDTR(self, level=1):
"""Set terminal status line: Data Terminal Ready"""
if not self._isOpen: raise portNotOpenError
if level:
fcntl.ioctl(self.fd, TIOCMBIS, TIOCM_DTR_str)
else:
fcntl.ioctl(self.fd, TIOCMBIC, TIOCM_DTR_str)
def getCTS(self):
"""Read terminal status line: Clear To Send"""
if not self._isOpen: raise portNotOpenError
s = fcntl.ioctl(self.fd, TIOCMGET, TIOCM_zero_str)
return struct.unpack('I',s)[0] & TIOCM_CTS != 0
def getDSR(self):
"""Read terminal status line: Data Set Ready"""
if not self._isOpen: raise portNotOpenError
s = fcntl.ioctl(self.fd, TIOCMGET, TIOCM_zero_str)
return struct.unpack('I',s)[0] & TIOCM_DSR != 0
def getRI(self):
"""Read terminal status line: Ring Indicator"""
if not self._isOpen: raise portNotOpenError
s = fcntl.ioctl(self.fd, TIOCMGET, TIOCM_zero_str)
return struct.unpack('I',s)[0] & TIOCM_RI != 0
def getCD(self):
"""Read terminal status line: Carrier Detect"""
if not self._isOpen: raise portNotOpenError
s = fcntl.ioctl(self.fd, TIOCMGET, TIOCM_zero_str)
return struct.unpack('I',s)[0] & TIOCM_CD != 0
# - - platform specific - - - -
def outWaiting(self):
"""Return the number of characters currently in the output buffer."""
#~ s = fcntl.ioctl(self.fd, TERMIOS.FIONREAD, TIOCM_zero_str)
s = fcntl.ioctl(self.fd, TIOCOUTQ, TIOCM_zero_str)
return struct.unpack('I',s)[0]
def drainOutput(self):
"""internal - not portable!"""
if not self._isOpen: raise portNotOpenError
termios.tcdrain(self.fd)
def nonblocking(self):
"""internal - not portable!"""
if not self._isOpen: raise portNotOpenError
fcntl.fcntl(self.fd, FCNTL.F_SETFL, os.O_NONBLOCK)
def fileno(self):
"""\
For easier use of the serial port instance with select.
WARNING: this function is not portable to different platforms!
"""
if not self._isOpen: raise portNotOpenError
return self.fd
def setXON(self, level=True):
"""\
Manually control flow - when software flow control is enabled.
This will send XON (true) and XOFF (false) to the other device.
WARNING: this function is not portable to different platforms!
"""
if not self._isOpen: raise portNotOpenError
if level:
termios.tcflow(self.fd, TERMIOS.TCION)
else:
termios.tcflow(self.fd, TERMIOS.TCIOFF)
def flowControlOut(self, enable):
"""\
Manually control flow of outgoing data - when hardware or software flow
control is enabled.
WARNING: this function is not portable to different platforms!
"""
if not self._isOpen: raise portNotOpenError
if enable:
termios.tcflow(self.fd, TERMIOS.TCOON)
else:
termios.tcflow(self.fd, TERMIOS.TCOOFF)
# assemble Serial class with the platform specific implementation and the base
# for file-like behavior. for Python 2.6 and newer, that provide the new I/O
# library, derive from io.RawIOBase
try:
import io
except ImportError:
# classic version with our own file-like emulation
class Serial(PosixSerial, FileLike):
pass
else:
# io library present
class Serial(PosixSerial, io.RawIOBase):
pass
class PosixPollSerial(Serial):
"""poll based read implementation. not all systems support poll properly.
however this one has better handling of errors, such as a device
disconnecting while it's in use (e.g. USB-serial unplugged)"""
def read(self, size=1):
"""Read size bytes from the serial port. If a timeout is set it may
return less characters as requested. With no timeout it will block
until the requested number of bytes is read."""
if self.fd is None: raise portNotOpenError
read = bytearray()
poll = select.poll()
poll.register(self.fd, select.POLLIN|select.POLLERR|select.POLLHUP|select.POLLNVAL)
if size > 0:
while len(read) < size:
# print "\tread(): size",size, "have", len(read) #debug
# wait until device becomes ready to read (or something fails)
for fd, event in poll.poll(self._timeout * 1000 if self._timeout is not None else None):
if event & (select.POLLERR|select.POLLHUP|select.POLLNVAL):
raise SerialException('device reports error (poll)')
# we don't care if it is select.POLLIN or timeout, that's
# handled below
buf = os.read(self.fd, size - len(read))
read.extend(buf)
if ((self._timeout is not None and self._timeout >= 0) or
(self._interCharTimeout is not None and self._interCharTimeout > 0)) and not buf:
break # early abort on timeout
return bytes(read)
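The poll loop above can be exercised without a serial device, since any file descriptor works. A minimal standalone sketch of the same wait, with the same error-event handling (the function name is illustrative):

```python
import os
import select

def wait_readable(fd, timeout_s):
    """Wait until fd is readable or timeout_s elapses, reporting error
    events the same way PosixPollSerial.read() does."""
    p = select.poll()
    p.register(fd, select.POLLIN | select.POLLERR | select.POLLHUP | select.POLLNVAL)
    # poll() takes its timeout in milliseconds; None blocks forever
    events = p.poll(None if timeout_s is None else timeout_s * 1000)
    for _, event in events:
        if event & (select.POLLERR | select.POLLHUP | select.POLLNVAL):
            raise IOError('device reports error (poll)')
    return bool(events)
```

On a pipe, for example, `wait_readable` returns `False` on an empty pipe with a zero timeout and `True` once a byte has been written.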
if __name__ == '__main__':
s = Serial(0,
baudrate=19200, # baud rate
bytesize=EIGHTBITS, # number of data bits
parity=PARITY_EVEN, # enable parity checking
stopbits=STOPBITS_ONE, # number of stop bits
timeout=3, # set a timeout value, None for waiting forever
xonxoff=0, # enable software flow control
rtscts=0, # enable RTS/CTS flow control
)
s.setRTS(1)
s.setDTR(1)
s.flushInput()
s.flushOutput()
s.write('hello')
sys.stdout.write('%r\n' % s.read(5))
sys.stdout.write('%s\n' % s.inWaiting())
del s | unknown | codeparrot/codeparrot-clean | ||
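The demo above exercises the blocking API; the deadline handling inside `write()` is easier to see in isolation. A stripped-down sketch of the same loop over a plain file descriptor (the function name is invented; a real port would also need the EAGAIN handling shown above):

```python
import os
import select
import time

def write_all(fd, data, timeout=None):
    """Write data completely, waiting with select and raising once the
    overall deadline passes - the same shape as PosixSerial.write()."""
    deadline = None if timeout is None else time.time() + timeout
    view = memoryview(data)
    while view:
        n = os.write(fd, view)
        view = view[n:]
        if not view:
            break
        timeleft = None if deadline is None else deadline - time.time()
        if timeleft is not None and timeleft < 0:
            raise IOError('write timeout')
        _, ready, _ = select.select([], [fd], [], timeleft)
        if not ready:
            raise IOError('write timeout')
    return len(data)
```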
"""
Python shell for Diofant.
This is just a normal Python shell (IPython shell if you have the
IPython package installed), that adds default imports and run
some initialization code.
"""
import argparse
import ast
import atexit
import code
import os
import readline
from diofant.interactive.session import (AutomaticSymbols,
IntegerDivisionWrapper)
__all__ = ()
parser = argparse.ArgumentParser(description=__doc__,
prog='python -m diofant')
parser.add_argument('--no-wrap-division',
help="Don't wrap integer divisions with Fraction",
action='store_true')
parser.add_argument('-a', '--auto-symbols',
help="Automatically create missing Symbol's",
action='store_true')
parser.add_argument('--no-ipython', help="Don't use IPython",
action='store_true')
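`parse_known_args()` is what lets unrecognized flags fall through to IPython in `main()` below: Diofant consumes its own options and forwards everything else as `ipython_args`. In isolation:

```python
import argparse

p = argparse.ArgumentParser()
p.add_argument('-a', '--auto-symbols', action='store_true')
# known holds the options this parser understands; rest is passed through
known, rest = p.parse_known_args(['--auto-symbols', '--colors=NoColor'])
```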
def main():
args, ipython_args = parser.parse_known_args()
lines = ['from diofant import *',
'init_printing()',
"x, y, z, t = symbols('x y z t')",
"k, m, n = symbols('k m n', integer=True)",
"f, g, h = symbols('f g h', cls=Function)",
'init_printing(pretty_print=True, use_unicode=True)']
try:
import IPython
import traitlets
except ImportError:
args.no_ipython = True
if not args.no_ipython:
config = traitlets.config.loader.Config()
shell = config.InteractiveShell
ast_transformers = shell.ast_transformers
if not args.no_wrap_division:
ast_transformers.append(IntegerDivisionWrapper())
shell.confirm_exit = False
config.TerminalIPythonApp.display_banner = False
app = IPython.terminal.ipapp.TerminalIPythonApp.instance(config=config)
app.initialize(ipython_args)
shell = app.shell
for l in lines:
shell.run_cell(l, silent=True)
if args.auto_symbols:
shell.run_cell('from diofant.interactive.session import AutomaticSymbols')
shell.run_cell('ip = get_ipython()')
shell.run_cell('ip.ast_transformers.append(AutomaticSymbols(ip.user_ns))')
shell.run_cell('del ip')
app.start()
else:
ast_transformers = []
ns = {}
if not args.no_wrap_division:
ast_transformers.append(IntegerDivisionWrapper())
if args.auto_symbols:
ast_transformers.append(AutomaticSymbols(ns))
class DiofantConsole(code.InteractiveConsole):
"""An interactive console with readline support."""
def __init__(self, ast_transformers=None, **kwargs):
    super().__init__(**kwargs)
    readline.parse_and_bind('tab: complete')
    history = os.path.expanduser('~/.python_history')
    try:
        readline.read_history_file(history)
    except OSError:
        pass  # no history file yet on the first run
    atexit.register(readline.write_history_file, history)
    self.ast_transformers = ast_transformers if ast_transformers is not None else []
def runsource(self, source, filename='<input>', symbol='single'):
try:
tree = ast.parse(source)
except SyntaxError:
return True
for t in self.ast_transformers:
tree = t.visit(tree)
ast.fix_missing_locations(tree)
source = ast.unparse(tree)
source = source.split('\n')
source = ';'.join(source)
return super().runsource(source, filename=filename, symbol=symbol)
c = DiofantConsole(ast_transformers=ast_transformers, locals=ns)
for l in lines:
c.push(l)
c.interact('', '')
if __name__ == '__main__': # pragma: no branch
main() | unknown | codeparrot/codeparrot-clean | ||
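The `ast_transformers` hook above rewrites source before execution; `IntegerDivisionWrapper`, for instance, turns true division into exact rationals. A self-contained analogue using the standard library's `Fraction` (the transformer below is a toy for illustration, not Diofant's actual implementation):

```python
import ast
from fractions import Fraction

class DivisionToFraction(ast.NodeTransformer):
    """Rewrite every a / b into Fraction(a, b)."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # rewrite nested divisions first
        if isinstance(node.op, ast.Div):
            return ast.copy_location(
                ast.Call(func=ast.Name(id='Fraction', ctx=ast.Load()),
                         args=[node.left, node.right], keywords=[]),
                node)
        return node

def eval_exact(expr):
    """Parse, transform and evaluate a single expression exactly."""
    tree = DivisionToFraction().visit(ast.parse(expr, mode='eval'))
    ast.fix_missing_locations(tree)
    return eval(compile(tree, '<input>', 'eval'), {'Fraction': Fraction})
```

With this, `eval_exact('1/3 + 1/6')` evaluates to `Fraction(1, 2)` instead of an inexact float.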
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Debug estimators.
Debug estimators are bias-only estimators that can be used for debugging
and as simple baselines.
Example:
```
# Build DebugClassifier
classifier = DebugClassifier()
# Input builders
def input_fn_train(): # returns x, y (where y represents label's class index).
pass
def input_fn_eval(): # returns x, y (where y represents label's class index).
pass
# Fit model.
classifier.fit(input_fn=input_fn_train)
# Evaluate cross entropy between the test and train labels.
loss = classifier.evaluate(input_fn=input_fn_eval)["loss"]
# predict_classes outputs the most commonly seen class in training.
predicted_label = classifier.predict_classes(new_samples)
# predict_proba outputs the class distribution from training.
label_distribution = classifier.predict_proba(new_samples)
```
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.contrib.layers.python.layers import optimizers
from tensorflow.contrib.learn.python.learn.estimators import estimator
from tensorflow.contrib.learn.python.learn.estimators import head as head_lib
from tensorflow.contrib.learn.python.learn.estimators import prediction_key
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import check_ops
def _get_feature_dict(features):
if isinstance(features, dict):
return features
return {"": features}
def debug_model_fn(features, labels, mode, params, config=None):
"""Model_fn for debug models.
Args:
features: `Tensor` or dict of `Tensor` (depends on data passed to `fit`).
labels: Labels that are compatible with the `_Head` instance in `params`.
mode: Defines whether this is training, evaluation or prediction.
See `ModeKeys`.
params: A dict of hyperparameters containing:
* head: A `_Head` instance.
config: `RunConfig` object to configure the runtime settings.
Raises:
KeyError: If weight column is specified but not present.
ValueError: If features is an empty dictionary.
Returns:
A `ModelFnOps` instance.
"""
del config # Unused.
features = _get_feature_dict(features)
if not features:
raise ValueError("Features cannot be empty.")
head = params["head"]
size_checks = []
batch_size = None
# The first dimension is assumed to be a batch size and must be consistent
# among all of the features.
for feature in features.values():
first_dim = array_ops.shape(feature)[0]
if batch_size is None:
batch_size = first_dim
else:
size_checks.append(check_ops.assert_equal(batch_size, first_dim))
with ops.control_dependencies(size_checks):
logits = array_ops.zeros([batch_size, head.logits_dimension])
def train_op_fn(loss):
return optimizers.optimize_loss(
loss, global_step=None, learning_rate=0.3, optimizer="Adagrad")
return head.create_model_fn_ops(
features=features,
labels=labels,
mode=mode,
train_op_fn=train_op_fn,
logits=logits)
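`debug_model_fn` above builds the equality asserts as graph ops; the invariant itself is simple: every feature tensor must agree on its first (batch) dimension. A TensorFlow-free sketch of that check (the helper name is invented):

```python
def consistent_batch_size(features):
    """Return the shared first dimension of all features, raising on
    mismatch - the eager-mode analogue of the size_checks above."""
    if not features:
        raise ValueError('Features cannot be empty.')
    batch_size = None
    for name, value in features.items():
        first_dim = len(value)
        if batch_size is None:
            batch_size = first_dim
        elif first_dim != batch_size:
            raise ValueError('feature %r has first dimension %d, expected %d'
                             % (name, first_dim, batch_size))
    return batch_size
```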
class DebugClassifier(estimator.Estimator):
"""A classifier for TensorFlow Debug models.
Example:
```python
# Build DebugClassifier
classifier = DebugClassifier()
# Input builders
def input_fn_train(): # returns x, y (where y represents label's class index).
pass
def input_fn_eval(): # returns x, y (where y represents label's class index).
pass
# Fit model.
classifier.fit(input_fn=input_fn_train)
# Evaluate cross entropy between the test and train labels.
loss = classifier.evaluate(input_fn=input_fn_eval)["loss"]
# predict_class outputs the most commonly seen class in training.
predicted_label = classifier.predict_class(new_samples)
# predict_proba outputs the class distribution from training.
label_distribution = classifier.predict_proba(new_samples)
```
Input of `fit` and `evaluate` should have following features,
otherwise there will be a `KeyError`:
* if `weight_column_name` is not `None`, a feature with
`key=weight_column_name` whose value is a `Tensor`.
"""
def __init__(self,
model_dir=None,
n_classes=2,
weight_column_name=None,
config=None,
feature_engineering_fn=None,
label_keys=None):
"""Initializes a DebugClassifier instance.
Args:
model_dir: Directory to save model parameters, graph and etc. This can
also be used to load checkpoints from the directory into a estimator to
continue training a previously saved model.
n_classes: number of label classes. Default is binary classification.
It must be greater than 1. Note: Class labels are integers representing
the class index (i.e. values from 0 to n_classes-1). For arbitrary
label values (e.g. string labels), convert to class indices first.
weight_column_name: A string defining feature column name representing
weights. It is used to down weight or boost examples during training. It
will be multiplied by the loss of the example.
config: `RunConfig` object to configure the runtime settings.
feature_engineering_fn: Feature engineering function. Takes features and
labels which are the output of `input_fn` and returns
features and labels which will be fed into the model.
label_keys: Optional list of strings with size `[n_classes]` defining the
label vocabulary. Only supported for `n_classes` > 2.
Returns:
A `DebugClassifier` estimator.
Raises:
ValueError: If `n_classes` < 2.
"""
params = {"head":
head_lib.multi_class_head(
n_classes=n_classes,
weight_column_name=weight_column_name,
enable_centered_bias=True,
label_keys=label_keys)}
super(DebugClassifier, self).__init__(
model_fn=debug_model_fn,
model_dir=model_dir,
config=config,
params=params,
feature_engineering_fn=feature_engineering_fn)
def predict_classes(self, input_fn=None, batch_size=None):
"""Returns predicted classes for given features.
Args:
input_fn: Input function.
batch_size: Override default batch size.
Returns:
An iterable of predicted classes. Each predicted class is represented by
its class index (i.e. integer from 0 to n_classes-1).
"""
key = prediction_key.PredictionKey.CLASSES
preds = self.predict(
input_fn=input_fn, batch_size=batch_size, outputs=[key])
return (pred[key] for pred in preds)
def predict_proba(self,
input_fn=None,
batch_size=None):
"""Returns prediction probabilities for given features.
Args:
input_fn: Input function.
batch_size: Override default batch size.
Returns:
An iterable of predicted probabilities with shape [batch_size, n_classes].
"""
key = prediction_key.PredictionKey.PROBABILITIES
preds = self.predict(
input_fn=input_fn,
batch_size=batch_size,
outputs=[key])
return (pred[key] for pred in preds)
class DebugRegressor(estimator.Estimator):
"""A regressor for TensorFlow Debug models.
Example:
```python
# Build DebugRegressor
regressor = DebugRegressor()
# Input builders
def input_fn_train(): # returns x, y (where y represents label's class index).
pass
def input_fn_eval(): # returns x, y (where y represents label's class index).
pass
# Fit model.
regressor.fit(input_fn=input_fn_train)
# Evaluate squared-loss between the test and train targets.
loss = regressor.evaluate(input_fn=input_fn_eval)["loss"]
# predict_scores outputs mean value seen during training.
predicted_targets = regressor.predict_scores(new_samples)
```
Input of `fit` and `evaluate` should have following features,
otherwise there will be a `KeyError`:
* if `weight_column_name` is not `None`, a feature with
`key=weight_column_name` whose value is a `Tensor`.
"""
def __init__(self,
model_dir=None,
label_dimension=1,
weight_column_name=None,
config=None,
feature_engineering_fn=None):
"""Initializes a DebugRegressor instance.
Args:
model_dir: Directory to save model parameters, graph and etc. This can
also be used to load checkpoints from the directory into a estimator to
continue training a previously saved model.
label_dimension: Number of regression targets per example. This is the
size of the last dimension of the labels and logits `Tensor` objects
(typically, these have shape `[batch_size, label_dimension]`).
weight_column_name: A string defining feature column name representing
weights. It is used to down weight or boost examples during training. It
will be multiplied by the loss of the example.
config: `RunConfig` object to configure the runtime settings.
feature_engineering_fn: Feature engineering function. Takes features and
labels which are the output of `input_fn` and returns
features and labels which will be fed into the model.
Returns:
A `DebugRegressor` estimator.
"""
params = {
"head":
head_lib.regression_head(
weight_column_name=weight_column_name,
label_dimension=label_dimension,
enable_centered_bias=True)
}
super(DebugRegressor, self).__init__(
model_fn=debug_model_fn,
model_dir=model_dir,
config=config,
params=params,
feature_engineering_fn=feature_engineering_fn)
def predict_scores(self, input_fn=None, batch_size=None):
"""Returns predicted scores for given features.
Args:
input_fn: Input function.
batch_size: Override default batch size.
Returns:
An iterable of predicted scores.
"""
key = prediction_key.PredictionKey.SCORES
preds = self.predict(
input_fn=input_fn, batch_size=batch_size, outputs=[key])
return (pred[key] for pred in preds) | unknown | codeparrot/codeparrot-clean | ||
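The `predict_proba` and `predict_scores` helpers above all follow the same pattern: call `predict` requesting a single output key, then yield just that key from each per-example result dict. A minimal standalone sketch of that pattern (plain Python, no TensorFlow; the stand-in `predict` and its fake results are hypothetical, for illustration only):

```python
# Standalone sketch of the extract-one-key prediction pattern used by
# predict_proba / predict_scores above: predict() yields one dict per
# example, and the wrapper yields only the requested output key.

def predict(outputs):
    # Stand-in for Estimator.predict(): yields per-example dicts
    # restricted to the requested output keys (hypothetical data).
    fake_results = [
        {"scores": 0.1, "probabilities": [0.9, 0.1]},
        {"scores": 0.7, "probabilities": [0.2, 0.8]},
    ]
    for row in fake_results:
        yield {k: row[k] for k in outputs}

def predict_scores():
    key = "scores"
    preds = predict(outputs=[key])
    # Return a generator so results stream one example at a time.
    return (pred[key] for pred in preds)

print(list(predict_scores()))  # -> [0.1, 0.7]
```

Returning a generator rather than a list keeps prediction streaming: each example's dict is produced and unwrapped lazily, which matters when the input function yields large batches.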
# -*- coding: utf-8 -*-
###############################################################################
#
# ResetPassword
# Resets a user's password to a new randomized password.
#
# Python versions 2.6, 2.7, 3.x
#
# Copyright 2014, Temboo Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
#
#
###############################################################################
from temboo.core.choreography import Choreography
from temboo.core.choreography import InputSet
from temboo.core.choreography import ResultSet
from temboo.core.choreography import ChoreographyExecution
import json
class ResetPassword(Choreography):
def __init__(self, temboo_session):
"""
Create a new instance of the ResetPassword Choreo. A TembooSession object, containing a valid
set of Temboo credentials, must be supplied.
"""
super(ResetPassword, self).__init__(temboo_session, '/Library/Salesforce/Passwords/ResetPassword')
def new_input_set(self):
return ResetPasswordInputSet()
def _make_result_set(self, result, path):
return ResetPasswordResultSet(result, path)
def _make_execution(self, session, exec_id, path):
return ResetPasswordChoreographyExecution(session, exec_id, path)
class ResetPasswordInputSet(InputSet):
"""
An InputSet with methods appropriate for specifying the inputs to the ResetPassword
Choreo. The InputSet object is used to specify input parameters when executing this Choreo.
"""
def set_AccessToken(self, value):
"""
Set the value of the AccessToken input for this Choreo. ((optional, string) A valid access token retrieved during the OAuth process. This is required unless you provide the ClientID, ClientSecret, and RefreshToken to generate a new access token.)
"""
super(ResetPasswordInputSet, self)._set_input('AccessToken', value)
def set_ClientID(self, value):
"""
Set the value of the ClientID input for this Choreo. ((conditional, string) The Client ID provided by Salesforce. Required unless providing a valid AccessToken.)
"""
super(ResetPasswordInputSet, self)._set_input('ClientID', value)
def set_ClientSecret(self, value):
"""
Set the value of the ClientSecret input for this Choreo. ((conditional, string) The Client Secret provided by Salesforce. Required unless providing a valid AccessToken.)
"""
super(ResetPasswordInputSet, self)._set_input('ClientSecret', value)
def set_ID(self, value):
"""
Set the value of the ID input for this Choreo. ((required, string) The ID of the user whose password you are resetting.)
"""
super(ResetPasswordInputSet, self)._set_input('ID', value)
def set_InstanceName(self, value):
"""
Set the value of the InstanceName input for this Choreo. ((required, string) The server URL prefix that indicates which instance your Salesforce account is on (e.g. na1, na2, na3, etc).)
"""
super(ResetPasswordInputSet, self)._set_input('InstanceName', value)
def set_RefreshToken(self, value):
"""
Set the value of the RefreshToken input for this Choreo. ((conditional, string) An OAuth Refresh Token used to generate a new access token when the original token is expired. Required unless providing a valid AccessToken.)
"""
super(ResetPasswordInputSet, self)._set_input('RefreshToken', value)
def set_ResponseFormat(self, value):
"""
Set the value of the ResponseFormat input for this Choreo. ((optional, string) The format that the response should be in. Valid values are: json (the default) and xml.)
"""
super(ResetPasswordInputSet, self)._set_input('ResponseFormat', value)
class ResetPasswordResultSet(ResultSet):
"""
A ResultSet with methods tailored to the values returned by the ResetPassword Choreo.
The ResultSet object is used to retrieve the results of a Choreo execution.
"""
def getJSONFromString(self, str):
return json.loads(str)
def get_Response(self):
"""
Retrieve the value for the "Response" output from this Choreo execution. (The response from Salesforce.)
"""
return self._output.get('Response', None)
def get_NewAccessToken(self):
"""
Retrieve the value for the "NewAccessToken" output from this Choreo execution. ((string) Contains a new AccessToken when the RefreshToken is provided.)
"""
return self._output.get('NewAccessToken', None)
def get_NewPassword(self):
"""
Retrieve the value for the "NewPassword" output from this Choreo execution. ((string) New password returned from Salesforce.)
"""
return self._output.get('NewPassword', None)
class ResetPasswordChoreographyExecution(ChoreographyExecution):
def _make_result_set(self, response, path):
return ResetPasswordResultSet(response, path) | unknown | codeparrot/codeparrot-clean | ||
/*-------------------------------------------------------------------------
*
* postinit.c
* postgres initialization utilities
*
* Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
*
* IDENTIFICATION
* src/backend/utils/init/postinit.c
*
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include <ctype.h>
#include <fcntl.h>
#include <unistd.h>
#include "access/genam.h"
#include "access/heapam.h"
#include "access/htup_details.h"
#include "access/session.h"
#include "access/tableam.h"
#include "access/xact.h"
#include "access/xlog.h"
#include "access/xloginsert.h"
#include "catalog/namespace.h"
#include "catalog/pg_authid.h"
#include "catalog/pg_collation.h"
#include "catalog/pg_database.h"
#include "catalog/pg_db_role_setting.h"
#include "catalog/pg_tablespace.h"
#include "libpq/auth.h"
#include "libpq/libpq-be.h"
#include "mb/pg_wchar.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "postmaster/autovacuum.h"
#include "postmaster/postmaster.h"
#include "replication/slot.h"
#include "replication/slotsync.h"
#include "replication/walsender.h"
#include "storage/aio_subsys.h"
#include "storage/bufmgr.h"
#include "storage/fd.h"
#include "storage/ipc.h"
#include "storage/lmgr.h"
#include "storage/proc.h"
#include "storage/procarray.h"
#include "storage/procnumber.h"
#include "storage/procsignal.h"
#include "storage/sinvaladt.h"
#include "storage/smgr.h"
#include "storage/sync.h"
#include "tcop/backend_startup.h"
#include "tcop/tcopprot.h"
#include "utils/acl.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/guc_hooks.h"
#include "utils/injection_point.h"
#include "utils/memutils.h"
#include "utils/pg_locale.h"
#include "utils/portal.h"
#include "utils/ps_status.h"
#include "utils/snapmgr.h"
#include "utils/syscache.h"
#include "utils/timeout.h"
/* has this backend called EmitConnectionWarnings()? */
static bool ConnectionWarningsEmitted;
/* content of warnings to send via EmitConnectionWarnings() */
static List *ConnectionWarningMessages;
static List *ConnectionWarningDetails;
static HeapTuple GetDatabaseTuple(const char *dbname);
static HeapTuple GetDatabaseTupleByOid(Oid dboid);
static void PerformAuthentication(Port *port);
static void CheckMyDatabase(const char *name, bool am_superuser, bool override_allow_connections);
static void ShutdownPostgres(int code, Datum arg);
static void StatementTimeoutHandler(void);
static void LockTimeoutHandler(void);
static void IdleInTransactionSessionTimeoutHandler(void);
static void TransactionTimeoutHandler(void);
static void IdleSessionTimeoutHandler(void);
static void IdleStatsUpdateTimeoutHandler(void);
static void ClientCheckTimeoutHandler(void);
static bool ThereIsAtLeastOneRole(void);
static void process_startup_options(Port *port, bool am_superuser);
static void process_settings(Oid databaseid, Oid roleid);
static void EmitConnectionWarnings(void);
/*** InitPostgres support ***/
/*
* GetDatabaseTuple -- fetch the pg_database row for a database
*
* This is used during backend startup when we don't yet have any access to
* system catalogs in general. In the worst case, we can seqscan pg_database
* using nothing but the hard-wired descriptor that relcache.c creates for
* pg_database. In more typical cases, relcache.c was able to load
* descriptors for both pg_database and its indexes from the shared relcache
* cache file, and so we can do an indexscan. criticalSharedRelcachesBuilt
* tells whether we got the cached descriptors.
*/
static HeapTuple
GetDatabaseTuple(const char *dbname)
{
HeapTuple tuple;
Relation relation;
SysScanDesc scan;
ScanKeyData key[1];
/*
* form a scan key
*/
ScanKeyInit(&key[0],
Anum_pg_database_datname,
BTEqualStrategyNumber, F_NAMEEQ,
CStringGetDatum(dbname));
/*
* Open pg_database and fetch a tuple. Force heap scan if we haven't yet
* built the critical shared relcache entries (i.e., we're starting up
* without a shared relcache cache file).
*/
relation = table_open(DatabaseRelationId, AccessShareLock);
scan = systable_beginscan(relation, DatabaseNameIndexId,
criticalSharedRelcachesBuilt,
NULL,
1, key);
tuple = systable_getnext(scan);
/* Must copy tuple before releasing buffer */
if (HeapTupleIsValid(tuple))
tuple = heap_copytuple(tuple);
/* all done */
systable_endscan(scan);
table_close(relation, AccessShareLock);
return tuple;
}
/*
* GetDatabaseTupleByOid -- as above, but search by database OID
*/
static HeapTuple
GetDatabaseTupleByOid(Oid dboid)
{
HeapTuple tuple;
Relation relation;
SysScanDesc scan;
ScanKeyData key[1];
/*
* form a scan key
*/
ScanKeyInit(&key[0],
Anum_pg_database_oid,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(dboid));
/*
* Open pg_database and fetch a tuple. Force heap scan if we haven't yet
* built the critical shared relcache entries (i.e., we're starting up
* without a shared relcache cache file).
*/
relation = table_open(DatabaseRelationId, AccessShareLock);
scan = systable_beginscan(relation, DatabaseOidIndexId,
criticalSharedRelcachesBuilt,
NULL,
1, key);
tuple = systable_getnext(scan);
/* Must copy tuple before releasing buffer */
if (HeapTupleIsValid(tuple))
tuple = heap_copytuple(tuple);
/* all done */
systable_endscan(scan);
table_close(relation, AccessShareLock);
return tuple;
}
/*
* PerformAuthentication -- authenticate a remote client
*
* returns: nothing. Will not return at all if there's any failure.
*/
static void
PerformAuthentication(Port *port)
{
/* This should be set already, but let's make sure */
ClientAuthInProgress = true; /* limit visibility of log messages */
/*
* In EXEC_BACKEND case, we didn't inherit the contents of pg_hba.conf
* etcetera from the postmaster, and have to load them ourselves.
*
* FIXME: [fork/exec] Ugh. Is there a way around this overhead?
*/
#ifdef EXEC_BACKEND
/*
* load_hba() and load_ident() want to work within the PostmasterContext,
* so create that if it doesn't exist (which it won't). We'll delete it
* again later, in PostgresMain.
*/
if (PostmasterContext == NULL)
PostmasterContext = AllocSetContextCreate(TopMemoryContext,
"Postmaster",
ALLOCSET_DEFAULT_SIZES);
if (!load_hba())
{
/*
* It makes no sense to continue if we fail to load the HBA file,
* since there is no way to connect to the database in this case.
*/
ereport(FATAL,
/* translator: %s is a configuration file */
(errmsg("could not load %s", HbaFileName)));
}
if (!load_ident())
{
/*
* It is ok to continue if we fail to load the IDENT file, although it
* means that you cannot log in using any of the authentication
* methods that need a user name mapping. load_ident() already logged
* the details of the error to the log.
*/
}
#endif
/* Capture authentication start time for logging */
conn_timing.auth_start = GetCurrentTimestamp();
/*
* Set up a timeout in case a buggy or malicious client fails to respond
* during authentication. Since we're inside a transaction and might do
* database access, we have to use the statement_timeout infrastructure.
*/
enable_timeout_after(STATEMENT_TIMEOUT, AuthenticationTimeout * 1000);
/*
* Now perform authentication exchange.
*/
set_ps_display("authentication");
ClientAuthentication(port); /* might not return, if failure */
/*
* Done with authentication. Disable the timeout, and log if needed.
*/
disable_timeout(STATEMENT_TIMEOUT, false);
/* Capture authentication end time for logging */
conn_timing.auth_end = GetCurrentTimestamp();
if (log_connections & LOG_CONNECTION_AUTHORIZATION)
{
StringInfoData logmsg;
initStringInfo(&logmsg);
if (am_walsender)
appendStringInfo(&logmsg, _("replication connection authorized: user=%s"),
port->user_name);
else
appendStringInfo(&logmsg, _("connection authorized: user=%s"),
port->user_name);
if (!am_walsender)
appendStringInfo(&logmsg, _(" database=%s"), port->database_name);
if (port->application_name != NULL)
appendStringInfo(&logmsg, _(" application_name=%s"),
port->application_name);
#ifdef USE_SSL
if (port->ssl_in_use)
appendStringInfo(&logmsg, _(" SSL enabled (protocol=%s, cipher=%s, bits=%d)"),
be_tls_get_version(port),
be_tls_get_cipher(port),
be_tls_get_cipher_bits(port));
#endif
#ifdef ENABLE_GSS
if (port->gss)
{
const char *princ = be_gssapi_get_princ(port);
if (princ)
appendStringInfo(&logmsg,
_(" GSS (authenticated=%s, encrypted=%s, delegated_credentials=%s, principal=%s)"),
be_gssapi_get_auth(port) ? _("yes") : _("no"),
be_gssapi_get_enc(port) ? _("yes") : _("no"),
be_gssapi_get_delegation(port) ? _("yes") : _("no"),
princ);
else
appendStringInfo(&logmsg,
_(" GSS (authenticated=%s, encrypted=%s, delegated_credentials=%s)"),
be_gssapi_get_auth(port) ? _("yes") : _("no"),
be_gssapi_get_enc(port) ? _("yes") : _("no"),
be_gssapi_get_delegation(port) ? _("yes") : _("no"));
}
#endif
ereport(LOG, errmsg_internal("%s", logmsg.data));
pfree(logmsg.data);
}
set_ps_display("startup");
ClientAuthInProgress = false; /* client_min_messages is active now */
}
/*
* CheckMyDatabase -- fetch information from the pg_database entry for our DB
*/
static void
CheckMyDatabase(const char *name, bool am_superuser, bool override_allow_connections)
{
HeapTuple tup;
Form_pg_database dbform;
Datum datum;
bool isnull;
char *collate;
char *ctype;
/* Fetch our pg_database row normally, via syscache */
tup = SearchSysCache1(DATABASEOID, ObjectIdGetDatum(MyDatabaseId));
if (!HeapTupleIsValid(tup))
elog(ERROR, "cache lookup failed for database %u", MyDatabaseId);
dbform = (Form_pg_database) GETSTRUCT(tup);
/* This recheck is strictly paranoia */
if (strcmp(name, NameStr(dbform->datname)) != 0)
ereport(FATAL,
(errcode(ERRCODE_UNDEFINED_DATABASE),
errmsg("database \"%s\" has disappeared from pg_database",
name),
errdetail("Database OID %u now seems to belong to \"%s\".",
MyDatabaseId, NameStr(dbform->datname))));
/*
* Check permissions to connect to the database.
*
* These checks are not enforced when in standalone mode, so that there is
* a way to recover from disabling all access to all databases, for
* example "UPDATE pg_database SET datallowconn = false;".
*/
if (IsUnderPostmaster)
{
/*
* Check that the database is currently allowing connections.
* (Background processes can override this test and the next one by
* setting override_allow_connections.)
*/
if (!dbform->datallowconn && !override_allow_connections)
ereport(FATAL,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("database \"%s\" is not currently accepting connections",
name)));
/*
* Check privilege to connect to the database. (The am_superuser test
* is redundant, but since we have the flag, might as well check it
* and save a few cycles.)
*/
if (!am_superuser && !override_allow_connections &&
object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(),
ACL_CONNECT) != ACLCHECK_OK)
ereport(FATAL,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("permission denied for database \"%s\"", name),
errdetail("User does not have CONNECT privilege.")));
/*
* Check connection limit for this database. We enforce the limit
* only for regular backends, since other process types have their own
* PGPROC pools.
*
* There is a race condition here --- we create our PGPROC before
* checking for other PGPROCs. If two backends did this at about the
* same time, they might both think they were over the limit, while
* ideally one should succeed and one fail. Getting that to work
* exactly seems more trouble than it is worth, however; instead we
* just document that the connection limit is approximate.
*/
if (dbform->datconnlimit >= 0 &&
AmRegularBackendProcess() &&
!am_superuser &&
CountDBConnections(MyDatabaseId) > dbform->datconnlimit)
ereport(FATAL,
(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
errmsg("too many connections for database \"%s\"",
name)));
}
/*
* OK, we're golden. Next to-do item is to save the encoding info out of
* the pg_database tuple.
*/
SetDatabaseEncoding(dbform->encoding);
/* Record it as a GUC internal option, too */
SetConfigOption("server_encoding", GetDatabaseEncodingName(),
PGC_INTERNAL, PGC_S_DYNAMIC_DEFAULT);
/* If we have no other source of client_encoding, use server encoding */
SetConfigOption("client_encoding", GetDatabaseEncodingName(),
PGC_BACKEND, PGC_S_DYNAMIC_DEFAULT);
/* assign locale variables */
datum = SysCacheGetAttrNotNull(DATABASEOID, tup, Anum_pg_database_datcollate);
collate = TextDatumGetCString(datum);
datum = SysCacheGetAttrNotNull(DATABASEOID, tup, Anum_pg_database_datctype);
ctype = TextDatumGetCString(datum);
/*
* Historically, we set LC_COLLATE from datcollate, as well. That's no
* longer necessary because all collation behavior is handled through
* pg_locale_t.
*/
if (pg_perm_setlocale(LC_CTYPE, ctype) == NULL)
ereport(FATAL,
(errmsg("database locale is incompatible with operating system"),
errdetail("The database was initialized with LC_CTYPE \"%s\", "
"which is not recognized by setlocale().", ctype),
errhint("Recreate the database with another locale or install the missing locale.")));
init_database_collation();
/*
* Check collation version. See similar code in
* pg_newlocale_from_collation(). Note that here we warn instead of error
* in any case, so that we don't prevent connecting.
*/
datum = SysCacheGetAttr(DATABASEOID, tup, Anum_pg_database_datcollversion,
&isnull);
if (!isnull)
{
char *actual_versionstr;
char *collversionstr;
char *locale;
collversionstr = TextDatumGetCString(datum);
if (dbform->datlocprovider == COLLPROVIDER_LIBC)
locale = collate;
else
{
datum = SysCacheGetAttrNotNull(DATABASEOID, tup, Anum_pg_database_datlocale);
locale = TextDatumGetCString(datum);
}
actual_versionstr = get_collation_actual_version(dbform->datlocprovider, locale);
if (!actual_versionstr)
/* should not happen */
elog(WARNING,
"database \"%s\" has no actual collation version, but a version was recorded",
name);
else if (strcmp(actual_versionstr, collversionstr) != 0)
ereport(WARNING,
(errmsg("database \"%s\" has a collation version mismatch",
name),
errdetail("The database was created using collation version %s, "
"but the operating system provides version %s.",
collversionstr, actual_versionstr),
errhint("Rebuild all objects in this database that use the default collation and run "
"ALTER DATABASE %s REFRESH COLLATION VERSION, "
"or build PostgreSQL with the right library version.",
quote_identifier(name))));
}
ReleaseSysCache(tup);
}
/*
* pg_split_opts -- split a string of options and append it to an argv array
*
* The caller is responsible for ensuring the argv array is large enough. The
* maximum possible number of arguments added by this routine is
* (strlen(optstr) + 1) / 2.
*
* Because some option values can contain spaces we allow escaping using
* backslashes, with \\ representing a literal backslash.  For example,
* the string "foo bar\ baz" is split into the two arguments "foo" and
* "bar baz".
*/
void
pg_split_opts(char **argv, int *argcp, const char *optstr)
{
StringInfoData s;
initStringInfo(&s);
while (*optstr)
{
bool last_was_escape = false;
resetStringInfo(&s);
/* skip over leading space */
while (isspace((unsigned char) *optstr))
optstr++;
if (*optstr == '\0')
break;
/*
* Parse a single option, stopping at the first space, unless it's
* escaped.
*/
while (*optstr)
{
if (isspace((unsigned char) *optstr) && !last_was_escape)
break;
if (!last_was_escape && *optstr == '\\')
last_was_escape = true;
else
{
last_was_escape = false;
appendStringInfoChar(&s, *optstr);
}
optstr++;
}
/* now store the option in the next argv[] position */
argv[(*argcp)++] = pstrdup(s.data);
}
pfree(s.data);
}
/*
* Initialize MaxBackends value from config options.
*
* This must be called after modules have had the chance to alter GUCs in
* shared_preload_libraries and before shared memory size is determined.
*
* Note that in EXEC_BACKEND environment, the value is passed down from
* postmaster to subprocesses via BackendParameters in SubPostmasterMain; only
* postmaster itself and processes not under postmaster control should call
* this.
*/
void
InitializeMaxBackends(void)
{
Assert(MaxBackends == 0);
/* Note that this does not include "auxiliary" processes */
MaxBackends = MaxConnections + autovacuum_worker_slots +
max_worker_processes + max_wal_senders + NUM_SPECIAL_WORKER_PROCS;
if (MaxBackends > MAX_BACKENDS)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("too many server processes configured"),
errdetail("\"max_connections\" (%d) plus \"autovacuum_worker_slots\" (%d) plus \"max_worker_processes\" (%d) plus \"max_wal_senders\" (%d) must be less than %d.",
MaxConnections, autovacuum_worker_slots,
max_worker_processes, max_wal_senders,
MAX_BACKENDS - (NUM_SPECIAL_WORKER_PROCS - 1))));
}
/*
* Initialize the number of fast-path lock slots in PGPROC.
*
* This must be called after modules have had the chance to alter GUCs in
* shared_preload_libraries and before shared memory size is determined.
*/
void
InitializeFastPathLocks(void)
{
/* Should be initialized only once. */
Assert(FastPathLockGroupsPerBackend == 0);
/*
* Based on the max_locks_per_transaction GUC, as that's a good indicator
* of the expected number of locks, figure out the value for
* FastPathLockGroupsPerBackend. This must be a power-of-two. We cap the
* value at FP_LOCK_GROUPS_PER_BACKEND_MAX and insist the value is at
* least 1.
*
* The default max_locks_per_transaction = 64 means 4 groups by default.
*/
FastPathLockGroupsPerBackend =
Max(Min(pg_nextpower2_32(max_locks_per_xact) / FP_LOCK_SLOTS_PER_GROUP,
FP_LOCK_GROUPS_PER_BACKEND_MAX), 1);
/* Validate we did get a power-of-two */
Assert(FastPathLockGroupsPerBackend ==
pg_nextpower2_32(FastPathLockGroupsPerBackend));
}
/*
* Early initialization of a backend (either standalone or under postmaster).
* This happens even before InitPostgres.
*
* This is separate from InitPostgres because it is also called by auxiliary
* processes, such as the background writer process, which may not call
* InitPostgres at all.
*/
void
BaseInit(void)
{
Assert(MyProc != NULL);
/*
* Initialize our input/output/debugging file descriptors.
*/
DebugFileOpen();
/*
* Initialize file access. Done early so other subsystems can access
* files.
*/
InitFileAccess();
/*
* Initialize statistics reporting. This needs to happen early to ensure
* that pgstat's shutdown callback runs after the shutdown callbacks of
* all subsystems that can produce stats (like e.g. transaction commits
* can).
*/
pgstat_initialize();
/*
* Initialize AIO before infrastructure that might need to actually
* execute AIO.
*/
pgaio_init_backend();
/* Do local initialization of storage and buffer managers */
InitSync();
smgrinit();
InitBufferManagerAccess();
/*
* Initialize temporary file access after pgstat, so that the temporary
* file shutdown hook can report temporary file statistics.
*/
InitTemporaryFileAccess();
/*
* Initialize local buffers for WAL record construction, in case we ever
* try to insert XLOG.
*/
InitXLogInsert();
/* Initialize lock manager's local structs */
InitLockManagerAccess();
/* Initialize logical info WAL logging state */
InitializeProcessXLogLogicalInfo();
/*
* Initialize replication slots after pgstat. The exit hook might need to
* drop ephemeral slots, which in turn triggers stats reporting.
*/
ReplicationSlotInitialize();
}
/* --------------------------------
* InitPostgres
* Initialize POSTGRES.
*
* Parameters:
* in_dbname, dboid: specify database to connect to, as described below
* username, useroid: specify role to connect as, as described below
* flags:
* - INIT_PG_LOAD_SESSION_LIBS to honor [session|local]_preload_libraries.
* - INIT_PG_OVERRIDE_ALLOW_CONNS to connect despite !datallowconn.
* - INIT_PG_OVERRIDE_ROLE_LOGIN to connect despite !rolcanlogin.
* out_dbname: optional output parameter, see below; pass NULL if not used
*
* The database can be specified by name, using the in_dbname parameter, or by
* OID, using the dboid parameter. Specify NULL or InvalidOid respectively
* for the unused parameter. If dboid is provided, the actual database
* name can be returned to the caller in out_dbname. If out_dbname isn't
* NULL, it must point to a buffer of size NAMEDATALEN.
*
* Similarly, the role can be passed by name, using the username parameter,
* or by OID using the useroid parameter.
*
* In bootstrap mode the database and username parameters are NULL/InvalidOid.
* The autovacuum launcher process doesn't specify these parameters either,
* because it only goes far enough to be able to read pg_database; it doesn't
* connect to any particular database. An autovacuum worker specifies a
* database but not a username; conversely, a physical walsender specifies
* username but not database.
*
* By convention, INIT_PG_LOAD_SESSION_LIBS should be passed in "flags" in
* "interactive" sessions (including standalone backends), but not in
* background processes such as autovacuum. Note in particular that it
* shouldn't be true in parallel worker processes; those have another
* mechanism for replicating their leader's set of loaded libraries.
*
* We expect that InitProcess() was already called, so we already have a
* PGPROC struct ... but it's not completely filled in yet.
*
* Note:
* Be very careful with the order of calls in the InitPostgres function.
* --------------------------------
*/
void
InitPostgres(const char *in_dbname, Oid dboid,
const char *username, Oid useroid,
bits32 flags,
char *out_dbname)
{
bool bootstrap = IsBootstrapProcessingMode();
bool am_superuser;
char *fullpath;
char dbname[NAMEDATALEN];
int nfree = 0;
elog(DEBUG3, "InitPostgres");
/*
* Add my PGPROC struct to the ProcArray.
*
* Once I have done this, I am visible to other backends!
*/
InitProcessPhase2();
/* Initialize status reporting */
pgstat_beinit();
/*
* And initialize an entry in the PgBackendStatus array. That way, if
* LWLocks or third-party authentication should happen to hang, it is
* possible to retrieve some information about what is going on.
*/
if (!bootstrap)
{
pgstat_bestart_initial();
INJECTION_POINT("init-pre-auth", NULL);
}
/*
* Initialize my entry in the shared-invalidation manager's array of
* per-backend data.
*/
SharedInvalBackendInit(false);
ProcSignalInit(MyCancelKey, MyCancelKeyLength);
/*
* Also set up timeout handlers needed for backend operation. We need
* these in every case except bootstrap.
*/
if (!bootstrap)
{
RegisterTimeout(DEADLOCK_TIMEOUT, CheckDeadLockAlert);
RegisterTimeout(STATEMENT_TIMEOUT, StatementTimeoutHandler);
RegisterTimeout(LOCK_TIMEOUT, LockTimeoutHandler);
RegisterTimeout(IDLE_IN_TRANSACTION_SESSION_TIMEOUT,
IdleInTransactionSessionTimeoutHandler);
RegisterTimeout(TRANSACTION_TIMEOUT, TransactionTimeoutHandler);
RegisterTimeout(IDLE_SESSION_TIMEOUT, IdleSessionTimeoutHandler);
RegisterTimeout(CLIENT_CONNECTION_CHECK_TIMEOUT, ClientCheckTimeoutHandler);
RegisterTimeout(IDLE_STATS_UPDATE_TIMEOUT,
IdleStatsUpdateTimeoutHandler);
}
/*
* If this is either a bootstrap process or a standalone backend, start up
* the XLOG machinery, and register to have it closed down at exit. In
* other cases, the startup process is responsible for starting up the
* XLOG machinery, and the checkpointer for closing it down.
*/
if (!IsUnderPostmaster)
{
/*
* We don't yet have an aux-process resource owner, but StartupXLOG
* and ShutdownXLOG will need one. Hence, create said resource owner
* (and register a callback to clean it up after ShutdownXLOG runs).
*/
CreateAuxProcessResourceOwner();
StartupXLOG();
/* Release (and warn about) any buffer pins leaked in StartupXLOG */
ReleaseAuxProcessResources(true);
/* Reset CurrentResourceOwner to nothing for the moment */
CurrentResourceOwner = NULL;
/*
* Use before_shmem_exit() so that ShutdownXLOG() can rely on DSM
* segments etc to work (which in turn is required for pgstats).
*/
before_shmem_exit(pgstat_before_server_shutdown, 0);
before_shmem_exit(ShutdownXLOG, 0);
}
/*
* Initialize the relation cache and the system catalog caches. Note that
* no catalog access happens here; we only set up the hashtable structure.
* We must do this before starting a transaction because transaction abort
* would try to touch these hashtables.
*/
RelationCacheInitialize();
InitCatalogCache();
InitPlanCache();
/* Initialize portal manager */
EnablePortalManager();
/*
* Load relcache entries for the shared system catalogs. This must create
* at least entries for pg_database and catalogs used for authentication.
*/
RelationCacheInitializePhase2();
/*
* Set up process-exit callback to do pre-shutdown cleanup. This is one
* of the first before_shmem_exit callbacks we register; thus, this will
* be one of the last things we do before low-level modules like the
* buffer manager begin to close down. We need to have this in place
* before we begin our first transaction --- if we fail during the
* initialization transaction, as is entirely possible, we need the
* AbortTransaction call to clean up.
*/
before_shmem_exit(ShutdownPostgres, 0);
/* The autovacuum launcher is done here */
if (AmAutoVacuumLauncherProcess())
{
/* fill in the remainder of this entry in the PgBackendStatus array */
pgstat_bestart_final();
return;
}
/*
* Start a new transaction here before first access to db.
*/
if (!bootstrap)
{
/* statement_timestamp must be set for timeouts to work correctly */
SetCurrentStatementStartTimestamp();
StartTransactionCommand();
/*
* transaction_isolation will have been set to the default by the
* above. If the default is "serializable", and we are in hot
* standby, we will fail if we don't change it to something lower.
* Fortunately, "read committed" is plenty good enough.
*/
XactIsoLevel = XACT_READ_COMMITTED;
}
/*
* Perform client authentication if necessary, then figure out our
* postgres user ID, and see if we are a superuser.
*
* In standalone mode, autovacuum worker processes and slot sync worker
* process, we use a fixed ID, otherwise we figure it out from the
* authenticated user name.
*/
if (bootstrap || AmAutoVacuumWorkerProcess() || AmLogicalSlotSyncWorkerProcess())
{
InitializeSessionUserIdStandalone();
am_superuser = true;
}
else if (!IsUnderPostmaster)
{
InitializeSessionUserIdStandalone();
am_superuser = true;
if (!ThereIsAtLeastOneRole())
ereport(WARNING,
(errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg("no roles are defined in this database system"),
errhint("You should immediately run CREATE USER \"%s\" SUPERUSER;.",
username != NULL ? username : "postgres")));
}
else if (AmBackgroundWorkerProcess())
{
if (username == NULL && !OidIsValid(useroid))
{
InitializeSessionUserIdStandalone();
am_superuser = true;
}
else
{
InitializeSessionUserId(username, useroid,
(flags & INIT_PG_OVERRIDE_ROLE_LOGIN) != 0);
am_superuser = superuser();
}
}
else
{
/* normal multiuser case */
Assert(MyProcPort != NULL);
PerformAuthentication(MyProcPort);
InitializeSessionUserId(username, useroid, false);
/* ensure that auth_method is actually valid, aka authn_id is not NULL */
if (MyClientConnectionInfo.authn_id)
InitializeSystemUser(MyClientConnectionInfo.authn_id,
hba_authname(MyClientConnectionInfo.auth_method));
am_superuser = superuser();
}
/* Report any SSL/GSS details for the session. */
if (MyProcPort != NULL)
{
Assert(!bootstrap);
pgstat_bestart_security();
}
/*
 * Binary upgrade mode only allows superuser connections.
 */
if (IsBinaryUpgrade && !am_superuser)
{
ereport(FATAL,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to connect in binary upgrade mode")));
}
/*
* The last few regular connection slots are reserved for superusers and
* roles with privileges of pg_use_reserved_connections. We do not apply
* these limits to background processes, since they all have their own
* pools of PGPROC slots.
*
* Note: At this point, the new backend has already claimed a proc struct,
* so we must check whether the number of free slots is strictly less than
* the reserved connection limits.
*/
if (AmRegularBackendProcess() && !am_superuser &&
(SuperuserReservedConnections + ReservedConnections) > 0 &&
!HaveNFreeProcs(SuperuserReservedConnections + ReservedConnections, &nfree))
{
if (nfree < SuperuserReservedConnections)
ereport(FATAL,
(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
errmsg("remaining connection slots are reserved for roles with the %s attribute",
"SUPERUSER")));
if (!has_privs_of_role(GetUserId(), ROLE_PG_USE_RESERVED_CONNECTIONS))
ereport(FATAL,
(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
errmsg("remaining connection slots are reserved for roles with privileges of the \"%s\" role",
"pg_use_reserved_connections")));
}
/* Check replication permissions needed for walsender processes. */
if (am_walsender)
{
Assert(!bootstrap);
if (!has_rolreplication(GetUserId()))
ereport(FATAL,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("permission denied to start WAL sender"),
errdetail("Only roles with the %s attribute may start a WAL sender process.",
"REPLICATION")));
}
/*
* If this is a plain walsender only supporting physical replication, we
* don't want to connect to any particular database. Just finish the
* backend startup by processing any options from the startup packet, and
* we're done.
*/
if (am_walsender && !am_db_walsender)
{
/* process any options passed in the startup packet */
if (MyProcPort != NULL)
process_startup_options(MyProcPort, am_superuser);
/* Apply PostAuthDelay as soon as we've read all options */
if (PostAuthDelay > 0)
pg_usleep(PostAuthDelay * 1000000L);
/* initialize client encoding */
InitializeClientEncoding();
/* fill in the remainder of this entry in the PgBackendStatus array */
pgstat_bestart_final();
/* close the transaction we started above */
CommitTransactionCommand();
/* send any WARNINGs we've accumulated during initialization */
EmitConnectionWarnings();
return;
}
/*
* Set up the global variables holding database id and default tablespace.
* But note we won't actually try to touch the database just yet.
*
* We take a shortcut in the bootstrap case, otherwise we have to look up
* the db's entry in pg_database.
*/
if (bootstrap)
{
dboid = Template1DbOid;
MyDatabaseTableSpace = DEFAULTTABLESPACE_OID;
}
else if (in_dbname != NULL)
{
HeapTuple tuple;
Form_pg_database dbform;
tuple = GetDatabaseTuple(in_dbname);
if (!HeapTupleIsValid(tuple))
ereport(FATAL,
(errcode(ERRCODE_UNDEFINED_DATABASE),
errmsg("database \"%s\" does not exist", in_dbname)));
dbform = (Form_pg_database) GETSTRUCT(tuple);
dboid = dbform->oid;
}
else if (!OidIsValid(dboid))
{
/*
* If this is a background worker not bound to any particular
* database, we're done now. Everything that follows only makes sense
* if we are bound to a specific database. We do need to close the
* transaction we started before returning.
*/
if (!bootstrap)
{
pgstat_bestart_final();
CommitTransactionCommand();
}
return;
}
/*
* Now, take a writer's lock on the database we are trying to connect to.
* If there is a concurrently running DROP DATABASE on that database, this
* will block us until it finishes (and has committed its update of
* pg_database).
*
* Note that the lock is not held long, only until the end of this startup
* transaction. This is OK since we will advertise our use of the
* database in the ProcArray before dropping the lock (in fact, that's the
* next thing to do). Anyone trying a DROP DATABASE after this point will
* see us in the array once they have the lock. Ordering is important for
* this because we don't want to advertise ourselves as being in this
* database until we have the lock; otherwise we create what amounts to a
* deadlock with CountOtherDBBackends().
*
* Note: use of RowExclusiveLock here is reasonable because we envision
* our session as being a concurrent writer of the database. If we had a
* way of declaring a session as being guaranteed-read-only, we could use
* AccessShareLock for such sessions and thereby not conflict against
* CREATE DATABASE.
*/
if (!bootstrap)
LockSharedObject(DatabaseRelationId, dboid, 0, RowExclusiveLock);
/*
* Recheck pg_database to make sure the target database hasn't gone away.
* If there was a concurrent DROP DATABASE, this ensures we will die
* cleanly without creating a mess.
*/
if (!bootstrap)
{
HeapTuple tuple;
Form_pg_database datform;
tuple = GetDatabaseTupleByOid(dboid);
if (HeapTupleIsValid(tuple))
datform = (Form_pg_database) GETSTRUCT(tuple);
if (!HeapTupleIsValid(tuple) ||
(in_dbname && namestrcmp(&datform->datname, in_dbname)))
{
if (in_dbname)
ereport(FATAL,
(errcode(ERRCODE_UNDEFINED_DATABASE),
errmsg("database \"%s\" does not exist", in_dbname),
errdetail("It seems to have just been dropped or renamed.")));
else
ereport(FATAL,
(errcode(ERRCODE_UNDEFINED_DATABASE),
errmsg("database %u does not exist", dboid)));
}
strlcpy(dbname, NameStr(datform->datname), sizeof(dbname));
if (database_is_invalid_form(datform))
{
ereport(FATAL,
errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("cannot connect to invalid database \"%s\"", dbname),
errhint("Use DROP DATABASE to drop invalid databases."));
}
MyDatabaseTableSpace = datform->dattablespace;
MyDatabaseHasLoginEventTriggers = datform->dathasloginevt;
/* pass the database name back to the caller */
if (out_dbname)
strcpy(out_dbname, dbname);
}
/*
* Now that we rechecked, we are certain to be connected to a database and
* thus can set MyDatabaseId.
*
* It is important that MyDatabaseId only be set once we are sure that the
* target database can no longer be concurrently dropped or renamed. For
* example, without this guarantee, pgstat_update_dbstats() could create
* entries for databases that were just dropped in the pgstat shutdown
* callback, which could confuse other code paths like the autovacuum
* scheduler.
*/
MyDatabaseId = dboid;
/*
* Now we can mark our PGPROC entry with the database ID.
*
* We assume this is an atomic store so no lock is needed; though actually
* things would work fine even if it weren't atomic. Anyone searching the
* ProcArray for this database's ID should hold the database lock, so they
* would not be executing concurrently with this store. A process looking
* for another database's ID could in theory see a chance match if it read
* a partially-updated databaseId value; but as long as all such searches
* wait and retry, as in CountOtherDBBackends(), they will certainly see
* the correct value on their next try.
*/
MyProc->databaseId = MyDatabaseId;
/*
* We established a catalog snapshot while reading pg_authid and/or
* pg_database; but until we have set up MyDatabaseId, we won't react to
* incoming sinval messages for unshared catalogs, so we won't realize it
* if the snapshot has been invalidated. Assume it's no good anymore.
*/
InvalidateCatalogSnapshot();
/*
* Now we should be able to access the database directory safely. Verify
* it's there and looks reasonable.
*/
fullpath = GetDatabasePath(MyDatabaseId, MyDatabaseTableSpace);
if (!bootstrap)
{
if (access(fullpath, F_OK) == -1)
{
if (errno == ENOENT)
ereport(FATAL,
(errcode(ERRCODE_UNDEFINED_DATABASE),
errmsg("database \"%s\" does not exist",
dbname),
errdetail("The database subdirectory \"%s\" is missing.",
fullpath)));
else
ereport(FATAL,
(errcode_for_file_access(),
errmsg("could not access directory \"%s\": %m",
fullpath)));
}
ValidatePgVersion(fullpath);
}
SetDatabasePath(fullpath);
pfree(fullpath);
/*
* It's now possible to do real access to the system catalogs.
*
* Load relcache entries for the system catalogs. This must create at
* least the minimum set of "nailed-in" cache entries.
*/
RelationCacheInitializePhase3();
/* set up ACL framework (so CheckMyDatabase can check permissions) */
initialize_acl();
/*
* Re-read the pg_database row for our database, check permissions and set
* up database-specific GUC settings. We can't do this until all the
* database-access infrastructure is up. (Also, it wants to know if the
* user is a superuser, so the above stuff has to happen first.)
*/
if (!bootstrap)
CheckMyDatabase(dbname, am_superuser,
(flags & INIT_PG_OVERRIDE_ALLOW_CONNS) != 0);
/*
* Now process any command-line switches and any additional GUC variable
* settings passed in the startup packet. We couldn't do this before
* because we didn't know if client is a superuser.
*/
if (MyProcPort != NULL)
process_startup_options(MyProcPort, am_superuser);
/* Process pg_db_role_setting options */
process_settings(MyDatabaseId, GetSessionUserId());
/* Apply PostAuthDelay as soon as we've read all options */
if (PostAuthDelay > 0)
pg_usleep(PostAuthDelay * 1000000L);
/*
* Initialize various default states that can't be set up until we've
* selected the active user and gotten the right GUC settings.
*/
/* set default namespace search path */
InitializeSearchPath();
/* initialize client encoding */
InitializeClientEncoding();
/* Initialize this backend's session state. */
InitializeSession();
/*
* If this is an interactive session, load any libraries that should be
* preloaded at backend start. Since those are determined by GUCs, this
* can't happen until GUC settings are complete, but we want it to happen
* during the initial transaction in case anything that requires database
* access needs to be done.
*/
if ((flags & INIT_PG_LOAD_SESSION_LIBS) != 0)
process_session_preload_libraries();
/* fill in the remainder of this entry in the PgBackendStatus array */
if (!bootstrap)
pgstat_bestart_final();
/* close the transaction we started above */
if (!bootstrap)
CommitTransactionCommand();
/* send any WARNINGs we've accumulated during initialization */
EmitConnectionWarnings();
}
/*
* Process any command-line switches and any additional GUC variable
* settings passed in the startup packet.
*/
static void
process_startup_options(Port *port, bool am_superuser)
{
GucContext gucctx;
ListCell *gucopts;
gucctx = am_superuser ? PGC_SU_BACKEND : PGC_BACKEND;
/*
* First process any command-line switches that were included in the
* startup packet, if we are in a regular backend.
*/
if (port->cmdline_options != NULL)
{
/*
* The maximum possible number of commandline arguments that could
* come from port->cmdline_options is (strlen + 1) / 2; see
* pg_split_opts().
*/
char **av;
int maxac;
int ac;
maxac = 2 + (strlen(port->cmdline_options) + 1) / 2;
av = palloc_array(char *, maxac);
ac = 0;
av[ac++] = "postgres";
pg_split_opts(av, &ac, port->cmdline_options);
av[ac] = NULL;
Assert(ac < maxac);
(void) process_postgres_switches(ac, av, gucctx, NULL);
}
/*
* Process any additional GUC variable settings passed in startup packet.
* These are handled exactly like command-line variables.
*/
gucopts = list_head(port->guc_options);
while (gucopts)
{
char *name;
char *value;
name = lfirst(gucopts);
gucopts = lnext(port->guc_options, gucopts);
value = lfirst(gucopts);
gucopts = lnext(port->guc_options, gucopts);
SetConfigOption(name, value, gucctx, PGC_S_CLIENT);
}
}
/*
* Load GUC settings from pg_db_role_setting.
*
* We try specific settings for the database/role combination, as well as
* general for this database and for this user.
*/
static void
process_settings(Oid databaseid, Oid roleid)
{
Relation relsetting;
Snapshot snapshot;
if (!IsUnderPostmaster)
return;
relsetting = table_open(DbRoleSettingRelationId, AccessShareLock);
/* read all the settings under the same snapshot for efficiency */
snapshot = RegisterSnapshot(GetCatalogSnapshot(DbRoleSettingRelationId));
/* Later settings are ignored if set earlier. */
ApplySetting(snapshot, databaseid, roleid, relsetting, PGC_S_DATABASE_USER);
ApplySetting(snapshot, InvalidOid, roleid, relsetting, PGC_S_USER);
ApplySetting(snapshot, databaseid, InvalidOid, relsetting, PGC_S_DATABASE);
ApplySetting(snapshot, InvalidOid, InvalidOid, relsetting, PGC_S_GLOBAL);
UnregisterSnapshot(snapshot);
table_close(relsetting, AccessShareLock);
}
/*
* Backend-shutdown callback. Do cleanup that we want to be sure happens
* before all the supporting modules begin to nail their doors shut via
* their own callbacks.
*
* User-level cleanup, such as temp-relation removal and UNLISTEN, happens
* via separate callbacks that execute before this one. We don't combine the
* callbacks because we still want this one to happen if the user-level
* cleanup fails.
*/
static void
ShutdownPostgres(int code, Datum arg)
{
/* Make sure we've killed any active transaction */
AbortOutOfAnyTransaction();
/*
* User locks are not released by transaction end, so be sure to release
* them explicitly.
*/
LockReleaseAll(USER_LOCKMETHOD, true);
}
/*
* STATEMENT_TIMEOUT handler: trigger a query-cancel interrupt.
*/
static void
StatementTimeoutHandler(void)
{
int sig = SIGINT;
/*
* During authentication the timeout is used to deal with
* authentication_timeout - we want to quit in response to such timeouts.
*/
if (ClientAuthInProgress)
sig = SIGTERM;
#ifdef HAVE_SETSID
/* try to signal whole process group */
kill(-MyProcPid, sig);
#endif
kill(MyProcPid, sig);
}
/*
* LOCK_TIMEOUT handler: trigger a query-cancel interrupt.
*/
static void
LockTimeoutHandler(void)
{
#ifdef HAVE_SETSID
/* try to signal whole process group */
kill(-MyProcPid, SIGINT);
#endif
kill(MyProcPid, SIGINT);
}
static void
TransactionTimeoutHandler(void)
{
TransactionTimeoutPending = true;
InterruptPending = true;
SetLatch(MyLatch);
}
static void
IdleInTransactionSessionTimeoutHandler(void)
{
IdleInTransactionSessionTimeoutPending = true;
InterruptPending = true;
SetLatch(MyLatch);
}
static void
IdleSessionTimeoutHandler(void)
{
IdleSessionTimeoutPending = true;
InterruptPending = true;
SetLatch(MyLatch);
}
static void
IdleStatsUpdateTimeoutHandler(void)
{
IdleStatsUpdateTimeoutPending = true;
InterruptPending = true;
SetLatch(MyLatch);
}
static void
ClientCheckTimeoutHandler(void)
{
CheckClientConnectionPending = true;
InterruptPending = true;
SetLatch(MyLatch);
}
/*
* Returns true if at least one role is defined in this database cluster.
*/
static bool
ThereIsAtLeastOneRole(void)
{
Relation pg_authid_rel;
TableScanDesc scan;
bool result;
pg_authid_rel = table_open(AuthIdRelationId, AccessShareLock);
scan = table_beginscan_catalog(pg_authid_rel, 0, NULL);
result = (heap_getnext(scan, ForwardScanDirection) != NULL);
table_endscan(scan);
table_close(pg_authid_rel, AccessShareLock);
return result;
}
/*
* Stores a warning message to be sent later via EmitConnectionWarnings().
* Both msg and detail must be non-NULL.
*
* NB: Caller should ensure the strings are allocated in a long-lived context
* like TopMemoryContext.
*/
void
StoreConnectionWarning(char *msg, char *detail)
{
MemoryContext oldcontext;
Assert(msg);
Assert(detail);
if (ConnectionWarningsEmitted)
elog(ERROR, "StoreConnectionWarning() called after EmitConnectionWarnings()");
oldcontext = MemoryContextSwitchTo(TopMemoryContext);
ConnectionWarningMessages = lappend(ConnectionWarningMessages, msg);
ConnectionWarningDetails = lappend(ConnectionWarningDetails, detail);
MemoryContextSwitchTo(oldcontext);
}
/*
* Sends the warning messages saved via StoreConnectionWarning() and frees the
* strings and lists.
*
* NB: This can only be called once per backend.
*/
static void
EmitConnectionWarnings(void)
{
ListCell *lc_msg;
ListCell *lc_detail;
if (ConnectionWarningsEmitted)
elog(ERROR, "EmitConnectionWarnings() called more than once");
else
ConnectionWarningsEmitted = true;
forboth(lc_msg, ConnectionWarningMessages,
lc_detail, ConnectionWarningDetails)
{
ereport(WARNING,
(errmsg("%s", (char *) lfirst(lc_msg)),
errdetail("%s", (char *) lfirst(lc_detail))));
}
list_free_deep(ConnectionWarningMessages);
list_free_deep(ConnectionWarningDetails);
} | c | github | https://github.com/postgres/postgres | src/backend/utils/init/postinit.c |
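The precedence in `process_settings()` above ("Later settings are ignored if set earlier") can be sketched outside of C. The following is a hypothetical Python illustration, not PostgreSQL source: `resolve_settings` and its `(database, role)` keyed store are stand-ins, but the lookup order mirrors the four `ApplySetting` calls, from most specific (database + role) to least specific (global), with more specific sources winning.

```python
# Hypothetical sketch of pg_db_role_setting resolution order.
# stored maps (database, role) -> {guc_name: value}; None acts as a wildcard.

def resolve_settings(stored, dboid, roleid):
    resolved = {}
    # Same order as process_settings(): db+role, role only, db only, global.
    for key in [(dboid, roleid), (None, roleid), (dboid, None), (None, None)]:
        for name, value in stored.get(key, {}).items():
            # "Later settings are ignored if set earlier": only fill gaps.
            resolved.setdefault(name, value)
    return resolved
```

For example, a `work_mem` set for the specific database/role pair shadows a role-wide value, while unrelated global settings still apply.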
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Session'
db.create_table('tafe_session', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('session_number', self.gf('django.db.models.fields.CharField')(max_length=1)),
('subject', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['tafe.Subject'])),
('day', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['tafe.Day'])),
('timetable', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['tafe.Timetable'])),
))
db.send_create_signal('tafe', ['Session'])
# Adding model 'Day'
db.create_table('tafe_day', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('date', self.gf('django.db.models.fields.DateField')()),
))
db.send_create_signal('tafe', ['Day'])
# Adding model 'Timetable'
db.create_table('tafe_timetable', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('year', self.gf('django.db.models.fields.IntegerField')()),
('term', self.gf('django.db.models.fields.IntegerField')()),
('start_date', self.gf('django.db.models.fields.DateField')()),
('end_date', self.gf('django.db.models.fields.DateField')()),
('slug', self.gf('django.db.models.fields.SlugField')(max_length=12)),
))
db.send_create_signal('tafe', ['Timetable'])
# Adding unique constraint on 'Timetable', fields ['year', 'term']
db.create_unique('tafe_timetable', ['year', 'term'])
# Deleting field 'Attendance.date'
db.delete_column('tafe_attendance', 'date')
# Deleting field 'Attendance.session'
db.delete_column('tafe_attendance', 'session')
# Adding field 'Attendance.absent'
db.add_column('tafe_attendance', 'absent',
self.gf('django.db.models.fields.CharField')(default='', max_length=1, blank=True),
keep_default=False)
# Adding M2M table for field session on 'Attendance'
db.create_table('tafe_attendance_session', (
('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)),
('attendance', models.ForeignKey(orm['tafe.attendance'], null=False)),
('session', models.ForeignKey(orm['tafe.session'], null=False))
))
db.create_unique('tafe_attendance_session', ['attendance_id', 'session_id'])
def backwards(self, orm):
# Removing unique constraint on 'Timetable', fields ['year', 'term']
db.delete_unique('tafe_timetable', ['year', 'term'])
# Deleting model 'Session'
db.delete_table('tafe_session')
# Deleting model 'Day'
db.delete_table('tafe_day')
# Deleting model 'Timetable'
db.delete_table('tafe_timetable')
# Adding field 'Attendance.date'
db.add_column('tafe_attendance', 'date',
self.gf('django.db.models.fields.DateField')(default=datetime.datetime(2012, 8, 31, 0, 0)),
keep_default=False)
# Adding field 'Attendance.session'
db.add_column('tafe_attendance', 'session',
self.gf('django.db.models.fields.CharField')(default=datetime.datetime(2012, 8, 31, 0, 0), max_length=1),
keep_default=False)
# Deleting field 'Attendance.absent'
db.delete_column('tafe_attendance', 'absent')
# Removing M2M table for field session on 'Attendance'
db.delete_table('tafe_attendance_session')
models = {
'tafe.attendance': {
'Meta': {'object_name': 'Attendance'},
'absent': ('django.db.models.fields.CharField', [], {'max_length': '1', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'reason': ('django.db.models.fields.CharField', [], {'default': "'P'", 'max_length': '1'}),
'session': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['tafe.Session']", 'symmetrical': 'False'})
},
'tafe.course': {
'Meta': {'object_name': 'Course'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '30'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '40'}),
'students': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': "orm['tafe.Student']", 'null': 'True', 'through': "orm['tafe.Enrolment']", 'blank': 'True'}),
'subjects': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': "orm['tafe.Subject']", 'null': 'True', 'blank': 'True'})
},
'tafe.day': {
'Meta': {'object_name': 'Day'},
'date': ('django.db.models.fields.DateField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
},
'tafe.enrolment': {
'Meta': {'object_name': 'Enrolment'},
'course': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['tafe.Course']"}),
'date_ended': ('django.db.models.fields.DateField', [], {'null': 'True', 'blank': 'True'}),
'date_started': ('django.db.models.fields.DateField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'mark': ('django.db.models.fields.CharField', [], {'max_length': '1', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '40', 'blank': 'True'}),
'student': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['tafe.Student']"})
},
'tafe.grade': {
'Meta': {'object_name': 'Grade'},
'attendance': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': "orm['tafe.Attendance']", 'null': 'True', 'blank': 'True'}),
'date_started': ('django.db.models.fields.DateField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'results': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['tafe.SubjectResults']", 'null': 'True', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '60'}),
'student': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['tafe.Student']"}),
'subject': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['tafe.Subject']"})
},
'tafe.session': {
'Meta': {'object_name': 'Session'},
'day': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['tafe.Day']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'session_number': ('django.db.models.fields.CharField', [], {'max_length': '1'}),
'subject': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['tafe.Subject']"}),
'timetable': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['tafe.Timetable']"})
},
'tafe.staff': {
'Meta': {'object_name': 'Staff'},
'added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'dob': ('django.db.models.fields.DateField', [], {}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30'}),
'gender': ('django.db.models.fields.CharField', [], {'default': "'F'", 'max_length': "'1'"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'phone': ('django.db.models.fields.CharField', [], {'max_length': '12', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '40', 'blank': 'True'}),
'surname': ('django.db.models.fields.CharField', [], {'max_length': '30'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
'tafe.student': {
'Meta': {'object_name': 'Student'},
'added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'dob': ('django.db.models.fields.DateField', [], {}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30'}),
'gender': ('django.db.models.fields.CharField', [], {'default': "'F'", 'max_length': "'1'"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'phone': ('django.db.models.fields.CharField', [], {'max_length': '12', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '40', 'blank': 'True'}),
'surname': ('django.db.models.fields.CharField', [], {'max_length': '30'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
'tafe.subject': {
'Meta': {'object_name': 'Subject'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'members': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': "orm['tafe.Student']", 'null': 'True', 'through': "orm['tafe.Grade']", 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '30'}),
'semester': ('django.db.models.fields.CharField', [], {'max_length': '1', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '40'}),
'year': ('django.db.models.fields.IntegerField', [], {})
},
'tafe.subjectresults': {
'Meta': {'object_name': 'SubjectResults'},
'date': ('django.db.models.fields.DateField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'mark': ('django.db.models.fields.CharField', [], {'max_length': '2'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '30'})
},
'tafe.timetable': {
'Meta': {'unique_together': "(('year', 'term'),)", 'object_name': 'Timetable'},
'end_date': ('django.db.models.fields.DateField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '12'}),
'start_date': ('django.db.models.fields.DateField', [], {}),
'term': ('django.db.models.fields.IntegerField', [], {}),
'year': ('django.db.models.fields.IntegerField', [], {})
}
}
complete_apps = ['tafe'] | unknown | codeparrot/codeparrot-clean | ||
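The migration above relies on `backwards()` being an exact inverse of `forwards()`, applied in reverse order (constraints dropped before the tables that carry them). A minimal, hypothetical sketch of that contract, using a plain dict as a toy schema rather than South's `db` API:

```python
# Toy illustration of the forwards/backwards inverse contract in a
# schema migration. The schema is just {table_name: [column_names]}.

def forwards(schema):
    # Create tables in dependency order.
    schema["tafe_day"] = ["id", "date"]
    schema["tafe_session"] = ["id", "session_number", "subject_id",
                              "day_id", "timetable_id"]

def backwards(schema):
    # Undo in reverse order: drop dependents before their dependencies.
    del schema["tafe_session"]
    del schema["tafe_day"]
```

Running `forwards` then `backwards` on an empty schema must return it to the empty state, which is the property a reversible migration guarantees.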
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
class TFProfLoggerTest(tf.test.TestCase):
def _BuildSmallPlaceholderModel(self):
a = tf.placeholder(tf.int32, [2, 2])
b = tf.placeholder(tf.int32, [2, 2])
y = tf.matmul(a, b)
return a, b, y
def _BuildSmallModel(self):
a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[1, 2], [3, 4]])
return tf.matmul(a, b)
def testFillMissingShape(self):
a, b, y = self._BuildSmallPlaceholderModel()
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
sess = tf.Session()
sess.run(y,
options=run_options,
run_metadata=run_metadata,
feed_dict={a: [[1, 2], [2, 3]],
b: [[1, 2], [2, 3]]})
graph2 = tf.Graph()
# Use copy_op_to_graph to remove shape information.
y2 = tf.contrib.copy_graph.copy_op_to_graph(y, graph2, [])
self.assertEquals('<unknown>', str(y2.get_shape()))
tf.contrib.tfprof.tfprof_logger._fill_missing_graph_shape(graph2,
run_metadata)
self.assertEquals('(2, 2)', str(y2.get_shape()))
def testFailedFillMissingShape(self):
y = self._BuildSmallModel()
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
sess = tf.Session()
sess.run(y, options=run_options, run_metadata=run_metadata)
graph2 = tf.Graph()
y2 = tf.contrib.copy_graph.copy_op_to_graph(y, graph2, [])
self.assertEquals('<unknown>', str(y2.get_shape()))
# run_metadata records MatMul under a special name, so the shape cannot be filled.
tf.contrib.tfprof.tfprof_logger._fill_missing_graph_shape(graph2,
run_metadata)
self.assertEquals('<unknown>', str(y2.get_shape()))
if __name__ == '__main__':
tf.test.main() | unknown | codeparrot/codeparrot-clean | ||
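The two tests above exercise `_fill_missing_graph_shape`: shapes observed at run time (in `run_metadata`) are copied onto graph tensors whose static shape is unknown, and a tensor whose name does not match anything in the metadata stays unknown. A hypothetical, framework-free sketch of that behavior (`fill_missing_shapes` and its dict inputs are simplified stand-ins, not the tfprof API):

```python
# Sketch of shape back-filling: static_shapes maps tensor name -> shape
# tuple or None (unknown); runtime_shapes holds shapes observed during a
# traced run. Only unknown shapes with a matching name get filled in.

def fill_missing_shapes(static_shapes, runtime_shapes):
    filled = dict(static_shapes)
    for name, shape in static_shapes.items():
        if shape is None and name in runtime_shapes:
            filled[name] = runtime_shapes[name]
    return filled
```

This mirrors both tests: a matching `MatMul:0` entry gets its `(2, 2)` shape filled in, while a metadata entry recorded under a different name leaves the static shape unknown.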
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import, unicode_literals
import collections
import datetime
import errno
import io
import itertools
import json
import locale
import os
import platform
import re
import shutil
import subprocess
import socket
import sys
import time
import traceback
if os.name == 'nt':
import ctypes
from .compat import (
compat_cookiejar,
compat_expanduser,
compat_http_client,
compat_kwargs,
compat_str,
compat_urllib_error,
compat_urllib_request,
)
from .utils import (
escape_url,
ContentTooShortError,
date_from_str,
DateRange,
DEFAULT_OUTTMPL,
determine_ext,
DownloadError,
encodeFilename,
ExtractorError,
format_bytes,
formatSeconds,
get_term_width,
locked_file,
make_HTTPS_handler,
MaxDownloadsReached,
PagedList,
PostProcessingError,
platform_name,
preferredencoding,
SameFileError,
sanitize_filename,
subtitles_filename,
takewhile_inclusive,
UnavailableVideoError,
url_basename,
write_json_file,
write_string,
YoutubeDLHandler,
prepend_extension,
args_to_str,
)
from .cache import Cache
from .extractor import get_info_extractor, gen_extractors
from .downloader import get_suitable_downloader
from .downloader.rtmp import rtmpdump_version
from .postprocessor import (
FFmpegMergerPP,
FFmpegPostProcessor,
get_postprocessor,
)
from .version import __version__
class YoutubeDL(object):
"""YoutubeDL class.
YoutubeDL objects are the ones responsible of downloading the
actual video file and writing it to disk if the user has requested
it, among some other tasks. In most cases there should be one per
program. Given a video URL, the downloader doesn't know how to
extract all the needed information by itself (a task the
InfoExtractors perform), so it has to pass the URL to one of them.
For this, YoutubeDL objects have a method that allows
InfoExtractors to be registered in a given order. When it is passed
a URL, the YoutubeDL object hands it to the first InfoExtractor it
finds that reports being able to handle it. The InfoExtractor extracts
all the information about the video or videos the URL refers to, and
YoutubeDL processes the extracted information, possibly using a File
Downloader to download the video.
YoutubeDL objects accept a lot of parameters. In order not to saturate
the object constructor with arguments, it receives a dictionary of
options instead. These options are available through the params
attribute for the InfoExtractors to use. The YoutubeDL also
registers itself as the downloader in charge for the InfoExtractors
that are added to it, so this is a "mutual registration".
Available options:
username: Username for authentication purposes.
password: Password for authentication purposes.
videopassword: Password for accessing a video.
usenetrc: Use netrc for authentication instead.
verbose: Print additional info to stdout.
quiet: Do not print messages to stdout.
no_warnings: Do not print out anything for warnings.
forceurl: Force printing final URL.
forcetitle: Force printing title.
forceid: Force printing ID.
forcethumbnail: Force printing thumbnail URL.
forcedescription: Force printing description.
forcefilename: Force printing final filename.
forceduration: Force printing duration.
forcejson: Force printing info_dict as JSON.
dump_single_json: Force printing the info_dict of the whole playlist
(or video) as a single JSON line.
simulate: Do not download the video files.
format: Video format code. See options.py for more information.
format_limit: Highest quality format to try.
outtmpl: Template for output names.
restrictfilenames: Do not allow "&" and spaces in file names
ignoreerrors: Do not stop on download errors.
nooverwrites: Prevent overwriting files.
playliststart: Playlist item to start at.
playlistend: Playlist item to end at.
playlistreverse: Download playlist items in reverse order.
matchtitle: Download only matching titles.
rejecttitle: Reject downloads for matching titles.
logger: Log messages to a logging.Logger instance.
logtostderr: Log messages to stderr instead of stdout.
writedescription: Write the video description to a .description file
writeinfojson: Write the video description to a .info.json file
writeannotations: Write the video annotations to a .annotations.xml file
writethumbnail: Write the thumbnail image to a file
writesubtitles: Write the video subtitles to a file
writeautomaticsub: Write the automatic subtitles to a file
allsubtitles: Downloads all the subtitles of the video
(requires writesubtitles or writeautomaticsub)
listsubtitles: Lists all available subtitles for the video
subtitlesformat: Subtitle format [srt/sbv/vtt] (default=srt)
subtitleslangs: List of languages of the subtitles to download
keepvideo: Keep the video file after post-processing
daterange: A DateRange object, download only if the upload_date is in the range.
skip_download: Skip the actual download of the video file
cachedir: Location of the cache files in the filesystem.
False to disable filesystem cache.
noplaylist: Download single video instead of a playlist if in doubt.
age_limit: An integer representing the user's age in years.
Unsuitable videos for the given age are skipped.
min_views: An integer representing the minimum view count the video
must have in order to not be skipped.
Videos without view count information are always
downloaded. None for no limit.
max_views: An integer representing the maximum view count.
Videos that are more popular than that are not
downloaded.
Videos without view count information are always
downloaded. None for no limit.
download_archive: File name of a file where all downloads are recorded.
Videos already present in the file are not downloaded
again.
cookiefile: File name where cookies should be read from and dumped to.
nocheckcertificate:Do not verify SSL certificates
prefer_insecure: Use HTTP instead of HTTPS to retrieve information.
At the moment, this is only supported by YouTube.
proxy: URL of the proxy server to use
socket_timeout: Time to wait for unresponsive hosts, in seconds
bidi_workaround: Work around buggy terminals without bidirectional text
support, using fribidi
debug_printtraffic:Print out sent and received HTTP traffic
include_ads: Download ads as well
default_search: Prepend this string if an input url is not valid.
'auto' for elaborate guessing
encoding: Use this encoding instead of the system-specified one.
extract_flat: Do not resolve URLs, return the immediate result.
Pass in 'in_playlist' to only show this behavior for
playlist items.
postprocessors: A list of dictionaries, each with an entry
* key: The name of the postprocessor. See
youtube_dl/postprocessor/__init__.py for a list.
as well as any further keyword arguments for the
postprocessor.
progress_hooks: A list of functions that get called on download
progress, with a dictionary with the entries
* filename: The final filename
* status: One of "downloading" and "finished"
The dict may also have some of the following entries:
* downloaded_bytes: Bytes on disk
* total_bytes: Size of the whole file, None if unknown
* tmpfilename: The filename we're currently writing to
* eta: The estimated time in seconds, None if unknown
* speed: The download speed in bytes/second, None if
unknown
Progress hooks are guaranteed to be called at least once
(with status "finished") if the download is successful.
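The progress-hook contract above can be sketched as a minimal callback; the
dictionary keys are the documented ones, and the return values here are just
for illustration:

```python
# Minimal sketch of a progress hook, assuming only the documented keys.
def my_progress_hook(d):
    if d['status'] == 'finished':
        return 'Done downloading %s' % d['filename']
    elif d['status'] == 'downloading':
        done = d.get('downloaded_bytes')
        total = d.get('total_bytes')
        if done is not None and total:
            return 'Downloading: %.1f%%' % (100.0 * done / total)
        return 'Downloading...'

# It would then be passed via the params dict:
# params = {'progress_hooks': [my_progress_hook]}
```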
The following parameters are not used by YoutubeDL itself, they are used by
the FileDownloader:
nopart, updatetime, buffersize, ratelimit, min_filesize, max_filesize, test,
noresizebuffer, retries, continuedl, noprogress, consoletitle
The following options are used by the post processors:
prefer_ffmpeg: If True, use ffmpeg instead of avconv if both are available,
otherwise prefer avconv.
exec_cmd: Arbitrary command to run after downloading
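The postprocessors entries described above are plain dicts whose 'key' names
the postprocessor and whose remaining items become its keyword arguments; a
minimal sketch of that split (the 'ExecAfterDownload' key and exec_cmd value
are illustrative):

```python
# Sketch of how a postprocessor definition dict is split into the class
# name ('key') and the keyword arguments passed to the postprocessor.
pp_def_raw = {'key': 'ExecAfterDownload', 'exec_cmd': 'echo {}'}

pp_def = dict(pp_def_raw)   # copy so the original params stay untouched
pp_key = pp_def.pop('key')  # name used to look up the postprocessor class
# pp_def now holds only the keyword arguments for the postprocessor
```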
"""
params = None
_ies = []
_pps = []
_download_retcode = None
_num_downloads = None
_screen_file = None
def __init__(self, params=None, auto_init=True):
"""Create a FileDownloader object with the given options."""
if params is None:
params = {}
self._ies = []
self._ies_instances = {}
self._pps = []
self._progress_hooks = []
self._download_retcode = 0
self._num_downloads = 0
self._screen_file = [sys.stdout, sys.stderr][params.get('logtostderr', False)]
self._err_file = sys.stderr
self.params = params
self.cache = Cache(self)
if params.get('bidi_workaround', False):
try:
import pty
master, slave = pty.openpty()
width = get_term_width()
if width is None:
width_args = []
else:
width_args = ['-w', str(width)]
sp_kwargs = dict(
stdin=subprocess.PIPE,
stdout=slave,
stderr=self._err_file)
try:
self._output_process = subprocess.Popen(
['bidiv'] + width_args, **sp_kwargs
)
except OSError:
self._output_process = subprocess.Popen(
['fribidi', '-c', 'UTF-8'] + width_args, **sp_kwargs)
self._output_channel = os.fdopen(master, 'rb')
except OSError as ose:
if ose.errno == 2:
self.report_warning('Could not find fribidi executable, ignoring --bidi-workaround . Make sure that fribidi is an executable file in one of the directories in your $PATH.')
else:
raise
if (sys.version_info >= (3,) and sys.platform != 'win32' and
sys.getfilesystemencoding() in ['ascii', 'ANSI_X3.4-1968']
and not params.get('restrictfilenames', False)):
# On Python 3, the Unicode filesystem API will throw errors (#1474)
self.report_warning(
'Assuming --restrict-filenames since file system encoding '
'cannot encode all characters. '
'Set the LC_ALL environment variable to fix this.')
self.params['restrictfilenames'] = True
if '%(stitle)s' in self.params.get('outtmpl', ''):
self.report_warning('%(stitle)s is deprecated. Use the %(title)s and the --restrict-filenames flag (which also secures %(uploader)s et al) instead.')
self._setup_opener()
if auto_init:
self.print_debug_header()
self.add_default_info_extractors()
for pp_def_raw in self.params.get('postprocessors', []):
pp_class = get_postprocessor(pp_def_raw['key'])
pp_def = dict(pp_def_raw)
del pp_def['key']
pp = pp_class(self, **compat_kwargs(pp_def))
self.add_post_processor(pp)
for ph in self.params.get('progress_hooks', []):
self.add_progress_hook(ph)
def warn_if_short_id(self, argv):
# short YouTube ID starting with dash?
idxs = [
i for i, a in enumerate(argv)
if re.match(r'^-[0-9A-Za-z_-]{10}$', a)]
if idxs:
correct_argv = (
['youtube-dl'] +
[a for i, a in enumerate(argv) if i not in idxs] +
['--'] + [argv[i] for i in idxs]
)
self.report_warning(
'Long argument string detected. '
'Use -- to separate parameters and URLs, like this:\n%s\n' %
args_to_str(correct_argv))
def add_info_extractor(self, ie):
"""Add an InfoExtractor object to the end of the list."""
self._ies.append(ie)
self._ies_instances[ie.ie_key()] = ie
ie.set_downloader(self)
def get_info_extractor(self, ie_key):
"""
Get an instance of an IE with name ie_key; it will try to get one from
the _ies list, and if there is no instance it will create a new one and
add it to the extractor list.
"""
ie = self._ies_instances.get(ie_key)
if ie is None:
ie = get_info_extractor(ie_key)()
self.add_info_extractor(ie)
return ie
def add_default_info_extractors(self):
"""
Add the InfoExtractors returned by gen_extractors to the end of the list
"""
for ie in gen_extractors():
self.add_info_extractor(ie)
def add_post_processor(self, pp):
"""Add a PostProcessor object to the end of the chain."""
self._pps.append(pp)
pp.set_downloader(self)
def add_progress_hook(self, ph):
"""Add the progress hook (currently only for the file downloader)"""
self._progress_hooks.append(ph)
def _bidi_workaround(self, message):
if not hasattr(self, '_output_channel'):
return message
assert hasattr(self, '_output_process')
assert isinstance(message, compat_str)
line_count = message.count('\n') + 1
self._output_process.stdin.write((message + '\n').encode('utf-8'))
self._output_process.stdin.flush()
res = ''.join(self._output_channel.readline().decode('utf-8')
for _ in range(line_count))
return res[:-len('\n')]
def to_screen(self, message, skip_eol=False):
"""Print message to stdout if not in quiet mode."""
return self.to_stdout(message, skip_eol, check_quiet=True)
def _write_string(self, s, out=None):
write_string(s, out=out, encoding=self.params.get('encoding'))
def to_stdout(self, message, skip_eol=False, check_quiet=False):
"""Print message to stdout if not in quiet mode."""
if self.params.get('logger'):
self.params['logger'].debug(message)
elif not check_quiet or not self.params.get('quiet', False):
message = self._bidi_workaround(message)
terminator = ['\n', ''][skip_eol]
output = message + terminator
self._write_string(output, self._screen_file)
def to_stderr(self, message):
"""Print message to stderr."""
assert isinstance(message, compat_str)
if self.params.get('logger'):
self.params['logger'].error(message)
else:
message = self._bidi_workaround(message)
output = message + '\n'
self._write_string(output, self._err_file)
def to_console_title(self, message):
if not self.params.get('consoletitle', False):
return
if os.name == 'nt' and ctypes.windll.kernel32.GetConsoleWindow():
# c_wchar_p() might not be necessary if `message` is
# already of type unicode()
ctypes.windll.kernel32.SetConsoleTitleW(ctypes.c_wchar_p(message))
elif 'TERM' in os.environ:
self._write_string('\033]0;%s\007' % message, self._screen_file)
def save_console_title(self):
if not self.params.get('consoletitle', False):
return
if 'TERM' in os.environ:
# Save the title on stack
self._write_string('\033[22;0t', self._screen_file)
def restore_console_title(self):
if not self.params.get('consoletitle', False):
return
if 'TERM' in os.environ:
# Restore the title from stack
self._write_string('\033[23;0t', self._screen_file)
def __enter__(self):
self.save_console_title()
return self
def __exit__(self, *args):
self.restore_console_title()
if self.params.get('cookiefile') is not None:
self.cookiejar.save()
def trouble(self, message=None, tb=None):
"""Determine action to take when a download problem appears.
Depending on whether the downloader has been configured to ignore
download errors or not, this method may throw an exception or
not when errors are found, after printing the message.
tb, if given, is additional traceback information.
"""
if message is not None:
self.to_stderr(message)
if self.params.get('verbose'):
if tb is None:
if sys.exc_info()[0]: # if .trouble has been called from an except block
tb = ''
if hasattr(sys.exc_info()[1], 'exc_info') and sys.exc_info()[1].exc_info[0]:
tb += ''.join(traceback.format_exception(*sys.exc_info()[1].exc_info))
tb += compat_str(traceback.format_exc())
else:
tb_data = traceback.format_list(traceback.extract_stack())
tb = ''.join(tb_data)
self.to_stderr(tb)
if not self.params.get('ignoreerrors', False):
if sys.exc_info()[0] and hasattr(sys.exc_info()[1], 'exc_info') and sys.exc_info()[1].exc_info[0]:
exc_info = sys.exc_info()[1].exc_info
else:
exc_info = sys.exc_info()
raise DownloadError(message, exc_info)
self._download_retcode = 1
def report_warning(self, message):
'''
Print the message to stderr; it will be prefixed with 'WARNING:'.
If stderr is a tty file, the 'WARNING:' will be colored.
'''
if self.params.get('logger') is not None:
self.params['logger'].warning(message)
else:
if self.params.get('no_warnings'):
return
if self._err_file.isatty() and os.name != 'nt':
_msg_header = '\033[0;33mWARNING:\033[0m'
else:
_msg_header = 'WARNING:'
warning_message = '%s %s' % (_msg_header, message)
self.to_stderr(warning_message)
def report_error(self, message, tb=None):
'''
Do the same as trouble, but prefix the message with 'ERROR:', colored
in red if stderr is a tty file.
'''
if self._err_file.isatty() and os.name != 'nt':
_msg_header = '\033[0;31mERROR:\033[0m'
else:
_msg_header = 'ERROR:'
error_message = '%s %s' % (_msg_header, message)
self.trouble(error_message, tb)
def report_file_already_downloaded(self, file_name):
"""Report file has already been fully downloaded."""
try:
self.to_screen('[download] %s has already been downloaded' % file_name)
except UnicodeEncodeError:
self.to_screen('[download] The file has already been downloaded')
def prepare_filename(self, info_dict):
"""Generate the output filename."""
try:
template_dict = dict(info_dict)
template_dict['epoch'] = int(time.time())
autonumber_size = self.params.get('autonumber_size')
if autonumber_size is None:
autonumber_size = 5
autonumber_templ = '%0' + str(autonumber_size) + 'd'
template_dict['autonumber'] = autonumber_templ % self._num_downloads
if template_dict.get('playlist_index') is not None:
template_dict['playlist_index'] = '%0*d' % (len(str(template_dict['n_entries'])), template_dict['playlist_index'])
if template_dict.get('resolution') is None:
if template_dict.get('width') and template_dict.get('height'):
template_dict['resolution'] = '%dx%d' % (template_dict['width'], template_dict['height'])
elif template_dict.get('height'):
template_dict['resolution'] = '%sp' % template_dict['height']
elif template_dict.get('width'):
template_dict['resolution'] = '%dx?' % template_dict['width']
sanitize = lambda k, v: sanitize_filename(
compat_str(v),
restricted=self.params.get('restrictfilenames'),
is_id=(k == 'id'))
template_dict = dict((k, sanitize(k, v))
for k, v in template_dict.items()
if v is not None)
template_dict = collections.defaultdict(lambda: 'NA', template_dict)
outtmpl = self.params.get('outtmpl', DEFAULT_OUTTMPL)
tmpl = compat_expanduser(outtmpl)
filename = tmpl % template_dict
return filename
except ValueError as err:
self.report_error('Error in output template: ' + str(err) + ' (encoding: ' + repr(preferredencoding()) + ')')
return None
def _match_entry(self, info_dict):
""" Returns None iff the file should be downloaded """
video_title = info_dict.get('title', info_dict.get('id', 'video'))
if 'title' in info_dict:
# This can happen when we're just evaluating the playlist
title = info_dict['title']
matchtitle = self.params.get('matchtitle', False)
if matchtitle:
if not re.search(matchtitle, title, re.IGNORECASE):
return '"' + title + '" title did not match pattern "' + matchtitle + '"'
rejecttitle = self.params.get('rejecttitle', False)
if rejecttitle:
if re.search(rejecttitle, title, re.IGNORECASE):
return '"' + title + '" title matched reject pattern "' + rejecttitle + '"'
date = info_dict.get('upload_date', None)
if date is not None:
dateRange = self.params.get('daterange', DateRange())
if date not in dateRange:
return '%s upload date is not in range %s' % (date_from_str(date).isoformat(), dateRange)
view_count = info_dict.get('view_count', None)
if view_count is not None:
min_views = self.params.get('min_views')
if min_views is not None and view_count < min_views:
return 'Skipping %s, because it has not reached minimum view count (%d/%d)' % (video_title, view_count, min_views)
max_views = self.params.get('max_views')
if max_views is not None and view_count > max_views:
return 'Skipping %s, because it has exceeded the maximum view count (%d/%d)' % (video_title, view_count, max_views)
age_limit = self.params.get('age_limit')
if age_limit is not None:
actual_age_limit = info_dict.get('age_limit')
if actual_age_limit is None:
actual_age_limit = 0
if age_limit < actual_age_limit:
return 'Skipping "' + video_title + '" because it is age restricted'
if self.in_download_archive(info_dict):
return '%s has already been recorded in archive' % video_title
return None
@staticmethod
def add_extra_info(info_dict, extra_info):
'''Set the keys from extra_info in info dict if they are missing'''
for key, value in extra_info.items():
info_dict.setdefault(key, value)
def extract_info(self, url, download=True, ie_key=None, extra_info={},
process=True):
'''
Returns a list with a dictionary for each video we find.
If 'download', also downloads the videos.
extra_info is a dict containing the extra values to add to each result
'''
if ie_key:
ies = [self.get_info_extractor(ie_key)]
else:
ies = self._ies
for ie in ies:
if not ie.suitable(url):
continue
if not ie.working():
self.report_warning('The program functionality for this site has been marked as broken, '
'and will probably not work.')
try:
ie_result = ie.extract(url)
if ie_result is None: # Finished already (backwards compatibility; listformats and friends should be moved here)
break
if isinstance(ie_result, list):
# Backwards compatibility: old IE result format
ie_result = {
'_type': 'compat_list',
'entries': ie_result,
}
self.add_default_extra_info(ie_result, ie, url)
if process:
return self.process_ie_result(ie_result, download, extra_info)
else:
return ie_result
except ExtractorError as de: # An error we somewhat expected
self.report_error(compat_str(de), de.format_traceback())
break
except MaxDownloadsReached:
raise
except Exception as e:
if self.params.get('ignoreerrors', False):
self.report_error(compat_str(e), tb=compat_str(traceback.format_exc()))
break
else:
raise
else:
self.report_error('no suitable InfoExtractor for URL %s' % url)
def add_default_extra_info(self, ie_result, ie, url):
self.add_extra_info(ie_result, {
'extractor': ie.IE_NAME,
'webpage_url': url,
'webpage_url_basename': url_basename(url),
'extractor_key': ie.ie_key(),
})
def process_ie_result(self, ie_result, download=True, extra_info={}):
"""
Take the result of the ie (may be modified) and resolve all unresolved
references (URLs, playlist items).
It will also download the videos if 'download'.
Returns the resolved ie_result.
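For 'url_transparent' results, the embedding page's non-None fields override
the fields extracted from the target URL (except '_type' and 'url'); that
merge can be sketched with plain dicts (all values here are hypothetical):

```python
# Sketch of the 'url_transparent' merge: non-None fields from the
# embedding result override the fields extracted from the target URL.
info = {'id': 'abc', 'title': 'target title', 'uploader': 'someone'}
ie_result = {'_type': 'url_transparent', 'url': 'http://example.com',
             'title': 'embedding title', 'uploader': None}

force_properties = dict(
    (k, v) for k, v in ie_result.items() if v is not None)
for f in ('_type', 'url'):  # these must come from the target, not the embedder
    force_properties.pop(f, None)

new_result = info.copy()
new_result.update(force_properties)
```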
"""
result_type = ie_result.get('_type', 'video')
if result_type in ('url', 'url_transparent'):
extract_flat = self.params.get('extract_flat', False)
if ((extract_flat == 'in_playlist' and 'playlist' in extra_info) or
extract_flat is True):
if self.params.get('forcejson', False):
self.to_stdout(json.dumps(ie_result))
return ie_result
if result_type == 'video':
self.add_extra_info(ie_result, extra_info)
return self.process_video_result(ie_result, download=download)
elif result_type == 'url':
# We have to add extra_info to the results because it may be
# contained in a playlist
return self.extract_info(ie_result['url'],
download,
ie_key=ie_result.get('ie_key'),
extra_info=extra_info)
elif result_type == 'url_transparent':
# Use the information from the embedding page
info = self.extract_info(
ie_result['url'], ie_key=ie_result.get('ie_key'),
extra_info=extra_info, download=False, process=False)
force_properties = dict(
(k, v) for k, v in ie_result.items() if v is not None)
for f in ('_type', 'url'):
if f in force_properties:
del force_properties[f]
new_result = info.copy()
new_result.update(force_properties)
assert new_result.get('_type') != 'url_transparent'
return self.process_ie_result(
new_result, download=download, extra_info=extra_info)
elif result_type == 'playlist' or result_type == 'multi_video':
# We process each entry in the playlist
playlist = ie_result.get('title', None) or ie_result.get('id', None)
self.to_screen('[download] Downloading playlist: %s' % playlist)
playlist_results = []
playliststart = self.params.get('playliststart', 1) - 1
playlistend = self.params.get('playlistend', None)
# For backwards compatibility, interpret -1 as whole list
if playlistend == -1:
playlistend = None
ie_entries = ie_result['entries']
if isinstance(ie_entries, list):
n_all_entries = len(ie_entries)
entries = ie_entries[playliststart:playlistend]
n_entries = len(entries)
self.to_screen(
"[%s] playlist %s: Collected %d video ids (downloading %d of them)" %
(ie_result['extractor'], playlist, n_all_entries, n_entries))
elif isinstance(ie_entries, PagedList):
entries = ie_entries.getslice(
playliststart, playlistend)
n_entries = len(entries)
self.to_screen(
"[%s] playlist %s: Downloading %d videos" %
(ie_result['extractor'], playlist, n_entries))
else: # iterable
entries = list(itertools.islice(
ie_entries, playliststart, playlistend))
n_entries = len(entries)
self.to_screen(
"[%s] playlist %s: Downloading %d videos" %
(ie_result['extractor'], playlist, n_entries))
if self.params.get('playlistreverse', False):
entries = entries[::-1]
for i, entry in enumerate(entries, 1):
self.to_screen('[download] Downloading video %s of %s' % (i, n_entries))
extra = {
'n_entries': n_entries,
'playlist': playlist,
'playlist_id': ie_result.get('id'),
'playlist_title': ie_result.get('title'),
'playlist_index': i + playliststart,
'extractor': ie_result['extractor'],
'webpage_url': ie_result['webpage_url'],
'webpage_url_basename': url_basename(ie_result['webpage_url']),
'extractor_key': ie_result['extractor_key'],
}
reason = self._match_entry(entry)
if reason is not None:
self.to_screen('[download] ' + reason)
continue
entry_result = self.process_ie_result(entry,
download=download,
extra_info=extra)
playlist_results.append(entry_result)
ie_result['entries'] = playlist_results
return ie_result
elif result_type == 'compat_list':
self.report_warning(
'Extractor %s returned a compat_list result. '
'It needs to be updated.' % ie_result.get('extractor'))
def _fixup(r):
self.add_extra_info(
r,
{
'extractor': ie_result['extractor'],
'webpage_url': ie_result['webpage_url'],
'webpage_url_basename': url_basename(ie_result['webpage_url']),
'extractor_key': ie_result['extractor_key'],
}
)
return r
ie_result['entries'] = [
self.process_ie_result(_fixup(r), download, extra_info)
for r in ie_result['entries']
]
return ie_result
else:
raise Exception('Invalid result type: %s' % result_type)
def select_format(self, format_spec, available_formats):
if format_spec == 'best' or format_spec is None:
return available_formats[-1]
elif format_spec == 'worst':
return available_formats[0]
elif format_spec == 'bestaudio':
audio_formats = [
f for f in available_formats
if f.get('vcodec') == 'none']
if audio_formats:
return audio_formats[-1]
elif format_spec == 'worstaudio':
audio_formats = [
f for f in available_formats
if f.get('vcodec') == 'none']
if audio_formats:
return audio_formats[0]
elif format_spec == 'bestvideo':
video_formats = [
f for f in available_formats
if f.get('acodec') == 'none']
if video_formats:
return video_formats[-1]
elif format_spec == 'worstvideo':
video_formats = [
f for f in available_formats
if f.get('acodec') == 'none']
if video_formats:
return video_formats[0]
else:
extensions = ['mp4', 'flv', 'webm', '3gp', 'm4a']
if format_spec in extensions:
filter_f = lambda f: f['ext'] == format_spec
else:
filter_f = lambda f: f['format_id'] == format_spec
matches = list(filter(filter_f, available_formats))
if matches:
return matches[-1]
return None
def process_video_result(self, info_dict, download=True):
assert info_dict.get('_type', 'video') == 'video'
if 'id' not in info_dict:
raise ExtractorError('Missing "id" field in extractor result')
if 'title' not in info_dict:
raise ExtractorError('Missing "title" field in extractor result')
if 'playlist' not in info_dict:
# It isn't part of a playlist
info_dict['playlist'] = None
info_dict['playlist_index'] = None
thumbnails = info_dict.get('thumbnails')
if thumbnails:
thumbnails.sort(key=lambda t: (
t.get('width'), t.get('height'), t.get('url')))
for t in thumbnails:
if 'width' in t and 'height' in t:
t['resolution'] = '%dx%d' % (t['width'], t['height'])
if thumbnails and 'thumbnail' not in info_dict:
info_dict['thumbnail'] = thumbnails[-1]['url']
if 'display_id' not in info_dict and 'id' in info_dict:
info_dict['display_id'] = info_dict['id']
if info_dict.get('upload_date') is None and info_dict.get('timestamp') is not None:
# Working around negative timestamps in Windows
# (see http://bugs.python.org/issue1646728)
if info_dict['timestamp'] < 0 and os.name == 'nt':
info_dict['timestamp'] = 0
upload_date = datetime.datetime.utcfromtimestamp(
info_dict['timestamp'])
info_dict['upload_date'] = upload_date.strftime('%Y%m%d')
# These extractors handle format selection themselves
if info_dict['extractor'] in ['Youku']:
if download:
self.process_info(info_dict)
return info_dict
# We now pick which formats have to be downloaded
if info_dict.get('formats') is None:
# There's only one format available
formats = [info_dict]
else:
formats = info_dict['formats']
if not formats:
raise ExtractorError('No video formats found!')
# We check that all the formats have the format and format_id fields
for i, format in enumerate(formats):
if 'url' not in format:
raise ExtractorError('Missing "url" key in result (index %d)' % i)
if format.get('format_id') is None:
format['format_id'] = compat_str(i)
if format.get('format') is None:
format['format'] = '{id} - {res}{note}'.format(
id=format['format_id'],
res=self.format_resolution(format),
note=' ({0})'.format(format['format_note']) if format.get('format_note') is not None else '',
)
# Automatically determine file extension if missing
if 'ext' not in format:
format['ext'] = determine_ext(format['url']).lower()
format_limit = self.params.get('format_limit', None)
if format_limit:
formats = list(takewhile_inclusive(
lambda f: f['format_id'] != format_limit, formats
))
# TODO Central sorting goes here
if formats[0] is not info_dict:
# only set the 'formats' field if the original info_dict lists them;
# otherwise we end up with a circular reference: the first (and only)
# element in the 'formats' field in info_dict is info_dict itself,
# which can't be exported to JSON
info_dict['formats'] = formats
if self.params.get('listformats', None):
self.list_formats(info_dict)
return
req_format = self.params.get('format')
if req_format is None:
req_format = 'best'
formats_to_download = []
# The -1 is for supporting YoutubeIE
if req_format in ('-1', 'all'):
formats_to_download = formats
else:
for rfstr in req_format.split(','):
# We can accept formats requested in the form 34/5/best; we pick
# the first one that is available, starting from the left
req_formats = rfstr.split('/')
for rf in req_formats:
if re.match(r'.+?\+.+?', rf) is not None:
# Two formats have been requested like '137+139'
format_1, format_2 = rf.split('+')
formats_info = (self.select_format(format_1, formats),
self.select_format(format_2, formats))
if all(formats_info):
# The first format must contain the video and the
# second the audio
if formats_info[0].get('vcodec') == 'none':
self.report_error('The first format must '
'contain the video, try using '
'"-f %s+%s"' % (format_2, format_1))
return
selected_format = {
'requested_formats': formats_info,
'format': rf,
'ext': formats_info[0]['ext'],
}
else:
selected_format = None
else:
selected_format = self.select_format(rf, formats)
if selected_format is not None:
formats_to_download.append(selected_format)
break
if not formats_to_download:
raise ExtractorError('requested format not available',
expected=True)
if download:
if len(formats_to_download) > 1:
self.to_screen('[info] %s: downloading video in %s formats' % (info_dict['id'], len(formats_to_download)))
for format in formats_to_download:
new_info = dict(info_dict)
new_info.update(format)
self.process_info(new_info)
# We update the info dict with the best quality format (backwards compatibility)
info_dict.update(formats_to_download[-1])
return info_dict
def process_info(self, info_dict):
"""Process a single resolved IE result."""
assert info_dict.get('_type', 'video') == 'video'
max_downloads = self.params.get('max_downloads')
if max_downloads is not None:
if self._num_downloads >= int(max_downloads):
raise MaxDownloadsReached()
info_dict['fulltitle'] = info_dict['title']
if len(info_dict['title']) > 200:
info_dict['title'] = info_dict['title'][:197] + '...'
# Keep for backwards compatibility
info_dict['stitle'] = info_dict['title']
if 'format' not in info_dict:
info_dict['format'] = info_dict['ext']
reason = self._match_entry(info_dict)
if reason is not None:
self.to_screen('[download] ' + reason)
return
self._num_downloads += 1
filename = self.prepare_filename(info_dict)
# Forced printings
if self.params.get('forcetitle', False):
self.to_stdout(info_dict['fulltitle'])
if self.params.get('forceid', False):
self.to_stdout(info_dict['id'])
if self.params.get('forceurl', False):
if info_dict.get('requested_formats') is not None:
for f in info_dict['requested_formats']:
self.to_stdout(f['url'] + f.get('play_path', ''))
else:
# For RTMP URLs, also include the playpath
self.to_stdout(info_dict['url'] + info_dict.get('play_path', ''))
if self.params.get('forcethumbnail', False) and info_dict.get('thumbnail') is not None:
self.to_stdout(info_dict['thumbnail'])
if self.params.get('forcedescription', False) and info_dict.get('description') is not None:
self.to_stdout(info_dict['description'])
if self.params.get('forcefilename', False) and filename is not None:
self.to_stdout(filename)
if self.params.get('forceduration', False) and info_dict.get('duration') is not None:
self.to_stdout(formatSeconds(info_dict['duration']))
if self.params.get('forceformat', False):
self.to_stdout(info_dict['format'])
if self.params.get('forcejson', False):
info_dict['_filename'] = filename
self.to_stdout(json.dumps(info_dict))
if self.params.get('dump_single_json', False):
info_dict['_filename'] = filename
# Do nothing else if in simulate mode
if self.params.get('simulate', False):
return
if filename is None:
return
try:
dn = os.path.dirname(encodeFilename(filename))
if dn and not os.path.exists(dn):
os.makedirs(dn)
except (OSError, IOError) as err:
self.report_error('unable to create directory ' + compat_str(err))
return
if self.params.get('writedescription', False):
descfn = filename + '.description'
if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(descfn)):
self.to_screen('[info] Video description is already present')
elif info_dict.get('description') is None:
self.report_warning('There\'s no description to write.')
else:
try:
self.to_screen('[info] Writing video description to: ' + descfn)
with io.open(encodeFilename(descfn), 'w', encoding='utf-8') as descfile:
descfile.write(info_dict['description'])
except (OSError, IOError):
self.report_error('Cannot write description file ' + descfn)
return
if self.params.get('writeannotations', False):
annofn = filename + '.annotations.xml'
if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(annofn)):
self.to_screen('[info] Video annotations are already present')
else:
try:
self.to_screen('[info] Writing video annotations to: ' + annofn)
with io.open(encodeFilename(annofn), 'w', encoding='utf-8') as annofile:
annofile.write(info_dict['annotations'])
except (KeyError, TypeError):
self.report_warning('There are no annotations to write.')
except (OSError, IOError):
self.report_error('Cannot write annotations file: ' + annofn)
return
subtitles_are_requested = any([self.params.get('writesubtitles', False),
self.params.get('writeautomaticsub')])
if subtitles_are_requested and 'subtitles' in info_dict and info_dict['subtitles']:
# subtitle download errors are already managed as troubles in the relevant IE;
# that way it will silently go on when used with an unsupporting IE
subtitles = info_dict['subtitles']
sub_format = self.params.get('subtitlesformat', 'srt')
for sub_lang in subtitles.keys():
sub = subtitles[sub_lang]
if sub is None:
continue
try:
sub_filename = subtitles_filename(filename, sub_lang, sub_format)
if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(sub_filename)):
self.to_screen('[info] Video subtitle %s.%s is already present' % (sub_lang, sub_format))
else:
self.to_screen('[info] Writing video subtitles to: ' + sub_filename)
with io.open(encodeFilename(sub_filename), 'w', encoding='utf-8') as subfile:
subfile.write(sub)
except (OSError, IOError):
self.report_error('Cannot write subtitles file ' + sub_filename)
return
if self.params.get('writeinfojson', False):
infofn = os.path.splitext(filename)[0] + '.info.json'
if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(infofn)):
self.to_screen('[info] Video description metadata is already present')
else:
self.to_screen('[info] Writing video description metadata as JSON to: ' + infofn)
try:
write_json_file(info_dict, infofn)
except (OSError, IOError):
self.report_error('Cannot write metadata to JSON file ' + infofn)
return
if self.params.get('writethumbnail', False):
if info_dict.get('thumbnail') is not None:
thumb_format = determine_ext(info_dict['thumbnail'], 'jpg')
thumb_filename = os.path.splitext(filename)[0] + '.' + thumb_format
if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(thumb_filename)):
self.to_screen('[%s] %s: Thumbnail is already present' %
(info_dict['extractor'], info_dict['id']))
else:
self.to_screen('[%s] %s: Downloading thumbnail ...' %
(info_dict['extractor'], info_dict['id']))
try:
uf = self.urlopen(info_dict['thumbnail'])
with open(thumb_filename, 'wb') as thumbf:
shutil.copyfileobj(uf, thumbf)
self.to_screen('[%s] %s: Writing thumbnail to: %s' %
(info_dict['extractor'], info_dict['id'], thumb_filename))
except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
self.report_warning('Unable to download thumbnail "%s": %s' %
(info_dict['thumbnail'], compat_str(err)))
if not self.params.get('skip_download', False):
if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(filename)):
success = True
else:
try:
def dl(name, info):
fd = get_suitable_downloader(info)(self, self.params)
for ph in self._progress_hooks:
fd.add_progress_hook(ph)
if self.params.get('verbose'):
self.to_stdout('[debug] Invoking downloader on %r' % info.get('url'))
return fd.download(name, info)
if info_dict.get('requested_formats') is not None:
downloaded = []
success = True
merger = FFmpegMergerPP(self, not self.params.get('keepvideo'))
if not merger._executable:
postprocessors = []
self.report_warning('You have requested multiple '
'formats, but neither ffmpeg nor avconv is installed.'
' The formats won\'t be merged')
else:
postprocessors = [merger]
for f in info_dict['requested_formats']:
new_info = dict(info_dict)
new_info.update(f)
fname = self.prepare_filename(new_info)
fname = prepend_extension(fname, 'f%s' % f['format_id'])
downloaded.append(fname)
partial_success = dl(fname, new_info)
success = success and partial_success
info_dict['__postprocessors'] = postprocessors
info_dict['__files_to_merge'] = downloaded
else:
# Just a single file
success = dl(filename, info_dict)
except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
self.report_error('unable to download video data: %s' % str(err))
return
except (OSError, IOError) as err:
raise UnavailableVideoError(err)
except (ContentTooShortError, ) as err:
self.report_error('content too short (expected %s bytes and served %s)' % (err.expected, err.downloaded))
return
if success:
try:
self.post_process(filename, info_dict)
except (PostProcessingError) as err:
self.report_error('postprocessing: %s' % str(err))
return
self.record_download_archive(info_dict)
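In the `requested_formats` branch above, each format is downloaded to its own temporary file whose name carries the format id just before the real extension (via the `prepend_extension(fname, 'f%s' % f['format_id'])` call), so the parts to be merged sit side by side. A minimal sketch of that naming scheme; `tagged_filename` is a hypothetical stand-in, not the real helper from youtube-dl's utils:

```python
import os.path

def tagged_filename(filename, format_id):
    # Hypothetical helper mirroring the prepend_extension(fname,
    # 'f%s' % format_id) call above: inject the format id just
    # before the file extension.
    name, real_ext = os.path.splitext(filename)
    return '%s.f%s%s' % (name, format_id, real_ext)
```

For example, a video and audio pair for one output would land in `video.f137.mp4` and `video.f140.mp4` before merging.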
def download(self, url_list):
"""Download a given list of URLs."""
outtmpl = self.params.get('outtmpl', DEFAULT_OUTTMPL)
if (len(url_list) > 1 and
'%' not in outtmpl and
self.params.get('max_downloads') != 1):
raise SameFileError(outtmpl)
for url in url_list:
try:
# It also downloads the videos
res = self.extract_info(url)
except UnavailableVideoError:
self.report_error('unable to download video')
except MaxDownloadsReached:
self.to_screen('[info] Maximum number of downloaded files reached.')
raise
else:
if self.params.get('dump_single_json', False):
self.to_stdout(json.dumps(res))
return self._download_retcode
def download_with_info_file(self, info_filename):
with io.open(info_filename, 'r', encoding='utf-8') as f:
info = json.load(f)
try:
self.process_ie_result(info, download=True)
except DownloadError:
webpage_url = info.get('webpage_url')
if webpage_url is not None:
self.report_warning('The info failed to download, trying with "%s"' % webpage_url)
return self.download([webpage_url])
else:
raise
return self._download_retcode
def post_process(self, filename, ie_info):
"""Run all the postprocessors on the given file."""
info = dict(ie_info)
info['filepath'] = filename
keep_video = None
pps_chain = []
if ie_info.get('__postprocessors') is not None:
pps_chain.extend(ie_info['__postprocessors'])
pps_chain.extend(self._pps)
for pp in pps_chain:
try:
keep_video_wish, new_info = pp.run(info)
if keep_video_wish is not None:
if keep_video_wish:
keep_video = keep_video_wish
elif keep_video is None:
# No clear decision yet, let IE decide
keep_video = keep_video_wish
except PostProcessingError as e:
self.report_error(e.msg)
if keep_video is False and not self.params.get('keepvideo', False):
try:
self.to_screen('Deleting original file %s (pass -k to keep)' % filename)
os.remove(encodeFilename(filename))
except (IOError, OSError):
self.report_warning('Unable to remove downloaded video file')
def _make_archive_id(self, info_dict):
# Future-proof against any change in case
# and backwards compatibility with prior versions
extractor = info_dict.get('extractor_key')
if extractor is None:
if 'id' in info_dict:
extractor = info_dict.get('ie_key') # key in a playlist
if extractor is None:
return None # Incomplete video information
return extractor.lower() + ' ' + info_dict['id']
def in_download_archive(self, info_dict):
fn = self.params.get('download_archive')
if fn is None:
return False
vid_id = self._make_archive_id(info_dict)
if vid_id is None:
return False # Incomplete video information
try:
with locked_file(fn, 'r', encoding='utf-8') as archive_file:
for line in archive_file:
if line.strip() == vid_id:
return True
except IOError as ioe:
if ioe.errno != errno.ENOENT:
raise
return False
def record_download_archive(self, info_dict):
fn = self.params.get('download_archive')
if fn is None:
return
vid_id = self._make_archive_id(info_dict)
assert vid_id
with locked_file(fn, 'a', encoding='utf-8') as archive_file:
archive_file.write(vid_id + '\n')
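The archive functions above define a simple on-disk format: one entry per line, each entry being the lower-cased extractor key, a space, and the video id. A standalone sketch of that round trip (reimplementing the logic of `_make_archive_id` and `in_download_archive` without file I/O):

```python
def make_archive_id(extractor_key, video_id):
    # Mirrors _make_archive_id above: lower-cased extractor key,
    # a space, then the video id.
    return extractor_key.lower() + ' ' + video_id

def seen_before(archive_lines, extractor_key, video_id):
    # Mirrors the line-by-line scan in in_download_archive.
    target = make_archive_id(extractor_key, video_id)
    return any(line.strip() == target for line in archive_lines)
```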
@staticmethod
def format_resolution(format, default='unknown'):
if format.get('vcodec') == 'none':
return 'audio only'
if format.get('resolution') is not None:
return format['resolution']
if format.get('height') is not None:
if format.get('width') is not None:
res = '%sx%s' % (format['width'], format['height'])
else:
res = '%sp' % format['height']
elif format.get('width') is not None:
res = '?x%d' % format['width']
else:
res = default
return res
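The resolution-string ladder above is self-contained; a standalone copy makes the fallthrough order easy to see ('audio only' wins, then an explicit 'resolution' string, then width x height, then height alone as '<h>p', then a lone width as '?x<w>'):

```python
def format_resolution(fmt, default='unknown'):
    # Standalone copy of the decision ladder in the static method above.
    if fmt.get('vcodec') == 'none':
        return 'audio only'
    if fmt.get('resolution') is not None:
        return fmt['resolution']
    if fmt.get('height') is not None:
        if fmt.get('width') is not None:
            return '%sx%s' % (fmt['width'], fmt['height'])
        return '%sp' % fmt['height']
    if fmt.get('width') is not None:
        return '?x%d' % fmt['width']
    return default
```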
def _format_note(self, fdict):
res = ''
if fdict.get('ext') in ['f4f', 'f4m']:
res += '(unsupported) '
if fdict.get('format_note') is not None:
res += fdict['format_note'] + ' '
if fdict.get('tbr') is not None:
res += '%4dk ' % fdict['tbr']
if fdict.get('container') is not None:
if res:
res += ', '
res += '%s container' % fdict['container']
if (fdict.get('vcodec') is not None and
fdict.get('vcodec') != 'none'):
if res:
res += ', '
res += fdict['vcodec']
if fdict.get('vbr') is not None:
res += '@'
elif fdict.get('vbr') is not None and fdict.get('abr') is not None:
res += 'video@'
if fdict.get('vbr') is not None:
res += '%4dk' % fdict['vbr']
if fdict.get('fps') is not None:
res += ', %sfps' % fdict['fps']
if fdict.get('acodec') is not None:
if res:
res += ', '
if fdict['acodec'] == 'none':
res += 'video only'
else:
res += '%-5s' % fdict['acodec']
elif fdict.get('abr') is not None:
if res:
res += ', '
res += 'audio'
if fdict.get('abr') is not None:
res += '@%3dk' % fdict['abr']
if fdict.get('asr') is not None:
res += ' (%5dHz)' % fdict['asr']
if fdict.get('filesize') is not None:
if res:
res += ', '
res += format_bytes(fdict['filesize'])
elif fdict.get('filesize_approx') is not None:
if res:
res += ', '
res += '~' + format_bytes(fdict['filesize_approx'])
return res
def list_formats(self, info_dict):
def line(format, idlen=20):
return (('%-' + compat_str(idlen + 1) + 's%-10s%-12s%s') % (
format['format_id'],
format['ext'],
self.format_resolution(format),
self._format_note(format),
))
formats = info_dict.get('formats', [info_dict])
idlen = max(len('format code'),
max(len(f['format_id']) for f in formats))
formats_s = [line(f, idlen) for f in formats]
if len(formats) > 1:
formats_s[0] += (' ' if self._format_note(formats[0]) else '') + '(worst)'
formats_s[-1] += (' ' if self._format_note(formats[-1]) else '') + '(best)'
header_line = line({
'format_id': 'format code', 'ext': 'extension',
'resolution': 'resolution', 'format_note': 'note'}, idlen=idlen)
self.to_screen('[info] Available formats for %s:\n%s\n%s' %
(info_dict['id'], header_line, '\n'.join(formats_s)))
def urlopen(self, req):
""" Start an HTTP download """
# According to RFC 3986, URLs cannot contain non-ASCII characters; however, this is
# not always respected by websites, and some give out URLs with non-percent-encoded
# non-ASCII characters (see telemb.py, ard.py [#3412])
# urllib chokes on URLs with non-ASCII characters (see http://bugs.python.org/issue3991)
# To work around aforementioned issue we will replace request's original URL with
# percent-encoded one
req_is_string = isinstance(req, basestring if sys.version_info < (3, 0) else compat_str)
url = req if req_is_string else req.get_full_url()
url_escaped = escape_url(url)
# Substitute URL if any change after escaping
if url != url_escaped:
if req_is_string:
req = url_escaped
else:
req = compat_urllib_request.Request(
url_escaped, data=req.data, headers=req.headers,
origin_req_host=req.origin_req_host, unverifiable=req.unverifiable)
return self._opener.open(req, timeout=self._socket_timeout)
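The workaround described in the comments above boils down to percent-encoding any non-ASCII bytes while leaving the usual URL punctuation alone. A rough one-function sketch of that behavior using the standard library; note the real `escape_url` in youtube-dl splits the URL into components first, so this simplification is only an illustration:

```python
from urllib.parse import quote

def escape_non_ascii(url):
    # Percent-encode anything outside the URL-safe ASCII set (quote()
    # UTF-8-encodes non-ASCII characters first), keeping common URL
    # punctuation and existing percent-escapes untouched.
    return quote(url, safe="%/:=&?~#+!$,;'@()*[]")
```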
def print_debug_header(self):
if not self.params.get('verbose'):
return
if type('') is not compat_str:
# Python 2.6 on SLES11 SP1 (https://github.com/rg3/youtube-dl/issues/3326)
self.report_warning(
'Your Python is broken! Update to a newer and supported version')
stdout_encoding = getattr(
sys.stdout, 'encoding', 'missing (%s)' % type(sys.stdout).__name__)
encoding_str = (
'[debug] Encodings: locale %s, fs %s, out %s, pref %s\n' % (
locale.getpreferredencoding(),
sys.getfilesystemencoding(),
stdout_encoding,
self.get_encoding()))
write_string(encoding_str, encoding=None)
self._write_string('[debug] youtube-dl version ' + __version__ + '\n')
try:
sp = subprocess.Popen(
['git', 'rev-parse', '--short', 'HEAD'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE,
cwd=os.path.dirname(os.path.abspath(__file__)))
out, err = sp.communicate()
out = out.decode().strip()
if re.match('[0-9a-f]+', out):
self._write_string('[debug] Git HEAD: ' + out + '\n')
except Exception:
try:
sys.exc_clear()
except Exception:
pass
self._write_string('[debug] Python version %s - %s\n' % (
platform.python_version(), platform_name()))
exe_versions = FFmpegPostProcessor.get_versions()
exe_versions['rtmpdump'] = rtmpdump_version()
exe_str = ', '.join(
'%s %s' % (exe, v)
for exe, v in sorted(exe_versions.items())
if v
)
if not exe_str:
exe_str = 'none'
self._write_string('[debug] exe versions: %s\n' % exe_str)
proxy_map = {}
for handler in self._opener.handlers:
if hasattr(handler, 'proxies'):
proxy_map.update(handler.proxies)
self._write_string('[debug] Proxy map: ' + compat_str(proxy_map) + '\n')
def _setup_opener(self):
timeout_val = self.params.get('socket_timeout')
self._socket_timeout = 600 if timeout_val is None else float(timeout_val)
opts_cookiefile = self.params.get('cookiefile')
opts_proxy = self.params.get('proxy')
if opts_cookiefile is None:
self.cookiejar = compat_cookiejar.CookieJar()
else:
self.cookiejar = compat_cookiejar.MozillaCookieJar(
opts_cookiefile)
if os.access(opts_cookiefile, os.R_OK):
self.cookiejar.load()
cookie_processor = compat_urllib_request.HTTPCookieProcessor(
self.cookiejar)
if opts_proxy is not None:
if opts_proxy == '':
proxies = {}
else:
proxies = {'http': opts_proxy, 'https': opts_proxy}
else:
proxies = compat_urllib_request.getproxies()
# Set HTTPS proxy to HTTP one if given (https://github.com/rg3/youtube-dl/issues/805)
if 'http' in proxies and 'https' not in proxies:
proxies['https'] = proxies['http']
proxy_handler = compat_urllib_request.ProxyHandler(proxies)
debuglevel = 1 if self.params.get('debug_printtraffic') else 0
https_handler = make_HTTPS_handler(
self.params.get('nocheckcertificate', False), debuglevel=debuglevel)
ydlh = YoutubeDLHandler(debuglevel=debuglevel)
opener = compat_urllib_request.build_opener(
https_handler, proxy_handler, cookie_processor, ydlh)
# Delete the default user-agent header, which would otherwise apply in
# cases where our custom HTTP handler doesn't come into play
# (See https://github.com/rg3/youtube-dl/issues/1309 for details)
opener.addheaders = []
self._opener = opener
def encode(self, s):
if isinstance(s, bytes):
return s # Already encoded
try:
return s.encode(self.get_encoding())
except UnicodeEncodeError as err:
err.reason = err.reason + '. Check your system encoding configuration or use the --encoding option.'
raise
def get_encoding(self):
encoding = self.params.get('encoding')
if encoding is None:
encoding = preferredencoding()
return encoding | unknown | codeparrot/codeparrot-clean | ||
from __future__ import absolute_import
from pip._internal.cli.base_command import Command
from pip._internal.cli.status_codes import SUCCESS
from pip._internal.exceptions import CommandError
class HelpCommand(Command):
"""Show help for commands"""
name = 'help'
usage = """
%prog <command>"""
summary = 'Show help for commands.'
ignore_require_venv = True
def run(self, options, args):
from pip._internal.commands import commands_dict, get_similar_commands
try:
# 'pip help' with no args is handled by pip.__init__.parseopt()
cmd_name = args[0] # the command we need help for
except IndexError:
return SUCCESS
if cmd_name not in commands_dict:
guess = get_similar_commands(cmd_name)
msg = ['unknown command "%s"' % cmd_name]
if guess:
msg.append('maybe you meant "%s"' % guess)
raise CommandError(' - '.join(msg))
command = commands_dict[cmd_name]()
command.parser.print_help()
return SUCCESS | unknown | codeparrot/codeparrot-clean | ||
##########################################################################
#
# Copyright (c) 2008-2013, Image Engine Design Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# * Neither the name of Image Engine Design nor the names of any
# other contributors to this software may be used to endorse or
# promote products derived from this software without specific prior
# written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
##########################################################################
import unittest
import IECore
import IECoreGL
IECoreGL.init( False )
import os.path
import os
import shutil
class CameraTest( unittest.TestCase ) :
def testPositioning( self ) :
# render a plane at z = 0 with the default camera
r = IECoreGL.Renderer()
r.setOption( "gl:mode", IECore.StringData( "immediate" ) )
r.setOption( "gl:searchPath:shader", IECore.StringData( os.path.dirname( __file__ ) + "/shaders" ) )
r.display( os.path.dirname( __file__ ) + "/output/testCamera.tif", "tiff", "rgba", {} )
r.camera( "main", { "resolution" : IECore.V2iData( IECore.V2i( 512 ) ), "projection" : IECore.StringData( "perspective" ) } )
r.worldBegin()
r.shader( "surface", "color", { "colorValue" : IECore.Color3fData( IECore.Color3f( 1, 0, 0 ) ) } )
IECore.MeshPrimitive.createPlane( IECore.Box2f( IECore.V2f( -0.1 ), IECore.V2f( 0.1 ) ) ).render( r )
r.worldEnd()
# check that nothing appears in the output image
i = IECore.Reader.create( os.path.dirname( __file__ ) + "/output/testCamera.tif" ).read()
e = IECore.PrimitiveEvaluator.create( i )
result = e.createResult()
a = e.A()
e.pointAtUV( IECore.V2f( 0.5, 0.5 ), result )
self.assertEqual( result.floatPrimVar( a ), 0 )
# render a plane at z = 0 with the camera moved back a touch to see it
r = IECoreGL.Renderer()
r.setOption( "gl:mode", IECore.StringData( "immediate" ) )
r.setOption( "gl:searchPath:shader", IECore.StringData( os.path.dirname( __file__ ) + "/shaders" ) )
r.display( os.path.dirname( __file__ ) + "/output/testCamera.tif", "tiff", "rgba", {} )
r.transformBegin()
r.concatTransform( IECore.M44f.createTranslated( IECore.V3f( 0, 0, 1 ) ) )
r.camera( "main", { "resolution" : IECore.V2iData( IECore.V2i( 512 ) ), "projection" : IECore.StringData( "perspective" ) } )
r.transformEnd()
r.worldBegin()
r.shader( "surface", "color", { "colorValue" : IECore.Color3fData( IECore.Color3f( 1, 0, 0 ) ) } )
IECore.MeshPrimitive.createPlane( IECore.Box2f( IECore.V2f( -0.1 ), IECore.V2f( 0.1 ) ) ).render( r )
r.worldEnd()
# check that something appears in the output image
i = IECore.Reader.create( os.path.dirname( __file__ ) + "/output/testCamera.tif" ).read()
e = IECore.PrimitiveEvaluator.create( i )
result = e.createResult()
a = e.A()
e.pointAtUV( IECore.V2f( 0.5, 0.5 ), result )
self.assertEqual( result.floatPrimVar( a ), 1 )
def testXYOrientation( self ) :
# render a red square at x==1, and a green one at y==1
r = IECoreGL.Renderer()
r.setOption( "gl:mode", IECore.StringData( "immediate" ) )
r.setOption( "gl:searchPath:shader", IECore.StringData( os.path.dirname( __file__ ) + "/shaders" ) )
r.display( os.path.dirname( __file__ ) + "/output/testCamera.tif", "tiff", "rgba", {} )
r.transformBegin()
r.concatTransform( IECore.M44f.createTranslated( IECore.V3f( 0, 0, 1 ) ) )
r.camera( "main", { "resolution" : IECore.V2iData( IECore.V2i( 512 ) ) } )
r.transformEnd()
r.worldBegin()
r.shader( "surface", "color", { "colorValue" : IECore.Color3fData( IECore.Color3f( 1, 0, 0 ) ) } )
IECore.MeshPrimitive.createPlane( IECore.Box2f( IECore.V2f( 0.75, -0.25 ), IECore.V2f( 1.25, 0.25 ) ) ).render( r )
r.shader( "surface", "color", { "colorValue" : IECore.Color3fData( IECore.Color3f( 0, 1, 0 ) ) } )
IECore.MeshPrimitive.createPlane( IECore.Box2f( IECore.V2f( -0.25, 0.75 ), IECore.V2f( 0.25, 1.25 ) ) ).render( r )
r.worldEnd()
# check we get the colors we'd expect where we expect them
i = IECore.Reader.create( os.path.dirname( __file__ ) + "/output/testCamera.tif" ).read()
e = IECore.PrimitiveEvaluator.create( i )
result = e.createResult()
a = e.A()
r = e.R()
g = e.G()
b = e.B()
e.pointAtUV( IECore.V2f( 1, 0.5 ), result )
self.assertEqual( result.floatPrimVar( a ), 1 )
self.assertEqual( result.floatPrimVar( r ), 1 )
self.assertEqual( result.floatPrimVar( g ), 0 )
self.assertEqual( result.floatPrimVar( b ), 0 )
e.pointAtUV( IECore.V2f( 0.5, 0 ), result )
self.assertEqual( result.floatPrimVar( a ), 1 )
self.assertEqual( result.floatPrimVar( r ), 0 )
self.assertEqual( result.floatPrimVar( g ), 1 )
self.assertEqual( result.floatPrimVar( b ), 0 )
def setUp( self ) :
if not os.path.isdir( "test/IECoreGL/output" ) :
os.makedirs( "test/IECoreGL/output" )
def tearDown( self ) :
if os.path.isdir( "test/IECoreGL/output" ) :
shutil.rmtree( "test/IECoreGL/output" )
if __name__ == "__main__":
unittest.main() | unknown | codeparrot/codeparrot-clean | ||
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
int issue8811Initialized = 0;
void issue8811Init() {
} | c | github | https://github.com/golang/go | src/cmd/cgo/internal/test/issue8811.c |
/**
* Constant-time functions
*/
/*
* Copyright The Mbed TLS Contributors
* SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
*/
#ifndef MBEDTLS_CONSTANT_TIME_H
#define MBEDTLS_CONSTANT_TIME_H
#include <stddef.h>
/** Constant-time buffer comparison without branches.
*
* This is equivalent to the standard memcmp function, but is likely to be
* compiled to code using bitwise operations rather than a branch, such that
* the time taken is constant w.r.t. the data pointed to by \p a and \p b,
* and w.r.t. whether \p a and \p b are equal or not. It is not constant-time
* w.r.t. \p n .
*
* This function can be used to write constant-time code by replacing branches
* with bit operations using masks.
*
* \param a Pointer to the first buffer, containing at least \p n bytes. May not be NULL.
* \param b Pointer to the second buffer, containing at least \p n bytes. May not be NULL.
* \param n The number of bytes to compare.
*
* \return Zero if the contents of the two buffers are the same,
* otherwise non-zero.
*/
int mbedtls_ct_memcmp(const void *a,
const void *b,
size_t n);
#endif /* MBEDTLS_CONSTANT_TIME_H */ | c | github | https://github.com/nodejs/node | deps/LIEF/third-party/mbedtls/include/mbedtls/constant_time.h |
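The contract documented above (zero iff equal, no data-dependent branches) rests on a standard technique: XOR each byte pair and OR the results into an accumulator, so the work done does not depend on where the buffers first differ. A Python sketch of that technique follows, with the caveat that CPython gives no real timing guarantees; for production Python code, `hmac.compare_digest` is the right tool.

```python
def ct_memcmp(a, b):
    # Accumulate the XOR of every byte pair; the loop always runs to
    # the end, so (in C) there is no early exit on the first mismatch.
    assert len(a) == len(b)
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff  # 0 iff the buffers are equal, like mbedtls_ct_memcmp
```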
/*-------------------------------------------------------------------------
*
* htup_details.h
* POSTGRES heap tuple header definitions.
*
*
* Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/include/access/htup_details.h
*
*-------------------------------------------------------------------------
*/
#ifndef HTUP_DETAILS_H
#define HTUP_DETAILS_H
#include "access/htup.h"
#include "access/transam.h"
#include "access/tupdesc.h"
#include "access/tupmacs.h"
#include "storage/bufpage.h"
#include "varatt.h"
/*
* MaxTupleAttributeNumber limits the number of (user) columns in a tuple.
* The key limit on this value is that the size of the fixed overhead for
* a tuple, plus the size of the null-values bitmap (at 1 bit per column),
* plus MAXALIGN alignment, must fit into t_hoff which is uint8. On most
* machines the upper limit without making t_hoff wider would be a little
* over 1700. We use round numbers here and for MaxHeapAttributeNumber
* so that alterations in HeapTupleHeaderData layout won't change the
* supported max number of columns.
*/
#define MaxTupleAttributeNumber 1664 /* 8 * 208 */
/*
* MaxHeapAttributeNumber limits the number of (user) columns in a table.
* This should be somewhat less than MaxTupleAttributeNumber. It must be
* at least one less, else we will fail to do UPDATEs on a maximal-width
* table (because UPDATE has to form working tuples that include CTID).
* In practice we want some additional daylight so that we can gracefully
* support operations that add hidden "resjunk" columns, for example
* SELECT * FROM wide_table ORDER BY foo, bar, baz.
* In any case, depending on column data types you will likely be running
* into the disk-block-based limit on overall tuple size if you have more
* than a thousand or so columns. TOAST won't help.
*/
#define MaxHeapAttributeNumber 1600 /* 8 * 200 */
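The limits above follow from `t_hoff` being a uint8: the fixed tuple header is 23 bytes (the common-platform figure noted in the struct layout in this file) and the null bitmap costs one bit per column, so header plus bitmap must stay within 255 bytes. A quick back-of-the-envelope check, assuming that 23-byte header size:

```python
def header_fits(n_attrs, fixed_header=23):
    # t_hoff is a uint8, so the fixed header plus the per-column null
    # bitmap must stay within 255 bytes; MAXALIGN padding tightens the
    # real ceiling further, which is the "a little over 1700" figure
    # in the comment above. 1664 columns -> a 208-byte bitmap -> a
    # 231-byte header, comfortably inside the budget.
    bitmap_bytes = (n_attrs + 7) // 8
    return fixed_header + bitmap_bytes <= 255
```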
/*
* Heap tuple header. To avoid wasting space, the fields should be
* laid out in such a way as to avoid structure padding.
*
* Datums of composite types (row types) share the same general structure
* as on-disk tuples, so that the same routines can be used to build and
* examine them. However the requirements are slightly different: a Datum
* does not need any transaction visibility information, and it does need
* a length word and some embedded type information. We can achieve this
* by overlaying the xmin/cmin/xmax/cmax/xvac fields of a heap tuple
* with the fields needed in the Datum case. Typically, all tuples built
* in-memory will be initialized with the Datum fields; but when a tuple is
* about to be inserted in a table, the transaction fields will be filled,
* overwriting the datum fields.
*
* The overall structure of a heap tuple looks like:
* fixed fields (HeapTupleHeaderData struct)
* nulls bitmap (if HEAP_HASNULL is set in t_infomask)
* alignment padding (as needed to make user data MAXALIGN'd)
* object ID (if HEAP_HASOID_OLD is set in t_infomask, not created
* anymore)
* user data fields
*
* We store five "virtual" fields Xmin, Cmin, Xmax, Cmax, and Xvac in three
* physical fields. Xmin and Xmax are always really stored, but Cmin, Cmax
* and Xvac share a field. This works because we know that Cmin and Cmax
* are only interesting for the lifetime of the inserting and deleting
* transaction respectively. If a tuple is inserted and deleted in the same
* transaction, we store a "combo" command id that can be mapped to the real
* cmin and cmax, but only by use of local state within the originating
* backend. See combocid.c for more details. Meanwhile, Xvac is only set by
* old-style VACUUM FULL, which does not have any command sub-structure and so
* does not need either Cmin or Cmax. (This requires that old-style VACUUM
* FULL never try to move a tuple whose Cmin or Cmax is still interesting,
* ie, an insert-in-progress or delete-in-progress tuple.)
*
* A word about t_ctid: whenever a new tuple is stored on disk, its t_ctid
* is initialized with its own TID (location). If the tuple is ever updated,
* its t_ctid is changed to point to the replacement version of the tuple. Or
* if the tuple is moved from one partition to another, due to an update of
* the partition key, t_ctid is set to a special value to indicate that
* (see ItemPointerSetMovedPartitions). Thus, a tuple is the latest version
* of its row iff XMAX is invalid or
* t_ctid points to itself (in which case, if XMAX is valid, the tuple is
* either locked or deleted). One can follow the chain of t_ctid links
* to find the newest version of the row, unless it was moved to a different
* partition. Beware however that VACUUM might
* erase the pointed-to (newer) tuple before erasing the pointing (older)
* tuple. Hence, when following a t_ctid link, it is necessary to check
* to see if the referenced slot is empty or contains an unrelated tuple.
* Check that the referenced tuple has XMIN equal to the referencing tuple's
* XMAX to verify that it is actually the descendant version and not an
* unrelated tuple stored into a slot recently freed by VACUUM. If either
* check fails, one may assume that there is no live descendant version.
*
* t_ctid is sometimes used to store a speculative insertion token, instead
* of a real TID. A speculative token is set on a tuple that's being
* inserted, until the inserter is sure that it wants to go ahead with the
* insertion. Hence a token should only be seen on a tuple with an XMAX
* that's still in-progress, or invalid/aborted. The token is replaced with
* the tuple's real TID when the insertion is confirmed. One should never
* see a speculative insertion token while following a chain of t_ctid links,
* because they are not used on updates, only insertions.
*
* Following the fixed header fields, the nulls bitmap is stored (beginning
* at t_bits). The bitmap is *not* stored if t_infomask shows that there
* are no nulls in the tuple. If an OID field is present (as indicated by
* t_infomask), then it is stored just before the user data, which begins at
* the offset shown by t_hoff. Note that t_hoff must be a multiple of
* MAXALIGN.
*/
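The t_ctid discussion above prescribes a validity check when walking an update chain: before trusting a link, verify that the pointed-to tuple's XMIN equals the pointing tuple's XMAX, since VACUUM may have reused the slot for an unrelated tuple. A toy Python model of that walk (dict-based, ignoring locking, multixacts, and moved-partition markers):

```python
def follow_update_chain(tuples, start_tid):
    # `tuples` maps a TID to {'xmin', 'xmax', 'ctid'}; xmax of None
    # stands in for an invalid XMAX. Returns the newest reachable
    # version of the row starting at start_tid.
    tid = start_tid
    while True:
        tup = tuples[tid]
        nxt = tup['ctid']
        if nxt == tid or tup['xmax'] is None:
            return tid                      # latest version of the row
        nxt_tup = tuples.get(nxt)
        if nxt_tup is None or nxt_tup['xmin'] != tup['xmax']:
            return tid                      # no live descendant version
        tid = nxt
```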
typedef struct HeapTupleFields
{
TransactionId t_xmin; /* inserting xact ID */
TransactionId t_xmax; /* deleting or locking xact ID */
union
{
CommandId t_cid; /* inserting or deleting command ID, or both */
TransactionId t_xvac; /* old-style VACUUM FULL xact ID */
} t_field3;
} HeapTupleFields;
typedef struct DatumTupleFields
{
int32 datum_len_; /* varlena header (do not touch directly!) */
int32 datum_typmod; /* -1, or identifier of a record type */
Oid datum_typeid; /* composite type OID, or RECORDOID */
/*
* datum_typeid cannot be a domain over composite, only plain composite,
* even if the datum is meant as a value of a domain-over-composite type.
* This is in line with the general principle that CoerceToDomain does not
* change the physical representation of the base type value.
*
* Note: field ordering is chosen with thought that Oid might someday
* widen to 64 bits.
*/
} DatumTupleFields;
struct HeapTupleHeaderData
{
union
{
HeapTupleFields t_heap;
DatumTupleFields t_datum;
} t_choice;
ItemPointerData t_ctid; /* current TID of this or newer tuple (or a
* speculative insertion token) */
/* Fields below here must match MinimalTupleData! */
#define FIELDNO_HEAPTUPLEHEADERDATA_INFOMASK2 2
uint16 t_infomask2; /* number of attributes + various flags */
#define FIELDNO_HEAPTUPLEHEADERDATA_INFOMASK 3
uint16 t_infomask; /* various flag bits, see below */
#define FIELDNO_HEAPTUPLEHEADERDATA_HOFF 4
uint8 t_hoff; /* sizeof header incl. bitmap, padding */
/* ^ - 23 bytes - ^ */
#define FIELDNO_HEAPTUPLEHEADERDATA_BITS 5
bits8 t_bits[FLEXIBLE_ARRAY_MEMBER]; /* bitmap of NULLs */
/* MORE DATA FOLLOWS AT END OF STRUCT */
};
/* typedef appears in htup.h */
#define SizeofHeapTupleHeader offsetof(HeapTupleHeaderData, t_bits)
/*
* information stored in t_infomask:
*/
#define HEAP_HASNULL 0x0001 /* has null attribute(s) */
#define HEAP_HASVARWIDTH 0x0002 /* has variable-width attribute(s) */
#define HEAP_HASEXTERNAL 0x0004 /* has external stored attribute(s) */
#define HEAP_HASOID_OLD 0x0008 /* has an object-id field */
#define HEAP_XMAX_KEYSHR_LOCK 0x0010 /* xmax is a key-shared locker */
#define HEAP_COMBOCID 0x0020 /* t_cid is a combo CID */
#define HEAP_XMAX_EXCL_LOCK 0x0040 /* xmax is exclusive locker */
#define HEAP_XMAX_LOCK_ONLY 0x0080 /* xmax, if valid, is only a locker */
/* xmax is a shared locker */
#define HEAP_XMAX_SHR_LOCK (HEAP_XMAX_EXCL_LOCK | HEAP_XMAX_KEYSHR_LOCK)
#define HEAP_LOCK_MASK (HEAP_XMAX_SHR_LOCK | HEAP_XMAX_EXCL_LOCK | \
HEAP_XMAX_KEYSHR_LOCK)
#define HEAP_XMIN_COMMITTED 0x0100 /* t_xmin committed */
#define HEAP_XMIN_INVALID 0x0200 /* t_xmin invalid/aborted */
#define HEAP_XMIN_FROZEN (HEAP_XMIN_COMMITTED|HEAP_XMIN_INVALID)
#define HEAP_XMAX_COMMITTED 0x0400 /* t_xmax committed */
#define HEAP_XMAX_INVALID 0x0800 /* t_xmax invalid/aborted */
#define HEAP_XMAX_IS_MULTI 0x1000 /* t_xmax is a MultiXactId */
#define HEAP_UPDATED 0x2000 /* this is UPDATEd version of row */
#define HEAP_MOVED_OFF 0x4000 /* moved to another place by pre-9.0
* VACUUM FULL; kept for binary
* upgrade support */
#define HEAP_MOVED_IN 0x8000 /* moved from another place by pre-9.0
* VACUUM FULL; kept for binary
* upgrade support */
#define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
#define HEAP_XACT_MASK 0xFFF0 /* visibility-related bits */
/*
* A tuple is only locked (i.e. not updated by its Xmax) if the
* HEAP_XMAX_LOCK_ONLY bit is set; or, for pg_upgrade's sake, if the Xmax is
* not a multi and the EXCL_LOCK bit is set.
*
* See also HeapTupleHeaderIsOnlyLocked, which also checks for a possible
* aborted updater transaction.
*/
static inline bool
HEAP_XMAX_IS_LOCKED_ONLY(uint16 infomask)
{
return (infomask & HEAP_XMAX_LOCK_ONLY) ||
(infomask & (HEAP_XMAX_IS_MULTI | HEAP_LOCK_MASK)) == HEAP_XMAX_EXCL_LOCK;
}
/*
* A tuple that has HEAP_XMAX_IS_MULTI and HEAP_XMAX_LOCK_ONLY but neither of
* HEAP_XMAX_EXCL_LOCK and HEAP_XMAX_KEYSHR_LOCK must come from a tuple that was
* share-locked in 9.2 or earlier and then pg_upgrade'd.
*
* In 9.2 and prior, HEAP_XMAX_IS_MULTI was only set when there were multiple
* FOR SHARE lockers of that tuple. That set HEAP_XMAX_LOCK_ONLY (with a
* different name back then) but neither of HEAP_XMAX_EXCL_LOCK and
* HEAP_XMAX_KEYSHR_LOCK. That combination is no longer possible in 9.3 and
* up, so if we see that combination we know for certain that the tuple was
* locked in an earlier release; since all such lockers are gone (they cannot
* survive through pg_upgrade), such tuples can safely be considered not
* locked.
*
* We must not resolve such multixacts locally, because the result would be
* bogus, regardless of where they stand with respect to the current valid
* multixact range.
*/
static inline bool
HEAP_LOCKED_UPGRADED(uint16 infomask)
{
return
(infomask & HEAP_XMAX_IS_MULTI) != 0 &&
(infomask & HEAP_XMAX_LOCK_ONLY) != 0 &&
(infomask & (HEAP_XMAX_EXCL_LOCK | HEAP_XMAX_KEYSHR_LOCK)) == 0;
}
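The two predicates above are pure bit tests on t_infomask. As a cross-check, here is a direct Python port using the bit values defined earlier in this header (the constants are copied verbatim; the logic is unchanged):

```python
# t_infomask bits copied from the definitions above.
HEAP_XMAX_KEYSHR_LOCK = 0x0010
HEAP_XMAX_EXCL_LOCK = 0x0040
HEAP_XMAX_LOCK_ONLY = 0x0080
HEAP_XMAX_IS_MULTI = 0x1000
HEAP_XMAX_SHR_LOCK = HEAP_XMAX_EXCL_LOCK | HEAP_XMAX_KEYSHR_LOCK
HEAP_LOCK_MASK = (HEAP_XMAX_SHR_LOCK | HEAP_XMAX_EXCL_LOCK |
                  HEAP_XMAX_KEYSHR_LOCK)

def xmax_is_locked_only(infomask):
    # Locker-only if LOCK_ONLY is set, or (pg_upgrade case) the xmax is
    # not a multi and only the EXCL_LOCK bit of the lock mask is set.
    return bool(infomask & HEAP_XMAX_LOCK_ONLY) or \
        (infomask & (HEAP_XMAX_IS_MULTI | HEAP_LOCK_MASK)) == HEAP_XMAX_EXCL_LOCK

def locked_upgraded(infomask):
    # Multi + lock-only with neither lock-strength bit: a share lock
    # taken on 9.2 or earlier and carried through pg_upgrade.
    return (infomask & HEAP_XMAX_IS_MULTI) != 0 and \
        (infomask & HEAP_XMAX_LOCK_ONLY) != 0 and \
        (infomask & (HEAP_XMAX_EXCL_LOCK | HEAP_XMAX_KEYSHR_LOCK)) == 0
```

Note how a bare EXCL_LOCK bit without IS_MULTI still counts as locked-only, exactly as the comment before HEAP_XMAX_IS_LOCKED_ONLY explains.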
/*
* Use these to test whether a particular lock is applied to a tuple
*/
static inline bool
HEAP_XMAX_IS_SHR_LOCKED(uint16 infomask)
{
return (infomask & HEAP_LOCK_MASK) == HEAP_XMAX_SHR_LOCK;
}
static inline bool
HEAP_XMAX_IS_EXCL_LOCKED(uint16 infomask)
{
return (infomask & HEAP_LOCK_MASK) == HEAP_XMAX_EXCL_LOCK;
}
static inline bool
HEAP_XMAX_IS_KEYSHR_LOCKED(uint16 infomask)
{
return (infomask & HEAP_LOCK_MASK) == HEAP_XMAX_KEYSHR_LOCK;
}
/* turn these all off when Xmax is to change */
#define HEAP_XMAX_BITS (HEAP_XMAX_COMMITTED | HEAP_XMAX_INVALID | \
HEAP_XMAX_IS_MULTI | HEAP_LOCK_MASK | HEAP_XMAX_LOCK_ONLY)
/*
* information stored in t_infomask2:
*/
#define HEAP_NATTS_MASK 0x07FF /* 11 bits for number of attributes */
/* bits 0x1800 are available */
#define HEAP_KEYS_UPDATED 0x2000 /* tuple was updated and key cols
* modified, or tuple deleted */
#define HEAP_HOT_UPDATED 0x4000 /* tuple was HOT-updated */
#define HEAP_ONLY_TUPLE 0x8000 /* this is heap-only tuple */
#define HEAP2_XACT_MASK 0xE000 /* visibility-related bits */
/*
* HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins. It is
* only used in tuples that are in the hash table, and those don't need
* any visibility information, so we can overlay it on a visibility flag
* instead of using up a dedicated bit.
*/
#define HEAP_TUPLE_HAS_MATCH HEAP_ONLY_TUPLE /* tuple has a join match */
/*
* HeapTupleHeader accessor functions
*/
static bool HeapTupleHeaderXminFrozen(const HeapTupleHeaderData *tup);
/*
* HeapTupleHeaderGetRawXmin returns the "raw" xmin field, which is the xid
* originally used to insert the tuple. However, the tuple might actually
* be frozen (via HeapTupleHeaderSetXminFrozen) in which case the tuple's xmin
* is visible to every snapshot. Prior to PostgreSQL 9.4, we actually changed
* the xmin to FrozenTransactionId, and that value may still be encountered
* on disk.
*/
static inline TransactionId
HeapTupleHeaderGetRawXmin(const HeapTupleHeaderData *tup)
{
return tup->t_choice.t_heap.t_xmin;
}
static inline TransactionId
HeapTupleHeaderGetXmin(const HeapTupleHeaderData *tup)
{
return HeapTupleHeaderXminFrozen(tup) ?
FrozenTransactionId : HeapTupleHeaderGetRawXmin(tup);
}
static inline void
HeapTupleHeaderSetXmin(HeapTupleHeaderData *tup, TransactionId xid)
{
tup->t_choice.t_heap.t_xmin = xid;
}
static inline bool
HeapTupleHeaderXminCommitted(const HeapTupleHeaderData *tup)
{
return (tup->t_infomask & HEAP_XMIN_COMMITTED) != 0;
}
static inline bool
HeapTupleHeaderXminInvalid(const HeapTupleHeaderData *tup)
{
return (tup->t_infomask & (HEAP_XMIN_COMMITTED | HEAP_XMIN_INVALID)) ==
HEAP_XMIN_INVALID;
}
static inline bool
HeapTupleHeaderXminFrozen(const HeapTupleHeaderData *tup)
{
return (tup->t_infomask & HEAP_XMIN_FROZEN) == HEAP_XMIN_FROZEN;
}
static inline void
HeapTupleHeaderSetXminFrozen(HeapTupleHeaderData *tup)
{
Assert(!HeapTupleHeaderXminInvalid(tup));
tup->t_infomask |= HEAP_XMIN_FROZEN;
}
static inline TransactionId
HeapTupleHeaderGetRawXmax(const HeapTupleHeaderData *tup)
{
return tup->t_choice.t_heap.t_xmax;
}
static inline void
HeapTupleHeaderSetXmax(HeapTupleHeaderData *tup, TransactionId xid)
{
tup->t_choice.t_heap.t_xmax = xid;
}
#ifndef FRONTEND
/*
* HeapTupleHeaderGetRawXmax gets you the raw Xmax field. To find out the Xid
* that updated a tuple, you might need to resolve the MultiXactId if certain
* bits are set. HeapTupleHeaderGetUpdateXid checks those bits and takes care
* to resolve the MultiXactId if necessary. This might involve multixact I/O,
* so it should only be used if absolutely necessary.
*/
static inline TransactionId
HeapTupleHeaderGetUpdateXid(const HeapTupleHeaderData *tup)
{
if (!((tup)->t_infomask & HEAP_XMAX_INVALID) &&
((tup)->t_infomask & HEAP_XMAX_IS_MULTI) &&
!((tup)->t_infomask & HEAP_XMAX_LOCK_ONLY))
return HeapTupleGetUpdateXid(tup);
else
return HeapTupleHeaderGetRawXmax(tup);
}
#endif /* FRONTEND */
/*
* HeapTupleHeaderGetRawCommandId will give you what's in the header whether
* it is useful or not. Most code should use HeapTupleHeaderGetCmin or
* HeapTupleHeaderGetCmax instead, but note that those Assert that you can
* get a legitimate result, ie you are in the originating transaction!
*/
static inline CommandId
HeapTupleHeaderGetRawCommandId(const HeapTupleHeaderData *tup)
{
return tup->t_choice.t_heap.t_field3.t_cid;
}
/* SetCmin is reasonably simple since we never need a combo CID */
static inline void
HeapTupleHeaderSetCmin(HeapTupleHeaderData *tup, CommandId cid)
{
Assert(!(tup->t_infomask & HEAP_MOVED));
tup->t_choice.t_heap.t_field3.t_cid = cid;
tup->t_infomask &= ~HEAP_COMBOCID;
}
/* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
static inline void
HeapTupleHeaderSetCmax(HeapTupleHeaderData *tup, CommandId cid, bool iscombo)
{
Assert(!((tup)->t_infomask & HEAP_MOVED));
tup->t_choice.t_heap.t_field3.t_cid = cid;
if (iscombo)
tup->t_infomask |= HEAP_COMBOCID;
else
tup->t_infomask &= ~HEAP_COMBOCID;
}
static inline TransactionId
HeapTupleHeaderGetXvac(const HeapTupleHeaderData *tup)
{
if (tup->t_infomask & HEAP_MOVED)
return tup->t_choice.t_heap.t_field3.t_xvac;
else
return InvalidTransactionId;
}
static inline void
HeapTupleHeaderSetXvac(HeapTupleHeaderData *tup, TransactionId xid)
{
Assert(tup->t_infomask & HEAP_MOVED);
tup->t_choice.t_heap.t_field3.t_xvac = xid;
}
StaticAssertDecl(MaxOffsetNumber < SpecTokenOffsetNumber,
"invalid speculative token constant");
static inline bool
HeapTupleHeaderIsSpeculative(const HeapTupleHeaderData *tup)
{
return ItemPointerGetOffsetNumberNoCheck(&tup->t_ctid) == SpecTokenOffsetNumber;
}
static inline BlockNumber
HeapTupleHeaderGetSpeculativeToken(const HeapTupleHeaderData *tup)
{
Assert(HeapTupleHeaderIsSpeculative(tup));
return ItemPointerGetBlockNumber(&tup->t_ctid);
}
static inline void
HeapTupleHeaderSetSpeculativeToken(HeapTupleHeaderData *tup, BlockNumber token)
{
ItemPointerSet(&tup->t_ctid, token, SpecTokenOffsetNumber);
}
static inline bool
HeapTupleHeaderIndicatesMovedPartitions(const HeapTupleHeaderData *tup)
{
return ItemPointerIndicatesMovedPartitions(&tup->t_ctid);
}
static inline void
HeapTupleHeaderSetMovedPartitions(HeapTupleHeaderData *tup)
{
ItemPointerSetMovedPartitions(&tup->t_ctid);
}
static inline uint32
HeapTupleHeaderGetDatumLength(const HeapTupleHeaderData *tup)
{
return VARSIZE(tup);
}
static inline void
HeapTupleHeaderSetDatumLength(HeapTupleHeaderData *tup, uint32 len)
{
SET_VARSIZE(tup, len);
}
static inline Oid
HeapTupleHeaderGetTypeId(const HeapTupleHeaderData *tup)
{
return tup->t_choice.t_datum.datum_typeid;
}
static inline void
HeapTupleHeaderSetTypeId(HeapTupleHeaderData *tup, Oid datum_typeid)
{
tup->t_choice.t_datum.datum_typeid = datum_typeid;
}
static inline int32
HeapTupleHeaderGetTypMod(const HeapTupleHeaderData *tup)
{
return tup->t_choice.t_datum.datum_typmod;
}
static inline void
HeapTupleHeaderSetTypMod(HeapTupleHeaderData *tup, int32 typmod)
{
tup->t_choice.t_datum.datum_typmod = typmod;
}
/*
* Note that we stop considering a tuple HOT-updated as soon as it is known
* aborted or the would-be updating transaction is known aborted. For best
* efficiency, check tuple visibility before using this function, so that the
* INVALID bits will be as up to date as possible.
*/
static inline bool
HeapTupleHeaderIsHotUpdated(const HeapTupleHeaderData *tup)
{
return
(tup->t_infomask2 & HEAP_HOT_UPDATED) != 0 &&
(tup->t_infomask & HEAP_XMAX_INVALID) == 0 &&
!HeapTupleHeaderXminInvalid(tup);
}
static inline void
HeapTupleHeaderSetHotUpdated(HeapTupleHeaderData *tup)
{
tup->t_infomask2 |= HEAP_HOT_UPDATED;
}
static inline void
HeapTupleHeaderClearHotUpdated(HeapTupleHeaderData *tup)
{
tup->t_infomask2 &= ~HEAP_HOT_UPDATED;
}
static inline bool
HeapTupleHeaderIsHeapOnly(const HeapTupleHeaderData *tup)
{
return (tup->t_infomask2 & HEAP_ONLY_TUPLE) != 0;
}
static inline void
HeapTupleHeaderSetHeapOnly(HeapTupleHeaderData *tup)
{
tup->t_infomask2 |= HEAP_ONLY_TUPLE;
}
static inline void
HeapTupleHeaderClearHeapOnly(HeapTupleHeaderData *tup)
{
tup->t_infomask2 &= ~HEAP_ONLY_TUPLE;
}
/*
* These are used with both HeapTuple and MinimalTuple, so they must be
* macros.
*/
#define HeapTupleHeaderGetNatts(tup) \
((tup)->t_infomask2 & HEAP_NATTS_MASK)
#define HeapTupleHeaderSetNatts(tup, natts) \
( \
(tup)->t_infomask2 = ((tup)->t_infomask2 & ~HEAP_NATTS_MASK) | (natts) \
)
#define HeapTupleHeaderHasExternal(tup) \
(((tup)->t_infomask & HEAP_HASEXTERNAL) != 0)
/*
* BITMAPLEN(NATTS) -
* Computes size of null bitmap given number of data columns.
*/
static inline int
BITMAPLEN(int NATTS)
{
return (NATTS + 7) / 8;
}
/*
* MaxHeapTupleSize is the maximum allowed size of a heap tuple, including
* header and MAXALIGN alignment padding. Basically it's BLCKSZ minus the
* other stuff that has to be on a disk page. Since heap pages use no
* "special space", there's no deduction for that.
*
* NOTE: we allow for the ItemId that must point to the tuple, ensuring that
* an otherwise-empty page can indeed hold a tuple of this size. Because
* ItemIds and tuples have different alignment requirements, don't assume that
* you can, say, fit 2 tuples of size MaxHeapTupleSize/2 on the same page.
*/
#define MaxHeapTupleSize (BLCKSZ - MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData)))
#define MinHeapTupleSize MAXALIGN(SizeofHeapTupleHeader)
/*
* MaxHeapTuplesPerPage is an upper bound on the number of tuples that can
* fit on one heap page. (Note that indexes could have more, because they
* use a smaller tuple header.) We arrive at the divisor because each tuple
* must be maxaligned, and it must have an associated line pointer.
*
* Note: with HOT, there could theoretically be more line pointers (not actual
* tuples) than this on a heap page. However we constrain the number of line
* pointers to this anyway, to avoid excessive line-pointer bloat and not
* require increases in the size of work arrays.
*/
#define MaxHeapTuplesPerPage \
((int) ((BLCKSZ - SizeOfPageHeaderData) / \
(MAXALIGN(SizeofHeapTupleHeader) + sizeof(ItemIdData))))
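Plugging in common 64-bit defaults (BLCKSZ = 8192, a 24-byte page header, the 23-byte tuple header MAXALIGNed to 24, and a 4-byte line pointer — these sizes are assumptions of this sketch, not values defined in this chunk) gives the familiar bound of 291 tuples per 8K page:

```python
def maxalign(n, alignof=8):
    # Round n up to the next multiple of alignof (MAXALIGN).
    return (n + alignof - 1) // alignof * alignof

BLCKSZ = 8192                  # assumed default page size
SIZE_OF_PAGE_HEADER = 24       # assumed SizeOfPageHeaderData on 64-bit
SIZEOF_HEAP_TUPLE_HEADER = 23  # offsetof(HeapTupleHeaderData, t_bits)
SIZEOF_ITEM_ID = 4             # assumed sizeof(ItemIdData)

max_tuples = ((BLCKSZ - SIZE_OF_PAGE_HEADER) //
              (maxalign(SIZEOF_HEAP_TUPLE_HEADER) + SIZEOF_ITEM_ID))
# (8192 - 24) // (24 + 4) == 291
```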
/*
* MaxAttrSize is a somewhat arbitrary upper limit on the declared size of
* data fields of char(n) and similar types. It need not have anything
* directly to do with the *actual* upper limit of varlena values, which
* is currently 1Gb (see TOAST structures in varatt.h). I've set it
* at 10Mb which seems like a reasonable number --- tgl 8/6/00.
*/
#define MaxAttrSize (10 * 1024 * 1024)
/*
* MinimalTuple is an alternative representation that is used for transient
* tuples inside the executor, in places where transaction status information
* is not required, the tuple rowtype is known, and shaving off a few bytes
* is worthwhile because we need to store many tuples. The representation
* is chosen so that tuple access routines can work with either full or
* minimal tuples via a HeapTupleData pointer structure. The access routines
* see no difference, except that they must not access the transaction status
* or t_ctid fields because those aren't there.
*
* For the most part, MinimalTuples should be accessed via TupleTableSlot
* routines. These routines will prevent access to the "system columns"
* and thereby prevent accidental use of the nonexistent fields.
*
* MinimalTupleData contains a length word, some padding, and fields matching
* HeapTupleHeaderData beginning with t_infomask2. The padding is chosen so
* that offsetof(t_infomask2) is the same modulo MAXIMUM_ALIGNOF in both
* structs. This makes data alignment rules equivalent in both cases.
*
* When a minimal tuple is accessed via a HeapTupleData pointer, t_data is
* set to point MINIMAL_TUPLE_OFFSET bytes before the actual start of the
* minimal tuple --- that is, where a full tuple matching the minimal tuple's
* data would start. This trick is what makes the structs seem equivalent.
*
* Note that t_hoff is computed the same as in a full tuple, hence it includes
* the MINIMAL_TUPLE_OFFSET distance. t_len does not include that, however.
*
* MINIMAL_TUPLE_DATA_OFFSET is the offset to the first useful (non-pad) data
* other than the length word. tuplesort.c and tuplestore.c use this to avoid
* writing the padding to disk.
*/
#define MINIMAL_TUPLE_OFFSET \
((offsetof(HeapTupleHeaderData, t_infomask2) - sizeof(uint32)) / MAXIMUM_ALIGNOF * MAXIMUM_ALIGNOF)
#define MINIMAL_TUPLE_PADDING \
((offsetof(HeapTupleHeaderData, t_infomask2) - sizeof(uint32)) % MAXIMUM_ALIGNOF)
#define MINIMAL_TUPLE_DATA_OFFSET \
offsetof(MinimalTupleData, t_infomask2)
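To see the padding rule concretely: assuming a 64-bit build (MAXIMUM_ALIGNOF = 8) where t_infomask2 sits at offset 18 in HeapTupleHeaderData (a 12-byte t_choice plus a 6-byte t_ctid — an assumed layout, not derived from this chunk), the two macros work out so that t_infomask2 lands at the same offset modulo MAXIMUM_ALIGNOF in both structs:

```python
MAXIMUM_ALIGNOF = 8        # assumption: 64-bit build
OFFSETOF_T_INFOMASK2 = 18  # assumption: 12-byte t_choice + 6-byte t_ctid
SIZEOF_UINT32 = 4          # the t_len length word

MINIMAL_TUPLE_OFFSET = ((OFFSETOF_T_INFOMASK2 - SIZEOF_UINT32) //
                        MAXIMUM_ALIGNOF * MAXIMUM_ALIGNOF)
MINIMAL_TUPLE_PADDING = (OFFSETOF_T_INFOMASK2 - SIZEOF_UINT32) % MAXIMUM_ALIGNOF

# In MinimalTupleData, t_infomask2 follows t_len and mt_padding; it must
# agree with the full header modulo MAXIMUM_ALIGNOF.
minimal_infomask2_off = SIZEOF_UINT32 + MINIMAL_TUPLE_PADDING
assert (minimal_infomask2_off % MAXIMUM_ALIGNOF ==
        OFFSETOF_T_INFOMASK2 % MAXIMUM_ALIGNOF)
```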
struct MinimalTupleData
{
uint32 t_len; /* actual length of minimal tuple */
char mt_padding[MINIMAL_TUPLE_PADDING];
/* Fields below here must match HeapTupleHeaderData! */
uint16 t_infomask2; /* number of attributes + various flags */
uint16 t_infomask; /* various flag bits, see below */
uint8 t_hoff; /* sizeof header incl. bitmap, padding */
/* ^ - 23 bytes - ^ */
bits8 t_bits[FLEXIBLE_ARRAY_MEMBER]; /* bitmap of NULLs */
/* MORE DATA FOLLOWS AT END OF STRUCT */
};
/* typedef appears in htup.h */
#define SizeofMinimalTupleHeader offsetof(MinimalTupleData, t_bits)
/*
* MinimalTuple accessor functions
*/
static inline bool
HeapTupleHeaderHasMatch(const MinimalTupleData *tup)
{
return (tup->t_infomask2 & HEAP_TUPLE_HAS_MATCH) != 0;
}
static inline void
HeapTupleHeaderSetMatch(MinimalTupleData *tup)
{
tup->t_infomask2 |= HEAP_TUPLE_HAS_MATCH;
}
static inline void
HeapTupleHeaderClearMatch(MinimalTupleData *tup)
{
tup->t_infomask2 &= ~HEAP_TUPLE_HAS_MATCH;
}
/*
* GETSTRUCT - given a HeapTuple pointer, return address of the user data
*/
static inline void *
GETSTRUCT(const HeapTupleData *tuple)
{
return ((char *) (tuple->t_data) + tuple->t_data->t_hoff);
}
/*
* Accessor functions to be used with HeapTuple pointers.
*/
static inline bool
HeapTupleHasNulls(const HeapTupleData *tuple)
{
return (tuple->t_data->t_infomask & HEAP_HASNULL) != 0;
}
static inline bool
HeapTupleNoNulls(const HeapTupleData *tuple)
{
return !HeapTupleHasNulls(tuple);
}
static inline bool
HeapTupleHasVarWidth(const HeapTupleData *tuple)
{
return (tuple->t_data->t_infomask & HEAP_HASVARWIDTH) != 0;
}
static inline bool
HeapTupleAllFixed(const HeapTupleData *tuple)
{
return !HeapTupleHasVarWidth(tuple);
}
static inline bool
HeapTupleHasExternal(const HeapTupleData *tuple)
{
return (tuple->t_data->t_infomask & HEAP_HASEXTERNAL) != 0;
}
static inline bool
HeapTupleIsHotUpdated(const HeapTupleData *tuple)
{
return HeapTupleHeaderIsHotUpdated(tuple->t_data);
}
static inline void
HeapTupleSetHotUpdated(const HeapTupleData *tuple)
{
HeapTupleHeaderSetHotUpdated(tuple->t_data);
}
static inline void
HeapTupleClearHotUpdated(const HeapTupleData *tuple)
{
HeapTupleHeaderClearHotUpdated(tuple->t_data);
}
static inline bool
HeapTupleIsHeapOnly(const HeapTupleData *tuple)
{
return HeapTupleHeaderIsHeapOnly(tuple->t_data);
}
static inline void
HeapTupleSetHeapOnly(const HeapTupleData *tuple)
{
HeapTupleHeaderSetHeapOnly(tuple->t_data);
}
static inline void
HeapTupleClearHeapOnly(const HeapTupleData *tuple)
{
HeapTupleHeaderClearHeapOnly(tuple->t_data);
}
/* prototypes for functions in common/heaptuple.c */
extern Size heap_compute_data_size(TupleDesc tupleDesc,
const Datum *values, const bool *isnull);
extern void heap_fill_tuple(TupleDesc tupleDesc,
const Datum *values, const bool *isnull,
char *data, Size data_size,
uint16 *infomask, bits8 *bit);
extern bool heap_attisnull(HeapTuple tup, int attnum, TupleDesc tupleDesc);
extern Datum nocachegetattr(HeapTuple tup, int attnum,
TupleDesc tupleDesc);
extern Datum heap_getsysattr(HeapTuple tup, int attnum, TupleDesc tupleDesc,
bool *isnull);
extern Datum getmissingattr(TupleDesc tupleDesc,
int attnum, bool *isnull);
extern HeapTuple heap_copytuple(HeapTuple tuple);
extern void heap_copytuple_with_tuple(HeapTuple src, HeapTuple dest);
extern Datum heap_copy_tuple_as_datum(HeapTuple tuple, TupleDesc tupleDesc);
extern HeapTuple heap_form_tuple(TupleDesc tupleDescriptor,
const Datum *values, const bool *isnull);
extern HeapTuple heap_modify_tuple(HeapTuple tuple,
TupleDesc tupleDesc,
const Datum *replValues,
const bool *replIsnull,
const bool *doReplace);
extern HeapTuple heap_modify_tuple_by_cols(HeapTuple tuple,
TupleDesc tupleDesc,
int nCols,
const int *replCols,
const Datum *replValues,
const bool *replIsnull);
extern void heap_deform_tuple(HeapTuple tuple, TupleDesc tupleDesc,
Datum *values, bool *isnull);
extern void heap_freetuple(HeapTuple htup);
extern MinimalTuple heap_form_minimal_tuple(TupleDesc tupleDescriptor,
const Datum *values, const bool *isnull,
Size extra);
extern void heap_free_minimal_tuple(MinimalTuple mtup);
extern MinimalTuple heap_copy_minimal_tuple(MinimalTuple mtup, Size extra);
extern HeapTuple heap_tuple_from_minimal_tuple(MinimalTuple mtup);
extern MinimalTuple minimal_tuple_from_heap_tuple(HeapTuple htup, Size extra);
extern size_t varsize_any(void *p);
extern HeapTuple heap_expand_tuple(HeapTuple sourceTuple, TupleDesc tupleDesc);
extern MinimalTuple minimal_expand_tuple(HeapTuple sourceTuple, TupleDesc tupleDesc);
#ifndef FRONTEND
/*
* fastgetattr
* Fetch a user attribute's value as a Datum (might be either a
* value, or a pointer into the data area of the tuple).
*
* This must not be used when a system attribute might be requested.
* Furthermore, the passed attnum MUST be valid. Use heap_getattr()
* instead, if in doubt.
*
* This gets called many times, so we macro the cacheable and NULL
* lookups, and call nocachegetattr() for the rest.
*/
static inline Datum
fastgetattr(HeapTuple tup, int attnum, TupleDesc tupleDesc, bool *isnull)
{
Assert(attnum > 0);
*isnull = false;
if (HeapTupleNoNulls(tup))
{
CompactAttribute *att;
att = TupleDescCompactAttr(tupleDesc, attnum - 1);
if (att->attcacheoff >= 0)
return fetchatt(att, (char *) tup->t_data + tup->t_data->t_hoff +
att->attcacheoff);
else
return nocachegetattr(tup, attnum, tupleDesc);
}
else
{
if (att_isnull(attnum - 1, tup->t_data->t_bits))
{
*isnull = true;
return (Datum) 0;
}
else
return nocachegetattr(tup, attnum, tupleDesc);
}
}
/*
* heap_getattr
* Extract an attribute of a heap tuple and return it as a Datum.
* This works for either system or user attributes. The given attnum
* is properly range-checked.
*
* If the field in question has a NULL value, we return a zero Datum
* and set *isnull == true. Otherwise, we set *isnull == false.
*
* <tup> is the pointer to the heap tuple. <attnum> is the attribute
* number of the column (field) caller wants. <tupleDesc> is a
* pointer to the structure describing the row and all its fields.
*
*/
static inline Datum
heap_getattr(HeapTuple tup, int attnum, TupleDesc tupleDesc, bool *isnull)
{
if (attnum > 0)
{
if (attnum > (int) HeapTupleHeaderGetNatts(tup->t_data))
return getmissingattr(tupleDesc, attnum, isnull);
else
return fastgetattr(tup, attnum, tupleDesc, isnull);
}
else
return heap_getsysattr(tup, attnum, tupleDesc, isnull);
}
#endif /* FRONTEND */
#endif /* HTUP_DETAILS_H */ | c | github | https://github.com/postgres/postgres | src/include/access/htup_details.h |
# This code was mostly based on ipaddr-py
# Copyright 2007 Google Inc. http://code.google.com/p/ipaddr-py/
# Licensed under the Apache License, Version 2.0 (the "License").
from django.core.exceptions import ValidationError
from django.utils.six.moves import xrange
def clean_ipv6_address(ip_str, unpack_ipv4=False,
error_message="This is not a valid IPv6 address"):
"""
Cleans an IPv6 address string.
Validity is checked by calling is_valid_ipv6_address() - if an
invalid address is passed, ValidationError is raised.
Replaces the longest contiguous zero-sequence with "::", removes
leading zeroes, and makes sure all hextets are lowercase.
Args:
ip_str: A valid IPv6 address.
unpack_ipv4: if an IPv4-mapped address is found,
return the plain IPv4 address (default=False).
error_message: An error message for the ValidationError.
Returns:
A compressed IPv6 address, or the unpacked IPv4 address if
unpack_ipv4 is set.
"""
best_doublecolon_start = -1
best_doublecolon_len = 0
doublecolon_start = -1
doublecolon_len = 0
if not is_valid_ipv6_address(ip_str):
raise ValidationError(error_message)
# This algorithm can only handle fully exploded
# IP strings
ip_str = _explode_shorthand_ip_string(ip_str)
ip_str = _sanitize_ipv4_mapping(ip_str)
# If needed, unpack the IPv4 and return straight away
# - no need in running the rest of the algorithm
if unpack_ipv4:
ipv4_unpacked = _unpack_ipv4(ip_str)
if ipv4_unpacked:
return ipv4_unpacked
hextets = ip_str.split(":")
for index in range(len(hextets)):
# Remove leading zeroes
hextets[index] = hextets[index].lstrip('0')
if not hextets[index]:
hextets[index] = '0'
# Determine best hextet to compress
if hextets[index] == '0':
doublecolon_len += 1
if doublecolon_start == -1:
# Start of a sequence of zeros.
doublecolon_start = index
if doublecolon_len > best_doublecolon_len:
# This is the longest sequence of zeros so far.
best_doublecolon_len = doublecolon_len
best_doublecolon_start = doublecolon_start
else:
doublecolon_len = 0
doublecolon_start = -1
# Compress the most suitable hextet
if best_doublecolon_len > 1:
best_doublecolon_end = (best_doublecolon_start +
best_doublecolon_len)
# For zeros at the end of the address.
if best_doublecolon_end == len(hextets):
hextets += ['']
hextets[best_doublecolon_start:best_doublecolon_end] = ['']
# For zeros at the beginning of the address.
if best_doublecolon_start == 0:
hextets = [''] + hextets
result = ":".join(hextets)
return result.lower()
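As a cross-check, Python 3's stdlib `ipaddress` module applies the same normalization steps this function performs by hand — "::" compression of the longest zero run, lowercasing, and stripping leading zeroes — and exposes IPv4-mapped unpacking as a property:

```python
import ipaddress

# str() of an IPv6Address yields the compressed, lowercased form.
addr = ipaddress.ip_address('2001:0DB8:0:0:0:0:0:0001')
# Mapped addresses expose the embedded IPv4 address (cf. unpack_ipv4).
mapped = ipaddress.ip_address('::ffff:10.10.10.10')
```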
def _sanitize_ipv4_mapping(ip_str):
"""
Sanitize IPv4 mapping in an expanded IPv6 address.
This converts ::ffff:0a0a:0a0a to ::ffff:10.10.10.10.
If there is nothing to sanitize, returns an unchanged
string.
Args:
ip_str: A string, the expanded IPv6 address.
Returns:
The sanitized output string, if applicable.
"""
if not ip_str.lower().startswith('0000:0000:0000:0000:0000:ffff:'):
# not an ipv4 mapping
return ip_str
hextets = ip_str.split(':')
if '.' in hextets[-1]:
# already sanitized
return ip_str
ipv4_address = "%d.%d.%d.%d" % (
int(hextets[6][0:2], 16),
int(hextets[6][2:4], 16),
int(hextets[7][0:2], 16),
int(hextets[7][2:4], 16),
)
result = ':'.join(hextets[0:6])
result += ':' + ipv4_address
return result
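The hextet-to-dotted-quad conversion above can be isolated into a small helper; this hypothetical function mirrors the four `int(..., 16)` byte slices taken from hextets 6 and 7:

```python
def hextets_to_ipv4(hex6, hex7):
    # Each 4-hex-digit hextet holds two bytes of the IPv4 address.
    return '%d.%d.%d.%d' % (int(hex6[0:2], 16), int(hex6[2:4], 16),
                            int(hex7[0:2], 16), int(hex7[2:4], 16))
```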
def _unpack_ipv4(ip_str):
"""
Unpack an IPv4 address that was mapped in a compressed IPv6 address.
This converts 0000:0000:0000:0000:0000:ffff:10.10.10.10 to 10.10.10.10.
If there is nothing to unpack, returns None.
Args:
ip_str: A string, the expanded IPv6 address.
Returns:
The unpacked IPv4 address, or None if there was nothing to unpack.
"""
if not ip_str.lower().startswith('0000:0000:0000:0000:0000:ffff:'):
return None
hextets = ip_str.split(':')
return hextets[-1]
def is_valid_ipv6_address(ip_str):
"""
Ensure we have a valid IPv6 address.
Args:
ip_str: A string, the IPv6 address.
Returns:
A boolean, True if this is a valid IPv6 address.
"""
from django.core.validators import validate_ipv4_address
# We need to have at least one ':'.
if ':' not in ip_str:
return False
# We can only have one '::' shortener.
if ip_str.count('::') > 1:
return False
# '::' should be encompassed by start, digits or end.
if ':::' in ip_str:
return False
# A single colon can neither start nor end an address.
if ((ip_str.startswith(':') and not ip_str.startswith('::')) or
(ip_str.endswith(':') and not ip_str.endswith('::'))):
return False
# We can never have more than 7 ':' (1::2:3:4:5:6:7:8 is invalid)
if ip_str.count(':') > 7:
return False
# If we have no concatenation, we need to have 8 fields with 7 ':'.
if '::' not in ip_str and ip_str.count(':') != 7:
# We might have an IPv4 mapped address.
if ip_str.count('.') != 3:
return False
ip_str = _explode_shorthand_ip_string(ip_str)
# Now that we have that all squared away, let's check that each of the
# hextets are between 0x0 and 0xFFFF.
for hextet in ip_str.split(':'):
if hextet.count('.') == 3:
# If we have an IPv4 mapped address, the IPv4 portion has to
# be at the end of the IPv6 portion.
if not ip_str.split(':')[-1] == hextet:
return False
try:
validate_ipv4_address(hextet)
except ValidationError:
return False
else:
try:
# a value error here means that we got a bad hextet,
# something like 0xzzzz
if int(hextet, 16) < 0x0 or int(hextet, 16) > 0xFFFF:
return False
except ValueError:
return False
return True
def _explode_shorthand_ip_string(ip_str):
"""
Expand a shortened IPv6 address.
Args:
ip_str: A string, the IPv6 address.
Returns:
A string, the expanded IPv6 address.
"""
if not _is_shorthand_ip(ip_str):
# We've already got a longhand ip_str.
return ip_str
new_ip = []
hextet = ip_str.split('::')
# If there is a ::, we need to expand it with zeroes
# to get to 8 hextets - unless there is a dot in the last hextet,
# meaning we're doing v4-mapping
if '.' in ip_str.split(':')[-1]:
fill_to = 7
else:
fill_to = 8
if len(hextet) > 1:
sep = len(hextet[0].split(':')) + len(hextet[1].split(':'))
new_ip = hextet[0].split(':')
for _ in xrange(fill_to - sep):
new_ip.append('0000')
new_ip += hextet[1].split(':')
else:
new_ip = ip_str.split(':')
# Now need to make sure every hextet is 4 lower case characters.
# If a hextet is < 4 characters, we've got missing leading 0's.
ret_ip = []
for hextet in new_ip:
ret_ip.append(('0' * (4 - len(hextet)) + hextet).lower())
return ':'.join(ret_ip)
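For comparison, the stdlib `ipaddress` module (Python 3) provides the same expansion via the `.exploded` property, with `str()` performing the inverse re-compression:

```python
import ipaddress

# Fully expand a shortened address: 8 hextets, 4 lowercase digits each.
exploded = ipaddress.IPv6Address('2001:db8::1').exploded
# str() on the exploded form compresses it back.
recompressed = str(ipaddress.IPv6Address(exploded))
```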
def _is_shorthand_ip(ip_str):
"""Determine if the address is shortened.
Args:
ip_str: A string, the IPv6 address.
Returns:
A boolean, True if the address is shortened.
"""
if ip_str.count('::') == 1:
return True
if any(len(x) < 4 for x in ip_str.split(':')):
return True
return False | unknown | codeparrot/codeparrot-clean | ||
# Copyright 2012-2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Helper module for systemd service readiness notification.
"""
import os
import socket
import sys
from designate.openstack.common import log as logging
LOG = logging.getLogger(__name__)
def _abstractify(socket_name):
if socket_name.startswith('@'):
# abstract namespace socket
socket_name = '\0%s' % socket_name[1:]
return socket_name
def _sd_notify(unset_env, msg):
notify_socket = os.getenv('NOTIFY_SOCKET')
if notify_socket:
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
try:
sock.connect(_abstractify(notify_socket))
sock.sendall(msg)
if unset_env:
del os.environ['NOTIFY_SOCKET']
except EnvironmentError:
LOG.debug("Systemd notification failed", exc_info=True)
finally:
sock.close()
def notify():
"""Send notification to Systemd that service is ready.
For details see
http://www.freedesktop.org/software/systemd/man/sd_notify.html
"""
_sd_notify(False, 'READY=1')
def notify_once():
"""Send notification once to Systemd that service is ready.
Systemd sets NOTIFY_SOCKET environment variable with the name of the
socket listening for notifications from services.
This method removes the NOTIFY_SOCKET environment variable to ensure
notification is sent only once.
"""
_sd_notify(True, 'READY=1')
def onready(notify_socket, timeout):
"""Wait for systemd style notification on the socket.
:param notify_socket: local socket address
:type notify_socket: string
:param timeout: socket timeout
:type timeout: float
:returns: 0 service ready
1 service not ready
2 timeout occurred
"""
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sock.settimeout(timeout)
sock.bind(_abstractify(notify_socket))
try:
msg = sock.recv(512)
except socket.timeout:
return 2
finally:
sock.close()
if 'READY=1' in msg:
return 0
else:
return 1
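A minimal local demo of the handshake, using a connected AF_UNIX datagram socketpair in place of the abstract NOTIFY_SOCKET address (Unix-only, and an assumption of this sketch — systemd itself hands the service a socket path via the environment):

```python
import socket

# One end plays the service (_sd_notify), the other the manager (onready).
service, manager = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
service.sendall(b'READY=1')      # what _sd_notify() writes
msg = manager.recv(512)
state = 0 if b'READY=1' in msg else 1   # onready()'s 0 = service ready
service.close()
manager.close()
```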
if __name__ == '__main__':
# simple CLI for testing
if len(sys.argv) == 1:
notify()
elif len(sys.argv) >= 2:
timeout = float(sys.argv[1])
notify_socket = os.getenv('NOTIFY_SOCKET')
if notify_socket:
retval = onready(notify_socket, timeout)
sys.exit(retval) | unknown | codeparrot/codeparrot-clean | ||
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""This is a simplified Makefile generator for single-target gyp files.
It was originally designed for generating readable Makefiles for the
NaCl examples.
"""
# pylint: disable=C0301
import os
import gyp.common
from gyp.common import GetEnvironFallback
from gyp.generator.make import QuoteIfNecessary
generator_default_variables = {
'EXECUTABLE_PREFIX': '',
'EXECUTABLE_SUFFIX': '',
'STATIC_LIB_PREFIX': 'lib',
'SHARED_LIB_PREFIX': 'lib',
'STATIC_LIB_SUFFIX': '.a',
'INTERMEDIATE_DIR': '$(BUILDDIR)/$(BUILDTYPE)/obj',
'SHARED_INTERMEDIATE_DIR': '$(obj)/gen',
'PRODUCT_DIR': '$(BUILDDIR)/$(BUILDTYPE)',
'CONFIGURATION_NAME': '$(BUILDTYPE)',
}
generator_additional_non_configuration_keys = [
'make_valid_configurations',
]
preamble = """\
#
# GNU Make based build file. For details on GNU Make see:
# http://www.gnu.org/software/make/manual/make.html
#
# This file was generated by gyp (http://code.google.com/p/gyp/)
# Default build configuration
BUILDTYPE = %(default_config)s
# All possible build configurations
BUILDTYPES = %(all_configs)s
# Check for valid build configuration
ifeq (,$(findstring $(BUILDTYPE),$(BUILDTYPES)))
$(warning Possible build configurations are: $(BUILDTYPES))
$(warning Cannot use BUILDTYPE=$(BUILDTYPE) with this Makefile.)
all:
else
# Target toolchain
CC.target ?= %(CC.target)s
CFLAGS.target ?= $(CFLAGS)
CXX.target ?= %(CXX.target)s
CXXFLAGS.target ?= $(CXXFLAGS)
LINK.target ?= %(LINK.target)s
LDFLAGS.target ?= $(LDFLAGS)
AR.target ?= %(AR.target)s
ARFLAGS.target ?= %(ARFLAGS.target)s
# Host toolchain
CC.host ?= gcc
CFLAGS.host ?=
CXX.host ?= g++
CXXFLAGS.host ?=
LINK.host ?= g++
LDFLAGS.host ?=
AR.host ?= ar
ARFLAGS.host := %(ARFLAGS.host)s
BUILDDIR = build
DEPFLAGS = -MMD
DEPFILES :=
.PHONY: all clean
all:
clean:
\trm -rf $(BUILDDIR)
"""
none_section = """
TARGET = $(BUILDDIR)/$(BUILDTYPE)/%(target)s.stamp
all: $(TARGET)
INPUTS = %(inputs)s
$(TARGET): $(INPUTS)
"""
target_section = """
TARGET = %(product)s
all: $(TARGET)
SOURCES = %(sources)s
LIBS_%(target_name_var)s_$(BUILDTYPE) = %(libs)s
OBJS = %(objs)s
DEPFILES += $(OBJS:%%.o=%%.d)
# Suffix rules, putting all outputs into build folder.
$(BUILDDIR)/$(BUILDTYPE)/obj_%(target)s/%%.o: %%.c
\t@mkdir -p $(dir $@)
\t$(CC.%(toolset)s) $(CFLAGS_%(target_name_var)s_$(BUILDTYPE)) -c -o $@ $<
$(BUILDDIR)/$(BUILDTYPE)/obj_%(target)s/%%.o: %%.cc
\t@mkdir -p $(dir $@)
\t$(CXX.%(toolset)s) $(CXXFLAGS_%(target_name_var)s_$(BUILDTYPE)) -c -o $@ $<
"""
lib_section = """
$(TARGET): $(OBJS)
\t@mkdir -p $(dir $@)
\t$(AR.%(toolset)s) $(ARFLAGS.%(toolset)s) $(ARFLAGS_%(target_name_var)s_$(BUILDTYPE)) $@ $^
"""
link_section = """
$(TARGET): $(OBJS)
\t@mkdir -p $(dir $@)
\t$(LINK.%(toolset)s) $(LDFLAGS_%(target_name_var)s_$(BUILDTYPE)) $(LDFLAGS.%(toolset)s) -o $@ -Wl,--start-group $^ $(LIBS_%(target_name_var)s_$(BUILDTYPE)) -Wl,--end-group
"""
def MakeList(value_list, prefix='', quoter=QuoteIfNecessary, initial_indent=0):
"""Construct from a list of values a string that can be assigned to a make
variable. This uses line continuations and limits line length to 80 chars.
"""
if not value_list:
return ''
value_list = [quoter(prefix + l) for l in value_list]
lines = []
line = ' ' * initial_indent
for value in value_list:
if len(line) + len(value) >= 79:
lines.append(line)
line = ''
elif line:
line += ' '
line += value
lines.append(line)
rtn = ' \\\n\t'.join(lines)
return rtn.lstrip()
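The wrapping logic in MakeList can be exercised in isolation; the stand-in below drops the quoting step (QuoteIfNecessary) and keeps only the 79-column wrap with make-style continuations:

```python
# Simplified stand-in for MakeList: joins values with make-style line
# continuations, wrapping near 79 columns. Quoting is assumed away
# (values are used verbatim), which the real helper does not do.
def make_list_sketch(values, initial_indent=0):
    if not values:
        return ''
    lines = []
    line = ' ' * initial_indent
    for value in values:
        if len(line) + len(value) >= 79:
            lines.append(line)
            line = ''
        elif line:
            line += ' '
        line += value
    lines.append(line)
    return ' \\\n\t'.join(lines).lstrip()

print(make_list_sketch(['a.c', 'b.c']))            # short lists stay on one line
long_out = make_list_sketch(['f%02d.c' % i for i in range(30)])
print('\\' in long_out)                            # long lists gain continuations
```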
def WriteList(makefile, value_list, variable, prefix='', quoter=QuoteIfNecessary):
values = MakeList(value_list, prefix, quoter, initial_indent=len(variable)+4)
makefile.write("\n%s := %s\n" % (variable, values))
def WriteConfig(makefile, name, config, target_type):
WriteList(makefile, config.get('defines', []), 'DEFS_%s' % name, '-D')
WriteList(makefile, config.get('cflags', []), 'CPPFLAGS_%s' % name)
WriteList(makefile, config.get('arflags', []), 'ARFLAGS_%s' % name)
ldflags = config.get('ldflags', [])
if target_type == 'shared_library':
ldflags.insert(0, '-shared')
WriteList(makefile, ldflags, 'LDFLAGS_%s' % name)
include_dirs = config.get('include_dirs', [])
include_dirs = ["-I%s" % i for i in include_dirs]
common_flags = ['$(CPPFLAGS_%s)' % name, '$(DEFS_%s)' % name, '$(DEPFLAGS)']
common_flags += include_dirs
WriteList(makefile, common_flags + config.get('cflags_c', []), 'CFLAGS_%s' % name)
WriteList(makefile, common_flags + config.get('cflags_cc', []), 'CXXFLAGS_%s' % name)
def WriteActions(makefile, actions, target_type):
for action in actions:
cmd = gyp.common.EncodePOSIXShellList(action['action'])
makefile.write("\t%s\n" % cmd)
if target_type == 'none':
makefile.write("\ttouch $@\n")
makefile.write("\n")
def WriteTarget(makefile, target_info):
valid_conf = ' '.join(target_info.get('make_valid_configurations', []))
if valid_conf:
makefile.write("\nifneq (,$(findstring $(BUILDTYPE),%s))\n" % valid_conf)
makefile.write('''
##
# Settings for the '%(target_name)s'
##
''' % target_info)
sources = target_info.get('sources', [])
exts = ['.cc', '.c', '.cxx', '.cpp']
sources = [s for s in sources if os.path.splitext(s)[1] in exts]
objects = [os.path.splitext(src)[0] for src in sources]
objects = [obj + '.o' for obj in objects]
target_name_var = target_info['target_name']
target_name_var = target_name_var.replace('.', '_')
for name, config in target_info['configurations'].items():
name = target_name_var + '_' + name
WriteConfig(makefile, name, config, target_info['type'])
actions = target_info.get('actions', [])
params = {
'target': target_info['target_name'],
'product': target_info['target_name'],
'target_name_var': target_name_var,
}
if 'product_name' in target_info:
params['product'] = target_info['product_name']
if target_info['type'] == 'static_library':
prefix = 'lib'
elif target_info['type'] == 'shared_library':
prefix = 'lib'
else:
prefix = ''
if prefix and not params['product'].startswith(prefix):
params['product'] = prefix + params['product']
dirname = target_info.get('product_dir', '$(BUILDDIR)/$(BUILDTYPE)')
params['product'] = os.path.join(dirname, params['product'])
if target_info['type'] == 'none':
params.update({
'inputs': MakeList(actions[0]['inputs'])
})
makefile.write(none_section % params)
else:
builddir = '$(BUILDDIR)/$(BUILDTYPE)/obj_%s' % target_info['target_name']
params.update({
'sources': MakeList(sources),
'libs': MakeList(target_info['libraries']),
'objs': MakeList(["%s/%s" % (builddir, obj) for obj in objects]),
'toolset': target_info['toolset']
})
makefile.write(target_section % params)
if target_info['type'] == 'static_library':
makefile.write(lib_section % params)
else:
makefile.write(link_section % params)
WriteActions(makefile, actions, target_info['type'])
if valid_conf:
makefile.write('endif\n')
def GenerateOutput(target_list, target_dicts, data, params):
"""Main entry point for this generator.
gyp will call this function.
"""
options = params['options']
makefilename = os.path.join(options.toplevel_dir, 'Makefile')
makefile = open(makefilename, 'w')
build_file, _, _ = gyp.common.ParseQualifiedTarget(target_list[0])
make_global_settings = data[build_file].get('make_global_settings', [])
settings_map = dict((key, value) for key, value in make_global_settings)
target_info = target_dicts[target_list[0]]
params = {
'CC.target': GetEnvironFallback(['CC_target'], '$(CC)'),
'AR.target': GetEnvironFallback(['AR_target'], '$(AR)'),
'ARFLAGS.target': GetEnvironFallback(['ARFLAGS_target'], 'crs'),
'CXX.target': GetEnvironFallback(['CXX_target'], '$(CXX)'),
'LINK.target': GetEnvironFallback(['LINK_target'], '$(LINK)'),
'ARFLAGS.host': GetEnvironFallback(['ARFLAGS_host'], 'crs'),
'default_config': target_info['default_configuration'],
'all_configs': ' '.join(target_info['configurations'].keys()),
}
params.update(settings_map)
makefile.write(preamble % params)
for target_info in target_dicts.values():
WriteTarget(makefile, target_info)
makefile.write('''
# include (if they exist) the .d dependency files that the compiler generates
-include $(DEPFILES)
endif
''')
makefile.close() | unknown | codeparrot/codeparrot-clean | ||
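GenerateOutput resolves toolchain defaults through GetEnvironFallback; a plausible stand-in (assumed behavior: the first environment variable in the list that is set wins, otherwise the fallback string is returned):

```python
import os

def get_environ_fallback(var_list, fallback):
    # Assumed behavior of gyp's GetEnvironFallback: return the first
    # environment variable in var_list that is set, else the fallback.
    for var in var_list:
        if var in os.environ:
            return os.environ[var]
    return fallback

os.environ['CC_target_demo'] = 'clang'
print(get_environ_fallback(['CC_target_demo'], '$(CC)'))       # picks the env var
print(get_environ_fallback(['UNSET_VAR_demo_xyz'], '$(CC)'))   # falls back
```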
from typing import TYPE_CHECKING, Any
from langchain_classic._api import create_importer
if TYPE_CHECKING:
from langchain_community.llms import HuggingFaceHub
# Create a way to dynamically look up deprecated imports.
# Used to consolidate logic for raising deprecation warnings and
# handling optional imports.
DEPRECATED_LOOKUP = {"HuggingFaceHub": "langchain_community.llms"}
_import_attribute = create_importer(__package__, deprecated_lookups=DEPRECATED_LOOKUP)
def __getattr__(name: str) -> Any:
"""Look up attributes dynamically."""
return _import_attribute(name)
__all__ = [
"HuggingFaceHub",
] | python | github | https://github.com/langchain-ai/langchain | libs/langchain/langchain_classic/llms/huggingface_hub.py |
############################################################################
#
# Program: GDCM (Grassroots DICOM). A DICOM library
#
# Copyright (c) 2006-2011 Mathieu Malaterre
# All rights reserved.
# See Copyright.txt or http://gdcm.sourceforge.net/Copyright.html for details.
#
# This software is distributed WITHOUT ANY WARRANTY; without even
# the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
# PURPOSE. See the above copyright notice for more information.
#
############################################################################
import gdcm
import os,sys
def TestKakadu(filename, kdu_expand):
fn = gdcm.Filename(filename)
testdir = fn.GetPath()
testbasename = fn.GetName()
ext = fn.GetExtension()
#print ext
#kakadu_path = '/home/mmalaterre/Software/Kakadu60'
kakadu_path = os.path.dirname( kdu_expand )
#kdu_expand = kakadu_path + '/kdu_expand'
kdu_args = ' -quiet -i '
output_dcm = testdir + '/kakadu/' + testbasename
output_j2k = output_dcm + '.j2k'
output_ppm = output_dcm + '.ppm'
output_raw = output_dcm + '.rawl' # FIXME: little endian only...
kdu_expand += kdu_args + output_j2k + ' -o ' + output_raw
# $ ./bin/gdcmraw -i .../TestImageChangeTransferSyntax2/012345.002.050.dcm -o toto.j2k
executable_output_path = gdcm.GDCM_EXECUTABLE_OUTPUT_PATH
gdcmraw = executable_output_path + '/gdcmraw'
outputfilename = output_j2k
gdcmraw_args = ' -i ' + filename + ' -o ' + outputfilename
gdcmraw += gdcmraw_args
#print gdcmraw
ret = os.system( gdcmraw )
#print "ret:",ret
#print kdu_expand
os.environ["LD_LIBRARY_PATH"]=kakadu_path
ret = os.system( kdu_expand )
# now need to skip the ppm header:
dd_cmd = 'dd bs=15 skip=1 if=%s of=%s' % (output_ppm, output_raw)  # note: never executed
#print "ret:",ret
md5 = gdcm.Testing.ComputeFileMD5( output_raw )
# ok this is the md5 as computed after decompression using kdu_expand
# let see if it match out previously (stored) md5:
ref = gdcm.Testing.GetMD5FromFile(filename)
#print ref
retval = 0
if ref != md5:
img = gdcm.ImageReader()
img.SetFileName( filename )
img.Read()
if img.GetImage().GetDimension(2) != 1:
print("Test do not handle multiframes for now")
elif img.GetImage().GetPixelFormat().GetSamplesPerPixel() != 1:
print("Test do not handle RGB for now. kdu_expand expand as RRR GGG BBB by default")
else:
print("md5 are different: %s should be: %s for file %s"%(md5,ref,filename))
print("raw file was: %s"%(output_raw))
retval = 1
return retval
if __name__ == "__main__":
success = 0
#try:
# filename = os.sys.argv[1]
# success += TestKakadu( filename )
#except:
# loop over all files:
#t = gdcm.Testing()
#nfiles = t.GetNumberOfFileNames()
#for i in range(0,nfiles):
# filename = t.GetFileName(i)
# success += TestKakadu( filename )
d = gdcm.Directory()
tempdir = gdcm.Testing.GetTempDirectory()
j2ksubdir = 'TestImageChangeTransferSyntax2' # FIXME hardcoded !
nfiles = d.Load( tempdir + '/' + j2ksubdir )
# make sure the output dir for temporary j2k files exists:
md = gdcm.System.MakeDirectory( tempdir + '/' + j2ksubdir + '/kakadu' )
if not md:
sys.exit(1)
files = d.GetFilenames()
for i in range(0,nfiles):
filename = files[i]
success += TestKakadu( filename, os.sys.argv[1] )
# Test succeed ?
sys.exit(success) | unknown | codeparrot/codeparrot-clean | ||
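The pass/fail decision above rests on comparing a stored MD5 against the checksum of the freshly decompressed file; a minimal stand-in for `gdcm.Testing.ComputeFileMD5` (assumed to return the hex digest of the file's raw bytes):

```python
import hashlib
import os
import tempfile

def compute_file_md5(path, chunk_size=8192):
    # Stream the file so large decompressed images are not read fully
    # into memory; returns the hex digest, as the gdcm helper is
    # assumed to.
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            md5.update(chunk)
    return md5.hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'abc')
digest = compute_file_md5(tmp.name)
print(digest)  # -> 900150983cd24fb0d6963f7d28e17f72
os.remove(tmp.name)
```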
# Copyright (c) 2011 The Chromium OS Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Unit tests for the gdata_lib module."""
from __future__ import print_function
import getpass
import mox
import re
import atom.service
import gdata.projecthosting.client as gd_ph_client
import gdata.spreadsheet.service
from chromite.lib import cros_test_lib
from chromite.lib import gdata_lib
from chromite.lib import osutils
# pylint: disable=W0212,E1101
class GdataLibTest(cros_test_lib.OutputTestCase):
"""Tests for methods that escape/unescape strings for speadsheets."""
def testPrepColNameForSS(self):
tests = {
'foo': 'foo',
'Foo': 'foo',
'FOO': 'foo',
'foo bar': 'foobar',
'Foo Bar': 'foobar',
'F O O B A R': 'foobar',
'Foo/Bar': 'foobar',
'Foo Bar/Dude': 'foobardude',
'foo/bar': 'foobar',
}
for col in tests:
expected = tests[col]
self.assertEquals(expected, gdata_lib.PrepColNameForSS(col))
self.assertEquals(expected, gdata_lib.PrepColNameForSS(expected))
def testPrepValForSS(self):
tests = {
None: None,
'': '',
'foo': 'foo',
'foo123': 'foo123',
'123': "'123",
'1.2': "'1.2",
}
for val in tests:
expected = tests[val]
self.assertEquals(expected, gdata_lib.PrepValForSS(val))
def testPrepRowForSS(self):
vals = {
None: None,
'': '',
'foo': 'foo',
'foo123': 'foo123',
'123': "'123",
'1.2': "'1.2",
}
# Create before and after rows (rowIn, rowOut).
rowIn = {}
rowOut = {}
col = 'a' # Column names not important.
for valIn in vals:
valOut = vals[valIn]
rowIn[col] = valIn
rowOut[col] = valOut
col = chr(ord(col) + 1) # Change column name.
self.assertEquals(rowOut, gdata_lib.PrepRowForSS(rowIn))
def testScrubValFromSS(self):
tests = {
None: None,
'foo': 'foo',
'123': '123',
"'123": '123',
}
for val in tests:
expected = tests[val]
self.assertEquals(expected, gdata_lib.ScrubValFromSS(val))
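The convention the test tables above encode: values that a spreadsheet would coerce to numbers get a leading apostrophe on the way in and lose it on the way out. A sketch mirroring the tables (not necessarily gdata_lib's exact rule):

```python
def prep_val(val):
    # Prefix values that look numeric with "'" so a spreadsheet stores
    # them as literal text instead of coercing them to numbers.
    # Heuristic chosen to match the test tables, not the real regex.
    if val and val[0].isdigit():
        return "'" + val
    return val

def scrub_val(val):
    # Inverse of prep_val: drop a leading apostrophe if present.
    if val and val.startswith("'"):
        return val[1:]
    return val

assert prep_val('123') == "'123"
assert prep_val('foo123') == 'foo123'
assert scrub_val(prep_val('1.2')) == '1.2'
```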
class CredsTest(cros_test_lib.MockOutputTestCase):
"""Tests related to user credentials."""
USER = 'somedude@chromium.org'
PASSWORD = 'worldsbestpassword'
DOCS_TOKEN = 'SomeDocsAuthToken'
TRACKER_TOKEN = 'SomeTrackerAuthToken'
@osutils.TempFileDecorator
def testStoreLoadCreds(self):
# This is the replay script for the test.
creds = gdata_lib.Creds()
# This is the test verification.
with self.OutputCapturer():
creds.SetCreds(self.USER, self.PASSWORD)
self.assertEquals(self.USER, creds.user)
self.assertEquals(self.PASSWORD, creds.password)
self.assertTrue(creds.creds_dirty)
creds.StoreCreds(self.tempfile)
self.assertEquals(self.USER, creds.user)
self.assertEquals(self.PASSWORD, creds.password)
self.assertFalse(creds.creds_dirty)
# Clear user/password before loading from just-created file.
creds.user = None
creds.password = None
self.assertEquals(None, creds.user)
self.assertEquals(None, creds.password)
creds.LoadCreds(self.tempfile)
self.assertEquals(self.USER, creds.user)
self.assertEquals(self.PASSWORD, creds.password)
self.assertFalse(creds.creds_dirty)
@osutils.TempFileDecorator
def testStoreLoadToken(self):
# This is the replay script for the test.
creds = gdata_lib.Creds()
creds.user = self.USER
# This is the test verification.
with self.OutputCapturer():
creds.SetDocsAuthToken(self.DOCS_TOKEN)
self.assertEquals(self.DOCS_TOKEN, creds.docs_auth_token)
self.assertTrue(creds.token_dirty)
creds.SetTrackerAuthToken(self.TRACKER_TOKEN)
self.assertEquals(self.TRACKER_TOKEN, creds.tracker_auth_token)
self.assertTrue(creds.token_dirty)
creds.StoreAuthToken(self.tempfile)
self.assertEquals(self.DOCS_TOKEN, creds.docs_auth_token)
self.assertEquals(self.TRACKER_TOKEN, creds.tracker_auth_token)
self.assertFalse(creds.token_dirty)
# Clear auth_tokens before loading from just-created file.
creds.docs_auth_token = None
creds.tracker_auth_token = None
creds.user = None
creds.LoadAuthToken(self.tempfile)
self.assertEquals(self.DOCS_TOKEN, creds.docs_auth_token)
self.assertEquals(self.TRACKER_TOKEN, creds.tracker_auth_token)
self.assertFalse(creds.token_dirty)
self.assertEquals(self.USER, creds.user)
def testSetCreds(self):
# This is the replay script for the test.
creds = gdata_lib.Creds()
# This is the test verification.
creds.SetCreds(self.USER, password=self.PASSWORD)
self.assertEquals(self.USER, creds.user)
self.assertEquals(self.PASSWORD, creds.password)
self.assertTrue(creds.creds_dirty)
def testSetCredsNoPassword(self):
# Add test-specific mocks/stubs
self.PatchObject(getpass, 'getpass', return_value=self.PASSWORD)
# This is the replay script for the test.
creds = gdata_lib.Creds()
# This is the test verification.
creds.SetCreds(self.USER)
self.assertEquals(self.USER, creds.user)
self.assertEquals(self.PASSWORD, creds.password)
self.assertTrue(creds.creds_dirty)
def testSetDocsToken(self):
# This is the replay script for the test.
creds = gdata_lib.Creds()
# This is the test verification.
creds.SetDocsAuthToken(self.DOCS_TOKEN)
self.assertEquals(self.DOCS_TOKEN, creds.docs_auth_token)
self.assertTrue(creds.token_dirty)
def testSetTrackerToken(self):
# This is the replay script for the test.
creds = gdata_lib.Creds()
# This is the test verification.
creds.SetTrackerAuthToken(self.TRACKER_TOKEN)
self.assertEquals(self.TRACKER_TOKEN, creds.tracker_auth_token)
self.assertTrue(creds.token_dirty)
class SpreadsheetRowTest(cros_test_lib.OutputTestCase):
"""Tests related to spreadsheet row interaction."""
SS_ROW_OBJ = 'SSRowObj'
SS_ROW_NUM = 5
def testEmpty(self):
row = gdata_lib.SpreadsheetRow(self.SS_ROW_OBJ, self.SS_ROW_NUM)
self.assertEquals(0, len(row))
self.assertEquals(self.SS_ROW_OBJ, row.ss_row_obj)
self.assertEquals(self.SS_ROW_NUM, row.ss_row_num)
self.assertRaises(TypeError, row.__setitem__, 'abc', 'xyz')
self.assertEquals(0, len(row))
self.assertFalse('abc' in row)
def testInit(self):
starting_vals = {'abc': 'xyz', 'foo': 'bar'}
row = gdata_lib.SpreadsheetRow(self.SS_ROW_OBJ, self.SS_ROW_NUM,
starting_vals)
self.assertEquals(len(starting_vals), len(row))
self.assertEquals(starting_vals, row)
self.assertEquals(row['abc'], 'xyz')
self.assertTrue('abc' in row)
self.assertEquals(row['foo'], 'bar')
self.assertTrue('foo' in row)
self.assertEquals(self.SS_ROW_OBJ, row.ss_row_obj)
self.assertEquals(self.SS_ROW_NUM, row.ss_row_num)
self.assertRaises(TypeError, row.__delitem__, 'abc')
self.assertEquals(len(starting_vals), len(row))
self.assertTrue('abc' in row)
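The behavior these tests pin down, a dict-like row that carries its spreadsheet row object and number and refuses mutation, can be sketched as a small dict subclass; the real SpreadsheetRow may differ in detail:

```python
class ReadOnlyRow(dict):
    """Dict that forbids mutation after construction; a sketch of the
    contract SpreadsheetRowTest expects from SpreadsheetRow."""

    def __init__(self, ss_row_obj, ss_row_num, init=None):
        # dict.__init__ populates the mapping without going through our
        # overridden __setitem__, so construction still works.
        dict.__init__(self, init or {})
        self.ss_row_obj = ss_row_obj
        self.ss_row_num = ss_row_num

    def __setitem__(self, key, value):
        raise TypeError('row is read-only')

    def __delitem__(self, key):
        raise TypeError('row is read-only')

row = ReadOnlyRow('SSRowObj', 5, {'abc': 'xyz'})
print(row['abc'], row.ss_row_num)
try:
    row['new'] = 1
except TypeError:
    print('mutation blocked')
```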
class SpreadsheetCommTest(cros_test_lib.MoxOutputTestCase):
"""Test Speadsheet communication."""
SS_KEY = 'TheSSKey'
WS_NAME = 'TheWSName'
WS_KEY = 'TheWSKey'
USER = 'dude'
PASSWORD = 'shhh'
TOKEN = 'authtoken'
COLUMNS = ('greeting', 'name', 'title')
ROWS = (
{'greeting': 'Hi', 'name': 'George', 'title': 'Mr.'},
{'greeting': 'Howdy', 'name': 'Billy Bob', 'title': 'Mr.'},
{'greeting': 'Yo', 'name': 'Adriane', 'title': 'Ms.'},
)
def MockScomm(self, connect=True):
"""Return a mocked SpreadsheetComm"""
mocked_scomm = self.mox.CreateMock(gdata_lib.SpreadsheetComm)
mocked_scomm._columns = None
mocked_scomm._rows = None
if connect:
mocked_gdclient = self.mox.CreateMock(gdata_lib.RetrySpreadsheetsService)
mocked_scomm.gd_client = mocked_gdclient
mocked_scomm.ss_key = self.SS_KEY
mocked_scomm.ws_name = self.WS_NAME
mocked_scomm.ws_key = self.WS_KEY
else:
mocked_scomm.gd_client = None
mocked_scomm.ss_key = None
mocked_scomm.ws_name = None
mocked_scomm.ws_key = None
return mocked_scomm
def NewScomm(self, gd_client=None, connect=True):
"""Return a non-mocked SpreadsheetComm."""
scomm = gdata_lib.SpreadsheetComm()
scomm.gd_client = gd_client
if connect:
scomm.ss_key = self.SS_KEY
scomm.ws_name = self.WS_NAME
scomm.ws_key = self.WS_KEY
else:
scomm.ss_key = None
scomm.ws_name = None
scomm.ws_key = None
return scomm
def GenerateCreds(self, skip_user=False, skip_token=False):
creds = gdata_lib.Creds()
if not skip_user:
creds.user = self.USER
creds.password = self.PASSWORD
if not skip_token:
creds.docs_auth_token = self.TOKEN
return creds
def testConnect(self):
mocked_scomm = self.MockScomm(connect=False)
creds = self.GenerateCreds()
# This is the replay script for the test.
mocked_scomm._Login(creds, 'chromiumos')
mocked_scomm.SetCurrentWorksheet(self.WS_NAME, ss_key=self.SS_KEY)
self.mox.ReplayAll()
# This is the test verification.
gdata_lib.SpreadsheetComm.Connect(mocked_scomm, creds,
self.SS_KEY, self.WS_NAME)
self.mox.VerifyAll()
def testColumns(self):
"""Test the .columns property. Testing a property gets ugly."""
self.mox.StubOutWithMock(gdata.spreadsheet.service, 'CellQuery')
mocked_gdclient = self.mox.CreateMock(gdata_lib.RetrySpreadsheetsService)
scomm = self.NewScomm(gd_client=mocked_gdclient, connect=True)
query = {'max-row': '1'}
# Simulate a Cells feed from spreadsheet for the column row.
cols = [c[0].upper() + c[1:] for c in self.COLUMNS]
entry = [cros_test_lib.EasyAttr(
content=cros_test_lib.EasyAttr(text=c)) for c in cols]
feed = cros_test_lib.EasyAttr(entry=entry)
# This is the replay script for the test.
gdata.spreadsheet.service.CellQuery().AndReturn(query)
mocked_gdclient.GetCellsFeed(
self.SS_KEY, self.WS_KEY, query=query).AndReturn(feed)
self.mox.ReplayAll()
# This is the test verification.
result = scomm.columns
del scomm # Force deletion now before VerifyAll.
self.mox.VerifyAll()
expected_result = self.COLUMNS
self.assertEquals(expected_result, result)
def testRows(self):
"""Test the .rows property. Testing a property gets ugly."""
mocked_gdclient = self.mox.CreateMock(gdata_lib.RetrySpreadsheetsService)
scomm = self.NewScomm(gd_client=mocked_gdclient, connect=True)
# Simulate a List feed from spreadsheet for all rows.
rows = [
{'col_name': 'Joe', 'col_age': '12', 'col_zip': '12345'},
{'col_name': 'Bob', 'col_age': '15', 'col_zip': '54321'},
]
entry = []
for row in rows:
custom = dict((k, cros_test_lib.EasyAttr(text=v))
for (k, v) in row.iteritems())
entry.append(cros_test_lib.EasyAttr(custom=custom))
feed = cros_test_lib.EasyAttr(entry=entry)
# This is the replay script for the test.
mocked_gdclient.GetListFeed(self.SS_KEY, self.WS_KEY).AndReturn(feed)
self.mox.ReplayAll()
# This is the test verification.
result = scomm.rows
del scomm # Force deletion now before VerifyAll.
self.mox.VerifyAll()
self.assertEquals(tuple(rows), result)
# Result tuple should have spreadsheet row num as attribute on each row.
self.assertEquals(2, result[0].ss_row_num)
self.assertEquals(3, result[1].ss_row_num)
# Result tuple should have spreadsheet row obj as attribute on each row.
self.assertEquals(entry[0], result[0].ss_row_obj)
self.assertEquals(entry[1], result[1].ss_row_obj)
def testSetCurrentWorksheetStart(self):
mocked_scomm = self.MockScomm(connect=True)
# Undo worksheet settings.
mocked_scomm.ss_key = None
mocked_scomm.ws_name = None
mocked_scomm.ws_key = None
# This is the replay script for the test.
mocked_scomm._ClearCache()
mocked_scomm._GetWorksheetKey(
self.SS_KEY, self.WS_NAME).AndReturn(self.WS_KEY)
mocked_scomm._ClearCache()
self.mox.ReplayAll()
# This is the test verification.
gdata_lib.SpreadsheetComm.SetCurrentWorksheet(mocked_scomm, self.WS_NAME,
ss_key=self.SS_KEY)
self.mox.VerifyAll()
self.assertEquals(self.SS_KEY, mocked_scomm.ss_key)
self.assertEquals(self.WS_KEY, mocked_scomm.ws_key)
self.assertEquals(self.WS_NAME, mocked_scomm.ws_name)
def testSetCurrentWorksheetRestart(self):
mocked_scomm = self.MockScomm(connect=True)
other_ws_name = 'OtherWSName'
other_ws_key = 'OtherWSKey'
# This is the replay script for the test.
mocked_scomm._GetWorksheetKey(
self.SS_KEY, other_ws_name).AndReturn(other_ws_key)
mocked_scomm._ClearCache()
self.mox.ReplayAll()
# This is the test verification.
gdata_lib.SpreadsheetComm.SetCurrentWorksheet(mocked_scomm, other_ws_name)
self.mox.VerifyAll()
self.assertEquals(self.SS_KEY, mocked_scomm.ss_key)
self.assertEquals(other_ws_key, mocked_scomm.ws_key)
self.assertEquals(other_ws_name, mocked_scomm.ws_name)
def testClearCache(self):
rows = 'SomeRows'
cols = 'SomeColumns'
scomm = self.NewScomm()
scomm._rows = rows
scomm._columns = cols
scomm._ClearCache(keep_columns=True)
self.assertTrue(scomm._rows is None)
self.assertEquals(cols, scomm._columns)
scomm._rows = rows
scomm._columns = cols
scomm._ClearCache(keep_columns=False)
self.assertTrue(scomm._rows is None)
self.assertTrue(scomm._columns is None)
scomm._rows = rows
scomm._columns = cols
scomm._ClearCache()
self.assertTrue(scomm._rows is None)
self.assertTrue(scomm._columns is None)
def testLoginWithUserPassword(self):
mocked_scomm = self.MockScomm(connect=False)
creds = self.GenerateCreds(skip_token=True)
self.mox.StubOutClassWithMocks(gdata_lib, 'RetrySpreadsheetsService')
source = 'SomeSource'
# This is the replay script for the test.
mocked_gdclient = gdata_lib.RetrySpreadsheetsService()
mocked_gdclient.ProgrammaticLogin()
mocked_gdclient.GetClientLoginToken().AndReturn(self.TOKEN)
self.mox.ReplayAll()
# This is the test verification.
with self.OutputCapturer():
gdata_lib.SpreadsheetComm._Login(mocked_scomm, creds, source)
self.mox.VerifyAll()
self.assertEquals(self.USER, mocked_gdclient.email)
self.assertEquals(self.PASSWORD, mocked_gdclient.password)
self.assertEquals(self.TOKEN, creds.docs_auth_token)
self.assertEquals(source, mocked_gdclient.source)
self.assertEquals(mocked_gdclient, mocked_scomm.gd_client)
def testLoginWithToken(self):
mocked_scomm = self.MockScomm(connect=False)
creds = self.GenerateCreds(skip_user=True)
self.mox.StubOutClassWithMocks(gdata_lib, 'RetrySpreadsheetsService')
source = 'SomeSource'
# This is the replay script for the test.
mocked_gdclient = gdata_lib.RetrySpreadsheetsService()
mocked_gdclient.SetClientLoginToken(creds.docs_auth_token)
self.mox.ReplayAll()
# This is the test verification.
with self.OutputCapturer():
gdata_lib.SpreadsheetComm._Login(mocked_scomm, creds, source)
self.mox.VerifyAll()
self.assertFalse(hasattr(mocked_gdclient, 'email'))
self.assertFalse(hasattr(mocked_gdclient, 'password'))
self.assertEquals(source, mocked_gdclient.source)
self.assertEquals(mocked_gdclient, mocked_scomm.gd_client)
def testGetWorksheetKey(self):
mocked_scomm = self.MockScomm()
entrylist = [
cros_test_lib.EasyAttr(
title=cros_test_lib.EasyAttr(text='Foo'), id='NotImportant'),
cros_test_lib.EasyAttr(
title=cros_test_lib.EasyAttr(text=self.WS_NAME),
id=cros_test_lib.EasyAttr(text='/some/path/%s' % self.WS_KEY)),
cros_test_lib.EasyAttr(
title=cros_test_lib.EasyAttr(text='Bar'), id='NotImportant'),
]
feed = cros_test_lib.EasyAttr(entry=entrylist)
# This is the replay script for the test.
mocked_scomm.gd_client.GetWorksheetsFeed(self.SS_KEY).AndReturn(feed)
self.mox.ReplayAll()
# This is the test verification.
gdata_lib.SpreadsheetComm._GetWorksheetKey(mocked_scomm,
self.SS_KEY, self.WS_NAME)
self.mox.VerifyAll()
def testGetColumns(self):
mocked_scomm = self.MockScomm()
mocked_scomm.columns = 'SomeColumns'
# Replay script
self.mox.ReplayAll()
# This is the test verification.
result = gdata_lib.SpreadsheetComm.GetColumns(mocked_scomm)
self.mox.VerifyAll()
self.assertEquals('SomeColumns', result)
def testGetColumnIndex(self):
# Note that spreadsheet column indices start at 1.
mocked_scomm = self.MockScomm()
mocked_scomm.columns = ['these', 'are', 'column', 'names']
# This is the replay script for the test.
self.mox.ReplayAll()
# This is the test verification.
result = gdata_lib.SpreadsheetComm.GetColumnIndex(mocked_scomm, 'are')
self.mox.VerifyAll()
self.assertEquals(2, result)
def testGetRows(self):
mocked_scomm = self.MockScomm()
rows = []
for row_ix, row_dict in enumerate(self.ROWS):
rows.append(gdata_lib.SpreadsheetRow('SSRowObj%d' % (row_ix + 2),
(row_ix + 2), row_dict))
mocked_scomm.rows = tuple(rows)
# This is the replay script for the test.
self.mox.ReplayAll()
# This is the test verification.
result = gdata_lib.SpreadsheetComm.GetRows(mocked_scomm)
self.mox.VerifyAll()
self.assertEquals(self.ROWS, result)
for row_ix in xrange(len(self.ROWS)):
self.assertEquals(row_ix + 2, result[row_ix].ss_row_num)
self.assertEquals('SSRowObj%d' % (row_ix + 2), result[row_ix].ss_row_obj)
def testGetRowCacheByCol(self):
mocked_scomm = self.MockScomm()
# This is the replay script for the test.
mocked_scomm.GetRows().AndReturn(self.ROWS)
self.mox.ReplayAll()
# This is the test verification.
result = gdata_lib.SpreadsheetComm.GetRowCacheByCol(mocked_scomm, 'name')
self.mox.VerifyAll()
# Result is a dict of rows by the 'name' column.
for row in self.ROWS:
name = row['name']
self.assertEquals(row, result[name])
def testGetRowCacheByColDuplicates(self):
mocked_scomm = self.MockScomm()
# Create new row list with duplicates by name column.
rows = []
for row in self.ROWS:
new_row = dict(row)
new_row['greeting'] = row['greeting'] + ' there'
rows.append(new_row)
rows.extend(self.ROWS)
# This is the replay script for the test.
mocked_scomm.GetRows().AndReturn(tuple(rows))
self.mox.ReplayAll()
# This is the test verification.
result = gdata_lib.SpreadsheetComm.GetRowCacheByCol(mocked_scomm, 'name')
self.mox.VerifyAll()
# Result is a dict of rows by the 'name' column. In this
# test each result should be a list of the rows with the same
# value in the 'name' column.
num_rows = len(rows)
for ix in xrange(num_rows / 2):
row1 = rows[ix]
row2 = rows[ix + (num_rows / 2)]
name = row1['name']
self.assertEquals(name, row2['name'])
expected_rows = [row1, row2]
self.assertEquals(expected_rows, result[name])
def testInsertRow(self):
mocked_scomm = self.MockScomm()
row = 'TheRow'
# Replay script
mocked_scomm.gd_client.InsertRow(row, mocked_scomm.ss_key,
mocked_scomm.ws_key)
mocked_scomm._ClearCache(keep_columns=True)
self.mox.ReplayAll()
# This is the test verification.
gdata_lib.SpreadsheetComm.InsertRow(mocked_scomm, row)
self.mox.VerifyAll()
def testUpdateRowCellByCell(self):
mocked_scomm = self.MockScomm()
rowIx = 5
row = {'a': 123, 'b': 234, 'c': 345}
colIndices = {'a': 1, 'b': None, 'c': 4}
# Replay script
for colName in row:
colIx = colIndices[colName]
mocked_scomm.GetColumnIndex(colName).AndReturn(colIx)
if colIx is not None:
mocked_scomm.ReplaceCellValue(rowIx, colIx, row[colName])
mocked_scomm._ClearCache(keep_columns=True)
self.mox.ReplayAll()
# This is the test verification.
gdata_lib.SpreadsheetComm.UpdateRowCellByCell(mocked_scomm, rowIx, row)
self.mox.VerifyAll()
def testDeleteRow(self):
mocked_scomm = self.MockScomm()
ss_row = 'TheRow'
# Replay script
mocked_scomm.gd_client.DeleteRow(ss_row)
mocked_scomm._ClearCache(keep_columns=True)
self.mox.ReplayAll()
# This is the test verification.
gdata_lib.SpreadsheetComm.DeleteRow(mocked_scomm, ss_row)
self.mox.VerifyAll()
def testReplaceCellValue(self):
mocked_scomm = self.MockScomm()
rowIx = 14
colIx = 4
val = 'TheValue'
# Replay script
mocked_scomm.gd_client.UpdateCell(rowIx, colIx, val,
mocked_scomm.ss_key, mocked_scomm.ws_key)
mocked_scomm._ClearCache(keep_columns=True)
self.mox.ReplayAll()
# Verify
gdata_lib.SpreadsheetComm.ReplaceCellValue(mocked_scomm, rowIx, colIx, val)
self.mox.VerifyAll()
def testClearCellValue(self):
mocked_scomm = self.MockScomm()
rowIx = 14
colIx = 4
# Replay script
mocked_scomm.ReplaceCellValue(rowIx, colIx, None)
self.mox.ReplayAll()
# Verify
gdata_lib.SpreadsheetComm.ClearCellValue(mocked_scomm, rowIx, colIx)
self.mox.VerifyAll()
class IssueCommentTest(cros_test_lib.TestCase):
"""Test creating comments."""
def testInit(self):
title = 'Greetings, Earthlings'
text = 'I come in peace.'
ic = gdata_lib.IssueComment(title, text)
self.assertEquals(title, ic.title)
self.assertEquals(text, ic.text)
def createTrackerIssue(tid, labels, owner, status, content, title):
tissue = cros_test_lib.EasyAttr()
tissue.id = cros_test_lib.EasyAttr(
text='http://www/some/path/%d' % tid)
tissue.label = [cros_test_lib.EasyAttr(text=l) for l in labels]
tissue.owner = cros_test_lib.EasyAttr(
username=cros_test_lib.EasyAttr(text=owner))
tissue.status = cros_test_lib.EasyAttr(text=status)
tissue.content = cros_test_lib.EasyAttr(text=content)
tissue.title = cros_test_lib.EasyAttr(text=title)
return tissue
class IssueTest(cros_test_lib.MoxTestCase):
"""Test creating a bug."""
def testInitOverride(self):
owner = 'somedude@chromium.org'
status = 'Assigned'
issue = gdata_lib.Issue(owner=owner, status=status)
self.assertEquals(owner, issue.owner)
self.assertEquals(status, issue.status)
def testInitInvalidOverride(self):
self.assertRaises(ValueError, gdata_lib.Issue,
foobar='NotARealAttr')
def testInitFromTracker(self):
# Need to create a dummy Tracker Issue object.
tissue_id = 123
tissue_labels = ['Iteration-10', 'Effort-2']
tissue_owner = 'thedude@chromium.org'
tissue_status = 'Available'
tissue_content = 'The summary message'
tissue_title = 'The Big Title'
tissue = createTrackerIssue(tid=tissue_id, labels=tissue_labels,
owner=tissue_owner, status=tissue_status,
content=tissue_content, title=tissue_title)
mocked_issue = self.mox.CreateMock(gdata_lib.Issue)
# Replay script
mocked_issue.GetTrackerIssueComments(tissue_id, 'TheProject').AndReturn([])
self.mox.ReplayAll()
# Verify
gdata_lib.Issue.InitFromTracker(mocked_issue, tissue, 'TheProject')
self.mox.VerifyAll()
self.assertEquals(tissue_id, mocked_issue.id)
self.assertEquals(tissue_labels, mocked_issue.labels)
self.assertEquals(tissue_owner, mocked_issue.owner)
self.assertEquals(tissue_status, mocked_issue.status)
self.assertEquals(tissue_content, mocked_issue.summary)
self.assertEquals(tissue_title, mocked_issue.title)
self.assertEquals([], mocked_issue.comments)
class TrackerCommTest(cros_test_lib.MoxOutputTestCase):
"""Test bug tracker communication."""
def testConnectEmail(self):
source = 'TheSource'
token = 'TheToken'
creds = gdata_lib.Creds()
creds.user = 'dude'
creds.password = 'shhh'
creds.tracker_auth_token = None
self.mox.StubOutClassWithMocks(gd_ph_client, 'ProjectHostingClient')
mocked_tcomm = self.mox.CreateMock(gdata_lib.TrackerComm)
def set_token(*_args, **_kwargs):
mocked_itclient.auth_token = cros_test_lib.EasyAttr(token_string=token)
# Replay script
mocked_itclient = gd_ph_client.ProjectHostingClient()
mocked_itclient.ClientLogin(
creds.user, creds.password, source=source, service='code',
account_type='GOOGLE').WithSideEffects(set_token)
self.mox.ReplayAll()
# Verify
with self.OutputCapturer():
gdata_lib.TrackerComm.Connect(mocked_tcomm, creds, 'TheProject',
source=source)
self.mox.VerifyAll()
self.assertEquals(mocked_tcomm.it_client, mocked_itclient)
def testConnectToken(self):
source = 'TheSource'
token = 'TheToken'
creds = gdata_lib.Creds()
creds.user = 'dude'
creds.password = 'shhh'
creds.tracker_auth_token = token
mocked_tcomm = self.mox.CreateMock(gdata_lib.TrackerComm)
self.mox.StubOutClassWithMocks(gd_ph_client, 'ProjectHostingClient')
self.mox.StubOutClassWithMocks(gdata.gauth, 'ClientLoginToken')
# Replay script
mocked_itclient = gd_ph_client.ProjectHostingClient()
mocked_token = gdata.gauth.ClientLoginToken(token)
self.mox.ReplayAll()
# Verify
with self.OutputCapturer():
gdata_lib.TrackerComm.Connect(mocked_tcomm, creds, 'TheProject',
source=source)
self.mox.VerifyAll()
self.assertEquals(mocked_tcomm.it_client, mocked_itclient)
self.assertEquals(mocked_itclient.auth_token, mocked_token)
def testGetTrackerIssueById(self):
mocked_itclient = self.mox.CreateMock(gd_ph_client.ProjectHostingClient)
tcomm = gdata_lib.TrackerComm()
tcomm.it_client = mocked_itclient
tcomm.project_name = 'TheProject'
self.mox.StubOutClassWithMocks(gd_ph_client, 'Query')
self.mox.StubOutClassWithMocks(gdata_lib, 'Issue')
self.mox.StubOutWithMock(gdata_lib.Issue, 'InitFromTracker')
issue_id = 12345
feed = cros_test_lib.EasyAttr(entry=['hi', 'there'])
# Replay script
mocked_query = gd_ph_client.Query(issue_id=str(issue_id))
mocked_itclient.get_issues(
'TheProject', query=mocked_query).AndReturn(feed)
mocked_issue = gdata_lib.Issue()
mocked_issue.InitFromTracker(feed.entry[0], 'TheProject')
self.mox.ReplayAll()
# Verify
issue = tcomm.GetTrackerIssueById(issue_id)
self.mox.VerifyAll()
self.assertEquals(mocked_issue, issue)
def testGetTrackerIssuesByText(self):
author = 'TheAuthor'
project = 'TheProject'
text = "find me"
# Set up the fake tracker issue.
tissue_id = 1
tissue_labels = ['auto-filed']
tissue_owner = 'someone@chromium.org'
tissue_status = 'Available'
tissue_content = 'find me in body'
tissue_title = 'find me in title'
tissue = createTrackerIssue(tid=tissue_id, labels=tissue_labels,
owner=tissue_owner, status=tissue_status,
content=tissue_content, title=tissue_title)
issue = gdata_lib.Issue(id=tissue_id, labels=tissue_labels,
owner=tissue_owner, status=tissue_status,
title=tissue_title, summary=tissue_content)
# This will get called as part of Issue.InitFromTracker.
self.mox.StubOutWithMock(gdata_lib.Issue, 'GetTrackerIssueComments')
mocked_itclient = self.mox.CreateMock(gd_ph_client.ProjectHostingClient)
tcomm = gdata_lib.TrackerComm()
tcomm.author = author
tcomm.it_client = mocked_itclient
tcomm.project_name = project
# We expect a Query instance to be passed into get_issues.
# pylint: disable=E1120
self.mox.StubOutClassWithMocks(gd_ph_client, 'Query')
mocked_query = gd_ph_client.Query(text_query='%s is:open' % text)
feed = cros_test_lib.EasyAttr(entry=[tissue])
mocked_itclient.get_issues(project, query=mocked_query).AndReturn(feed)
gdata_lib.Issue.GetTrackerIssueComments(tissue_id, project).AndReturn([])
self.mox.ReplayAll()
issues = tcomm.GetTrackerIssuesByText(text)
self.assertEquals(issues, [issue])
def testCreateTrackerIssue(self):
author = 'TheAuthor'
mocked_itclient = self.mox.CreateMock(gd_ph_client.ProjectHostingClient)
mocked_tcomm = self.mox.CreateMock(gdata_lib.TrackerComm)
mocked_tcomm.author = author
mocked_tcomm.it_client = mocked_itclient
mocked_tcomm.project_name = 'TheProject'
issue = cros_test_lib.EasyAttr(title='TheTitle',
summary='TheSummary',
status='TheStatus',
owner='TheOwner',
labels='TheLabels',
ccs=[])
# Replay script
issue_id = cros_test_lib.EasyAttr(
id=cros_test_lib.EasyAttr(text='foo/bar/123'))
mocked_itclient.add_issue(
project_name='TheProject',
title=issue.title,
content=issue.summary,
author=author,
status=issue.status,
owner=issue.owner,
labels=issue.labels,
ccs=issue.ccs).AndReturn(issue_id)
self.mox.ReplayAll()
# Verify
result = gdata_lib.TrackerComm.CreateTrackerIssue(mocked_tcomm, issue)
self.mox.VerifyAll()
self.assertEquals(123, result)
def testAppendTrackerIssueById(self):
author = 'TheAuthor'
project_name = 'TheProject'
mocked_itclient = self.mox.CreateMock(gd_ph_client.ProjectHostingClient)
mocked_tcomm = self.mox.CreateMock(gdata_lib.TrackerComm)
mocked_tcomm.author = author
mocked_tcomm.it_client = mocked_itclient
mocked_tcomm.project_name = project_name
issue_id = 54321
comment = 'TheComment'
# Replay script
mocked_itclient.update_issue(project_name=project_name,
issue_id=issue_id,
author=author,
comment=comment,
owner=None)
self.mox.ReplayAll()
# Verify
result = gdata_lib.TrackerComm.AppendTrackerIssueById(mocked_tcomm,
issue_id, comment)
self.mox.VerifyAll()
self.assertEquals(issue_id, result)
class RetrySpreadsheetsServiceTest(cros_test_lib.MoxOutputTestCase):
"""Test Spreadsheet server retry helper."""
def testRequest(self):
"""Test that calling request method invokes _RetryRequest wrapper."""
# pylint: disable=W0212
self.mox.StubOutWithMock(gdata_lib.RetrySpreadsheetsService,
'_RetryRequest')
# Use a real RetrySpreadsheetsService object rather than a mocked
# one, because the .request method only exists if __init__ is run.
# Also split up __new__ and __init__ in order to grab the original
# rss.request method (inherited from base class at that point).
rss = gdata_lib.RetrySpreadsheetsService.__new__(
gdata_lib.RetrySpreadsheetsService)
orig_request = rss.request
rss.__init__()
args = ('GET', 'http://foo.bar')
# This is the replay script for the test.
gdata_lib.RetrySpreadsheetsService._RetryRequest(
orig_request, *args).AndReturn('wrapped')
self.mox.ReplayAll()
# This is the test verification.
retval = rss.request(*args)
self.mox.VerifyAll()
self.assertEquals('wrapped', retval)
def _TestHttpClientRetryRequest(self, statuses):
"""Test retry logic in http_client request during ProgrammaticLogin.
|statuses| is list of http codes to simulate, where 200 means success.
"""
expect_success = statuses[-1] == 200
self.mox.StubOutWithMock(atom.http.ProxiedHttpClient, 'request')
rss = gdata_lib.RetrySpreadsheetsService()
args = ('POST', 'https://www.google.com/accounts/ClientLogin')
def _read():
return 'Some response text'
# This is the replay script for the test.
# Simulate the return codes in statuses.
for status in statuses:
retstatus = cros_test_lib.EasyAttr(status=status, read=_read)
atom.http.ProxiedHttpClient.request(
*args, data=mox.IgnoreArg(),
headers=mox.IgnoreArg()).AndReturn(retstatus)
self.mox.ReplayAll()
# This is the test verification.
with self.OutputCapturer():
if expect_success:
rss.ProgrammaticLogin()
else:
self.assertRaises(gdata.service.Error, rss.ProgrammaticLogin)
self.mox.VerifyAll()
if not expect_success:
# Retries did not help, request still failed.
regexp = re.compile(r'^Giving up on HTTP request')
self.AssertOutputContainsWarning(regexp=regexp)
elif len(statuses) > 1:
# Warning expected if retries were needed.
self.AssertOutputContainsWarning()
else:
# First try worked, expect no warnings.
self.AssertOutputContainsWarning(invert=True)
def testHttpClientRetryRequest(self):
self._TestHttpClientRetryRequest([200])
def testHttpClientRetryRequest403(self):
self._TestHttpClientRetryRequest([403, 200])
def testHttpClientRetryRequest403x2(self):
self._TestHttpClientRetryRequest([403, 403, 200])
def testHttpClientRetryRequest403x3(self):
self._TestHttpClientRetryRequest([403, 403, 403, 200])
def testHttpClientRetryRequest403x4(self):
self._TestHttpClientRetryRequest([403, 403, 403, 403, 200])
def testHttpClientRetryRequest403x5(self):
# This one should exhaust the retries.
self._TestHttpClientRetryRequest([403, 403, 403, 403, 403])
def _TestRetryRequest(self, statuses):
"""Test retry logic for request method.
|statuses| is list of http codes to simulate, where 200 means success.
"""
expect_success = statuses[-1] == 200
expected_status_index = len(statuses) - 1 if expect_success else 0
mocked_ss = self.mox.CreateMock(gdata_lib.RetrySpreadsheetsService)
args = ('GET', 'http://foo.bar')
# This is the replay script for the test.
for ix, status in enumerate(statuses):
# Add index of status to track which status the request function is
# returning. It is expected to return the last return status if
# successful (retries or not), but first return status if failed.
retval = cros_test_lib.EasyAttr(status=status, index=ix)
mocked_ss.request(*args).AndReturn(retval)
self.mox.ReplayAll()
# This is the test verification.
with self.OutputCapturer():
# pylint: disable=W0212
rval = gdata_lib.RetrySpreadsheetsService._RetryRequest(mocked_ss,
mocked_ss.request,
*args)
self.mox.VerifyAll()
self.assertEquals(statuses[expected_status_index], rval.status)
self.assertEquals(expected_status_index, rval.index)
if not expect_success:
# Retries did not help, request still failed.
regexp = re.compile(r'^Giving up on HTTP request')
self.AssertOutputContainsWarning(regexp=regexp)
elif expected_status_index > 0:
# Warning expected if retries were needed.
self.AssertOutputContainsWarning()
else:
# First try worked, expect no warnings.
self.AssertOutputContainsWarning(invert=True)
def testRetryRequest(self):
self._TestRetryRequest([200])
def testRetryRequest403(self):
self._TestRetryRequest([403, 200])
def testRetryRequest403x2(self):
self._TestRetryRequest([403, 403, 200])
def testRetryRequest403x3(self):
self._TestRetryRequest([403, 403, 403, 200])
def testRetryRequest403x4(self):
self._TestRetryRequest([403, 403, 403, 403, 200])
def testRetryRequest403x5(self):
# This one should exhaust the retries.
self._TestRetryRequest([403, 403, 403, 403, 403]) | unknown | codeparrot/codeparrot-clean | ||
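The retry contract these tests pin down — return the final response when a retry eventually succeeds, but the *first* failing response when retries are exhausted (see `expected_status_index` in `_TestRetryRequest`) — can be sketched without any gdata dependency. The names below are illustrative, not chromite's:

```python
import time

class FakeResponse:
    def __init__(self, status, index):
        self.status, self.index = status, index

def make_request(statuses):
    # Hand back one canned response per call, tagged with its index.
    it = iter(enumerate(statuses))
    def request():
        i, s = next(it)
        return FakeResponse(s, i)
    return request

def retry_request(request, max_tries=5, delay=0.0):
    # Last response on success, FIRST failure if all tries fail.
    first_failure = None
    for _ in range(max_tries):
        resp = request()
        if resp.status == 200:
            return resp
        if first_failure is None:
            first_failure = resp
        time.sleep(delay)
    return first_failure

r = retry_request(make_request([403, 200]))
print(r.status, r.index)  # 200 1
```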
#!/usr/bin/env python
# $Id$
# Copyright (c) 2000-2015 Board of Trustees of Leland Stanford Jr. University,
# all rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# STANFORD UNIVERSITY BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
# IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
# Except as contained in this notice, the name of Stanford University shall not
# be used in advertising or otherwise to promote the sale, use or other dealings
# in this Software without prior written authorization from Stanford University.
########
#
# This script is no longer supported and may be removed in the future.
#
########
import optparse
import ConfigParser
import os
import sys
import urllib2
import fix_auth_failure
import lockss_daemon
__author__ = "Barry Hayes"
__maintainer__ = "Barry Hayes"
__version__ = "1.0.3"
class _SectionAdder(object):
"""Wrap a python configuration section around a file that doesn't
have one."""
def __init__(self, section, fp):
self.section_done = False
self.section = section
self.fp = fp
def readline(self):
if not self.section_done:
self.section_done = True
return '[%s]' % self.section
else:
return self.fp.readline()
def _parser():
"""Make a parser for the arguments."""
parser = optparse.OptionParser(
description='Move cache directories on a LOCKSS daemon')
parser.add_option('-u', '--username', metavar='U', help='UI username')
parser.add_option('-p', '--password', metavar='P', help='UI password')
parser.add_option('-v', '--verbose', dest='verbose', action='store_true',
default=False)
parser.add_option('-q', '--quiet', dest='verbose', action='store_false')
parser.add_option('-f', '--force', dest='verify', action='store_false',
help='ignore auids not present on the daemon, '
'never prompt')
parser.add_option('-i', '--verify', dest='verify', action='store_true',
default=False, help='prompt before each move')
parser.add_option('-c', '--commands', action='store_true', default=False,
help='print mv commands, but do not move files')
parser.add_option('-d', '--directory', default='.',
help='the daemon directory where ./cache is '
'(default: \'%default\')')
parser.add_option('--dest', default='deleted',
help='where under the daemon directory the cache '
'entries are moved to (default: \'%default\')')
return parser
def _process_args():
parser = _parser()
(options, arguments) = parser.parse_args()
if arguments != []:
parser.error('There should be no arguments. Try --help')
return options
def _auid(cache_dir):
"""Return the AUID for the given cache dir."""
# If the #au_id_file isn't present, or doesn't contain an au.id
# entry, the daemon doesn't list the directory in the table, so no
# need to check either condition.
path = os.path.join(cache_dir, '#au_id_file')
config = ConfigParser.ConfigParser()
f = open(path)
try:
config.readfp(_SectionAdder('foo', f))
auid = config.get('foo', 'au.id')
# If this fails, something very odd is going on, and a human
# should check.
assert auid
finally:
f.close()
return auid
def main():
options = _process_args()
src = options.directory
local_txt = os.path.join(src, 'local.txt')
if not os.path.isdir(os.path.join(src, 'cache')):
raise Exception('%s doesn\'t look like a daemon directory. '
'Try --directory.' % src)
if 'LOCKSS_IPADDR' in os.environ: ipAddr = os.environ['LOCKSS_IPADDR']
else: ipAddr = '127.0.0.1'
if 'LOCKSS_UI_PORT' in os.environ:
port = os.environ['LOCKSS_UI_PORT']
else:
if not os.path.isfile(local_txt):
raise Exception('LOCKSS_UI_PORT is not set but there is no '
'%s' % (local_txt,))
config = ConfigParser.ConfigParser()
local_config = open(local_txt)
try:
config.readfp(_SectionAdder('foo', local_config))
port = config.get('foo', 'org.lockss.ui.port')
finally:
local_config.close()
fix_auth_failure.fix_auth_failure()
client = lockss_daemon.Client(ipAddr, port,
options.username, options.password)
repos = client._getStatusTable( 'RepositoryTable' )[ 1 ]
no_auid = [r for r in repos if r['status'] == 'No AUID']
if no_auid:
print 'Warning: These cache directories have no AUID:'
for r in no_auid:
print r['dir']
print
deleted = [r for r in repos if r['status'] == 'Deleted']
for r in deleted:
r['auid'] = _auid(os.path.join(src, r['dir']))
deleted.sort(key=lambda r: r['auid'])
move_all = False
if options.verbose:
if deleted:
print 'These AUs have been deleted on the daemon:'
for r in deleted:
print r['auid']
if options.verify:
move_all = raw_input('move all [y]? ').startswith('y')
else:
print 'No deleted AUs.'
verify_each = options.verify and not move_all
dst = os.path.join(options.directory, options.dest)
for r in deleted:
dir = r['dir']
if not verify_each or \
verify_each and \
raw_input('move %s [n]? ' % r['auid']).startswith('y'):
src_r = os.path.join(src, dir)
if os.path.isabs(dir):
if not dir.startswith(options.directory): print 'Absolute/relative path mismatch: %s' % (dir,)
dst_r = os.path.join(dst, dir[len(options.directory)+1:])
else: dst_r = os.path.join(dst, dir)
if options.commands:
print "mv %s %s # %s" % (src_r, dst_r, r['auid'])
else:
os.renames(src_r, dst_r)
if __name__ == '__main__':
print 'Warning: This script is no longer supported.'
main() | unknown | codeparrot/codeparrot-clean | ||
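The `_SectionAdder` trick above — faking a `[section]` header so the config parser will accept a header-less key=value file like `#au_id_file` — can be reproduced in a few lines with Python 3's `configparser` (an illustrative sketch, not part of this script):

```python
import configparser
import io

def read_sectionless(text, section='foo'):
    # Prepend a synthetic "[section]" header, then parse as usual.
    cp = configparser.ConfigParser()
    cp.read_file(io.StringIO('[%s]\n%s' % (section, text)))
    return cp

cfg = read_sectionless('au.id = some-auid\n')
print(cfg.get('foo', 'au.id'))  # some-auid
```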
#-----------------------------------------------------------------------------
# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.
# All rights reserved.
#
# The full license is in the file LICENSE.txt, distributed with this software.
#-----------------------------------------------------------------------------
''' Implement and provide message protocols for communication between Bokeh
Servers and clients.
'''
#-----------------------------------------------------------------------------
# Boilerplate
#-----------------------------------------------------------------------------
from __future__ import absolute_import, division, print_function, unicode_literals
import logging
log = logging.getLogger(__name__)
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
# Standard library imports
# External imports
from tornado.escape import json_decode
# Bokeh imports
from . import messages
from . import versions
from .exceptions import ProtocolError
#-----------------------------------------------------------------------------
# Globals and constants
#-----------------------------------------------------------------------------
__all__ = (
'Protocol',
)
#-----------------------------------------------------------------------------
# General API
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Dev API
#-----------------------------------------------------------------------------
class Protocol(object):
''' Provide a message factory for a given version of the Bokeh Server
message protocol.
Args:
version (str) : a string identifying a protocol version, e.g. "1.0"
'''
def __init__(self, version):
if version not in versions.spec:
raise ProtocolError("Unknown protocol version %r" % version)
self._version = version
self._messages = dict()
for msgtype, revision in versions.spec[version]:
self._messages[msgtype] = messages.index[(msgtype, revision)]
def __repr__(self):
return "Protocol(%r)" % self.version
def create(self, msgtype, *args, **kwargs):
''' Create a new Message instance for the given type.
Args:
msgtype (str) : name of the message type to create
'''
if msgtype not in self._messages:
raise ProtocolError("Unknown message type %r for protocol version %s" % (msgtype, self._version))
return self._messages[msgtype].create(*args, **kwargs)
def assemble(self, header_json, metadata_json, content_json):
''' Create a Message instance assembled from json fragments.
Args:
header_json (``JSON``) :
metadata_json (``JSON``) :
content_json (``JSON``) :
Returns:
message
'''
header = json_decode(header_json)
if 'msgtype' not in header:
log.error("Bad header with no msgtype was: %r", header)
raise ProtocolError("No 'msgtype' in header")
return self._messages[header['msgtype']].assemble(
header_json, metadata_json, content_json
)
@property
def version(self):
return self._version
#-----------------------------------------------------------------------------
# Private API
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Code
#----------------------------------------------------------------------------- | unknown | codeparrot/codeparrot-clean | ||
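The version-keyed factory pattern `Protocol` implements — look up a version in a spec table, then resolve each `(msgtype, revision)` pair through a registry — can be shown standalone. The spec/index tables and message names below are invented for illustration, not Bokeh's real ones:

```python
class ProtocolError(Exception):
    pass

SPEC = {'1.0': [('ACK', 1), ('OK', 2)]}           # version -> [(msgtype, revision)]
INDEX = {('ACK', 1): 'AckV1', ('OK', 2): 'OkV2'}  # (msgtype, revision) -> handler

class MiniProtocol:
    def __init__(self, version):
        if version not in SPEC:
            raise ProtocolError('Unknown protocol version %r' % version)
        # Freeze the msgtype -> handler mapping for this version.
        self._messages = {t: INDEX[(t, r)] for (t, r) in SPEC[version]}

    def create(self, msgtype):
        if msgtype not in self._messages:
            raise ProtocolError('Unknown message type %r' % msgtype)
        return self._messages[msgtype]

print(MiniProtocol('1.0').create('ACK'))  # AckV1
```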
exports.abc = "abc";
exports.default = "default";
const flagIt = () => (exports.__esModule = true);
const query = __resourceQuery;
if (query.includes("yes")) flagIt(); | javascript | github | https://github.com/webpack/webpack | test/cases/cjs-tree-shaking/mjs/cjs-dynamic.js |
###
# Copyright (c) 2002-2005, Jeremiah Fincher
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions, and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions, and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the author of this software nor the name of
# contributors to this software may be used to endorse or promote products
# derived from this software without specific prior written consent.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
###
def window(L, size):
"""list * size -> window iterable
Returns a sliding 'window' of length size over the list L."""
assert not isinstance(L, int), 'Argument order swapped: window(L, size)'
if size < 1:
raise ValueError, 'size <= 0 disallowed.'
for i in xrange(len(L) - (size-1)):
yield L[i:i+size]
def mapinto(f, L):
for (i, x) in enumerate(L):
L[i] = f(x)
# vim:set shiftwidth=4 softtabstop=4 expandtab textwidth=79: | unknown | codeparrot/codeparrot-clean | ||
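A Python 3 rendering of `window` with a quick check — behaviour identical, just `range` instead of `xrange` and modern `raise` syntax:

```python
def window(L, size):
    """Yield successive overlapping slices of L, each of length size."""
    if size < 1:
        raise ValueError('size <= 0 disallowed.')
    for i in range(len(L) - (size - 1)):
        yield L[i:i + size]

print(list(window([1, 2, 3, 4], 2)))  # [[1, 2], [2, 3], [3, 4]]
```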
// Text truncate
// Requires inline-block or block for proper styling
@mixin text-truncate() {
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
} | unknown | github | https://github.com/twbs/bootstrap | scss/mixins/_text-truncate.scss |
import wx, os
from pyo import *
s=Server(duplex=1).boot()
wildcard = "AIF (*.aif)|*.aif;*.aiff|" \
"WAV (*.wav)|*.wav;*.wave;*.WAV|" \
class ControlPanel(wx.Panel):
def __init__(self, parent, fxp=None):
wx.Panel.__init__(self, parent)
self.StartStopText = wx.StaticText(self, id=-1, label="Path", pos=(10,12), size=wx.DefaultSize)
self.StartStop = wx.ToggleButton(self, id=-1, label="Start / Stop", pos=(2,28), size=wx.DefaultSize)
self.StartStop.Bind(wx.EVT_TOGGLEBUTTON, self.handleAudio)
self.fxp = fxp
Effects= ['-- None --', 'FM', 'BP', 'Distortion', 'Reverb', 'Delay', 'Ring Modulation',
'Flanger', 'Vocoder', 'Phaser']
self.choiceText = wx.StaticText(self, id=-1, label="Effect", pos=(10,126), size=wx.DefaultSize)
self.choice = wx.Choice(self, id=-1, pos=(2,140), size=wx.DefaultSize, choices=Effects)
self.choice.Bind(wx.EVT_CHOICE, self.changeFx)
SRC = ['Sound File', 'Record Clip', 'Live from microphone']
self.choiceText = wx.StaticText(self, id=-1, label="Sound Source", pos=(10,86), size=wx.DefaultSize)
self.choice = wx.Choice(self, id=-1, pos=(2,102), size=wx.DefaultSize, choices=SRC)
self.choice.Bind(wx.EVT_CHOICE, self.changeSrc)
self.b = wx.Button(self, -1, "Open sound file", (2,58))
self.b.Bind(wx.EVT_BUTTON, self.OnButton, self.b)
self.c = wx.Button(self, -1, "Clip Record", (2,58))
self.c.Bind(wx.EVT_BUTTON, self.OnButtonRec, self.c)
self.c.Hide()
self.r = wx.ToggleButton(self, -1, "Rec Start", (25,300))
self.r.Bind(wx.EVT_TOGGLEBUTTON, self.handleRec, self.r)
self.slidervol = wx.Slider(self, -1, 1000, 0, 1000, (50, 200), (50, 100),
wx.SL_VERTICAL | wx.SL_INVERSE)
self.Bind(wx.EVT_SLIDER, self.handleGlobalAmp, self.slidervol)
def handleGlobalAmp(self, evt):
s.amp = evt.GetInt() * 0.001
def OnClose(self):
if s.getIsStarted():
s.stop()
def OnButton(self, evt):
dlg = wx.FileDialog(
self, message="Choose a file",
defaultDir=os.getcwd(),
defaultFile="",
wildcard=wildcard,
style=wx.OPEN | wx.CHANGE_DIR
)
if dlg.ShowModal() == wx.ID_OK:
path = dlg.GetPath()
self.GetParent().GetParent().Audio.setSound(path)
dlg.Destroy()
def handleAudio(self, evt):
if evt.GetInt() == 1:
self.GetParent().GetParent().MainPanel.timer.Start(50)
s.start()
else:
self.GetParent().GetParent().MainPanel.timer.Stop()
s.stop()
def changeFx(self, evt):
for i in range(1):
self.fxp[i].changeFx(evt.GetString())
def OnButtonRec(self, evt):
self.GetParent().GetParent().Audio.starttabrec()
def handleRec(self, evt):
if evt.GetInt() == 1:
s.recstart('test.wav')
else:
s.recstop()
def changeSrc(self, evt):
self.GetParent().GetParent().Audio.srcout(evt.GetString())
if evt.GetString() == 'Sound File':
self.b.Show()
self.c.Hide()
elif evt.GetString() == 'Record Clip':
self.c.Show()
self.b.Hide()
elif evt.GetString() == 'Live from microphone':
#self.m.Show()
self.b.Hide()
self.c.Hide() | unknown | codeparrot/codeparrot-clean | ||
/* Return the full version string. */
#include "Python.h"
#include "patchlevel.h"
static int initialized = 0;
static char version[300];
void _Py_InitVersion(void)
{
if (initialized) {
return;
}
initialized = 1;
#ifdef Py_GIL_DISABLED
const char *buildinfo_format = "%.80s free-threading build (%.80s) %.80s";
#else
const char *buildinfo_format = "%.80s (%.80s) %.80s";
#endif
PyOS_snprintf(version, sizeof(version), buildinfo_format,
PY_VERSION, Py_GetBuildInfo(), Py_GetCompiler());
}
const char *
Py_GetVersion(void)
{
_Py_InitVersion();
return version;
}
// Export the Python hex version as a constant.
const unsigned long Py_Version = PY_VERSION_HEX; | c | github | https://github.com/python/cpython | Python/getversion.c |
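The pattern above — a lazy, one-shot initialisation of a formatted version string, with each field truncated to 80 characters by `%.80s` — maps to Python like this (the version strings below are invented for illustration):

```python
_initialized = False
_version = ''

def get_version(py_version='3.13.0', build_info='main, Jan  1 2025', compiler='[GCC 13]'):
    # Format once on first call, then return the cached string,
    # mirroring _Py_InitVersion / Py_GetVersion above.
    global _initialized, _version
    if not _initialized:
        _version = '%.80s (%.80s) %.80s' % (py_version, build_info, compiler)
        _initialized = True
    return _version

print(get_version())  # 3.13.0 (main, Jan  1 2025) [GCC 13]
```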
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for checkpointing the sequence datasets."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import parameterized
import numpy as np
from tensorflow.python.data.kernel_tests import checkpoint_test_base
from tensorflow.python.data.kernel_tests import test_base
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.framework import combinations
from tensorflow.python.platform import test
class SkipDatasetCheckpointTest(checkpoint_test_base.CheckpointTestBase,
parameterized.TestCase):
def _build_skip_dataset(self, count):
components = (np.arange(10),)
return dataset_ops.Dataset.from_tensor_slices(components).skip(count)
@combinations.generate(test_base.default_test_combinations())
def testSkipFewerThanInputs(self):
count = 4
num_outputs = 10 - count
self.run_core_tests(lambda: self._build_skip_dataset(count), num_outputs)
@combinations.generate(test_base.default_test_combinations())
def testSkipVarious(self):
# Skip more than inputs
self.run_core_tests(lambda: self._build_skip_dataset(20), 0)
# Skip exactly the input size
self.run_core_tests(lambda: self._build_skip_dataset(10), 0)
# Skip all
self.run_core_tests(lambda: self._build_skip_dataset(-1), 0)
# Skip nothing
self.run_core_tests(lambda: self._build_skip_dataset(0), 10)
@combinations.generate(test_base.default_test_combinations())
def testInvalidSkip(self):
with self.assertRaisesRegex(ValueError,
'Shape must be rank 0 but is rank 1'):
self.run_core_tests(lambda: self._build_skip_dataset([1, 2]), 0)
class TakeDatasetCheckpointTest(checkpoint_test_base.CheckpointTestBase,
parameterized.TestCase):
def _build_take_dataset(self, count):
components = (np.arange(10),)
return dataset_ops.Dataset.from_tensor_slices(components).take(count)
@combinations.generate(test_base.default_test_combinations())
def testTakeFewerThanInputs(self):
count = 4
self.run_core_tests(lambda: self._build_take_dataset(count), count)
@combinations.generate(test_base.default_test_combinations())
def testTakeVarious(self):
# Take more than inputs
self.run_core_tests(lambda: self._build_take_dataset(20), 10)
# Take exactly the input size
self.run_core_tests(lambda: self._build_take_dataset(10), 10)
# Take all
self.run_core_tests(lambda: self._build_take_dataset(-1), 10)
# Take nothing
self.run_core_tests(lambda: self._build_take_dataset(0), 0)
@combinations.generate(test_base.default_test_combinations())
def testInvalidTake(self):
with self.assertRaisesRegex(ValueError,
'Shape must be rank 0 but is rank 1'):
self.run_core_tests(lambda: self._build_take_dataset([1, 2]), 0)
class RepeatDatasetCheckpointTest(checkpoint_test_base.CheckpointTestBase,
parameterized.TestCase):
def _build_repeat_dataset(self, count, take_count=3):
components = (np.arange(10),)
return dataset_ops.Dataset.from_tensor_slices(components).take(
take_count).repeat(count)
@combinations.generate(test_base.default_test_combinations())
def testFiniteRepeat(self):
count = 10
self.run_core_tests(lambda: self._build_repeat_dataset(count), 3 * count)
@combinations.generate(test_base.default_test_combinations())
def testEmptyRepeat(self):
self.run_core_tests(lambda: self._build_repeat_dataset(0), 0)
@combinations.generate(test_base.default_test_combinations())
def testInfiniteRepeat(self):
self.verify_unused_iterator(
lambda: self._build_repeat_dataset(-1), 10, verify_exhausted=False)
self.verify_multiple_breaks(
lambda: self._build_repeat_dataset(-1), 20, verify_exhausted=False)
self.verify_reset_restored_iterator(
lambda: self._build_repeat_dataset(-1), 20, verify_exhausted=False)
# Test repeat empty dataset
self.run_core_tests(lambda: self._build_repeat_dataset(-1, 0), 0)
@combinations.generate(test_base.default_test_combinations())
def testInvalidRepeat(self):
with self.assertRaisesRegex(ValueError,
'Shape must be rank 0 but is rank 1'):
self.run_core_tests(lambda: self._build_repeat_dataset([1, 2], 0), 0)
if __name__ == '__main__':
test.main() | unknown | codeparrot/codeparrot-clean | ||
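The count semantics these tests pin down can be stated on plain lists: in tf.data a count of -1 means "skip everything" for skip, "take everything" for take, and "repeat forever" for repeat (only the finite repeat case is sketched here):

```python
def skip(data, count):
    # count == -1 skips the whole input (testSkipVarious above).
    return [] if count < 0 else list(data[count:])

def take(data, count):
    # count == -1 takes the whole input (testTakeVarious above).
    return list(data) if count < 0 else list(data[:count])

def repeat(data, count):
    # Finite case only; in tf.data count == -1 repeats forever.
    return list(data) * max(count, 0)

print(skip(range(10), 4))   # [4, 5, 6, 7, 8, 9]
print(take(range(10), 20))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(repeat([7, 8], 3))    # [7, 8, 7, 8, 7, 8]
```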
// Copyright The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package fileutil
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
"github.com/prometheus/prometheus/util/testutil"
)
func TestLocking(t *testing.T) {
dir := testutil.NewTemporaryDirectory("test_flock", t)
defer dir.Close()
fileName := filepath.Join(dir.Path(), "LOCK")
_, err := os.Stat(fileName)
require.Error(t, err, "File %q unexpectedly exists.", fileName)
lock, existed, err := Flock(fileName)
require.NoError(t, err, "Error locking file %q", fileName)
require.False(t, existed, "File %q reported as existing during locking.", fileName)
// File must now exist.
_, err = os.Stat(fileName)
require.NoError(t, err, "Could not stat file %q expected to exist", fileName)
// Try to lock again.
lockedAgain, existed, err := Flock(fileName)
require.Error(t, err, "File %q locked twice.", fileName)
require.Nil(t, lockedAgain, "Unsuccessful locking did not return nil.")
require.True(t, existed, "Existing file %q not recognized.", fileName)
err = lock.Release()
require.NoError(t, err, "Error releasing lock for file %q", fileName)
// File must still exist.
_, err = os.Stat(fileName)
require.NoError(t, err, "Could not stat file %q expected to exist", fileName)
// Lock existing file.
lock, existed, err = Flock(fileName)
require.NoError(t, err, "Error locking file %q", fileName)
require.True(t, existed, "Existing file %q not recognized.", fileName)
err = lock.Release()
require.NoError(t, err, "Error releasing lock for file %q", fileName)
} | go | github | https://github.com/prometheus/prometheus | tsdb/fileutil/flock_test.go |
#define DISABLE_SIGN_COMPARE_WARNINGS
#include "git-compat-util.h"
#include "config.h"
#include "json-writer.h"
#include "repository.h"
#include "run-command.h"
#include "version.h"
#include "trace2/tr2_dst.h"
#include "trace2/tr2_tbuf.h"
#include "trace2/tr2_sid.h"
#include "trace2/tr2_sysenv.h"
#include "trace2/tr2_tgt.h"
#include "trace2/tr2_tls.h"
#include "trace2/tr2_tmr.h"
static struct tr2_dst tr2dst_event = {
.sysenv_var = TR2_SYSENV_EVENT,
};
/*
* The version number of the JSON data generated by the EVENT target in this
* source file. The version should be incremented if new event types are added,
* if existing fields are removed, or if there are significant changes in
* interpretation of existing events or fields. Smaller changes, such as adding
* a new field to an existing event, do not require an increment to the EVENT
* format version.
*/
#define TR2_EVENT_VERSION "4"
/*
* Region nesting limit for messages written to the event target.
*
* The "region_enter" and "region_leave" messages (especially recursive
* messages such as those produced while diving the worktree or index)
* are primarily intended for the performance target during debugging.
*
* Some of the outer-most messages, however, may be of interest to the
* event target. Use the TR2_SYSENV_EVENT_NESTING setting to increase
* region details in the event target.
*/
static int tr2env_event_max_nesting_levels = 2;
/*
* Use the TR2_SYSENV_EVENT_BRIEF to omit the <time>, <file>, and
* <line> fields from most events.
*/
static int tr2env_event_be_brief;
static int fn_init(void)
{
int want = tr2_dst_trace_want(&tr2dst_event);
int max_nesting;
int want_brief;
const char *nesting;
const char *brief;
if (!want)
return want;
nesting = tr2_sysenv_get(TR2_SYSENV_EVENT_NESTING);
if (nesting && *nesting && ((max_nesting = atoi(nesting)) > 0))
tr2env_event_max_nesting_levels = max_nesting;
brief = tr2_sysenv_get(TR2_SYSENV_EVENT_BRIEF);
if (brief && *brief &&
((want_brief = git_parse_maybe_bool(brief)) != -1))
tr2env_event_be_brief = want_brief;
return want;
}
static void fn_term(void)
{
tr2_dst_trace_disable(&tr2dst_event);
}
/*
* Append common key-value pairs to the currently open JSON object.
 * "event":"<event_name>"
* "sid":"<sid>"
* "thread":"<thread_name>"
* "time":"<time>"
* "file":"<filename>"
* "line":<line_number>
* "repo":<repo_id>
*/
static void event_fmt_prepare(const char *event_name, const char *file,
int line, const struct repository *repo,
struct json_writer *jw)
{
struct tr2tls_thread_ctx *ctx = tr2tls_get_self();
struct tr2_tbuf tb_now;
jw_object_string(jw, "event", event_name);
jw_object_string(jw, "sid", tr2_sid_get());
jw_object_string(jw, "thread", ctx->thread_name);
/*
* In brief mode, only emit <time> on these 2 event types.
*/
if (!tr2env_event_be_brief || !strcmp(event_name, "version") ||
!strcmp(event_name, "atexit")) {
tr2_tbuf_utc_datetime_extended(&tb_now);
jw_object_string(jw, "time", tb_now.buf);
}
if (!tr2env_event_be_brief && file && *file) {
jw_object_string(jw, "file", file);
jw_object_intmax(jw, "line", line);
}
if (repo)
jw_object_intmax(jw, "repo", repo->trace2_repo_id);
}
static void fn_too_many_files_fl(const char *file, int line)
{
const char *event_name = "too_many_files";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_version_fl(const char *file, int line)
{
const char *event_name = "version";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_string(&jw, "evt", TR2_EVENT_VERSION);
jw_object_string(&jw, "exe", git_version_string);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
if (tr2dst_event.too_many_files)
fn_too_many_files_fl(file, line);
}
static void fn_start_fl(const char *file, int line,
uint64_t us_elapsed_absolute, const char **argv)
{
const char *event_name = "start";
struct json_writer jw = JSON_WRITER_INIT;
double t_abs = (double)us_elapsed_absolute / 1000000.0;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_double(&jw, "t_abs", 6, t_abs);
jw_object_inline_begin_array(&jw, "argv");
jw_array_argv(&jw, argv);
jw_end(&jw);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_exit_fl(const char *file, int line, uint64_t us_elapsed_absolute,
int code)
{
const char *event_name = "exit";
struct json_writer jw = JSON_WRITER_INIT;
double t_abs = (double)us_elapsed_absolute / 1000000.0;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_double(&jw, "t_abs", 6, t_abs);
jw_object_intmax(&jw, "code", code);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_signal(uint64_t us_elapsed_absolute, int signo)
{
const char *event_name = "signal";
struct json_writer jw = JSON_WRITER_INIT;
double t_abs = (double)us_elapsed_absolute / 1000000.0;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, __FILE__, __LINE__, NULL, &jw);
jw_object_double(&jw, "t_abs", 6, t_abs);
jw_object_intmax(&jw, "signo", signo);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_atexit(uint64_t us_elapsed_absolute, int code)
{
const char *event_name = "atexit";
struct json_writer jw = JSON_WRITER_INIT;
double t_abs = (double)us_elapsed_absolute / 1000000.0;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, __FILE__, __LINE__, NULL, &jw);
jw_object_double(&jw, "t_abs", 6, t_abs);
jw_object_intmax(&jw, "code", code);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void maybe_add_string_va(struct json_writer *jw, const char *field_name,
const char *fmt, va_list ap)
{
if (fmt && *fmt) {
va_list copy_ap;
struct strbuf buf = STRBUF_INIT;
va_copy(copy_ap, ap);
strbuf_vaddf(&buf, fmt, copy_ap);
va_end(copy_ap);
jw_object_string(jw, field_name, buf.buf);
strbuf_release(&buf);
return;
}
}
static void fn_error_va_fl(const char *file, int line, const char *fmt,
va_list ap)
{
const char *event_name = "error";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
maybe_add_string_va(&jw, "msg", fmt, ap);
/*
* Also emit the format string as a field in case
* post-processors want to aggregate common error
* messages by type without argument fields (such
* as pathnames or branch names) cluttering it up.
*/
if (fmt && *fmt)
jw_object_string(&jw, "fmt", fmt);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_command_path_fl(const char *file, int line, const char *pathname)
{
const char *event_name = "cmd_path";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_string(&jw, "path", pathname);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_command_ancestry_fl(const char *file, int line, const char **parent_names)
{
const char *event_name = "cmd_ancestry";
const char *parent_name = NULL;
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_inline_begin_array(&jw, "ancestry");
while ((parent_name = *parent_names++))
jw_array_string(&jw, parent_name);
jw_end(&jw); /* 'ancestry' array */
jw_end(&jw); /* event object */
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_command_name_fl(const char *file, int line, const char *name,
const char *hierarchy)
{
const char *event_name = "cmd_name";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_string(&jw, "name", name);
if (hierarchy && *hierarchy)
jw_object_string(&jw, "hierarchy", hierarchy);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_command_mode_fl(const char *file, int line, const char *mode)
{
const char *event_name = "cmd_mode";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_string(&jw, "name", mode);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_alias_fl(const char *file, int line, const char *alias,
const char **argv)
{
const char *event_name = "alias";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_string(&jw, "alias", alias);
jw_object_inline_begin_array(&jw, "argv");
jw_array_argv(&jw, argv);
jw_end(&jw);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_child_start_fl(const char *file, int line,
uint64_t us_elapsed_absolute UNUSED,
const struct child_process *cmd)
{
const char *event_name = "child_start";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_intmax(&jw, "child_id", cmd->trace2_child_id);
if (cmd->trace2_hook_name) {
jw_object_string(&jw, "child_class", "hook");
jw_object_string(&jw, "hook_name", cmd->trace2_hook_name);
} else {
const char *child_class =
cmd->trace2_child_class ? cmd->trace2_child_class : "?";
jw_object_string(&jw, "child_class", child_class);
}
if (cmd->dir)
jw_object_string(&jw, "cd", cmd->dir);
jw_object_bool(&jw, "use_shell", cmd->use_shell);
jw_object_inline_begin_array(&jw, "argv");
if (cmd->git_cmd)
jw_array_string(&jw, "git");
jw_array_argv(&jw, cmd->args.v);
jw_end(&jw);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_child_exit_fl(const char *file, int line,
uint64_t us_elapsed_absolute UNUSED,
int cid, int pid,
int code, uint64_t us_elapsed_child)
{
const char *event_name = "child_exit";
struct json_writer jw = JSON_WRITER_INIT;
double t_rel = (double)us_elapsed_child / 1000000.0;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_intmax(&jw, "child_id", cid);
jw_object_intmax(&jw, "pid", pid);
jw_object_intmax(&jw, "code", code);
jw_object_double(&jw, "t_rel", 6, t_rel);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_child_ready_fl(const char *file, int line,
uint64_t us_elapsed_absolute UNUSED,
int cid, int pid,
const char *ready, uint64_t us_elapsed_child)
{
const char *event_name = "child_ready";
struct json_writer jw = JSON_WRITER_INIT;
double t_rel = (double)us_elapsed_child / 1000000.0;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_intmax(&jw, "child_id", cid);
jw_object_intmax(&jw, "pid", pid);
jw_object_string(&jw, "ready", ready);
jw_object_double(&jw, "t_rel", 6, t_rel);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_thread_start_fl(const char *file, int line,
uint64_t us_elapsed_absolute UNUSED)
{
const char *event_name = "thread_start";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_thread_exit_fl(const char *file, int line,
uint64_t us_elapsed_absolute UNUSED,
uint64_t us_elapsed_thread)
{
const char *event_name = "thread_exit";
struct json_writer jw = JSON_WRITER_INIT;
double t_rel = (double)us_elapsed_thread / 1000000.0;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_double(&jw, "t_rel", 6, t_rel);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_exec_fl(const char *file, int line,
uint64_t us_elapsed_absolute UNUSED,
int exec_id, const char *exe, const char **argv)
{
const char *event_name = "exec";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_intmax(&jw, "exec_id", exec_id);
if (exe)
jw_object_string(&jw, "exe", exe);
jw_object_inline_begin_array(&jw, "argv");
jw_array_argv(&jw, argv);
jw_end(&jw);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_exec_result_fl(const char *file, int line,
uint64_t us_elapsed_absolute UNUSED,
int exec_id, int code)
{
const char *event_name = "exec_result";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_intmax(&jw, "exec_id", exec_id);
jw_object_intmax(&jw, "code", code);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_param_fl(const char *file, int line, const char *param,
const char *value, const struct key_value_info *kvi)
{
const char *event_name = "def_param";
struct json_writer jw = JSON_WRITER_INIT;
enum config_scope scope = kvi->scope;
const char *scope_name = config_scope_name(scope);
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_string(&jw, "scope", scope_name);
jw_object_string(&jw, "param", param);
if (value)
jw_object_string(&jw, "value", value);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_repo_fl(const char *file, int line,
const struct repository *repo)
{
const char *event_name = "def_repo";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, repo, &jw);
jw_object_string(&jw, "worktree", repo->worktree);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_region_enter_printf_va_fl(const char *file, int line,
uint64_t us_elapsed_absolute UNUSED,
const char *category,
const char *label,
const struct repository *repo,
const char *fmt, va_list ap)
{
const char *event_name = "region_enter";
struct tr2tls_thread_ctx *ctx = tr2tls_get_self();
if (ctx->nr_open_regions <= tr2env_event_max_nesting_levels) {
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, repo, &jw);
jw_object_intmax(&jw, "nesting", ctx->nr_open_regions);
if (category)
jw_object_string(&jw, "category", category);
if (label)
jw_object_string(&jw, "label", label);
maybe_add_string_va(&jw, "msg", fmt, ap);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
}
static void fn_region_leave_printf_va_fl(
const char *file, int line, uint64_t us_elapsed_absolute UNUSED,
uint64_t us_elapsed_region, const char *category, const char *label,
const struct repository *repo, const char *fmt, va_list ap)
{
const char *event_name = "region_leave";
struct tr2tls_thread_ctx *ctx = tr2tls_get_self();
if (ctx->nr_open_regions <= tr2env_event_max_nesting_levels) {
struct json_writer jw = JSON_WRITER_INIT;
double t_rel = (double)us_elapsed_region / 1000000.0;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, repo, &jw);
jw_object_double(&jw, "t_rel", 6, t_rel);
jw_object_intmax(&jw, "nesting", ctx->nr_open_regions);
if (category)
jw_object_string(&jw, "category", category);
if (label)
jw_object_string(&jw, "label", label);
maybe_add_string_va(&jw, "msg", fmt, ap);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
}
static void fn_data_fl(const char *file, int line, uint64_t us_elapsed_absolute,
uint64_t us_elapsed_region, const char *category,
const struct repository *repo, const char *key,
const char *value)
{
const char *event_name = "data";
struct tr2tls_thread_ctx *ctx = tr2tls_get_self();
if (ctx->nr_open_regions <= tr2env_event_max_nesting_levels) {
struct json_writer jw = JSON_WRITER_INIT;
double t_abs = (double)us_elapsed_absolute / 1000000.0;
double t_rel = (double)us_elapsed_region / 1000000.0;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, repo, &jw);
jw_object_double(&jw, "t_abs", 6, t_abs);
jw_object_double(&jw, "t_rel", 6, t_rel);
jw_object_intmax(&jw, "nesting", ctx->nr_open_regions);
jw_object_string(&jw, "category", category);
jw_object_string(&jw, "key", key);
jw_object_string(&jw, "value", value);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
}
static void fn_data_json_fl(const char *file, int line,
uint64_t us_elapsed_absolute,
uint64_t us_elapsed_region, const char *category,
const struct repository *repo, const char *key,
const struct json_writer *value)
{
const char *event_name = "data_json";
struct tr2tls_thread_ctx *ctx = tr2tls_get_self();
if (ctx->nr_open_regions <= tr2env_event_max_nesting_levels) {
struct json_writer jw = JSON_WRITER_INIT;
double t_abs = (double)us_elapsed_absolute / 1000000.0;
double t_rel = (double)us_elapsed_region / 1000000.0;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, repo, &jw);
jw_object_double(&jw, "t_abs", 6, t_abs);
jw_object_double(&jw, "t_rel", 6, t_rel);
jw_object_intmax(&jw, "nesting", ctx->nr_open_regions);
jw_object_string(&jw, "category", category);
jw_object_string(&jw, "key", key);
jw_object_sub_jw(&jw, "value", value);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
}
static void fn_printf_va_fl(const char *file, int line,
uint64_t us_elapsed_absolute,
const char *fmt, va_list ap)
{
const char *event_name = "printf";
struct json_writer jw = JSON_WRITER_INIT;
double t_abs = (double)us_elapsed_absolute / 1000000.0;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, file, line, NULL, &jw);
jw_object_double(&jw, "t_abs", 6, t_abs);
maybe_add_string_va(&jw, "msg", fmt, ap);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_timer(const struct tr2_timer_metadata *meta,
const struct tr2_timer *timer,
int is_final_data)
{
const char *event_name = is_final_data ? "timer" : "th_timer";
struct json_writer jw = JSON_WRITER_INIT;
double t_total = NS_TO_SEC(timer->total_ns);
double t_min = NS_TO_SEC(timer->min_ns);
double t_max = NS_TO_SEC(timer->max_ns);
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, __FILE__, __LINE__, NULL, &jw);
jw_object_string(&jw, "category", meta->category);
jw_object_string(&jw, "name", meta->name);
jw_object_intmax(&jw, "intervals", timer->interval_count);
jw_object_double(&jw, "t_total", 6, t_total);
jw_object_double(&jw, "t_min", 6, t_min);
jw_object_double(&jw, "t_max", 6, t_max);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
static void fn_counter(const struct tr2_counter_metadata *meta,
const struct tr2_counter *counter,
int is_final_data)
{
const char *event_name = is_final_data ? "counter" : "th_counter";
struct json_writer jw = JSON_WRITER_INIT;
jw_object_begin(&jw, 0);
event_fmt_prepare(event_name, __FILE__, __LINE__, NULL, &jw);
jw_object_string(&jw, "category", meta->category);
jw_object_string(&jw, "name", meta->name);
jw_object_intmax(&jw, "count", counter->value);
jw_end(&jw);
tr2_dst_write_line(&tr2dst_event, &jw.json);
jw_release(&jw);
}
struct tr2_tgt tr2_tgt_event = {
.pdst = &tr2dst_event,
.pfn_init = fn_init,
.pfn_term = fn_term,
.pfn_version_fl = fn_version_fl,
.pfn_start_fl = fn_start_fl,
.pfn_exit_fl = fn_exit_fl,
.pfn_signal = fn_signal,
.pfn_atexit = fn_atexit,
.pfn_error_va_fl = fn_error_va_fl,
.pfn_command_path_fl = fn_command_path_fl,
.pfn_command_ancestry_fl = fn_command_ancestry_fl,
.pfn_command_name_fl = fn_command_name_fl,
.pfn_command_mode_fl = fn_command_mode_fl,
.pfn_alias_fl = fn_alias_fl,
.pfn_child_start_fl = fn_child_start_fl,
.pfn_child_exit_fl = fn_child_exit_fl,
.pfn_child_ready_fl = fn_child_ready_fl,
.pfn_thread_start_fl = fn_thread_start_fl,
.pfn_thread_exit_fl = fn_thread_exit_fl,
.pfn_exec_fl = fn_exec_fl,
.pfn_exec_result_fl = fn_exec_result_fl,
.pfn_param_fl = fn_param_fl,
.pfn_repo_fl = fn_repo_fl,
.pfn_region_enter_printf_va_fl = fn_region_enter_printf_va_fl,
.pfn_region_leave_printf_va_fl = fn_region_leave_printf_va_fl,
.pfn_data_fl = fn_data_fl,
.pfn_data_json_fl = fn_data_json_fl,
.pfn_printf_va_fl = fn_printf_va_fl,
.pfn_timer = fn_timer,
.pfn_counter = fn_counter,
}; | c | github | https://github.com/git/git | trace2/tr2_tgt_event.c |
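The target above emits one self-contained JSON object per line (each `fn_*` handler builds an object via `event_fmt_prepare` and hands it to `tr2_dst_write_line`), which makes the output easy to post-process. A minimal, hypothetical Python consumer that tallies lines by their `"event"` field might look like this (the sample lines are illustrative of the shape, not captured output):

```python
import json

def summarize_trace2_events(lines):
    """Tally Trace2 event-target JSON lines by their "event" field."""
    counts = {}
    for line in lines:
        event_name = json.loads(line)["event"]
        counts[event_name] = counts.get(event_name, 0) + 1
    return counts

# Illustrative lines in the shape emitted above (not captured output):
sample = [
    '{"event":"version","sid":"1","evt":"4"}',
    '{"event":"start","sid":"1","t_abs":0.001}',
    '{"event":"exit","sid":"1","t_abs":0.002,"code":0}',
    '{"event":"atexit","sid":"1","t_abs":0.002,"code":0}',
]
```

Because every line parses independently, the same loop works on a live stream of events without buffering the whole file.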
# Copyright (c) 2016 EMC Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cinder.i18n import _
from cinder.volume.drivers.coprhd.helpers import commoncoprhdapi as common
class VirtualPool(common.CoprHDResource):
URI_VPOOL = "/{0}/vpools"
URI_VPOOL_SHOW = URI_VPOOL + "/{1}"
URI_VPOOL_SEARCH = URI_VPOOL + "/search?name={1}"
def vpool_show_uri(self, vpooltype, uri):
"""Makes REST API call and retrieves vpool details based on UUID.
        This function takes a uri as input and returns
        all parameters of the VPOOL, such as label, urn and type.
:param vpooltype : Type of virtual pool {'block'}
:param uri : unique resource identifier of the vpool
:returns: object containing all the details of vpool
"""
(s, h) = common.service_json_request(
self.ipaddr, self.port,
"GET",
self.URI_VPOOL_SHOW.format(vpooltype, uri), None)
o = common.json_decode(s)
if o['inactive']:
return None
return o
def vpool_query(self, name, vpooltype):
"""Makes REST API call to query the vpool by name and type.
        This function takes the VPOOL name and the VPOOL type
        as input and returns the uri of the first occurrence of the given VPOOL.
:param name: Name of the VPOOL
:param vpooltype: Type of the VPOOL {'block'}
:returns: uri of the given vpool
"""
if common.is_uri(name):
return name
(s, h) = common.service_json_request(
self.ipaddr, self.port, "GET",
self.URI_VPOOL_SEARCH.format(vpooltype, name), None)
o = common.json_decode(s)
if len(o['resource']) > 0:
# Get the Active vpool ID.
for vpool in o['resource']:
if self.vpool_show_uri(vpooltype, vpool['id']) is not None:
return vpool['id']
# Raise not found exception. as we did not find any active vpool.
raise common.CoprHdError(common.CoprHdError.NOT_FOUND_ERR,
(_("VPool %(name)s ( %(vpooltype)s ) :"
" not found") %
{'name': name,
'vpooltype': vpooltype
})) | unknown | codeparrot/codeparrot-clean | ||
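`vpool_query` searches by name and returns the id of the first hit that `vpool_show_uri` still reports as active. That search-then-filter-active pattern can be sketched in isolation (a hypothetical helper, not part of the driver):

```python
def first_active(resources, is_active):
    """Return the id of the first resource reported active, or None.

    Mirrors the loop in vpool_query, which probes each search hit with
    vpool_show_uri and keeps the first one that is not inactive.
    """
    for resource in resources:
        if is_active(resource["id"]):
            return resource["id"]
    return None
```

The driver raises `CoprHdError.NOT_FOUND_ERR` instead of returning `None`, but the selection logic is the same.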
from email import message_from_string
from email.utils import parseaddr
from django.conf import settings
from django.core.files.storage import get_storage_class
import commonware.log
import waffle
from email_reply_parser import EmailReplyParser
from access.models import Group
from users.models import UserProfile
from mkt.comm.models import (CommunicationNote, CommunicationNoteRead,
CommunicationThread, CommunicationThreadToken,
user_has_perm_thread)
from mkt.constants import comm
log = commonware.log.getLogger('comm')
class CommEmailParser(object):
"""Utility to parse email replies."""
address_prefix = comm.REPLY_TO_PREFIX
def __init__(self, email_text):
self.email = message_from_string(email_text)
self.reply_text = EmailReplyParser.read(self.email.get_payload()).reply
def _get_address_line(self):
return parseaddr(self.email['to'])
def get_uuid(self):
name, addr = self._get_address_line()
if addr.startswith(self.address_prefix):
            # Extract the token between the address prefix and the "@" sign.
uuid = addr[len(self.address_prefix):].split('@')[0]
else:
return False
return uuid
def get_body(self):
return self.reply_text
def save_from_email_reply(reply_text):
parser = CommEmailParser(reply_text)
uuid = parser.get_uuid()
if not uuid:
return False
try:
tok = CommunicationThreadToken.objects.get(uuid=uuid)
except CommunicationThreadToken.DoesNotExist:
log.error('An email was skipped with non-existing uuid %s' % uuid)
return False
if (user_has_perm_thread(tok.thread, tok.user) and tok.is_valid()):
n = CommunicationNote.objects.create(note_type=comm.NO_ACTION,
thread=tok.thread, author=tok.user, body=parser.get_body())
log.info('A new note has been created (from %s using tokenid %s)' %
(tok.user.id, uuid))
return n
return False
def filter_notes_by_read_status(queryset, profile, read_status=True):
"""
Filter read/unread notes using this method.
`read_status` = `True` for read notes, `False` for unread notes.
"""
# Get some read notes from db.
notes = list(CommunicationNoteRead.objects.filter(
user=profile).values_list('note', flat=True))
if read_status:
# Filter and return read notes if they exist.
return queryset.filter(pk__in=notes) if notes else queryset.none()
else:
# Exclude read notes if they exist.
return queryset.exclude(pk__in=notes) if notes else queryset.all()
def get_reply_token(thread, user_id):
tok, created = CommunicationThreadToken.objects.get_or_create(
thread=thread, user_id=user_id)
    # A token expires after it has been used a maximum number of times.
    # This prevents a single token from being overused to spam threads.
# Since we're re-using tokens, we need to make sure they are valid for
# replying to new notes so we reset their `use_count`.
if not created:
tok.update(use_count=0)
else:
log.info('Created token with UUID %s for user_id: %s.' %
(tok.uuid, user_id))
return tok
def get_recipients(note):
"""
    Determine the email recipients for a new note, based on who is on the
    thread_cc list and on the note's permissions.
Returns reply-to-tokenized emails.
"""
thread = note.thread
recipients = []
# Whitelist: include recipients.
if note.note_type == comm.ESCALATION:
# Email only senior reviewers on escalations.
seniors = Group.objects.get(name='Senior App Reviewers')
recipients = seniors.users.values_list('id', 'email')
else:
# Get recipients via the CommunicationThreadCC table, which is usually
# populated with the developer, the Mozilla contact, and anyone that
# posts to and reviews the app.
recipients = set(thread.thread_cc.values_list(
'user__id', 'user__email'))
# Blacklist: exclude certain people from receiving the email based on
# permission.
excludes = []
if not note.read_permission_developer:
# Exclude developer.
excludes += thread.addon.authors.values_list('id', 'email')
# Exclude note author.
excludes.append((note.author.id, note.author.email))
# Remove excluded people from the recipients.
recipients = [r for r in recipients if r not in excludes]
# Build reply-to-tokenized email addresses.
new_recipients_list = []
for user_id, user_email in recipients:
tok = get_reply_token(note.thread, user_id)
new_recipients_list.append((user_email, tok.uuid))
return new_recipients_list
def send_mail_comm(note):
"""
Email utility used globally by the Communication Dashboard to send emails.
Given a note (its actions and permissions), recipients are determined and
emails are sent to appropriate people.
"""
from mkt.reviewers.utils import send_mail
if not waffle.switch_is_active('comm-dashboard'):
return
recipients = get_recipients(note)
name = note.thread.addon.name
data = {
'name': name,
'sender': note.author.name,
'comments': note.body,
'thread_id': str(note.thread.id)
}
subject = {
comm.ESCALATION: u'Escalated Review Requested: %s' % name,
}.get(note.note_type, u'Submission Update: %s' % name)
log.info(u'Sending emails for %s' % note.thread.addon)
for email, tok in recipients:
reply_to = '{0}{1}@{2}'.format(comm.REPLY_TO_PREFIX, tok,
settings.POSTFIX_DOMAIN)
send_mail(subject, 'reviewers/emails/decisions/post.txt', data,
[email], perm_setting='app_reviewed', reply_to=reply_to)
def create_comm_note(app, version, author, body, note_type=comm.NO_ACTION,
perms=None):
"""
Creates a note on an app version's thread.
Creates a thread if a thread doesn't already exist.
CC's app's Mozilla contacts to auto-join thread.
app -- app object.
version -- app version.
author -- UserProfile for the note's author.
body -- string/text for note comment.
note_type -- integer for note_type (mkt constant), defaults to 0/NO_ACTION
(e.g. comm.APPROVAL, comm.REJECTION, comm.NO_ACTION).
perms -- object of groups to grant permission to, will set flags on Thread.
(e.g. {'developer': False, 'staff': True}).
"""
if not waffle.switch_is_active('comm-dashboard'):
return None, None
# Dict of {'read_permission_GROUP_TYPE': boolean}.
# Perm for reviewer, senior_reviewer, moz_contact, staff True by default.
# Perm for developer False if is escalation or reviewer comment by default.
perms = perms or {}
if 'developer' not in perms and note_type in (comm.ESCALATION,
comm.REVIEWER_COMMENT):
perms['developer'] = False
create_perms = dict(('read_permission_%s' % key, has_perm)
for key, has_perm in perms.iteritems())
# Create thread + note.
thread, created_thread = app.threads.get_or_create(
version=version, defaults=create_perms)
note = thread.notes.create(
note_type=note_type, body=body, author=author, **create_perms)
post_create_comm_note(note)
return thread, note
def post_create_comm_note(note):
"""Stuff to do after creating note, also used in comm api's post_save."""
thread = note.thread
app = thread.addon
# Add developer to thread.
for developer in app.authors.all():
thread.join_thread(developer)
# Add Mozilla contact to thread.
for email in app.get_mozilla_contacts():
try:
moz_contact = UserProfile.objects.get(email=email)
thread.join_thread(moz_contact)
except UserProfile.DoesNotExist:
pass
# Add note author to thread.
author = note.author
cc, created_cc = thread.join_thread(author)
if not created_cc:
# Mark their own note as read.
note.mark_read(note.author)
# Send out emails.
send_mail_comm(note)
def create_attachments(note, formset):
"""Create attachments from CommAttachmentFormSet onto note."""
errors = []
storage = get_storage_class()()
for form in formset:
if not form.is_valid():
errors.append(form.errors)
continue
data = form.cleaned_data
attachment = data['attachment']
attachment_name = _save_attachment(
storage, attachment,
'%s/%s' % (settings.REVIEWER_ATTACHMENTS_PATH, attachment.name))
note.attachments.create(
description=data.get('description'), filepath=attachment_name,
mimetype=attachment.content_type)
return errors
def _save_attachment(storage, attachment, filepath):
"""Saves an attachment and returns the filename."""
filepath = storage.save(filepath, attachment)
    # In case of a duplicate filename, the storage backend suffixes the filename.
return filepath.split('/')[-1] | unknown | codeparrot/codeparrot-clean | ||
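`CommEmailParser.get_uuid` recovers the reply token by stripping the `reply+` prefix and the domain from the To: address. A self-contained sketch of that extraction (hypothetical helper name; the real parser first normalizes the header with `parseaddr`):

```python
def parse_reply_uuid(address, prefix="reply+"):
    """Extract the token between the prefix and the "@" sign.

    Returns None when the address has no domain or lacks the prefix,
    mirroring get_uuid returning False for non-reply addresses.
    """
    local_part, _, domain = address.partition("@")
    if not domain or not local_part.startswith(prefix):
        return None
    return local_part[len(prefix):]

# parse_reply_uuid("reply+abc123@example.com") -> "abc123"
# parse_reply_uuid("someone@example.com") -> None
```

The returned token is then looked up in `CommunicationThreadToken` to find the thread and user the reply belongs to.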
# -*- coding: utf-8 -*-
#
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Generated by synthtool. DO NOT EDIT!
from __future__ import absolute_import
import os
import shutil
import nox
LOCAL_DEPS = (os.path.join("..", "api_core"), os.path.join("..", "core"))
BLACK_VERSION = "black==19.3b0"
BLACK_PATHS = ["docs", "google", "tests", "noxfile.py", "setup.py"]
if os.path.exists("samples"):
BLACK_PATHS.append("samples")
@nox.session(python="3.7")
def lint(session):
"""Run linters.
Returns a failure if the linters find linting errors or sufficiently
serious code quality issues.
"""
session.install("flake8", BLACK_VERSION, *LOCAL_DEPS)
session.run("black", "--check", *BLACK_PATHS)
session.run("flake8", "google", "tests")
@nox.session(python="3.6")
def blacken(session):
"""Run black.
Format code to uniform standard.
This currently uses Python 3.6 due to the automated Kokoro run of synthtool.
    That run uses an image that doesn't have 3.6 installed. Before updating this,
    check the state of the `gcp_ubuntu_config` we use for that Kokoro run.
"""
session.install(BLACK_VERSION)
session.run("black", *BLACK_PATHS)
@nox.session(python="3.7")
def lint_setup_py(session):
"""Verify that setup.py is valid (including RST check)."""
session.install("docutils", "pygments")
session.run("python", "setup.py", "check", "--restructuredtext", "--strict")
def default(session):
# Install all test dependencies, then install this package in-place.
session.install("mock", "pytest", "pytest-cov")
for local_dep in LOCAL_DEPS:
session.install("-e", local_dep)
session.install("-e", ".")
# Run py.test against the unit tests.
session.run(
"py.test",
"--quiet",
"--cov=google.cloud",
"--cov=tests.unit",
"--cov-append",
"--cov-config=.coveragerc",
"--cov-report=",
"--cov-fail-under=0",
os.path.join("tests", "unit"),
*session.posargs,
)
@nox.session(python=["2.7", "3.5", "3.6", "3.7"])
def unit(session):
"""Run the unit test suite."""
default(session)
@nox.session(python=["2.7", "3.7"])
def system(session):
"""Run the system test suite."""
system_test_path = os.path.join("tests", "system.py")
system_test_folder_path = os.path.join("tests", "system")
# Sanity check: Only run tests if the environment variable is set.
if not os.environ.get("GOOGLE_APPLICATION_CREDENTIALS", ""):
session.skip("Credentials must be set via environment variable")
system_test_exists = os.path.exists(system_test_path)
system_test_folder_exists = os.path.exists(system_test_folder_path)
# Sanity check: only run tests if found.
if not system_test_exists and not system_test_folder_exists:
session.skip("System tests were not found")
# Use pre-release gRPC for system tests.
session.install("--pre", "grpcio")
# Install all test dependencies, then install this package into the
# virtualenv's dist-packages.
session.install("mock", "pytest")
for local_dep in LOCAL_DEPS:
session.install("-e", local_dep)
session.install("-e", "../test_utils/")
session.install("-e", ".")
# Run py.test against the system tests.
if system_test_exists:
session.run("py.test", "--quiet", system_test_path, *session.posargs)
if system_test_folder_exists:
session.run("py.test", "--quiet", system_test_folder_path, *session.posargs)
@nox.session(python="3.7")
def cover(session):
"""Run the final coverage report.
This outputs the coverage report aggregating coverage from the unit
test runs (not system test runs), and then erases coverage data.
"""
session.install("coverage", "pytest-cov")
session.run("coverage", "report", "--show-missing", "--fail-under=100")
session.run("coverage", "erase")
@nox.session(python="3.7")
def docs(session):
"""Build the docs for this library."""
session.install("-e", ".")
session.install("sphinx", "alabaster", "recommonmark")
shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True)
session.run(
"sphinx-build",
"-W", # warnings as errors
"-T", # show full traceback on exception
"-N", # no colors
"-b",
"html",
"-d",
os.path.join("docs", "_build", "doctrees", ""),
os.path.join("docs", ""),
os.path.join("docs", "_build", "html", ""),
) | unknown | codeparrot/codeparrot-clean | ||
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "migrants.settings")
import django
django.setup()
from migrants.data import centers
from invoke import task
from openpyxl import load_workbook
from openpyxl.cell import get_column_letter
import pycountry
from migrants.base.models import Country, DataCategory, MigrationInfo
try:
_range = xrange # py2 :(
except NameError:
_range = range # py3 :)
country_columns = ['id', 'alt_name', 'order', 'area']
country_columns = {index + 1: col for index, col in enumerate(country_columns)}
country_columns[7] = 'region'
def _countries_by_id():
countries = {}
for country in pycountry.countries:
country_dict = {
"alpha2": country.alpha2,
"name": country.name,
}
countries[int(country.numeric)] = country_dict
return countries
def _db_countries_by_name():
countries = Country.objects.all()
if not countries.exists():
raise Exception("Load the countries first !")
return {country.alt_name: country for country in countries}
def _get_worksheet(name, filename='data.xlsx'):
wb = load_workbook(filename=filename, use_iterators=False)
ws = wb.get_sheet_by_name(name=name)
return ws
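All of the cell addressing below builds keys like "B25" from a 1-based column index plus a row number; openpyxl's get_column_letter does the index-to-letter step. The conversion can be sketched as follows (an illustrative re-implementation, not openpyxl's code):

```python
def column_letter(index):
    # 1-based column index -> Excel letters (1 -> 'A', 27 -> 'AA'), the
    # conversion openpyxl's get_column_letter performs.
    letters = ''
    while index > 0:
        index, rem = divmod(index - 1, 26)
        letters = chr(ord('A') + rem) + letters
    return letters


print(column_letter(1), column_letter(26), column_letter(27))  # -> A Z AA
print(column_letter(242))  # -> IH, the sentinel column the import loop stops at
```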
@task
def import_countries():
countries = _countries_by_id()
ws = _get_worksheet('ANNEX')
to_insert = []
for row in _range(16, 248):
country_dict = {}
for col_idx, name in country_columns.items():
col = get_column_letter(col_idx)
key = "{}{}".format(col, row)
country_dict[name] = ws.cell(key).value
try:
pk = country_dict['id']
country_dict.update(countries[pk])
to_insert.append(Country(**country_dict))
except KeyError:
# Channel Islands
print("Skipping country {} ...".format(country_dict['alt_name']))
Country.objects.bulk_create(to_insert)
@task
def import_categories():
ws = _get_worksheet('CONTENTS')
categories = []
for row in _range(17, 29):
table_key = '{}{}'.format(get_column_letter(1), row)
title_key = '{}{}'.format(get_column_letter(2), row)
table = ws.cell(table_key).value
title = ws.cell(title_key).value
title, year = title[0:-7], int(title[-4:])
pk = int(table.split(" ")[-1])
category = DataCategory(id=pk, title=title, year=year)
categories.append(category)
DataCategory.objects.bulk_create(categories)
def import_category_country(ws, row, category, countries):
result = []
destination_name = ws.cell("B{}".format(row)).value
try:
destination_country = countries[destination_name]
except KeyError:
# These are the regions, e.g. Middle Africa (they have no data)
print("Skipping {} ...".format(destination_name))
return
for col_index in _range(10, 999):
col = get_column_letter(col_index)
if col == 'IH':
break
people = ws.cell("{}{}".format(col, row)).value
if not people:
continue
try:
people = int(people.replace(" ", ""))
except AttributeError:
# Already an int, ignore it
pass
origin_country_name = ws.cell("{}16".format(col)).value
try:
origin_country = countries[origin_country_name]
except KeyError:
# Channel Islands
print("Skipping country {} ...".format(origin_country_name))
continue
info = MigrationInfo(destination=destination_country,
origin=origin_country,
people=people,
category=category)
result.append(info)
MigrationInfo.objects.bulk_create(result)
def import_category(category, countries):
sheet = 'Table {}'.format(category.id)
ws = _get_worksheet(sheet)
for row in _range(25, 282):
import_category_country(ws, row, category, countries)
@task
def import_data():
countries = _db_countries_by_name()
for category in DataCategory.objects.all():
import_category(category, countries)
@task
def add_center():
for code, latitude, longitude in centers:
Country.objects.filter(alpha2=code).update(
center_lat=float(latitude), center_long=float(longitude))
pending = Country.objects.filter(
center_lat=0
).values_list("alpha2", flat=True)
print(pending)
@task
def fix_categories():
categories = DataCategory.objects.all()
for category in categories:
title = category.title
title = title.replace(
"and by major area, region, country or area of destinatio", "")
category.title = title
category.save()
@task
def import_all():
import_countries()
import_categories()
import_data()
add_center() | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2016, Gregory Shulov (gregory.shulov@gmail.com)
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: infini_pool
version_added: 2.3
short_description: Create, Delete and Modify Pools on Infinibox
description:
- This module creates, deletes or modifies pools on Infinibox.
author: Gregory Shulov (@GR360RY)
options:
name:
description:
- Pool Name
required: true
state:
description:
- Creates/Modifies Pool when present or removes when absent
required: false
default: present
choices: [ "present", "absent" ]
size:
description:
- Pool Physical Capacity in MB, GB or TB units.
If pool size is not set on pool creation, size will be equal to 1TB.
See examples.
required: false
vsize:
description:
- Pool Virtual Capacity in MB, GB or TB units.
If pool vsize is not set on pool creation, Virtual Capacity will be equal to Physical Capacity.
See examples.
required: false
ssd_cache:
description:
- Enable/Disable SSD Cache on Pool
required: false
default: yes
choices: [ "yes", "no" ]
notes:
- Infinibox Admin level access is required for pool modifications
extends_documentation_fragment:
- infinibox
'''
EXAMPLES = '''
- name: Make sure pool foo exists. Set pool physical capacity to 10TB
infini_pool:
name: foo
size: 10TB
vsize: 10TB
user: admin
password: secret
system: ibox001
- name: Disable SSD Cache on pool
infini_pool:
name: foo
ssd_cache: no
user: admin
password: secret
system: ibox001
'''
RETURN = '''
'''
HAS_INFINISDK = True
try:
from infinisdk import InfiniBox, core
except ImportError:
HAS_INFINISDK = False
from ansible.module_utils.infinibox import *
from capacity import KiB, Capacity
@api_wrapper
def get_pool(module, system):
"""Return Pool on None"""
try:
return system.pools.get(name=module.params['name'])
except:
return None
@api_wrapper
def create_pool(module, system):
"""Create Pool"""
name = module.params['name']
size = module.params['size']
vsize = module.params['vsize']
ssd_cache = module.params['ssd_cache']
if not module.check_mode:
if not size and not vsize:
pool = system.pools.create(name=name, physical_capacity=Capacity('1TB'), virtual_capacity=Capacity('1TB'))
elif size and not vsize:
pool = system.pools.create(name=name, physical_capacity=Capacity(size), virtual_capacity=Capacity(size))
elif not size and vsize:
pool = system.pools.create(name=name, physical_capacity=Capacity('1TB'), virtual_capacity=Capacity(vsize))
else:
pool = system.pools.create(name=name, physical_capacity=Capacity(size), virtual_capacity=Capacity(vsize))
# Default value of ssd_cache is True. Disable SSD caching if False
if not ssd_cache:
pool.update_ssd_enabled(ssd_cache)
module.exit_json(changed=True)
@api_wrapper
def update_pool(module, system, pool):
"""Update Pool"""
changed = False
size = module.params['size']
vsize = module.params['vsize']
ssd_cache = module.params['ssd_cache']
# Round up the capacity to mimic Infinibox behaviour
if size:
physical_capacity = Capacity(size).roundup(6 * 64 * KiB)
if pool.get_physical_capacity() != physical_capacity:
if not module.check_mode:
pool.update_physical_capacity(physical_capacity)
changed = True
if vsize:
virtual_capacity = Capacity(vsize).roundup(6 * 64 * KiB)
if pool.get_virtual_capacity() != virtual_capacity:
if not module.check_mode:
pool.update_virtual_capacity(virtual_capacity)
changed = True
if pool.get_ssd_enabled() != ssd_cache:
if not module.check_mode:
pool.update_ssd_enabled(ssd_cache)
changed = True
module.exit_json(changed=changed)
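The `Capacity(...).roundup(6 * 64 * KiB)` calls above round a requested size up to the next multiple of 384 KiB. The underlying integer arithmetic can be sketched as follows (an illustration of the idea, not the `capacity` package's code):

```python
# Round-up-to-a-multiple arithmetic, values in bytes.
KIB = 1024


def roundup(value, step):
    """Smallest multiple of `step` that is >= `value`."""
    return ((value + step - 1) // step) * step


step = 6 * 64 * KIB  # 384 KiB, as used above
print(roundup(1, step))     # -> 393216 (one byte rounds up to a full step)
print(roundup(step, step))  # -> 393216 (exact multiples are unchanged)
```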
@api_wrapper
def delete_pool(module, pool):
"""Delete Pool"""
if not module.check_mode:
pool.delete()
module.exit_json(changed=True)
def main():
argument_spec = infinibox_argument_spec()
argument_spec.update(
dict(
name = dict(required=True),
state = dict(default='present', choices=['present', 'absent']),
size = dict(),
vsize = dict(),
ssd_cache = dict(type='bool', default=True)
)
)
module = AnsibleModule(argument_spec, supports_check_mode=True)
if not HAS_INFINISDK:
module.fail_json(msg='infinisdk is required for this module')
if module.params['size']:
try:
Capacity(module.params['size'])
except:
module.fail_json(msg='size (Physical Capacity) should be defined in MB, GB, TB or PB units')
if module.params['vsize']:
try:
Capacity(module.params['vsize'])
except:
module.fail_json(msg='vsize (Virtual Capacity) should be defined in MB, GB, TB or PB units')
state = module.params['state']
system = get_system(module)
pool = get_pool(module, system)
if state == 'present' and not pool:
create_pool(module, system)
elif state == 'present' and pool:
update_pool(module, system, pool)
elif state == 'absent' and pool:
delete_pool(module, pool)
elif state == 'absent' and not pool:
module.exit_json(changed=False)
# Import Ansible Utilities
from ansible.module_utils.basic import AnsibleModule
if __name__ == '__main__':
main() | unknown | codeparrot/codeparrot-clean | ||
"""
====================
Theil-Sen Regression
====================
Computes a Theil-Sen Regression on a synthetic dataset.
See :ref:`theil_sen_regression` for more information on the regressor.
Compared to the OLS (ordinary least squares) estimator, the Theil-Sen
estimator is robust against outliers. It has a breakdown point of about 29.3%
in case of a simple linear regression which means that it can tolerate
arbitrary corrupted data (outliers) of up to 29.3% in the two-dimensional
case.
The estimation of the model is done by calculating the slopes and intercepts
of a subpopulation of all possible combinations of p subsample points. If an
intercept is fitted, p must be greater than or equal to n_features + 1. The
final slope and intercept is then defined as the spatial median of these
slopes and intercepts.
In certain cases Theil-Sen performs better than :ref:`RANSAC
<ransac_regression>` which is also a robust method. This is illustrated in the
second example below where outliers with respect to the x-axis perturb RANSAC.
Tuning the ``residual_threshold`` parameter of RANSAC remedies this but in
general a priori knowledge about the data and the nature of the outliers is
needed.
Due to the computational complexity of Theil-Sen it is recommended to use it
only for small problems in terms of number of samples and features. For larger
problems the ``max_subpopulation`` parameter restricts the magnitude of all
possible combinations of p subsample points to a randomly chosen subset and
therefore also limits the runtime. Therefore, Theil-Sen is applicable to larger
problems with the drawback of losing some of its mathematical properties since
it then works on a random subset.
"""
# Author: Florian Wilhelm -- <florian.wilhelm@gmail.com>
# License: BSD 3 clause
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, TheilSenRegressor
from sklearn.linear_model import RANSACRegressor
print(__doc__)
estimators = [('OLS', LinearRegression()),
('Theil-Sen', TheilSenRegressor(random_state=42)),
('RANSAC', RANSACRegressor(random_state=42)), ]
colors = {'OLS': 'turquoise', 'Theil-Sen': 'gold', 'RANSAC': 'lightgreen'}
lw = 2
# #############################################################################
# Outliers only in the y direction
np.random.seed(0)
n_samples = 200
# Linear model y = 3*x + N(2, 0.1**2)
x = np.random.randn(n_samples)
w = 3.
c = 2.
noise = 0.1 * np.random.randn(n_samples)
y = w * x + c + noise
# 10% outliers
y[-20:] += -20 * x[-20:]
X = x[:, np.newaxis]
plt.scatter(x, y, color='indigo', marker='x', s=40)
line_x = np.array([-3, 3])
for name, estimator in estimators:
t0 = time.time()
estimator.fit(X, y)
elapsed_time = time.time() - t0
y_pred = estimator.predict(line_x.reshape(2, 1))
plt.plot(line_x, y_pred, color=colors[name], linewidth=lw,
label='%s (fit time: %.2fs)' % (name, elapsed_time))
plt.axis('tight')
plt.legend(loc='upper left')
plt.title("Corrupt y")
# #############################################################################
# Outliers in the X direction
np.random.seed(0)
# Linear model y = 3*x + N(2, 0.1**2)
x = np.random.randn(n_samples)
noise = 0.1 * np.random.randn(n_samples)
y = 3 * x + 2 + noise
# 10% outliers
x[-20:] = 9.9
y[-20:] += 22
X = x[:, np.newaxis]
plt.figure()
plt.scatter(x, y, color='indigo', marker='x', s=40)
line_x = np.array([-3, 10])
for name, estimator in estimators:
t0 = time.time()
estimator.fit(X, y)
elapsed_time = time.time() - t0
y_pred = estimator.predict(line_x.reshape(2, 1))
plt.plot(line_x, y_pred, color=colors[name], linewidth=lw,
label='%s (fit time: %.2fs)' % (name, elapsed_time))
plt.axis('tight')
plt.legend(loc='upper left')
plt.title("Corrupt x")
plt.show() | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# + Python 3 support
# + sublime text 3 support
import sublime
import sublime_plugin
# import sys
import os
import glob
import datetime
import zipfile
import re
PACKAGE_NAME = 'SublimeTmpl'
TMLP_DIR = 'templates'
KEY_SYNTAX = 'syntax'
KEY_FILE_EXT = 'extension'
# st3: Installed Packages/xx.sublime-package
BASE_PATH = os.path.abspath(os.path.dirname(__file__))
PACKAGES_PATH = sublime.packages_path() # for ST2
# sys.path += [BASE_PATH]
# sys.path.append(BASE_PATH)
# import sys;print(sys.path)
IS_GTE_ST3 = int(sublime.version()[0]) >= 3
DISABLE_KEYMAP = None
class SublimeTmplCommand(sublime_plugin.TextCommand):
def run(self, edit, type='html'):
view = self.view
opts = self.get_settings(type)
tmpl = self.get_code(type)
# print('global', DISABLE_KEYMAP, IS_GTE_ST3);
if DISABLE_KEYMAP:
return False
# print(KEY_SYNTAX in opts)
self.tab = self.creat_tab(view)
self.set_syntax(opts)
self.set_code(tmpl)
def get_settings(self, type=None):
settings = sublime.load_settings(PACKAGE_NAME + '.sublime-settings')
if not type:
return settings
# print(settings.get('html')['syntax'])
opts = settings.get(type, [])
# print(opts)
return opts
def open_file(self, path, mode='r'):
fp = open(path, mode)
code = fp.read()
fp.close()
return code
def get_code(self, type):
code = ''
file_name = "%s.tmpl" % type
isIOError = False
if IS_GTE_ST3:
tmpl_dir = 'Packages/' + PACKAGE_NAME + '/' + TMLP_DIR + '/'
user_tmpl_dir = 'Packages/User/' + \
PACKAGE_NAME + '/' + TMLP_DIR + '/'
# tmpl_dir = os.path.join('Packages', PACKAGE_NAME , TMLP_DIR)
else:
tmpl_dir = os.path.join(PACKAGES_PATH, PACKAGE_NAME, TMLP_DIR)
user_tmpl_dir = os.path.join(
PACKAGES_PATH, 'User', PACKAGE_NAME, TMLP_DIR)
self.user_tmpl_path = os.path.join(user_tmpl_dir, file_name)
self.tmpl_path = os.path.join(tmpl_dir, file_name)
if IS_GTE_ST3:
try:
code = sublime.load_resource(self.user_tmpl_path)
except IOError:
try:
code = sublime.load_resource(self.tmpl_path)
except IOError:
isIOError = True
else:
if os.path.isfile(self.user_tmpl_path):
code = self.open_file(self.user_tmpl_path)
elif os.path.isfile(self.tmpl_path):
code = self.open_file(self.tmpl_path)
else:
isIOError = True
# print(self.tmpl_path)
if isIOError:
sublime.message_dialog('[Warning] No such file: ' + self.tmpl_path
+ ' or ' + self.user_tmpl_path)
return self.format_tag(code)
def format_tag(self, code):
code = code.replace('\r', '') # replace \r\n -> \n
# format
settings = self.get_settings()
format = settings.get('date_format', '%Y-%m-%d')
date = datetime.datetime.now().strftime(format)
if not IS_GTE_ST3:
code = code.decode('utf8') # for st2 && Chinese characters
code = code.replace('${date}', date)
attr = settings.get('attr', {})
for key in attr:
code = code.replace('${%s}' % key, attr.get(key, ''))
return code
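The tag substitution that format_tag performs boils down to replacing `${key}` placeholders with values from a settings-style dict. A minimal standalone sketch (the template string and attribute names here are hypothetical):

```python
def fill_template(code, attr):
    # Replace each ${key} placeholder with its value, mirroring what
    # format_tag does for the 'date' tag and user-defined 'attr' settings.
    for key, value in attr.items():
        code = code.replace('${%s}' % key, value)
    return code


print(fill_template('# Author: ${author}  Date: ${date}',
                    {'author': 'alice', 'date': '2016-01-01'}))
# -> # Author: alice  Date: 2016-01-01
```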
def creat_tab(self, view):
win = view.window()
tab = win.new_file()
return tab
def set_code(self, code):
tab = self.tab
# tab.set_name('untitled.' + self.type)
# insert codes
tab.run_command('insert_snippet', {'contents': code})
def set_syntax(self, opts):
v = self.tab
# syntax = self.view.settings().get('syntax') # from current file
syntax = opts[KEY_SYNTAX] if KEY_SYNTAX in opts else ''
# print(syntax) # tab.set_syntax_file('Packages/Diff/Diff.tmLanguage')
v.set_syntax_file(syntax)
# print(opts[KEY_FILE_EXT])
if KEY_FILE_EXT in opts:
v.settings().set('default_extension', opts[KEY_FILE_EXT])
class SublimeTmplEventListener(sublime_plugin.EventListener):
def on_query_context(self, view, key, operator, operand, match_all):
settings = sublime.load_settings(PACKAGE_NAME + '.sublime-settings')
disable_keymap_actions = settings.get('disable_keymap_actions', '')
# print ("key1: %s, %s" % (key, disable_keymap_actions))
global DISABLE_KEYMAP
DISABLE_KEYMAP = False
if not key.startswith('sublime_tmpl.'):
return None
if not disable_keymap_actions: # no disabled actions
return True
elif disable_keymap_actions == 'all' or disable_keymap_actions == True: # disable all actions
DISABLE_KEYMAP = True
return False
prefix, name = key.split('.')
ret = name not in re.split(r'\s*,\s*', disable_keymap_actions.strip())
# print(name, ret)
DISABLE_KEYMAP = not ret
return ret
def plugin_loaded(): # for ST3 >= 3016
# global PACKAGES_PATH
PACKAGES_PATH = sublime.packages_path()
TARGET_PATH = os.path.join(PACKAGES_PATH, PACKAGE_NAME)
# print(BASE_PATH, os.path.dirname(BASE_PATH))
# print(TARGET_PATH)
# auto create custom_path
custom_path = os.path.join(PACKAGES_PATH, 'User', PACKAGE_NAME, TMLP_DIR)
# print(custom_path, os.path.isdir(custom_path))
if not os.path.isdir(custom_path):
os.makedirs(custom_path)
# first run
if not os.path.isdir(TARGET_PATH):
os.makedirs(os.path.join(TARGET_PATH, TMLP_DIR))
# copy user files
tmpl_dir = TMLP_DIR + '/'
file_list = [
'Default.sublime-commands', 'Main.sublime-menu',
# if the .py isn't copied, ST3 throws: ImportError: No module named
'sublime-tmpl.py',
'README.md',
tmpl_dir + 'css.tmpl', tmpl_dir + 'html.tmpl',
tmpl_dir + 'js.tmpl', tmpl_dir + 'php.tmpl',
tmpl_dir + 'python.tmpl', tmpl_dir + 'ruby.tmpl',
tmpl_dir + 'xml.tmpl'
]
try:
extract_zip_resource(BASE_PATH, file_list, TARGET_PATH)
except Exception as e:
print(e)
# old: *.user.tmpl compatibility fix
files = glob.iglob(os.path.join(os.path.join(TARGET_PATH, TMLP_DIR), '*.user.tmpl'))
for file in files:
filename = os.path.basename(file).replace('.user.tmpl', '.tmpl')
# print(file, '=>', os.path.join(custom_path, filename));
os.rename(file, os.path.join(custom_path, filename))
# old: settings-custom_path compatibility fix
settings = sublime.load_settings(PACKAGE_NAME + '.sublime-settings')
old_custom_path = settings.get('custom_path', '')
if old_custom_path and os.path.isdir(old_custom_path):
# print(old_custom_path)
files = glob.iglob(os.path.join(old_custom_path, '*.tmpl'))
for file in files:
filename = os.path.basename(file).replace('.user.tmpl', '.tmpl')
# print(file, '=>', os.path.join(custom_path, filename))
os.rename(file, os.path.join(custom_path, filename))
if not IS_GTE_ST3:
sublime.set_timeout(plugin_loaded, 0)
def extract_zip_resource(path_to_zip, file_list, extract_dir=None):
if extract_dir is None:
return
# print(extract_dir)
if os.path.exists(path_to_zip):
z = zipfile.ZipFile(path_to_zip, 'r')
for f in z.namelist():
# if f.endswith('.tmpl'):
if f in file_list:
# print(f)
z.extract(f, extract_dir)
z.close() | unknown | codeparrot/codeparrot-clean | ||
from common.exceptions import LogicError
from plenum.common.constants import TXN_TYPE
from plenum.common.messages.node_messages import RequestNack
from plenum.common.request import Request
from plenum.server.request_handlers.handler_interfaces.read_request_handler import ReadRequestHandler
from plenum.server.request_managers.request_manager import RequestManager
class ReadRequestManager(RequestManager):
def static_validation(self, request: Request):
handler = self.request_handlers.get(request.operation[TXN_TYPE], None)
if handler is None:
raise LogicError
handler.static_validation(request)
def register_req_handler(self, handler: ReadRequestHandler, ledger_id=None):
if not isinstance(handler, ReadRequestHandler):
raise LogicError
self._register_req_handler(handler, ledger_id=ledger_id)
def get_result(self, request: Request):
handler = self.request_handlers.get(request.operation[TXN_TYPE], None)
if handler is None:
return RequestNack(request.identifier, request.reqId)
return handler.get_result(request) | unknown | codeparrot/codeparrot-clean | ||
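The manager above follows a dispatch-by-transaction-type pattern: a dict maps the operation's TXN_TYPE to a registered handler, and a missing entry yields a RequestNack. A self-contained sketch of the pattern (illustrative names, not plenum's API):

```python
class HandlerRegistry:
    def __init__(self):
        self.handlers = {}

    def register(self, txn_type, handler):
        self.handlers[txn_type] = handler

    def dispatch(self, txn_type, payload):
        handler = self.handlers.get(txn_type)
        if handler is None:
            return ('nack', payload)  # stands in for RequestNack
        return handler(payload)


registry = HandlerRegistry()
registry.register('GET_THING', lambda p: ('ok', p.upper()))
print(registry.dispatch('GET_THING', 'abc'))  # -> ('ok', 'ABC')
print(registry.dispatch('UNKNOWN', 'abc'))    # -> ('nack', 'abc')
```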
"use strict";
import { fileURLToPath } from "node:url";
import { isSwcError, wrapAndReThrowSwcError } from "./errors.js";
import { transformSync } from "./index.js";
export async function load(url, context, nextLoad) {
const { format } = context;
if (format?.endsWith("-typescript")) {
try {
const { source } = await nextLoad(url, {
...context,
format
});
const { code } = transformSync(source.toString(), {
mode: "strip-only",
filename: fileURLToPath(url)
});
return {
format: format.replace("-typescript", ""),
// Source map is not necessary in strip-only mode. However, to map the source
// file in debuggers to the original TypeScript source, add a sourceURL magic
// comment to hint that it is a generated source.
source: `${code}
//# sourceURL=${url}`
};
} catch (error) {
if (isSwcError(error)) {
wrapAndReThrowSwcError(error);
}
throw error;
}
}
return nextLoad(url, context);
} | javascript | github | https://github.com/nodejs/node | deps/amaro/dist/strip-loader.js |
# -*- test-case-name: twisted.trial.test.test_script -*-
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
from __future__ import print_function
import gc
import inspect
import os
import pdb
import random
import sys
import time
import warnings
from twisted.internet import defer
from twisted.application import app
from twisted.python import usage, reflect, failure
from twisted.python.filepath import FilePath
from twisted import plugin
from twisted.python.util import spewer
from twisted.trial import runner, itrial, reporter
# Yea, this is stupid. Leave it for command-line compatibility for a
# while, though.
TBFORMAT_MAP = {
'plain': 'default',
'default': 'default',
'emacs': 'brief',
'brief': 'brief',
'cgitb': 'verbose',
'verbose': 'verbose'
}
def _parseLocalVariables(line):
"""
Accepts a single line in Emacs local variable declaration format and
returns a dict of all the variables {name: value}.
Raises ValueError if 'line' is in the wrong format.
See http://www.gnu.org/software/emacs/manual/html_node/File-Variables.html
"""
paren = '-*-'
start = line.find(paren)
end = line.rfind(paren)
# both markers must be present (and distinct) for a valid declaration
if start == -1 or end == start:
raise ValueError("%r not a valid local variable declaration" % (line,))
start += len(paren)
items = line[start:end].split(';')
localVars = {}
for item in items:
if len(item.strip()) == 0:
continue
split = item.split(':')
if len(split) != 2:
raise ValueError("%r contains invalid declaration %r"
% (line, item))
localVars[split[0].strip()] = split[1].strip()
return localVars
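As a quick illustration of the `-*- name: value; ... -*-` declaration format handled above, here is a standalone sketch (a re-implementation for demonstration purposes, not the function itself) run on a typical first line:

```python
def parse_local_variables(line):
    # Parse '-*- name: value; ... -*-' into a dict, as the trial loader does.
    marker = '-*-'
    start = line.find(marker)
    end = line.rfind(marker)
    if start == -1 or end == start:
        raise ValueError("%r not a valid local variable declaration" % (line,))
    result = {}
    for item in line[start + len(marker):end].split(';'):
        if not item.strip():
            continue
        name, _, value = item.partition(':')
        result[name.strip()] = value.strip()
    return result


print(parse_local_variables(
    "# -*- test-case-name: twisted.trial.test.test_script -*-"))
# -> {'test-case-name': 'twisted.trial.test.test_script'}
```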
def loadLocalVariables(filename):
"""
Accepts a filename and attempts to load the Emacs variable declarations
from that file, simulating what Emacs does.
See http://www.gnu.org/software/emacs/manual/html_node/File-Variables.html
"""
f = open(filename, "r")
lines = [f.readline(), f.readline()]
f.close()
for line in lines:
try:
return _parseLocalVariables(line)
except ValueError:
pass
return {}
def getTestModules(filename):
testCaseVar = loadLocalVariables(filename).get('test-case-name', None)
if testCaseVar is None:
return []
return testCaseVar.split(',')
def isTestFile(filename):
"""
Returns true if 'filename' looks like a file containing unit tests.
False otherwise. Doesn't care whether filename exists.
"""
basename = os.path.basename(filename)
return (basename.startswith('test_')
and os.path.splitext(basename)[1] == ('.py'))
def _reporterAction():
return usage.CompleteList([p.longOpt for p in
plugin.getPlugins(itrial.IReporter)])
def _maybeFindSourceLine(testThing):
"""
Try to find the source line of the given test thing.
@param testThing: the test item to attempt to inspect
@type testThing: an L{TestCase}, test method, or module, though only the
former two have a chance to succeed
@rtype: int
@return: the starting source line, or -1 if one couldn't be found
"""
# an instance of L{TestCase} -- locate the test it will run
method = getattr(testThing, "_testMethodName", None)
if method is not None:
testThing = getattr(testThing, method)
# If it's a function, we can get the line number even if the source file no
# longer exists
code = getattr(testThing, "__code__", None)
if code is not None:
return code.co_firstlineno
try:
return inspect.getsourcelines(testThing)[1]
except (IOError, TypeError):
# either testThing is a module, which raised a TypeError, or the file
# couldn't be read
return -1
# orders which can be passed to trial --order
_runOrders = {
"alphabetical" : (
"alphabetical order for test methods, arbitrary order for test cases",
runner.name),
"toptobottom" : (
"attempt to run test cases and methods in the order they were defined",
_maybeFindSourceLine),
}
def _checkKnownRunOrder(order):
"""
Check that the given order is a known test running order.
Does nothing else, since looking up the appropriate callable to sort the
tests should be done when it actually will be used, as the default argument
will not be coerced by this function.
@param order: one of the known orders in C{_runOrders}
@return: the order unmodified
"""
if order not in _runOrders:
raise usage.UsageError(
"--order must be one of: %s. See --help-orders for details" %
(", ".join(repr(order) for order in _runOrders),))
return order
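The "toptobottom" order registered above sorts tests by the first source line of the underlying function's code object. The key idea in isolation (an illustrative sketch of `_maybeFindSourceLine`'s fast path):

```python
def source_line(fn):
    # A function's code object records the line it was defined on; fall back
    # to -1 when there is no code object, as _maybeFindSourceLine does.
    code = getattr(fn, "__code__", None)
    return code.co_firstlineno if code is not None else -1


def defined_first():
    pass


def defined_second():
    pass


ordered = sorted([defined_second, defined_first], key=source_line)
print([fn.__name__ for fn in ordered])  # -> ['defined_first', 'defined_second']
```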
class _BasicOptions(object):
"""
Basic options shared between trial and its local workers.
"""
synopsis = """%s [options] [[file|package|module|TestCase|testmethod]...]
""" % (os.path.basename(sys.argv[0]),)
longdesc = ("trial loads and executes a suite of unit tests, obtained "
"from modules, packages and files listed on the command line.")
optFlags = [["help", "h"],
["no-recurse", "N", "Don't recurse into packages"],
['help-orders', None, "Help on available test running orders"],
['help-reporters', None,
"Help on available output plugins (reporters)"],
["rterrors", "e", "realtime errors, print out tracebacks as "
"soon as they occur"],
["unclean-warnings", None,
"Turn dirty reactor errors into warnings"],
["force-gc", None, "Have Trial run gc.collect() before and "
"after each test case."],
["exitfirst", "x",
"Exit after the first non-successful result (cannot be "
"specified along with --jobs)."],
]
optParameters = [
["order", "o", None,
"Specify what order to run test cases and methods. "
"See --help-orders for more info.", _checkKnownRunOrder],
["random", "z", None,
"Run tests in random order using the specified seed"],
['temp-directory', None, '_trial_temp',
'Path to use as working directory for tests.'],
['reporter', None, 'verbose',
'The reporter to use for this test run. See --help-reporters for '
'more info.']]
compData = usage.Completions(
optActions={"order": usage.CompleteList(_runOrders),
"reporter": _reporterAction,
"logfile": usage.CompleteFiles(descr="log file name"),
"random": usage.Completer(descr="random seed")},
extraActions=[usage.CompleteFiles(
"*.py", descr="file | module | package | TestCase | testMethod",
repeat=True)],
)
fallbackReporter = reporter.TreeReporter
tracer = None
def __init__(self):
self['tests'] = []
usage.Options.__init__(self)
def coverdir(self):
"""
Return a L{FilePath} representing the directory into which coverage
results should be written.
"""
coverdir = 'coverage'
result = FilePath(self['temp-directory']).child(coverdir)
print("Setting coverage directory to %s." % (result.path,))
return result
# TODO: Some of the opt_* methods on this class have docstrings and some do
# not. This is mostly because usage.Options's currently will replace
# any intended output in optFlags and optParameters with the
# docstring. See #6427. When that is fixed, all methods should be
# given docstrings (and it should be verified that those with
# docstrings already have content suitable for printing as usage
# information).
def opt_coverage(self):
"""
Generate coverage information in the coverage file in the
directory specified by the temp-directory option.
"""
import trace
self.tracer = trace.Trace(count=1, trace=0)
sys.settrace(self.tracer.globaltrace)
self['coverage'] = True
def opt_testmodule(self, filename):
"""
Filename to grep for test cases (-*- test-case-name).
"""
# If the filename passed to this parameter looks like a test module
# we just add that to the test suite.
#
# If not, we inspect it for an Emacs buffer local variable called
# 'test-case-name'. If that variable is declared, we try to add its
# value to the test suite as a module.
#
# This parameter allows automated processes (like Buildbot) to pass
# a list of files to Trial with the general expectation of "these files,
# whatever they are, will get tested"
if not os.path.isfile(filename):
sys.stderr.write("File %r doesn't exist\n" % (filename,))
return
filename = os.path.abspath(filename)
if isTestFile(filename):
self['tests'].append(filename)
else:
self['tests'].extend(getTestModules(filename))
def opt_spew(self):
"""
Print an insanely verbose log of everything that happens. Useful
when debugging freezes or locks in complex code.
"""
sys.settrace(spewer)
def opt_help_orders(self):
synopsis = ("Trial can attempt to run test cases and their methods in "
"a few different orders. You can select any of the "
"following options using --order=<foo>.\n")
print(synopsis)
for name, (description, _) in sorted(_runOrders.items()):
print(' ', name, '\t', description)
sys.exit(0)
def opt_help_reporters(self):
synopsis = ("Trial's output can be customized using plugins called "
"Reporters. You can\nselect any of the following "
"reporters using --reporter=<foo>\n")
print(synopsis)
for p in plugin.getPlugins(itrial.IReporter):
print(' ', p.longOpt, '\t', p.description)
sys.exit(0)
def opt_disablegc(self):
"""
Disable the garbage collector
"""
self["disablegc"] = True
gc.disable()
def opt_tbformat(self, opt):
"""
Specify the format to display tracebacks with. Valid formats are
'plain', 'emacs', and 'cgitb' which uses the nicely verbose stdlib
cgitb.text function
"""
try:
self['tbformat'] = TBFORMAT_MAP[opt]
except KeyError:
raise usage.UsageError(
"tbformat must be 'plain', 'emacs', or 'cgitb'.")
def opt_recursionlimit(self, arg):
"""
see sys.setrecursionlimit()
"""
try:
sys.setrecursionlimit(int(arg))
except (TypeError, ValueError):
raise usage.UsageError(
"argument to recursionlimit must be an integer")
else:
self["recursionlimit"] = int(arg)
def opt_random(self, option):
try:
self['random'] = long(option)
except ValueError:
raise usage.UsageError(
"Argument to --random must be a positive integer")
else:
if self['random'] < 0:
raise usage.UsageError(
"Argument to --random must be a positive integer")
elif self['random'] == 0:
self['random'] = long(time.time() * 100)
def opt_without_module(self, option):
"""
Fake the lack of the specified modules, separated with commas.
"""
self["without-module"] = option
for module in option.split(","):
if module in sys.modules:
warnings.warn("Module '%s' already imported, "
"disabling anyway." % (module,),
category=RuntimeWarning)
sys.modules[module] = None
def parseArgs(self, *args):
self['tests'].extend(args)
def _loadReporterByName(self, name):
for p in plugin.getPlugins(itrial.IReporter):
qual = "%s.%s" % (p.module, p.klass)
if p.longOpt == name:
return reflect.namedAny(qual)
raise usage.UsageError("Only pass names of Reporter plugins to "
"--reporter. See --help-reporters for "
"more info.")
def postOptions(self):
# Only load reporters now, as opposed to any earlier, to avoid letting
# application-defined plugins muck up reactor selecting by importing
# t.i.reactor and causing the default to be installed.
self['reporter'] = self._loadReporterByName(self['reporter'])
if 'tbformat' not in self:
self['tbformat'] = 'default'
if self['order'] is not None and self['random'] is not None:
raise usage.UsageError(
"You can't specify --random when using --order")
class Options(_BasicOptions, usage.Options, app.ReactorSelectionMixin):
"""
Options to the trial command line tool.
@ivar _workerFlags: List of flags which are accepted by trial distributed
workers. This is used by C{_getWorkerArguments} to build the command
line arguments.
@type _workerFlags: C{list}
    @ivar _workerParameters: List of parameters which are accepted by trial
        distributed workers. This is used by C{_getWorkerArguments} to build
the command line arguments.
@type _workerParameters: C{list}
"""
optFlags = [
["debug", "b", "Run tests in a debugger. If that debugger is "
"pdb, will load '.pdbrc' from current directory if it exists."
],
["debug-stacktraces", "B", "Report Deferred creation and "
"callback stack traces"],
["nopm", None, "don't automatically jump into debugger for "
"postmorteming of exceptions"],
["dry-run", 'n', "do everything but run the tests"],
["profile", None, "Run tests under the Python profiler"],
["until-failure", "u", "Repeat test until it fails"],
]
optParameters = [
["debugger", None, "pdb", "the fully qualified name of a debugger to "
"use if --debug is passed"],
["logfile", "l", "test.log", "log file name"],
["jobs", "j", None, "Number of local workers to run"]
]
compData = usage.Completions(
optActions = {
"tbformat": usage.CompleteList(["plain", "emacs", "cgitb"]),
"reporter": _reporterAction,
},
)
_workerFlags = ["disablegc", "force-gc", "coverage"]
_workerParameters = ["recursionlimit", "reactor", "without-module"]
fallbackReporter = reporter.TreeReporter
extra = None
tracer = None
def opt_jobs(self, number):
"""
Number of local workers to run, a strictly positive integer.
"""
try:
number = int(number)
except ValueError:
raise usage.UsageError(
"Expecting integer argument to jobs, got '%s'" % number)
if number <= 0:
raise usage.UsageError(
"Argument to jobs must be a strictly positive integer")
self["jobs"] = number
def _getWorkerArguments(self):
"""
Return a list of options to pass to distributed workers.
"""
args = []
for option in self._workerFlags:
if self.get(option) is not None:
if self[option]:
args.append("--%s" % (option,))
for option in self._workerParameters:
if self.get(option) is not None:
args.extend(["--%s" % (option,), str(self[option])])
return args
def postOptions(self):
_BasicOptions.postOptions(self)
if self['jobs']:
conflicts = ['debug', 'profile', 'debug-stacktraces', 'exitfirst']
for option in conflicts:
if self[option]:
raise usage.UsageError(
"You can't specify --%s when using --jobs" % option)
if self['nopm']:
if not self['debug']:
raise usage.UsageError("You must specify --debug when using "
"--nopm ")
failure.DO_POST_MORTEM = False
def _initialDebugSetup(config):
# do this part of debug setup first for easy debugging of import failures
if config['debug']:
failure.startDebugMode()
if config['debug'] or config['debug-stacktraces']:
defer.setDebugging(True)
def _getSuite(config):
loader = _getLoader(config)
recurse = not config['no-recurse']
return loader.loadByNames(config['tests'], recurse)
def _getLoader(config):
loader = runner.TestLoader()
if config['random']:
randomer = random.Random()
randomer.seed(config['random'])
loader.sorter = lambda x : randomer.random()
print('Running tests shuffled with seed %d\n' % config['random'])
elif config['order']:
_, sorter = _runOrders[config['order']]
loader.sorter = sorter
if not config['until-failure']:
loader.suiteFactory = runner.DestructiveTestSuite
return loader
def _wrappedPdb():
"""
Wrap an instance of C{pdb.Pdb} with readline support and load any .rcs.
"""
dbg = pdb.Pdb()
try:
import readline
except ImportError:
print("readline module not available")
sys.exc_clear()
for path in ('.pdbrc', 'pdbrc'):
if os.path.exists(path):
try:
rcFile = file(path, 'r')
except IOError:
sys.exc_clear()
else:
dbg.rcLines.extend(rcFile.readlines())
return dbg
class _DebuggerNotFound(Exception):
"""
A debugger import failed.
Used to allow translating these errors into usage error messages.
"""
def _makeRunner(config):
"""
Return a trial runner class set up with the parameters extracted from
C{config}.
@return: A trial runner instance.
@rtype: L{runner.TrialRunner} or C{DistTrialRunner} depending on the
configuration.
"""
cls = runner.TrialRunner
args = {'reporterFactory': config['reporter'],
'tracebackFormat': config['tbformat'],
'realTimeErrors': config['rterrors'],
'uncleanWarnings': config['unclean-warnings'],
'logfile': config['logfile'],
'workingDirectory': config['temp-directory']}
if config['dry-run']:
args['mode'] = runner.TrialRunner.DRY_RUN
elif config['jobs']:
from twisted.trial._dist.disttrial import DistTrialRunner
cls = DistTrialRunner
args['workerNumber'] = config['jobs']
args['workerArguments'] = config._getWorkerArguments()
else:
if config['debug']:
args['mode'] = runner.TrialRunner.DEBUG
debugger = config['debugger']
if debugger != 'pdb':
try:
args['debugger'] = reflect.namedAny(debugger)
except reflect.ModuleNotFound:
raise _DebuggerNotFound(
'%r debugger could not be found.' % (debugger,))
else:
args['debugger'] = _wrappedPdb()
args['exitFirst'] = config['exitfirst']
args['profile'] = config['profile']
args['forceGarbageCollection'] = config['force-gc']
return cls(**args)
def run():
if len(sys.argv) == 1:
sys.argv.append("--help")
config = Options()
try:
config.parseOptions()
except usage.error, ue:
raise SystemExit, "%s: %s" % (sys.argv[0], ue)
_initialDebugSetup(config)
try:
trialRunner = _makeRunner(config)
except _DebuggerNotFound as e:
raise SystemExit('%s: %s' % (sys.argv[0], str(e)))
suite = _getSuite(config)
if config['until-failure']:
test_result = trialRunner.runUntilFailure(suite)
else:
test_result = trialRunner.run(suite)
if config.tracer:
sys.settrace(None)
results = config.tracer.results()
results.write_results(show_missing=1, summary=False,
coverdir=config.coverdir().path)
sys.exit(not test_result.wasSuccessful()) | unknown | codeparrot/codeparrot-clean | ||
Text to force this as inline.
<docs-pill href="#pill-row" title="Same Page"/>
<docs-pill href="http://angular.dev" title="External Page"/>
<docs-pill href="./this-other-page" title="Another Page"/>
<docs-pill href="./some-target-page" target="_some_target" title="Some Target Page"/>
<docs-pill href="./download-page-example.ics" title="Download Page" download="file.ics"/>
<docs-pill href="./download-page-example.ics" title="Download Page" download="file.ics" target="_some_target" /> | unknown | github | https://github.com/angular/angular | adev/shared-docs/pipeline/shared/marked/test/docs-pill/docs-pill.md |
/*-------------------------------------------------------------------------
*
* reloptions.c
* Core support for relation options (pg_class.reloptions)
*
* Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
*
* IDENTIFICATION
* src/backend/access/common/reloptions.c
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include <float.h>
#include "access/gist_private.h"
#include "access/hash.h"
#include "access/heaptoast.h"
#include "access/htup_details.h"
#include "access/nbtree.h"
#include "access/reloptions.h"
#include "access/spgist_private.h"
#include "catalog/pg_type.h"
#include "commands/defrem.h"
#include "commands/tablespace.h"
#include "nodes/makefuncs.h"
#include "utils/array.h"
#include "utils/attoptcache.h"
#include "utils/builtins.h"
#include "utils/guc.h"
#include "utils/memutils.h"
#include "utils/rel.h"
/*
* Contents of pg_class.reloptions
*
* To add an option:
*
* (i) decide on a type (bool, ternary, integer, real, enum, string), name,
* default value, upper and lower bounds (if applicable); for strings,
* consider a validation routine.
* (ii) add a record below (or use add_<type>_reloption).
* (iii) add it to the appropriate options struct (perhaps StdRdOptions)
* (iv) add it to the appropriate handling routine (perhaps
* default_reloptions)
* (v) make sure the lock level is set correctly for that operation
* (vi) don't forget to document the option
*
* From the user's point of view, a 'ternary' is exactly like a Boolean,
* so we don't document it separately. On the implementation side, the
* handling code can detect the case where the option has not been set.
*
* The default choice for any new option should be AccessExclusiveLock.
* In some cases the lock level can be reduced from there, but the lock
* level chosen should always conflict with itself to ensure that multiple
* changes aren't lost when we attempt concurrent changes.
* The choice of lock level depends completely upon how that parameter
* is used within the server, not upon how and when you'd like to change it.
* Safety first. Existing choices are documented here, and elsewhere in
* backend code where the parameters are used.
*
* In general, anything that affects the results obtained from a SELECT must be
* protected by AccessExclusiveLock.
*
* Autovacuum related parameters can be set at ShareUpdateExclusiveLock
* since they are only used by the AV procs and don't change anything
* currently executing.
*
* Fillfactor can be set at ShareUpdateExclusiveLock because it applies only to
* subsequent changes made to data blocks, as documented in hio.c
*
* n_distinct options can be set at ShareUpdateExclusiveLock because they
* are only used during ANALYZE, which uses a ShareUpdateExclusiveLock,
* so the ANALYZE will not be affected by in-flight changes. Changing those
* values has no effect until the next ANALYZE, so no need for stronger lock.
*
* Planner-related parameters can be set at ShareUpdateExclusiveLock because
* they only affect planning and not the correctness of the execution. Plans
* cannot be changed in mid-flight, so changes here could not easily result in
* new improved plans in any case. So we allow existing queries to continue
* and existing plans to survive, a small price to pay for allowing better
* plans to be introduced concurrently without interfering with users.
*
* Setting parallel_workers at ShareUpdateExclusiveLock is safe, since it acts
* the same as max_parallel_workers_per_gather which is a USERSET parameter
* that doesn't affect existing plans or queries.
*
* vacuum_truncate can be set at ShareUpdateExclusiveLock because it
* is only used during VACUUM, which uses a ShareUpdateExclusiveLock,
* so the VACUUM will not be affected by in-flight changes. Changing its
* value has no effect until the next VACUUM, so no need for stronger lock.
*/
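
/*
 * Illustrative sketch (not part of the build): a hypothetical boolean
 * option following steps (i)-(iv) above would add a record like this to
 * boolRelOpts, a matching field to StdRdOptions, and a parse-table entry
 * in default_reloptions().  The name "my_option" is invented here purely
 * for illustration:
 *
 *	{
 *		{
 *			"my_option",
 *			"Example description of my_option",
 *			RELOPT_KIND_HEAP,
 *			AccessExclusiveLock
 *		},
 *		false
 *	},
 */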
static relopt_bool boolRelOpts[] =
{
{
{
"autosummarize",
"Enables automatic summarization on this BRIN index",
RELOPT_KIND_BRIN,
AccessExclusiveLock
},
false
},
{
{
"autovacuum_enabled",
"Enables autovacuum in this relation",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
true
},
{
{
"user_catalog_table",
"Declare a table as an additional catalog table, e.g. for the purpose of logical replication",
RELOPT_KIND_HEAP,
AccessExclusiveLock
},
false
},
{
{
"fastupdate",
"Enables \"fast update\" feature for this GIN index",
RELOPT_KIND_GIN,
AccessExclusiveLock
},
true
},
{
{
"security_barrier",
"View acts as a row security barrier",
RELOPT_KIND_VIEW,
AccessExclusiveLock
},
false
},
{
{
"security_invoker",
"Privileges on underlying relations are checked as the invoking user, not the view owner",
RELOPT_KIND_VIEW,
AccessExclusiveLock
},
false
},
{
{
"deduplicate_items",
"Enables \"deduplicate items\" feature for this btree index",
RELOPT_KIND_BTREE,
ShareUpdateExclusiveLock /* since it applies only to later
* inserts */
},
true
},
/* list terminator */
{{NULL}}
};
static relopt_ternary ternaryRelOpts[] =
{
{
{
"vacuum_truncate",
"Enables vacuum to truncate empty pages at the end of this table",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
}
},
/* list terminator */
{
{
NULL
}
}
};
static relopt_int intRelOpts[] =
{
{
{
"fillfactor",
"Packs table pages only to this percentage",
RELOPT_KIND_HEAP,
ShareUpdateExclusiveLock /* since it applies only to later
* inserts */
},
HEAP_DEFAULT_FILLFACTOR, HEAP_MIN_FILLFACTOR, 100
},
{
{
"fillfactor",
"Packs btree index pages only to this percentage",
RELOPT_KIND_BTREE,
ShareUpdateExclusiveLock /* since it applies only to later
* inserts */
},
BTREE_DEFAULT_FILLFACTOR, BTREE_MIN_FILLFACTOR, 100
},
{
{
"fillfactor",
"Packs hash index pages only to this percentage",
RELOPT_KIND_HASH,
ShareUpdateExclusiveLock /* since it applies only to later
* inserts */
},
HASH_DEFAULT_FILLFACTOR, HASH_MIN_FILLFACTOR, 100
},
{
{
"fillfactor",
"Packs gist index pages only to this percentage",
RELOPT_KIND_GIST,
ShareUpdateExclusiveLock /* since it applies only to later
* inserts */
},
GIST_DEFAULT_FILLFACTOR, GIST_MIN_FILLFACTOR, 100
},
{
{
"fillfactor",
"Packs spgist index pages only to this percentage",
RELOPT_KIND_SPGIST,
ShareUpdateExclusiveLock /* since it applies only to later
* inserts */
},
SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
},
{
{
"autovacuum_vacuum_threshold",
"Minimum number of tuple updates or deletes prior to vacuum",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-1, 0, INT_MAX
},
{
{
"autovacuum_vacuum_max_threshold",
"Maximum number of tuple updates or deletes prior to vacuum",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-2, -1, INT_MAX
},
{
{
"autovacuum_vacuum_insert_threshold",
"Minimum number of tuple inserts prior to vacuum, or -1 to disable insert vacuums",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-2, -1, INT_MAX
},
{
{
"autovacuum_analyze_threshold",
"Minimum number of tuple inserts, updates or deletes prior to analyze",
RELOPT_KIND_HEAP,
ShareUpdateExclusiveLock
},
-1, 0, INT_MAX
},
{
{
"autovacuum_vacuum_cost_limit",
"Vacuum cost amount available before napping, for autovacuum",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-1, 1, 10000
},
{
{
"autovacuum_freeze_min_age",
"Minimum age at which VACUUM should freeze a table row, for autovacuum",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-1, 0, 1000000000
},
{
{
"autovacuum_multixact_freeze_min_age",
			"Minimum multixact age at which VACUUM should freeze a row's multixacts, for autovacuum",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-1, 0, 1000000000
},
{
{
"autovacuum_freeze_max_age",
"Age at which to autovacuum a table to prevent transaction ID wraparound",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-1, 100000, 2000000000
},
{
{
"autovacuum_multixact_freeze_max_age",
"Multixact age at which to autovacuum a table to prevent multixact wraparound",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-1, 10000, 2000000000
},
{
{
"autovacuum_freeze_table_age",
"Age at which VACUUM should perform a full table sweep to freeze row versions",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
}, -1, 0, 2000000000
},
{
{
"autovacuum_multixact_freeze_table_age",
"Age of multixact at which VACUUM should perform a full table sweep to freeze row versions",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
}, -1, 0, 2000000000
},
{
{
"log_autovacuum_min_duration",
"Sets the minimum execution time above which vacuum actions by autovacuum will be logged",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-1, -1, INT_MAX
},
{
{
"log_autoanalyze_min_duration",
"Sets the minimum execution time above which analyze actions by autovacuum will be logged",
RELOPT_KIND_HEAP,
ShareUpdateExclusiveLock
},
-1, -1, INT_MAX
},
{
{
"toast_tuple_target",
"Sets the target tuple length at which external columns will be toasted",
RELOPT_KIND_HEAP,
ShareUpdateExclusiveLock
},
TOAST_TUPLE_TARGET, 128, TOAST_TUPLE_TARGET_MAIN
},
{
{
"pages_per_range",
"Number of pages that each page range covers in a BRIN index",
RELOPT_KIND_BRIN,
AccessExclusiveLock
}, 128, 1, 131072
},
{
{
"gin_pending_list_limit",
"Maximum size of the pending list for this GIN index, in kilobytes.",
RELOPT_KIND_GIN,
AccessExclusiveLock
},
-1, 64, MAX_KILOBYTES
},
{
{
"effective_io_concurrency",
"Number of simultaneous requests that can be handled efficiently by the disk subsystem.",
RELOPT_KIND_TABLESPACE,
ShareUpdateExclusiveLock
},
-1, 0, MAX_IO_CONCURRENCY
},
{
{
"maintenance_io_concurrency",
"Number of simultaneous requests that can be handled efficiently by the disk subsystem for maintenance work.",
RELOPT_KIND_TABLESPACE,
ShareUpdateExclusiveLock
},
-1, 0, MAX_IO_CONCURRENCY
},
{
{
"parallel_workers",
"Number of parallel processes that can be used per executor node for this relation.",
RELOPT_KIND_HEAP,
ShareUpdateExclusiveLock
},
-1, 0, 1024
},
/* list terminator */
{{NULL}}
};
static relopt_real realRelOpts[] =
{
{
{
"autovacuum_vacuum_cost_delay",
"Vacuum cost delay in milliseconds, for autovacuum",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-1, 0.0, 100.0
},
{
{
"autovacuum_vacuum_scale_factor",
"Number of tuple updates or deletes prior to vacuum as a fraction of reltuples",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-1, 0.0, 100.0
},
{
{
"autovacuum_vacuum_insert_scale_factor",
"Number of tuple inserts prior to vacuum as a fraction of reltuples",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-1, 0.0, 100.0
},
{
{
"autovacuum_analyze_scale_factor",
"Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples",
RELOPT_KIND_HEAP,
ShareUpdateExclusiveLock
},
-1, 0.0, 100.0
},
{
{
"vacuum_max_eager_freeze_failure_rate",
"Fraction of pages in a relation vacuum can scan and fail to freeze before disabling eager scanning.",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
-1, 0.0, 1.0
},
{
{
"seq_page_cost",
"Sets the planner's estimate of the cost of a sequentially fetched disk page.",
RELOPT_KIND_TABLESPACE,
ShareUpdateExclusiveLock
},
-1, 0.0, DBL_MAX
},
{
{
"random_page_cost",
"Sets the planner's estimate of the cost of a nonsequentially fetched disk page.",
RELOPT_KIND_TABLESPACE,
ShareUpdateExclusiveLock
},
-1, 0.0, DBL_MAX
},
{
{
"n_distinct",
"Sets the planner's estimate of the number of distinct values appearing in a column (excluding child relations).",
RELOPT_KIND_ATTRIBUTE,
ShareUpdateExclusiveLock
},
0, -1.0, DBL_MAX
},
{
{
"n_distinct_inherited",
"Sets the planner's estimate of the number of distinct values appearing in a column (including child relations).",
RELOPT_KIND_ATTRIBUTE,
ShareUpdateExclusiveLock
},
0, -1.0, DBL_MAX
},
{
{
"vacuum_cleanup_index_scale_factor",
"Deprecated B-Tree parameter.",
RELOPT_KIND_BTREE,
ShareUpdateExclusiveLock
},
-1, 0.0, 1e10
},
/* list terminator */
{{NULL}}
};
/* values from StdRdOptIndexCleanup */
static relopt_enum_elt_def StdRdOptIndexCleanupValues[] =
{
{"auto", STDRD_OPTION_VACUUM_INDEX_CLEANUP_AUTO},
{"on", STDRD_OPTION_VACUUM_INDEX_CLEANUP_ON},
{"off", STDRD_OPTION_VACUUM_INDEX_CLEANUP_OFF},
{"true", STDRD_OPTION_VACUUM_INDEX_CLEANUP_ON},
{"false", STDRD_OPTION_VACUUM_INDEX_CLEANUP_OFF},
{"yes", STDRD_OPTION_VACUUM_INDEX_CLEANUP_ON},
{"no", STDRD_OPTION_VACUUM_INDEX_CLEANUP_OFF},
{"1", STDRD_OPTION_VACUUM_INDEX_CLEANUP_ON},
{"0", STDRD_OPTION_VACUUM_INDEX_CLEANUP_OFF},
{(const char *) NULL} /* list terminator */
};
/* values from GistOptBufferingMode */
static relopt_enum_elt_def gistBufferingOptValues[] =
{
{"auto", GIST_OPTION_BUFFERING_AUTO},
{"on", GIST_OPTION_BUFFERING_ON},
{"off", GIST_OPTION_BUFFERING_OFF},
{(const char *) NULL} /* list terminator */
};
/* values from ViewOptCheckOption */
static relopt_enum_elt_def viewCheckOptValues[] =
{
/* no value for NOT_SET */
{"local", VIEW_OPTION_CHECK_OPTION_LOCAL},
{"cascaded", VIEW_OPTION_CHECK_OPTION_CASCADED},
{(const char *) NULL} /* list terminator */
};
static relopt_enum enumRelOpts[] =
{
{
{
"vacuum_index_cleanup",
"Controls index vacuuming and index cleanup",
RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
ShareUpdateExclusiveLock
},
StdRdOptIndexCleanupValues,
STDRD_OPTION_VACUUM_INDEX_CLEANUP_AUTO,
gettext_noop("Valid values are \"on\", \"off\", and \"auto\".")
},
{
{
"buffering",
"Enables buffering build for this GiST index",
RELOPT_KIND_GIST,
AccessExclusiveLock
},
gistBufferingOptValues,
GIST_OPTION_BUFFERING_AUTO,
gettext_noop("Valid values are \"on\", \"off\", and \"auto\".")
},
{
{
"check_option",
"View has WITH CHECK OPTION defined (local or cascaded).",
RELOPT_KIND_VIEW,
AccessExclusiveLock
},
viewCheckOptValues,
VIEW_OPTION_CHECK_OPTION_NOT_SET,
gettext_noop("Valid values are \"local\" and \"cascaded\".")
},
/* list terminator */
{{NULL}}
};
static relopt_string stringRelOpts[] =
{
/* list terminator */
{{NULL}}
};
static relopt_gen **relOpts = NULL;
static bits32 last_assigned_kind = RELOPT_KIND_LAST_DEFAULT;
static int num_custom_options = 0;
static relopt_gen **custom_options = NULL;
static bool need_initialization = true;
static void initialize_reloptions(void);
static void parse_one_reloption(relopt_value *option, char *text_str,
int text_len, bool validate);
/*
* Get the length of a string reloption (either default or the user-defined
* value). This is used for allocation purposes when building a set of
* relation options.
*/
#define GET_STRING_RELOPTION_LEN(option) \
((option).isset ? strlen((option).string_val) : \
((relopt_string *) (option).gen)->default_len)
/*
* initialize_reloptions
* initialization routine, must be called before parsing
*
* Initialize the relOpts array and fill each variable's type and name length.
*/
static void
initialize_reloptions(void)
{
int i;
int j;
j = 0;
for (i = 0; boolRelOpts[i].gen.name; i++)
{
Assert(DoLockModesConflict(boolRelOpts[i].gen.lockmode,
boolRelOpts[i].gen.lockmode));
j++;
}
for (i = 0; ternaryRelOpts[i].gen.name; i++)
{
Assert(DoLockModesConflict(ternaryRelOpts[i].gen.lockmode,
ternaryRelOpts[i].gen.lockmode));
j++;
}
for (i = 0; intRelOpts[i].gen.name; i++)
{
Assert(DoLockModesConflict(intRelOpts[i].gen.lockmode,
intRelOpts[i].gen.lockmode));
j++;
}
for (i = 0; realRelOpts[i].gen.name; i++)
{
Assert(DoLockModesConflict(realRelOpts[i].gen.lockmode,
realRelOpts[i].gen.lockmode));
j++;
}
for (i = 0; enumRelOpts[i].gen.name; i++)
{
Assert(DoLockModesConflict(enumRelOpts[i].gen.lockmode,
enumRelOpts[i].gen.lockmode));
j++;
}
for (i = 0; stringRelOpts[i].gen.name; i++)
{
Assert(DoLockModesConflict(stringRelOpts[i].gen.lockmode,
stringRelOpts[i].gen.lockmode));
j++;
}
j += num_custom_options;
if (relOpts)
pfree(relOpts);
relOpts = MemoryContextAlloc(TopMemoryContext,
(j + 1) * sizeof(relopt_gen *));
j = 0;
for (i = 0; boolRelOpts[i].gen.name; i++)
{
relOpts[j] = &boolRelOpts[i].gen;
relOpts[j]->type = RELOPT_TYPE_BOOL;
relOpts[j]->namelen = strlen(relOpts[j]->name);
j++;
}
for (i = 0; ternaryRelOpts[i].gen.name; i++)
{
relOpts[j] = &ternaryRelOpts[i].gen;
relOpts[j]->type = RELOPT_TYPE_TERNARY;
relOpts[j]->namelen = strlen(relOpts[j]->name);
j++;
}
for (i = 0; intRelOpts[i].gen.name; i++)
{
relOpts[j] = &intRelOpts[i].gen;
relOpts[j]->type = RELOPT_TYPE_INT;
relOpts[j]->namelen = strlen(relOpts[j]->name);
j++;
}
for (i = 0; realRelOpts[i].gen.name; i++)
{
relOpts[j] = &realRelOpts[i].gen;
relOpts[j]->type = RELOPT_TYPE_REAL;
relOpts[j]->namelen = strlen(relOpts[j]->name);
j++;
}
for (i = 0; enumRelOpts[i].gen.name; i++)
{
relOpts[j] = &enumRelOpts[i].gen;
relOpts[j]->type = RELOPT_TYPE_ENUM;
relOpts[j]->namelen = strlen(relOpts[j]->name);
j++;
}
for (i = 0; stringRelOpts[i].gen.name; i++)
{
relOpts[j] = &stringRelOpts[i].gen;
relOpts[j]->type = RELOPT_TYPE_STRING;
relOpts[j]->namelen = strlen(relOpts[j]->name);
j++;
}
for (i = 0; i < num_custom_options; i++)
{
relOpts[j] = custom_options[i];
j++;
}
/* add a list terminator */
relOpts[j] = NULL;
/* flag the work is complete */
need_initialization = false;
}
/*
* add_reloption_kind
* Create a new relopt_kind value, to be used in custom reloptions by
* user-defined AMs.
*/
relopt_kind
add_reloption_kind(void)
{
/* don't hand out the last bit so that the enum's behavior is portable */
if (last_assigned_kind >= RELOPT_KIND_MAX)
ereport(ERROR,
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
errmsg("user-defined relation parameter types limit exceeded")));
last_assigned_kind <<= 1;
return (relopt_kind) last_assigned_kind;
}
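
/*
 * Illustrative sketch (assumptions flagged): an extension access method
 * would typically call add_reloption_kind() once from its _PG_init() and
 * then register options against the returned kind.  The option name and
 * values below are invented for illustration:
 *
 *	static relopt_kind my_relopt_kind;
 *
 *	void
 *	_PG_init(void)
 *	{
 *		my_relopt_kind = add_reloption_kind();
 *		add_int_reloption(my_relopt_kind, "my_fanout",
 *						  "Example integer option",
 *						  16, 2, 1024, AccessExclusiveLock);
 *	}
 */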
/*
* add_reloption
* Add an already-created custom reloption to the list, and recompute the
* main parser table.
*/
static void
add_reloption(relopt_gen *newoption)
{
static int max_custom_options = 0;
if (num_custom_options >= max_custom_options)
{
MemoryContext oldcxt;
oldcxt = MemoryContextSwitchTo(TopMemoryContext);
if (max_custom_options == 0)
{
max_custom_options = 8;
custom_options = palloc(max_custom_options * sizeof(relopt_gen *));
}
else
{
max_custom_options *= 2;
custom_options = repalloc(custom_options,
max_custom_options * sizeof(relopt_gen *));
}
MemoryContextSwitchTo(oldcxt);
}
custom_options[num_custom_options++] = newoption;
need_initialization = true;
}
/*
* init_local_reloptions
 *		Initialize local reloptions that will be parsed into a bytea
 *		structure of 'relopt_struct_size'.
*/
void
init_local_reloptions(local_relopts *relopts, Size relopt_struct_size)
{
relopts->options = NIL;
relopts->validators = NIL;
relopts->relopt_struct_size = relopt_struct_size;
}
/*
* register_reloptions_validator
* Register custom validation callback that will be called at the end of
* build_local_reloptions().
*/
void
register_reloptions_validator(local_relopts *relopts, relopts_validator validator)
{
relopts->validators = lappend(relopts->validators, validator);
}
/*
* add_local_reloption
* Add an already-created custom reloption to the local list.
*/
static void
add_local_reloption(local_relopts *relopts, relopt_gen *newoption, int offset)
{
local_relopt *opt = palloc_object(local_relopt);
Assert(offset < relopts->relopt_struct_size);
opt->option = newoption;
opt->offset = offset;
relopts->options = lappend(relopts->options, opt);
}
/*
* allocate_reloption
* Allocate a new reloption and initialize the type-agnostic fields
* (for types other than string)
*/
static relopt_gen *
allocate_reloption(bits32 kinds, int type, const char *name, const char *desc,
LOCKMODE lockmode)
{
MemoryContext oldcxt;
size_t size;
relopt_gen *newoption;
if (kinds != RELOPT_KIND_LOCAL)
oldcxt = MemoryContextSwitchTo(TopMemoryContext);
else
oldcxt = NULL;
switch (type)
{
case RELOPT_TYPE_BOOL:
size = sizeof(relopt_bool);
break;
case RELOPT_TYPE_TERNARY:
size = sizeof(relopt_ternary);
break;
case RELOPT_TYPE_INT:
size = sizeof(relopt_int);
break;
case RELOPT_TYPE_REAL:
size = sizeof(relopt_real);
break;
case RELOPT_TYPE_ENUM:
size = sizeof(relopt_enum);
break;
case RELOPT_TYPE_STRING:
size = sizeof(relopt_string);
break;
default:
elog(ERROR, "unsupported reloption type %d", type);
return NULL; /* keep compiler quiet */
}
newoption = palloc(size);
newoption->name = pstrdup(name);
if (desc)
newoption->desc = pstrdup(desc);
else
newoption->desc = NULL;
newoption->kinds = kinds;
newoption->namelen = strlen(name);
newoption->type = type;
newoption->lockmode = lockmode;
if (oldcxt != NULL)
MemoryContextSwitchTo(oldcxt);
return newoption;
}
/*
* init_bool_reloption
* Allocate and initialize a new boolean reloption
*/
static relopt_bool *
init_bool_reloption(bits32 kinds, const char *name, const char *desc,
bool default_val, LOCKMODE lockmode)
{
relopt_bool *newoption;
newoption = (relopt_bool *) allocate_reloption(kinds, RELOPT_TYPE_BOOL,
name, desc, lockmode);
newoption->default_val = default_val;
return newoption;
}
/*
* add_bool_reloption
* Add a new boolean reloption
*/
void
add_bool_reloption(bits32 kinds, const char *name, const char *desc,
bool default_val, LOCKMODE lockmode)
{
relopt_bool *newoption = init_bool_reloption(kinds, name, desc,
default_val, lockmode);
add_reloption((relopt_gen *) newoption);
}
/*
* add_local_bool_reloption
* Add a new boolean local reloption
*
* 'offset' is offset of bool-typed field.
*/
void
add_local_bool_reloption(local_relopts *relopts, const char *name,
const char *desc, bool default_val, int offset)
{
relopt_bool *newoption = init_bool_reloption(RELOPT_KIND_LOCAL,
name, desc,
default_val, 0);
add_local_reloption(relopts, (relopt_gen *) newoption, offset);
}
/*
* init_ternary_reloption
* Allocate and initialize a new ternary reloption
*/
static relopt_ternary *
init_ternary_reloption(bits32 kinds, const char *name, const char *desc,
LOCKMODE lockmode)
{
relopt_ternary *newoption;
newoption = (relopt_ternary *)
allocate_reloption(kinds, RELOPT_TYPE_TERNARY, name, desc, lockmode);
return newoption;
}
/*
* add_ternary_reloption
* Add a new ternary reloption
*/
void
add_ternary_reloption(bits32 kinds, const char *name, const char *desc,
LOCKMODE lockmode)
{
relopt_ternary *newoption;
newoption =
init_ternary_reloption(kinds, name, desc, lockmode);
add_reloption((relopt_gen *) newoption);
}
/*
* add_local_ternary_reloption
* Add a new ternary local reloption
*
* 'offset' is offset of ternary-typed field.
*/
void
add_local_ternary_reloption(local_relopts *relopts, const char *name,
const char *desc, int offset)
{
relopt_ternary *newoption;
newoption =
init_ternary_reloption(RELOPT_KIND_LOCAL, name, desc, 0);
add_local_reloption(relopts, (relopt_gen *) newoption, offset);
}
/*
* init_int_reloption
* Allocate and initialize a new integer reloption
*/
static relopt_int *
init_int_reloption(bits32 kinds, const char *name, const char *desc,
int default_val, int min_val, int max_val,
LOCKMODE lockmode)
{
relopt_int *newoption;
newoption = (relopt_int *) allocate_reloption(kinds, RELOPT_TYPE_INT,
name, desc, lockmode);
newoption->default_val = default_val;
newoption->min = min_val;
newoption->max = max_val;
return newoption;
}
/*
* add_int_reloption
* Add a new integer reloption
*/
void
add_int_reloption(bits32 kinds, const char *name, const char *desc, int default_val,
int min_val, int max_val, LOCKMODE lockmode)
{
relopt_int *newoption = init_int_reloption(kinds, name, desc,
default_val, min_val,
max_val, lockmode);
add_reloption((relopt_gen *) newoption);
}
/*
* add_local_int_reloption
* Add a new local integer reloption
*
* 'offset' is offset of int-typed field.
*/
void
add_local_int_reloption(local_relopts *relopts, const char *name,
const char *desc, int default_val, int min_val,
int max_val, int offset)
{
relopt_int *newoption = init_int_reloption(RELOPT_KIND_LOCAL,
name, desc, default_val,
min_val, max_val, 0);
add_local_reloption(relopts, (relopt_gen *) newoption, offset);
}
/*
* init_real_reloption
* Allocate and initialize a new real reloption
*/
static relopt_real *
init_real_reloption(bits32 kinds, const char *name, const char *desc,
double default_val, double min_val, double max_val,
LOCKMODE lockmode)
{
relopt_real *newoption;
newoption = (relopt_real *) allocate_reloption(kinds, RELOPT_TYPE_REAL,
name, desc, lockmode);
newoption->default_val = default_val;
newoption->min = min_val;
newoption->max = max_val;
return newoption;
}
/*
* add_real_reloption
* Add a new float reloption
*/
void
add_real_reloption(bits32 kinds, const char *name, const char *desc,
double default_val, double min_val, double max_val,
LOCKMODE lockmode)
{
relopt_real *newoption = init_real_reloption(kinds, name, desc,
default_val, min_val,
max_val, lockmode);
add_reloption((relopt_gen *) newoption);
}
/*
* add_local_real_reloption
* Add a new local float reloption
*
* 'offset' is offset of double-typed field.
*/
void
add_local_real_reloption(local_relopts *relopts, const char *name,
const char *desc, double default_val,
double min_val, double max_val, int offset)
{
relopt_real *newoption = init_real_reloption(RELOPT_KIND_LOCAL,
name, desc,
default_val, min_val,
max_val, 0);
add_local_reloption(relopts, (relopt_gen *) newoption, offset);
}
/*
* init_enum_reloption
* Allocate and initialize a new enum reloption
*/
static relopt_enum *
init_enum_reloption(bits32 kinds, const char *name, const char *desc,
relopt_enum_elt_def *members, int default_val,
const char *detailmsg, LOCKMODE lockmode)
{
relopt_enum *newoption;
newoption = (relopt_enum *) allocate_reloption(kinds, RELOPT_TYPE_ENUM,
name, desc, lockmode);
newoption->members = members;
newoption->default_val = default_val;
newoption->detailmsg = detailmsg;
return newoption;
}
/*
* add_enum_reloption
* Add a new enum reloption
*
* The members array must have a terminating NULL entry.
*
* The detailmsg is shown when unsupported values are passed, and has this
* form: "Valid values are \"foo\", \"bar\", and \"baz\"."
*
* The members array and detailmsg are not copied -- caller must ensure that
* they are valid throughout the life of the process.
*/
void
add_enum_reloption(bits32 kinds, const char *name, const char *desc,
relopt_enum_elt_def *members, int default_val,
const char *detailmsg, LOCKMODE lockmode)
{
relopt_enum *newoption = init_enum_reloption(kinds, name, desc,
members, default_val,
detailmsg, lockmode);
add_reloption((relopt_gen *) newoption);
}
/*
* add_local_enum_reloption
* Add a new local enum reloption
*
* 'offset' is offset of int-typed field.
*/
void
add_local_enum_reloption(local_relopts *relopts, const char *name,
const char *desc, relopt_enum_elt_def *members,
int default_val, const char *detailmsg, int offset)
{
relopt_enum *newoption = init_enum_reloption(RELOPT_KIND_LOCAL,
name, desc,
members, default_val,
detailmsg, 0);
add_local_reloption(relopts, (relopt_gen *) newoption, offset);
}
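As the comments above note, an enum reloption's members array must be NULL-terminated, and lookup is case-insensitive. The following standalone sketch (not PostgreSQL code; the type and function names are hypothetical stand-ins for `relopt_enum_elt_def` and the `RELOPT_TYPE_ENUM` lookup loop) shows the shape of that scan:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Mirrors relopt_enum_elt_def: a string spelling plus its numeric symbol. */
typedef struct
{
    const char *string_val;     /* NULL marks the end of the array */
    int         symbol_val;
} EnumElt;

/* Case-insensitive string equality (stand-in for pg_strcasecmp). */
static int
ci_equal(const char *a, const char *b)
{
    for (; *a && *b; a++, b++)
        if (tolower((unsigned char) *a) != tolower((unsigned char) *b))
            return 0;
    return *a == *b;            /* both must end at the same time */
}

/* Scan a NULL-terminated members array, as RELOPT_TYPE_ENUM parsing does.
 * Returns 1 and sets *out on a match, 0 otherwise. */
static int
lookup_enum(const EnumElt *members, const char *value, int *out)
{
    const EnumElt *elt;

    for (elt = members; elt->string_val; elt++)
    {
        if (ci_equal(value, elt->string_val))
        {
            *out = elt->symbol_val;
            return 1;
        }
    }
    return 0;
}
```

A caller would terminate the array with a `{NULL, 0}` entry, exactly as `add_enum_reloption` requires of its `members` argument.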
/*
* init_string_reloption
* Allocate and initialize a new string reloption
*/
static relopt_string *
init_string_reloption(bits32 kinds, const char *name, const char *desc,
const char *default_val,
validate_string_relopt validator,
fill_string_relopt filler,
LOCKMODE lockmode)
{
relopt_string *newoption;
/* make sure the validator/default combination is sane */
if (validator)
(validator) (default_val);
newoption = (relopt_string *) allocate_reloption(kinds, RELOPT_TYPE_STRING,
name, desc, lockmode);
newoption->validate_cb = validator;
newoption->fill_cb = filler;
if (default_val)
{
if (kinds == RELOPT_KIND_LOCAL)
newoption->default_val = strdup(default_val);
else
newoption->default_val = MemoryContextStrdup(TopMemoryContext, default_val);
newoption->default_len = strlen(default_val);
newoption->default_isnull = false;
}
else
{
newoption->default_val = "";
newoption->default_len = 0;
newoption->default_isnull = true;
}
return newoption;
}
/*
* add_string_reloption
* Add a new string reloption
*
* "validator" is an optional function pointer that can be used to test the
* validity of the values. It must elog(ERROR) when the argument string is
* not acceptable for the variable. Note that the default value must pass
* the validation.
*/
void
add_string_reloption(bits32 kinds, const char *name, const char *desc,
const char *default_val, validate_string_relopt validator,
LOCKMODE lockmode)
{
relopt_string *newoption = init_string_reloption(kinds, name, desc,
default_val,
validator, NULL,
lockmode);
add_reloption((relopt_gen *) newoption);
}
/*
* add_local_string_reloption
* Add a new local string reloption
*
* 'offset' is offset of int-typed field that will store offset of string value
* in the resulting bytea structure.
*/
void
add_local_string_reloption(local_relopts *relopts, const char *name,
const char *desc, const char *default_val,
validate_string_relopt validator,
fill_string_relopt filler, int offset)
{
relopt_string *newoption = init_string_reloption(RELOPT_KIND_LOCAL,
name, desc,
default_val,
validator, filler,
0);
add_local_reloption(relopts, (relopt_gen *) newoption, offset);
}
/*
* Transform a relation options list (list of DefElem) into the text array
* format that is kept in pg_class.reloptions, including only those options
* that are in the passed namespace. The output values do not include the
* namespace.
*
* This is used for three cases: CREATE TABLE/INDEX, ALTER TABLE SET, and
* ALTER TABLE RESET. In the ALTER cases, oldOptions is the existing
* reloptions value (possibly NULL), and we replace or remove entries
* as needed.
*
* If acceptOidsOff is true, then we allow oids = false, but throw error when
* on. This is solely needed for backwards compatibility.
*
* Note that this is not responsible for determining whether the options
* are valid, but it does check that namespaces for all the options given are
* listed in validnsps. The NULL namespace is always valid and need not be
* explicitly listed. Passing a NULL pointer means that only the NULL
* namespace is valid.
*
* Both oldOptions and the result are text arrays (or NULL for "default"),
* but we declare them as Datums to avoid including array.h in reloptions.h.
*/
Datum
transformRelOptions(Datum oldOptions, List *defList, const char *nameSpace,
const char *const validnsps[], bool acceptOidsOff, bool isReset)
{
Datum result;
ArrayBuildState *astate;
ListCell *cell;
/* no change if empty list */
if (defList == NIL)
return oldOptions;
/* We build new array using accumArrayResult */
astate = NULL;
/* Copy any oldOptions that aren't to be replaced */
if (DatumGetPointer(oldOptions) != NULL)
{
ArrayType *array = DatumGetArrayTypeP(oldOptions);
Datum *oldoptions;
int noldoptions;
int i;
deconstruct_array_builtin(array, TEXTOID, &oldoptions, NULL, &noldoptions);
for (i = 0; i < noldoptions; i++)
{
char *text_str = VARDATA(DatumGetPointer(oldoptions[i]));
int text_len = VARSIZE(DatumGetPointer(oldoptions[i])) - VARHDRSZ;
/* Search for a match in defList */
foreach(cell, defList)
{
DefElem *def = (DefElem *) lfirst(cell);
int kw_len;
/* ignore if not in the same namespace */
if (nameSpace == NULL)
{
if (def->defnamespace != NULL)
continue;
}
else if (def->defnamespace == NULL)
continue;
else if (strcmp(def->defnamespace, nameSpace) != 0)
continue;
kw_len = strlen(def->defname);
if (text_len > kw_len && text_str[kw_len] == '=' &&
strncmp(text_str, def->defname, kw_len) == 0)
break;
}
if (!cell)
{
/* No match, so keep old option */
astate = accumArrayResult(astate, oldoptions[i],
false, TEXTOID,
CurrentMemoryContext);
}
}
}
/*
* If CREATE/SET, add new options to array; if RESET, just check that the
* user didn't say RESET (option=val). (Must do this because the grammar
* doesn't enforce it.)
*/
foreach(cell, defList)
{
DefElem *def = (DefElem *) lfirst(cell);
if (isReset)
{
if (def->arg != NULL)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("RESET must not include values for parameters")));
}
else
{
const char *name;
const char *value;
text *t;
Size len;
/*
* Error out if the namespace is not valid. A NULL namespace is
* always valid.
*/
if (def->defnamespace != NULL)
{
bool valid = false;
int i;
if (validnsps)
{
for (i = 0; validnsps[i]; i++)
{
if (strcmp(def->defnamespace, validnsps[i]) == 0)
{
valid = true;
break;
}
}
}
if (!valid)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("unrecognized parameter namespace \"%s\"",
def->defnamespace)));
}
/* ignore if not in the same namespace */
if (nameSpace == NULL)
{
if (def->defnamespace != NULL)
continue;
}
else if (def->defnamespace == NULL)
continue;
else if (strcmp(def->defnamespace, nameSpace) != 0)
continue;
/*
* Flatten the DefElem into a text string like "name=arg". If we
* have just "name", assume "name=true" is meant. Note: the
* namespace is not output.
*/
name = def->defname;
if (def->arg != NULL)
value = defGetString(def);
else
value = "true";
/* Insist that name not contain "=", else "a=b=c" is ambiguous */
if (strchr(name, '=') != NULL)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("invalid option name \"%s\": must not contain \"=\"",
name)));
/*
* This is not a great place for this test, but there's no other
* convenient place to filter the option out. As WITH (oids =
* false) will be removed someday, this seems like an acceptable
* amount of ugly.
*/
if (acceptOidsOff && def->defnamespace == NULL &&
strcmp(name, "oids") == 0)
{
if (defGetBoolean(def))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("tables declared WITH OIDS are not supported")));
/* skip over option, reloptions machinery doesn't know it */
continue;
}
len = VARHDRSZ + strlen(name) + 1 + strlen(value);
/* +1 leaves room for sprintf's trailing null */
t = (text *) palloc(len + 1);
SET_VARSIZE(t, len);
sprintf(VARDATA(t), "%s=%s", name, value);
astate = accumArrayResult(astate, PointerGetDatum(t),
false, TEXTOID,
CurrentMemoryContext);
}
}
if (astate)
result = makeArrayResult(astate, CurrentMemoryContext);
else
result = (Datum) 0;
return result;
}
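transformRelOptions flattens each DefElem into a `"name=value"` text entry (treating bare `name` as `name=true`), and decides whether an old entry is being replaced by testing that the stored string starts with the keyword followed immediately by `=`. A standalone sketch of those two pieces of string handling (hypothetical helper names, not PostgreSQL API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Flatten an option into "name=value", as transformRelOptions stores it;
 * a missing value is spelled "true". */
static void
flatten_option(const char *name, const char *value, char *buf, size_t buflen)
{
    snprintf(buf, buflen, "%s=%s", name, value ? value : "true");
}

/* Does a stored "name=value" string carry the given keyword?  This mirrors
 * the prefix test used when deciding whether an old option is replaced:
 * the text must be longer than the keyword, have '=' right after it, and
 * match the keyword exactly up to that point. */
static int
option_matches(const char *text_str, const char *kw)
{
    size_t      text_len = strlen(text_str);
    size_t      kw_len = strlen(kw);

    return text_len > kw_len && text_str[kw_len] == '=' &&
        strncmp(text_str, kw, kw_len) == 0;
}
```

Note that the prefix test rejects both `"fillfactor2=70"` (wrong character after the keyword) and `"fill=70"` (keyword longer than the stored name), so only an exact name match counts as a replacement.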
/*
* Convert the text-array format of reloptions into a List of DefElem.
* This is the inverse of transformRelOptions().
*/
List *
untransformRelOptions(Datum options)
{
List *result = NIL;
ArrayType *array;
Datum *optiondatums;
int noptions;
int i;
/* Nothing to do if no options */
if (DatumGetPointer(options) == NULL)
return result;
array = DatumGetArrayTypeP(options);
deconstruct_array_builtin(array, TEXTOID, &optiondatums, NULL, &noptions);
for (i = 0; i < noptions; i++)
{
char *s;
char *p;
Node *val = NULL;
s = TextDatumGetCString(optiondatums[i]);
p = strchr(s, '=');
if (p)
{
*p++ = '\0';
val = (Node *) makeString(p);
}
result = lappend(result, makeDefElem(s, val, -1));
}
return result;
}
/*
* Extract and parse reloptions from a pg_class tuple.
*
* This is a low-level routine, expected to be used by relcache code and
* callers that do not have a table's relcache entry (e.g. autovacuum). For
* other uses, consider grabbing the rd_options pointer from the relcache entry
* instead.
*
* tupdesc is pg_class' tuple descriptor. amoptions is a pointer to the index
* AM's options parser function in the case of a tuple corresponding to an
* index, or NULL otherwise.
*/
bytea *
extractRelOptions(HeapTuple tuple, TupleDesc tupdesc,
amoptions_function amoptions)
{
bytea *options;
bool isnull;
Datum datum;
Form_pg_class classForm;
datum = fastgetattr(tuple,
Anum_pg_class_reloptions,
tupdesc,
&isnull);
if (isnull)
return NULL;
classForm = (Form_pg_class) GETSTRUCT(tuple);
/* Parse into appropriate format; don't error out here */
switch (classForm->relkind)
{
case RELKIND_RELATION:
case RELKIND_TOASTVALUE:
case RELKIND_MATVIEW:
options = heap_reloptions(classForm->relkind, datum, false);
break;
case RELKIND_PARTITIONED_TABLE:
options = partitioned_table_reloptions(datum, false);
break;
case RELKIND_VIEW:
options = view_reloptions(datum, false);
break;
case RELKIND_INDEX:
case RELKIND_PARTITIONED_INDEX:
options = index_reloptions(amoptions, datum, false);
break;
case RELKIND_FOREIGN_TABLE:
options = NULL;
break;
default:
Assert(false); /* can't get here */
options = NULL; /* keep compiler quiet */
break;
}
return options;
}
static void
parseRelOptionsInternal(Datum options, bool validate,
relopt_value *reloptions, int numoptions)
{
ArrayType *array = DatumGetArrayTypeP(options);
Datum *optiondatums;
int noptions;
int i;
deconstruct_array_builtin(array, TEXTOID, &optiondatums, NULL, &noptions);
for (i = 0; i < noptions; i++)
{
char *text_str = VARDATA(DatumGetPointer(optiondatums[i]));
int text_len = VARSIZE(DatumGetPointer(optiondatums[i])) - VARHDRSZ;
int j;
/* Search for a match in reloptions */
for (j = 0; j < numoptions; j++)
{
int kw_len = reloptions[j].gen->namelen;
if (text_len > kw_len && text_str[kw_len] == '=' &&
strncmp(text_str, reloptions[j].gen->name, kw_len) == 0)
{
parse_one_reloption(&reloptions[j], text_str, text_len,
validate);
break;
}
}
if (j >= numoptions && validate)
{
char *s;
char *p;
s = TextDatumGetCString(optiondatums[i]);
p = strchr(s, '=');
if (p)
*p = '\0';
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("unrecognized parameter \"%s\"", s)));
}
}
/* It's worth avoiding memory leaks in this function */
pfree(optiondatums);
if (((void *) array) != DatumGetPointer(options))
pfree(array);
}
/*
* Interpret reloptions that are given in text-array format.
*
* options is a reloption text array as constructed by transformRelOptions.
* kind specifies the family of options to be processed.
*
* The return value is a relopt_value * array on which the options actually
* set in the options array are marked with isset=true. The length of this
* array is returned in *numrelopts. Options not set are also present in the
* array; this is so that the caller can easily locate the default values.
*
* If there are no options of the given kind, numrelopts is set to 0 and NULL
* is returned (unless options are illegally supplied despite none being
* defined, in which case an error occurs).
*
* Note: values of type int, bool and real are allocated as part of the
* returned array. Values of type string are allocated separately and must
* be freed by the caller.
*/
static relopt_value *
parseRelOptions(Datum options, bool validate, relopt_kind kind,
int *numrelopts)
{
relopt_value *reloptions = NULL;
int numoptions = 0;
int i;
int j;
if (need_initialization)
initialize_reloptions();
/* Build a list of expected options, based on kind */
for (i = 0; relOpts[i]; i++)
if (relOpts[i]->kinds & kind)
numoptions++;
if (numoptions > 0)
{
reloptions = palloc(numoptions * sizeof(relopt_value));
for (i = 0, j = 0; relOpts[i]; i++)
{
if (relOpts[i]->kinds & kind)
{
reloptions[j].gen = relOpts[i];
reloptions[j].isset = false;
j++;
}
}
}
/* Done if no options */
if (DatumGetPointer(options) != NULL)
parseRelOptionsInternal(options, validate, reloptions, numoptions);
*numrelopts = numoptions;
return reloptions;
}
/* Parse local unregistered options. */
static relopt_value *
parseLocalRelOptions(local_relopts *relopts, Datum options, bool validate)
{
int nopts = list_length(relopts->options);
relopt_value *values = palloc_array(relopt_value, nopts);
ListCell *lc;
int i = 0;
foreach(lc, relopts->options)
{
local_relopt *opt = lfirst(lc);
values[i].gen = opt->option;
values[i].isset = false;
i++;
}
if (options != (Datum) 0)
parseRelOptionsInternal(options, validate, values, nopts);
return values;
}
/*
* Subroutine for parseRelOptions, to parse and validate a single option's
* value
*/
static void
parse_one_reloption(relopt_value *option, char *text_str, int text_len,
bool validate)
{
char *value;
int value_len;
bool parsed;
bool nofree = false;
if (option->isset && validate)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("parameter \"%s\" specified more than once",
option->gen->name)));
value_len = text_len - option->gen->namelen - 1;
value = (char *) palloc(value_len + 1);
memcpy(value, text_str + option->gen->namelen + 1, value_len);
value[value_len] = '\0';
switch (option->gen->type)
{
case RELOPT_TYPE_BOOL:
{
parsed = parse_bool(value, &option->bool_val);
if (validate && !parsed)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("invalid value for boolean option \"%s\": %s",
option->gen->name, value)));
}
break;
case RELOPT_TYPE_TERNARY:
{
bool b;
parsed = parse_bool(value, &b);
option->ternary_val = b ? PG_TERNARY_TRUE :
PG_TERNARY_FALSE;
if (validate && !parsed)
ereport(ERROR,
errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("invalid value for boolean option \"%s\": %s",
option->gen->name, value));
}
break;
case RELOPT_TYPE_INT:
{
relopt_int *optint = (relopt_int *) option->gen;
parsed = parse_int(value, &option->int_val, 0, NULL);
if (validate && !parsed)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("invalid value for integer option \"%s\": %s",
option->gen->name, value)));
if (validate && (option->int_val < optint->min ||
option->int_val > optint->max))
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("value %s out of bounds for option \"%s\"",
value, option->gen->name),
errdetail("Valid values are between \"%d\" and \"%d\".",
optint->min, optint->max)));
}
break;
case RELOPT_TYPE_REAL:
{
relopt_real *optreal = (relopt_real *) option->gen;
parsed = parse_real(value, &option->real_val, 0, NULL);
if (validate && !parsed)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("invalid value for floating point option \"%s\": %s",
option->gen->name, value)));
if (validate && (option->real_val < optreal->min ||
option->real_val > optreal->max))
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("value %s out of bounds for option \"%s\"",
value, option->gen->name),
errdetail("Valid values are between \"%f\" and \"%f\".",
optreal->min, optreal->max)));
}
break;
case RELOPT_TYPE_ENUM:
{
relopt_enum *optenum = (relopt_enum *) option->gen;
relopt_enum_elt_def *elt;
parsed = false;
for (elt = optenum->members; elt->string_val; elt++)
{
if (pg_strcasecmp(value, elt->string_val) == 0)
{
option->enum_val = elt->symbol_val;
parsed = true;
break;
}
}
if (validate && !parsed)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("invalid value for enum option \"%s\": %s",
option->gen->name, value),
optenum->detailmsg ?
errdetail_internal("%s", _(optenum->detailmsg)) : 0));
/*
* If value is not among the allowed string values, but we are
* not asked to validate, just use the default numeric value.
*/
if (!parsed)
option->enum_val = optenum->default_val;
}
break;
case RELOPT_TYPE_STRING:
{
relopt_string *optstring = (relopt_string *) option->gen;
option->string_val = value;
nofree = true;
if (validate && optstring->validate_cb)
(optstring->validate_cb) (value);
parsed = true;
}
break;
default:
elog(ERROR, "unsupported reloption type %d", option->gen->type);
parsed = true; /* quiet compiler */
break;
}
if (parsed)
option->isset = true;
if (!nofree)
pfree(value);
}
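The `RELOPT_TYPE_INT` branch above parses the value and then separately range-checks it against the option's declared min/max. A minimal standalone sketch of that two-step validation (using `strtol` in place of PostgreSQL's `parse_int`, and returning 0 instead of raising `ereport` errors — the function name is hypothetical):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Parse an integer option value and range-check it, in the spirit of the
 * RELOPT_TYPE_INT branch of parse_one_reloption.  Returns 1 and sets *out
 * on success; 0 for a malformed or out-of-bounds value. */
static int
parse_int_opt(const char *value, int min, int max, int *out)
{
    char       *end;
    long        v;

    errno = 0;
    v = strtol(value, &end, 10);
    if (end == value || *end != '\0' || errno == ERANGE)
        return 0;               /* not a well-formed integer */
    if (v < min || v > max)
        return 0;               /* outside the option's declared bounds */
    *out = (int) v;
    return 1;
}
```

As in the real code, parse failure and range failure are distinct conditions; the real code distinguishes them in its error messages ("invalid value" vs. "out of bounds").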
/*
* Given the result from parseRelOptions, allocate a struct that's of the
* specified base size plus any extra space that's needed for string variables.
*
* "base" should be sizeof(struct) of the reloptions struct (StdRdOptions or
* equivalent).
*/
static void *
allocateReloptStruct(Size base, relopt_value *options, int numoptions)
{
Size size = base;
int i;
for (i = 0; i < numoptions; i++)
{
relopt_value *optval = &options[i];
if (optval->gen->type == RELOPT_TYPE_STRING)
{
relopt_string *optstr = (relopt_string *) optval->gen;
if (optstr->fill_cb)
{
const char *val = optval->isset ? optval->string_val :
optstr->default_isnull ? NULL : optstr->default_val;
size += optstr->fill_cb(val, NULL);
}
else
size += GET_STRING_RELOPTION_LEN(*optval) + 1;
}
}
return palloc0(size);
}
/*
* Given the result of parseRelOptions and a parsing table, fill in the
* struct (previously allocated with allocateReloptStruct) with the parsed
* values.
*
* rdopts is the pointer to the allocated struct to be filled.
* basesize is the sizeof(struct) that was passed to allocateReloptStruct.
* options, of length numoptions, is parseRelOptions' output.
* elems, of length numelems, is the table describing the allowed options.
* When validate is true, it is expected that all options appear in elems.
*/
static void
fillRelOptions(void *rdopts, Size basesize,
relopt_value *options, int numoptions,
bool validate,
const relopt_parse_elt *elems, int numelems)
{
int i;
int offset = basesize;
for (i = 0; i < numoptions; i++)
{
int j;
bool found = false;
for (j = 0; j < numelems; j++)
{
if (strcmp(options[i].gen->name, elems[j].optname) == 0)
{
relopt_string *optstring;
char *itempos = ((char *) rdopts) + elems[j].offset;
char *string_val;
switch (options[i].gen->type)
{
case RELOPT_TYPE_BOOL:
*(bool *) itempos = options[i].isset ?
options[i].bool_val :
((relopt_bool *) options[i].gen)->default_val;
break;
case RELOPT_TYPE_TERNARY:
*(pg_ternary *) itempos = options[i].isset ?
options[i].ternary_val : PG_TERNARY_UNSET;
break;
case RELOPT_TYPE_INT:
*(int *) itempos = options[i].isset ?
options[i].int_val :
((relopt_int *) options[i].gen)->default_val;
break;
case RELOPT_TYPE_REAL:
*(double *) itempos = options[i].isset ?
options[i].real_val :
((relopt_real *) options[i].gen)->default_val;
break;
case RELOPT_TYPE_ENUM:
*(int *) itempos = options[i].isset ?
options[i].enum_val :
((relopt_enum *) options[i].gen)->default_val;
break;
case RELOPT_TYPE_STRING:
optstring = (relopt_string *) options[i].gen;
if (options[i].isset)
string_val = options[i].string_val;
else if (!optstring->default_isnull)
string_val = optstring->default_val;
else
string_val = NULL;
if (optstring->fill_cb)
{
Size size =
optstring->fill_cb(string_val,
(char *) rdopts + offset);
if (size)
{
*(int *) itempos = offset;
offset += size;
}
else
*(int *) itempos = 0;
}
else if (string_val == NULL)
*(int *) itempos = 0;
else
{
strcpy((char *) rdopts + offset, string_val);
*(int *) itempos = offset;
offset += strlen(string_val) + 1;
}
break;
default:
elog(ERROR, "unsupported reloption type %d",
options[i].gen->type);
break;
}
found = true;
break;
}
}
if (validate && !found)
elog(ERROR, "reloption \"%s\" not found in parse table",
options[i].gen->name);
}
SET_VARSIZE(rdopts, offset);
}
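For string options, fillRelOptions appends the string bytes after the fixed-size struct and stores the byte offset (0 meaning "no value") in the int-typed field, which is why allocateReloptStruct had to budget the extra space up front. A self-contained toy version of that offset-packing scheme (the struct and function names here are illustrative, not PostgreSQL types):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy options struct: fixed fields first; variable-length string values are
 * appended after the struct, and each string field holds the byte offset of
 * its value (0 = no value), like the reloptions bytea layout. */
typedef struct
{
    int         len;            /* total size; stands in for the varlena header */
    int         fillfactor;
    int         name_offset;    /* offset of appended string, or 0 */
} ToyOpts;

static ToyOpts *
pack_toy_opts(int fillfactor, const char *name)
{
    size_t      size = sizeof(ToyOpts) + (name ? strlen(name) + 1 : 0);
    ToyOpts    *opts = calloc(1, size);
    size_t      offset = sizeof(ToyOpts);

    opts->fillfactor = fillfactor;
    if (name)
    {
        strcpy((char *) opts + offset, name);
        opts->name_offset = (int) offset;
        offset += strlen(name) + 1;
    }
    opts->len = (int) offset;
    return opts;
}

/* Follow the stored offset back to the string, or NULL if unset. */
static const char *
toy_opts_name(const ToyOpts *opts)
{
    return opts->name_offset ? (const char *) opts + opts->name_offset : NULL;
}
```

The real code's `GET_STRING_RELOPTION` macro performs the same offset-to-pointer arithmetic on the filled struct.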
/*
* Option parser for anything that uses StdRdOptions.
*/
bytea *
default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
{
static const relopt_parse_elt tab[] = {
{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
{"autovacuum_enabled", RELOPT_TYPE_BOOL,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
{"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_max_threshold)},
{"autovacuum_vacuum_insert_threshold", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_ins_threshold)},
{"autovacuum_analyze_threshold", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, analyze_threshold)},
{"autovacuum_vacuum_cost_limit", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_cost_limit)},
{"autovacuum_freeze_min_age", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, freeze_min_age)},
{"autovacuum_freeze_max_age", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, freeze_max_age)},
{"autovacuum_freeze_table_age", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, freeze_table_age)},
{"autovacuum_multixact_freeze_min_age", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_min_age)},
{"autovacuum_multixact_freeze_max_age", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_max_age)},
{"autovacuum_multixact_freeze_table_age", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_table_age)},
{"log_autovacuum_min_duration", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, log_vacuum_min_duration)},
{"log_autoanalyze_min_duration", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, log_analyze_min_duration)},
{"toast_tuple_target", RELOPT_TYPE_INT,
offsetof(StdRdOptions, toast_tuple_target)},
{"autovacuum_vacuum_cost_delay", RELOPT_TYPE_REAL,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_cost_delay)},
{"autovacuum_vacuum_scale_factor", RELOPT_TYPE_REAL,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_scale_factor)},
{"autovacuum_vacuum_insert_scale_factor", RELOPT_TYPE_REAL,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_ins_scale_factor)},
{"autovacuum_analyze_scale_factor", RELOPT_TYPE_REAL,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, analyze_scale_factor)},
{"user_catalog_table", RELOPT_TYPE_BOOL,
offsetof(StdRdOptions, user_catalog_table)},
{"parallel_workers", RELOPT_TYPE_INT,
offsetof(StdRdOptions, parallel_workers)},
{"vacuum_index_cleanup", RELOPT_TYPE_ENUM,
offsetof(StdRdOptions, vacuum_index_cleanup)},
{"vacuum_truncate", RELOPT_TYPE_TERNARY,
offsetof(StdRdOptions, vacuum_truncate)},
{"vacuum_max_eager_freeze_failure_rate", RELOPT_TYPE_REAL,
offsetof(StdRdOptions, vacuum_max_eager_freeze_failure_rate)}
};
return (bytea *) build_reloptions(reloptions, validate, kind,
sizeof(StdRdOptions),
tab, lengthof(tab));
}
/*
* build_reloptions
*
* Parses "reloptions" provided by the caller, returning them in a
* structure containing the parsed options. The parsing is done with
* the help of a parsing table describing the allowed options, defined
* by "relopt_elems" of length "num_relopt_elems".
*
* "validate" must be true if reloptions value is freshly built by
* transformRelOptions(), as opposed to being read from the catalog, in which
* case the values contained in it must already be valid.
*
* NULL is returned if the passed-in options did not match any of the options
* in the parsing table, unless validate is true in which case an error would
* be reported.
*/
void *
build_reloptions(Datum reloptions, bool validate,
relopt_kind kind,
Size relopt_struct_size,
const relopt_parse_elt *relopt_elems,
int num_relopt_elems)
{
int numoptions;
relopt_value *options;
void *rdopts;
/* parse options specific to given relation option kind */
options = parseRelOptions(reloptions, validate, kind, &numoptions);
Assert(numoptions <= num_relopt_elems);
/* if none set, we're done */
if (numoptions == 0)
{
Assert(options == NULL);
return NULL;
}
/* allocate and fill the structure */
rdopts = allocateReloptStruct(relopt_struct_size, options, numoptions);
fillRelOptions(rdopts, relopt_struct_size, options, numoptions,
validate, relopt_elems, num_relopt_elems);
pfree(options);
return rdopts;
}
/*
* Parse local options, allocate a bytea struct that's of the specified
* 'base_size' plus any extra space that's needed for string variables,
* fill its option's fields located at the given offsets and return it.
*/
void *
build_local_reloptions(local_relopts *relopts, Datum options, bool validate)
{
int noptions = list_length(relopts->options);
relopt_parse_elt *elems = palloc_array(relopt_parse_elt, noptions);
relopt_value *vals;
void *opts;
int i = 0;
ListCell *lc;
foreach(lc, relopts->options)
{
local_relopt *opt = lfirst(lc);
elems[i].optname = opt->option->name;
elems[i].opttype = opt->option->type;
elems[i].offset = opt->offset;
i++;
}
vals = parseLocalRelOptions(relopts, options, validate);
opts = allocateReloptStruct(relopts->relopt_struct_size, vals, noptions);
fillRelOptions(opts, relopts->relopt_struct_size, vals, noptions, validate,
elems, noptions);
if (validate)
foreach(lc, relopts->validators)
((relopts_validator) lfirst(lc)) (opts, vals, noptions);
if (elems)
pfree(elems);
return opts;
}
/*
* Option parser for partitioned tables
*/
bytea *
partitioned_table_reloptions(Datum reloptions, bool validate)
{
if (validate && reloptions)
ereport(ERROR,
errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot specify storage parameters for a partitioned table"),
errhint("Specify storage parameters for its leaf partitions instead."));
return NULL;
}
/*
* Option parser for views
*/
bytea *
view_reloptions(Datum reloptions, bool validate)
{
static const relopt_parse_elt tab[] = {
{"security_barrier", RELOPT_TYPE_BOOL,
offsetof(ViewOptions, security_barrier)},
{"security_invoker", RELOPT_TYPE_BOOL,
offsetof(ViewOptions, security_invoker)},
{"check_option", RELOPT_TYPE_ENUM,
offsetof(ViewOptions, check_option)}
};
return (bytea *) build_reloptions(reloptions, validate,
RELOPT_KIND_VIEW,
sizeof(ViewOptions),
tab, lengthof(tab));
}
/*
* Parse options for heaps, views and toast tables.
*/
bytea *
heap_reloptions(char relkind, Datum reloptions, bool validate)
{
StdRdOptions *rdopts;
switch (relkind)
{
case RELKIND_TOASTVALUE:
rdopts = (StdRdOptions *)
default_reloptions(reloptions, validate, RELOPT_KIND_TOAST);
if (rdopts != NULL)
{
/* adjust default-only parameters for TOAST relations */
rdopts->fillfactor = 100;
rdopts->autovacuum.analyze_threshold = -1;
rdopts->autovacuum.analyze_scale_factor = -1;
}
return (bytea *) rdopts;
case RELKIND_RELATION:
case RELKIND_MATVIEW:
return default_reloptions(reloptions, validate, RELOPT_KIND_HEAP);
default:
/* other relkinds are not supported */
return NULL;
}
}
/*
* Parse options for indexes.
*
* amoptions index AM's option parser function
* reloptions options as text[] datum
* validate error flag
*/
bytea *
index_reloptions(amoptions_function amoptions, Datum reloptions, bool validate)
{
Assert(amoptions != NULL);
/* Assume function is strict */
if (DatumGetPointer(reloptions) == NULL)
return NULL;
return amoptions(reloptions, validate);
}
/*
* Option parser for attribute reloptions
*/
bytea *
attribute_reloptions(Datum reloptions, bool validate)
{
static const relopt_parse_elt tab[] = {
{"n_distinct", RELOPT_TYPE_REAL, offsetof(AttributeOpts, n_distinct)},
{"n_distinct_inherited", RELOPT_TYPE_REAL, offsetof(AttributeOpts, n_distinct_inherited)}
};
return (bytea *) build_reloptions(reloptions, validate,
RELOPT_KIND_ATTRIBUTE,
sizeof(AttributeOpts),
tab, lengthof(tab));
}
/*
* Option parser for tablespace reloptions
*/
bytea *
tablespace_reloptions(Datum reloptions, bool validate)
{
static const relopt_parse_elt tab[] = {
{"random_page_cost", RELOPT_TYPE_REAL, offsetof(TableSpaceOpts, random_page_cost)},
{"seq_page_cost", RELOPT_TYPE_REAL, offsetof(TableSpaceOpts, seq_page_cost)},
{"effective_io_concurrency", RELOPT_TYPE_INT, offsetof(TableSpaceOpts, effective_io_concurrency)},
{"maintenance_io_concurrency", RELOPT_TYPE_INT, offsetof(TableSpaceOpts, maintenance_io_concurrency)}
};
return (bytea *) build_reloptions(reloptions, validate,
RELOPT_KIND_TABLESPACE,
sizeof(TableSpaceOpts),
tab, lengthof(tab));
}
/*
* Determine the required LOCKMODE from an option list.
*
* Called from AlterTableGetLockLevel(), see that function
* for a longer explanation of how this works.
*/
LOCKMODE
AlterTableGetRelOptionsLockLevel(List *defList)
{
LOCKMODE lockmode = NoLock;
ListCell *cell;
if (defList == NIL)
return AccessExclusiveLock;
if (need_initialization)
initialize_reloptions();
foreach(cell, defList)
{
DefElem *def = (DefElem *) lfirst(cell);
int i;
for (i = 0; relOpts[i]; i++)
{
if (strncmp(relOpts[i]->name,
def->defname,
relOpts[i]->namelen + 1) == 0)
{
if (lockmode < relOpts[i]->lockmode)
lockmode = relOpts[i]->lockmode;
}
}
}
return lockmode;
} | c | github | https://github.com/postgres/postgres | src/backend/access/common/reloptions.c |
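The lock-escalation scan in `AlterTableGetRelOptionsLockLevel` above can be sketched in Python. The option table and lock-level constants below are illustrative stand-ins, not the actual PostgreSQL definitions; the point is the shape of the scan: unknown lists demand the strongest lock, otherwise the result is the maximum lock level over the matched options.

```python
# Minimal Python sketch of AlterTableGetRelOptionsLockLevel's scan.
# The option table and lock-level constants are illustrative stand-ins,
# not the actual PostgreSQL definitions.
NO_LOCK = 0
SHARE_UPDATE_EXCLUSIVE_LOCK = 4
ACCESS_EXCLUSIVE_LOCK = 8

# Each known reloption carries the lock level its change requires.
REL_OPTS = {
    "fillfactor": SHARE_UPDATE_EXCLUSIVE_LOCK,
    "autovacuum_enabled": SHARE_UPDATE_EXCLUSIVE_LOCK,
}

def get_reloptions_lock_level(def_list):
    """Return the strongest lock level required by any option in def_list."""
    if not def_list:
        # Mirrors the defList == NIL case above.
        return ACCESS_EXCLUSIVE_LOCK
    lockmode = NO_LOCK
    for name in def_list:
        lockmode = max(lockmode, REL_OPTS.get(name, NO_LOCK))
    return lockmode

assert get_reloptions_lock_level([]) == ACCESS_EXCLUSIVE_LOCK
assert get_reloptions_lock_level(["fillfactor"]) == SHARE_UPDATE_EXCLUSIVE_LOCK
```

Unlike the C original, unrecognized option names simply contribute `NO_LOCK` here; the real code compares names against the initialized `relOpts` array.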
## Input
```javascript
import {identity} from 'shared-runtime';
function Component(props) {
let {x} = props;
const foo = () => {
x = identity(props.x);
};
foo();
return {x};
}
export const FIXTURE_ENTRYPOINT = {
fn: Component,
params: [{x: 42}],
};
```
## Code
```javascript
import { c as _c } from "react/compiler-runtime";
import { identity } from "shared-runtime";
function Component(props) {
const $ = _c(4);
let x;
if ($[0] !== props) {
const { x: t0 } = props;
x = t0;
const foo = () => {
x = identity(props.x);
};
foo();
$[0] = props;
$[1] = x;
} else {
x = $[1];
}
let t0;
if ($[2] !== x) {
t0 = { x };
$[2] = x;
$[3] = t0;
} else {
t0 = $[3];
}
return t0;
}
export const FIXTURE_ENTRYPOINT = {
fn: Component,
params: [{ x: 42 }],
};
```
### Eval output
(kind: ok) {"x":42} | unknown | github | https://github.com/facebook/react | compiler/packages/babel-plugin-react-compiler/src/__tests__/fixtures/compiler/destructure-object-declaration-to-context-var.expect.md |
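The compiled output above uses a slot-based memo cache: `_c(4)` hands back a four-slot array, and each memo block stores its dependency and result in adjacent slots, recomputing only when the dependency changed. A rough Python analogue of that pattern (names are illustrative, this is not the actual `react/compiler-runtime` API, and the `foo()`/`identity()` reassignment from the fixture is elided for brevity):

```python
# Rough Python analogue of the slot-based memo cache in the compiled
# output above. Illustrative only; not the react/compiler-runtime API.
SENTINEL = object()

def make_cache(n):
    # _c(n) in the compiled output hands back an n-slot cache array.
    return [SENTINEL] * n

def component(props, cache):
    # Slots 0-1: dependency (props) and result (x) of the first memo block.
    if cache[0] is not props:
        x = props["x"]
        cache[0] = props
        cache[1] = x
    else:
        x = cache[1]
    # Slots 2-3: dependency (x) and result ({x}) of the second memo block.
    if cache[2] is SENTINEL or cache[2] != x:
        t0 = {"x": x}
        cache[2] = x
        cache[3] = t0
    else:
        t0 = cache[3]
    return t0

cache = make_cache(4)
props = {"x": 42}
first = component(props, cache)
second = component(props, cache)
assert first == {"x": 42}
assert first is second  # unchanged deps reuse the cached object
```

The second call sees identical dependencies in both blocks, so the returned object is reference-equal to the first result, which is what makes the transform safe for React's referential-equality checks.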
#! /usr/bin/env python
# ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2013, Numenta, Inc. Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Affero Public License for more details.
#
# You should have received a copy of the GNU Affero Public License
# along with this program. If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------
from mock import Mock, patch, ANY, call
import numpy
import cPickle as pickle
import unittest2 as unittest
from nupic.bindings.math import GetNTAReal
from nupic.research.spatial_pooler import SpatialPooler
realType = GetNTAReal()
uintType = "uint32"
class SpatialPoolerAPITest(unittest.TestCase):
"""Tests for SpatialPooler public API"""
def setUp(self):
self.sp = SpatialPooler(columnDimensions=[5], inputDimensions=[5])
def testCompute(self):
# Check that there are no errors in call to compute
inputVector = numpy.ones(5)
activeArray = numpy.zeros(5)
self.sp.compute(inputVector, True, activeArray)
def testGetUpdatePeriod(self):
inParam = 1234
self.sp.setUpdatePeriod(inParam)
outParam = self.sp.getUpdatePeriod()
self.assertEqual(inParam, outParam)
def testGetPotentialRadius(self):
inParam = 56
self.sp.setPotentialRadius(inParam)
outParam = self.sp.getPotentialRadius()
self.assertEqual(inParam, outParam)
def testGetPotentialPct(self):
inParam = 0.4
self.sp.setPotentialPct(inParam)
outParam = self.sp.getPotentialPct()
self.assertAlmostEqual(inParam, outParam)
def testGetGlobalInhibition(self):
inParam = True
self.sp.setGlobalInhibition(inParam)
outParam = self.sp.getGlobalInhibition()
self.assertEqual(inParam, outParam)
inParam = False
self.sp.setGlobalInhibition(inParam)
outParam = self.sp.getGlobalInhibition()
self.assertEqual(inParam, outParam)
def testGetNumActiveColumnsPerInhArea(self):
inParam = 7
self.sp.setNumActiveColumnsPerInhArea(inParam)
outParam = self.sp.getNumActiveColumnsPerInhArea()
self.assertEqual(inParam, outParam)
def testGetLocalAreaDensity(self):
inParam = 0.4
self.sp.setLocalAreaDensity(inParam)
outParam = self.sp.getLocalAreaDensity()
self.assertAlmostEqual(inParam, outParam)
def testGetStimulusThreshold(self):
inParam = 89
self.sp.setStimulusThreshold(inParam)
outParam = self.sp.getStimulusThreshold()
self.assertEqual(inParam, outParam)
def testGetInhibitionRadius(self):
inParam = 4
self.sp.setInhibitionRadius(inParam)
outParam = self.sp.getInhibitionRadius()
self.assertEqual(inParam, outParam)
def testGetDutyCyclePeriod(self):
inParam = 2020
self.sp.setDutyCyclePeriod(inParam)
outParam = self.sp.getDutyCyclePeriod()
self.assertEqual(inParam, outParam)
def testGetMaxBoost(self):
inParam = 78
self.sp.setMaxBoost(inParam)
outParam = self.sp.getMaxBoost()
self.assertEqual(inParam, outParam)
def testGetIterationNum(self):
inParam = 999
self.sp.setIterationNum(inParam)
outParam = self.sp.getIterationNum()
self.assertEqual(inParam, outParam)
def testGetIterationLearnNum(self):
inParam = 666
self.sp.setIterationLearnNum(inParam)
outParam = self.sp.getIterationLearnNum()
self.assertEqual(inParam, outParam)
def testGetSpVerbosity(self):
inParam = 2
self.sp.setSpVerbosity(inParam)
outParam = self.sp.getSpVerbosity()
self.assertEqual(inParam, outParam)
def testGetSynPermTrimThreshold(self):
inParam = 0.7
self.sp.setSynPermTrimThreshold(inParam)
outParam = self.sp.getSynPermTrimThreshold()
self.assertAlmostEqual(inParam, outParam)
def testGetSynPermActiveInc(self):
inParam = 0.567
self.sp.setSynPermActiveInc(inParam)
outParam = self.sp.getSynPermActiveInc()
self.assertAlmostEqual(inParam, outParam)
def testGetSynPermInactiveDec(self):
inParam = 0.123
self.sp.setSynPermInactiveDec(inParam)
outParam = self.sp.getSynPermInactiveDec()
self.assertAlmostEqual(inParam, outParam)
def testGetSynPermBelowStimulusInc(self):
inParam = 0.0898
self.sp.setSynPermBelowStimulusInc(inParam)
outParam = self.sp.getSynPermBelowStimulusInc()
self.assertAlmostEqual(inParam, outParam)
def testGetSynPermConnected(self):
inParam = 0.514
self.sp.setSynPermConnected(inParam)
outParam = self.sp.getSynPermConnected()
self.assertAlmostEqual(inParam, outParam)
def testGetMinPctOverlapDutyCycles(self):
inParam = 0.11122
self.sp.setMinPctOverlapDutyCycles(inParam)
outParam = self.sp.getMinPctOverlapDutyCycles()
self.assertAlmostEqual(inParam, outParam)
def testGetMinPctActiveDutyCycles(self):
inParam = 0.444333
self.sp.setMinPctActiveDutyCycles(inParam)
outParam = self.sp.getMinPctActiveDutyCycles()
self.assertAlmostEqual(inParam, outParam)
def testGetPermanence(self):
numInputs = 5
numColumns = 5
    self.sp.initialize(columnDimensions=[numColumns],
                       inputDimensions=[numInputs],
potentialRadius=1,
potentialPct=1)
inParam = numpy.array(
[0.06, 0.07, 0.08, 0.12, 0.13]).astype(realType)
self.sp.setPermanence(0,inParam)
outParam = numpy.zeros(numInputs).astype(realType)
self.sp.getPermanence(0, outParam)
self.assertListEqual(list(inParam),list(outParam))
def testGetBoostFactors(self):
numInputs = 3
numColumns = 3
    self.sp.initialize(columnDimensions=[numColumns],
                       inputDimensions=[numInputs])
inParam = numpy.array([1, 1.2, 1.3, ]).astype(realType)
self.sp.setBoostFactors(inParam)
outParam = numpy.zeros(numInputs).astype(realType)
self.sp.getBoostFactors(outParam)
self.assertListEqual(list(inParam),list(outParam))
def testGetOverlapDutyCycles(self):
numInputs = 3
numColumns = 3
    self.sp.initialize(columnDimensions=[numColumns],
                       inputDimensions=[numInputs])
inParam = numpy.array([0.9, 0.3, 0.1]).astype(realType)
self.sp.setOverlapDutyCycles(inParam)
outParam = numpy.zeros(numInputs).astype(realType)
self.sp.getOverlapDutyCycles(outParam)
self.assertListEqual(list(inParam),list(outParam))
def testGetActiveDutyCycles(self):
numInputs = 3
numColumns = 3
    self.sp.initialize(columnDimensions=[numColumns],
                       inputDimensions=[numInputs])
inParam = numpy.array([0.9, 0.99, 0.999, ]).astype(realType)
self.sp.setActiveDutyCycles(inParam)
outParam = numpy.zeros(numInputs).astype(realType)
self.sp.getActiveDutyCycles(outParam)
self.assertListEqual(list(inParam),list(outParam))
def testGetMinOverlapDutyCycles(self):
numInputs = 3
numColumns = 3
    self.sp.initialize(columnDimensions=[numColumns],
                       inputDimensions=[numInputs])
inParam = numpy.array([0.01, 0.02, 0.035, ]).astype(realType)
self.sp.setMinOverlapDutyCycles(inParam)
outParam = numpy.zeros(numInputs).astype(realType)
self.sp.getMinOverlapDutyCycles(outParam)
self.assertListEqual(list(inParam),list(outParam))
def testGetMinActiveDutyCycles(self):
numInputs = 3
numColumns = 3
    self.sp.initialize(columnDimensions=[numColumns],
                       inputDimensions=[numInputs])
inParam = numpy.array([0.01, 0.02, 0.035, ]).astype(realType)
self.sp.setMinActiveDutyCycles(inParam)
outParam = numpy.zeros(numInputs).astype(realType)
self.sp.getMinActiveDutyCycles(outParam)
self.assertListEqual(list(inParam),list(outParam))
def testGetPotential(self):
    numInputs = 3
    numColumns = 3
    self.sp.initialize(columnDimensions=[numColumns],
                       inputDimensions=[numInputs])
inParam1 = numpy.array([1, 0, 1]).astype(uintType)
self.sp.setPotential(0, inParam1)
inParam2 = numpy.array([1, 1, 0]).astype(uintType)
self.sp.setPotential(1, inParam2)
outParam1 = numpy.zeros(numInputs).astype(uintType)
outParam2 = numpy.zeros(numInputs).astype(uintType)
self.sp.getPotential(0, outParam1)
self.sp.getPotential(1, outParam2)
self.assertListEqual(list(inParam1),list(outParam1))
self.assertListEqual(list(inParam2),list(outParam2))
def testGetConnectedSynapses(self):
numInputs = 5
numColumns = 5
    self.sp.initialize(columnDimensions=[numColumns],
                       inputDimensions=[numInputs],
potentialRadius=1,
potentialPct=1)
inParam = numpy.array(
[0.06, 0.07, 0.08, 0.12, 0.13]).astype(realType)
trueConnected = numpy.array([0, 0, 0, 1, 1])
self.sp.setSynPermConnected(0.1)
self.sp.setPermanence(0,inParam)
outParam = numpy.zeros(numInputs).astype(uintType)
self.sp.getConnectedSynapses(0, outParam)
self.assertListEqual(list(trueConnected),list(outParam))
def testGetConnectedCounts(self):
numInputs = 5
numColumns = 5
    self.sp.initialize(columnDimensions=[numColumns],
                       inputDimensions=[numInputs],
potentialRadius=1,
potentialPct=1)
inParam = numpy.array(
[0.06, 0.07, 0.08, 0.12, 0.11]).astype(realType)
trueConnectedCount = 2
self.sp.setSynPermConnected(0.1)
self.sp.setPermanence(0, inParam)
outParam = numpy.zeros(numInputs).astype(uintType)
self.sp.getConnectedCounts(outParam)
self.assertEqual(trueConnectedCount, outParam[0])
def assertListAlmostEqual(self, alist, blist):
self.assertEqual(len(alist), len(blist))
for (a,b) in zip(alist,blist):
diff = abs(a - b)
self.assertLess(diff,1e-5)
if __name__ == "__main__":
unittest.main() | unknown | codeparrot/codeparrot-clean | ||
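testGetConnectedSynapses and testGetConnectedCounts above exercise the permanence threshold set via setSynPermConnected. That thresholding rule in isolation, as a plain-Python sketch (not the NuPIC bindings; this assumes a synapse counts as connected once its permanence reaches the threshold, which is consistent with the expected values in both tests):

```python
# The permanence-threshold rule exercised by testGetConnectedSynapses and
# testGetConnectedCounts above, in plain Python (not the NuPIC bindings).
# Assumption: a synapse is connected once its permanence reaches the
# synPermConnected threshold.
def connected_synapses(permanences, syn_perm_connected):
    return [1 if p >= syn_perm_connected else 0 for p in permanences]

# Matches trueConnected in testGetConnectedSynapses.
perms = [0.06, 0.07, 0.08, 0.12, 0.13]
assert connected_synapses(perms, 0.1) == [0, 0, 0, 1, 1]

# The connected count is just the number of set bits, as in
# testGetConnectedCounts.
assert sum(connected_synapses([0.06, 0.07, 0.08, 0.12, 0.11], 0.1)) == 2
```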
# -*- coding: utf-8 -*-
"""
oauthlib.oauth1.rfc5849
~~~~~~~~~~~~~~
This module is an implementation of various logic needed
for signing and checking OAuth 1.0 RFC 5849 requests.
"""
from __future__ import absolute_import, unicode_literals
from . import SIGNATURE_METHODS, utils
class RequestValidator(object):
"""A validator/datastore interaction base class for OAuth 1 providers.
OAuth providers should inherit from RequestValidator and implement the
methods and properties outlined below. Further details are provided in the
documentation for each method and property.
Methods used to check the format of input parameters. Common tests include
length, character set, membership, range or pattern. These tests are
referred to as `whitelisting or blacklisting`_. Whitelisting is better
    but blacklisting can be useful to spot malicious activity.
    The following methods have a default implementation:
- check_client_key
- check_request_token
- check_access_token
- check_nonce
- check_verifier
- check_realms
    The methods above default to whitelisting input parameters, checking that
    they are alphanumerical and between a minimum and maximum length. Rather
    than overriding the methods, a few properties can be used to configure
    them.
* @safe_characters -> (character set)
* @client_key_length -> (min, max)
* @request_token_length -> (min, max)
* @access_token_length -> (min, max)
* @nonce_length -> (min, max)
* @verifier_length -> (min, max)
* @realms -> [list, of, realms]
Methods used to validate/invalidate input parameters. These checks usually
hit either persistent or temporary storage such as databases or the
filesystem. See each methods documentation for detailed usage.
The following methods must be implemented:
- validate_client_key
- validate_request_token
- validate_access_token
- validate_timestamp_and_nonce
- validate_redirect_uri
- validate_requested_realms
- validate_realms
- validate_verifier
- invalidate_request_token
Methods used to retrieve sensitive information from storage.
The following methods must be implemented:
- get_client_secret
- get_request_token_secret
- get_access_token_secret
- get_rsa_key
- get_realms
- get_default_realms
- get_redirect_uri
Methods used to save credentials.
The following methods must be implemented:
- save_request_token
- save_verifier
- save_access_token
    Methods used to verify input parameters. These methods are used while the
    user authorizes the request token (AuthorizationEndpoint), to check
    whether the parameters are valid. During token authorization the request
    is not signed, thus the 'validation' methods can not be used. The
    following methods must be implemented:
- verify_realms
- verify_request_token
To prevent timing attacks it is necessary to not exit early even if the
client key or resource owner key is invalid. Instead dummy values should
be used during the remaining verification process. It is very important
that the dummy client and token are valid input parameters to the methods
get_client_secret, get_rsa_key and get_(access/request)_token_secret and
that the running time of those methods when given a dummy value remain
equivalent to the running time when given a valid client/resource owner.
The following properties must be implemented:
* @dummy_client
* @dummy_request_token
* @dummy_access_token
Example implementations have been provided, note that the database used is
a simple dictionary and serves only an illustrative purpose. Use whichever
database suits your project and how to access it is entirely up to you.
The methods are introduced in an order which should make understanding
their use more straightforward and as such it could be worth reading what
follows in chronological order.
.. _`whitelisting or blacklisting`: http://www.schneier.com/blog/archives/2011/01/whitelisting_vs.html
"""
def __init__(self):
pass
@property
def allowed_signature_methods(self):
return SIGNATURE_METHODS
@property
def safe_characters(self):
return set(utils.UNICODE_ASCII_CHARACTER_SET)
@property
def client_key_length(self):
return 20, 30
@property
def request_token_length(self):
return 20, 30
@property
def access_token_length(self):
return 20, 30
@property
def timestamp_lifetime(self):
return 600
@property
def nonce_length(self):
return 20, 30
@property
def verifier_length(self):
return 20, 30
@property
def realms(self):
return []
@property
def enforce_ssl(self):
return True
def check_client_key(self, client_key):
"""Check that the client key only contains safe characters
and is no shorter than lower and no longer than upper.
"""
lower, upper = self.client_key_length
return (set(client_key) <= self.safe_characters and
lower <= len(client_key) <= upper)
def check_request_token(self, request_token):
"""Checks that the request token contains only safe characters
and is no shorter than lower and no longer than upper.
"""
lower, upper = self.request_token_length
return (set(request_token) <= self.safe_characters and
lower <= len(request_token) <= upper)
def check_access_token(self, request_token):
"""Checks that the token contains only safe characters
and is no shorter than lower and no longer than upper.
"""
lower, upper = self.access_token_length
return (set(request_token) <= self.safe_characters and
lower <= len(request_token) <= upper)
def check_nonce(self, nonce):
"""Checks that the nonce only contains only safe characters
and is no shorter than lower and no longer than upper.
"""
lower, upper = self.nonce_length
return (set(nonce) <= self.safe_characters and
lower <= len(nonce) <= upper)
def check_verifier(self, verifier):
"""Checks that the verifier contains only safe characters
and is no shorter than lower and no longer than upper.
"""
lower, upper = self.verifier_length
return (set(verifier) <= self.safe_characters and
lower <= len(verifier) <= upper)
def check_realms(self, realms):
"""Check that the realm is one of a set allowed realms."""
return all((r in self.realms for r in realms))
@property
def dummy_client(self):
"""Dummy client used when an invalid client key is supplied.
:returns: The dummy client key string.
        The dummy client should be associated with either a client secret,
        an RSA key, or both, depending on which signature methods are supported.
Providers should make sure that
get_client_secret(dummy_client)
get_rsa_key(dummy_client)
return a valid secret or key for the dummy client.
This method is used by
* AccessTokenEndpoint
* RequestTokenEndpoint
* ResourceEndpoint
* SignatureOnlyEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
@property
def dummy_request_token(self):
"""Dummy request token used when an invalid token was supplied.
:returns: The dummy request token string.
The dummy request token should be associated with a request token
secret such that get_request_token_secret(.., dummy_request_token)
returns a valid secret.
This method is used by
* AccessTokenEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
@property
def dummy_access_token(self):
"""Dummy access token used when an invalid token was supplied.
:returns: The dummy access token string.
The dummy access token should be associated with an access token
secret such that get_access_token_secret(.., dummy_access_token)
returns a valid secret.
This method is used by
* ResourceEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def get_client_secret(self, client_key, request):
"""Retrieves the client secret associated with the client key.
:param client_key: The client/consumer key.
:param request: An oauthlib.common.Request object.
:returns: The client secret as a string.
This method must allow the use of a dummy client_key value.
Fetching the secret using the dummy key must take the same amount of
time as fetching a secret for a valid client::
# Unlikely to be near constant time as it uses two database
# lookups for a valid client, and only one for an invalid.
from your_datastore import ClientSecret
if ClientSecret.has(client_key):
return ClientSecret.get(client_key)
else:
return 'dummy'
# Aim to mimic number of latency inducing operations no matter
# whether the client is valid or not.
from your_datastore import ClientSecret
return ClientSecret.get(client_key, 'dummy')
Note that the returned key must be in plaintext.
This method is used by
* AccessTokenEndpoint
* RequestTokenEndpoint
* ResourceEndpoint
* SignatureOnlyEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def get_request_token_secret(self, client_key, token, request):
"""Retrieves the shared secret associated with the request token.
:param client_key: The client/consumer key.
:param token: The request token string.
:param request: An oauthlib.common.Request object.
:returns: The token secret as a string.
        This method must allow the use of dummy values, and the running time
        must be roughly equivalent to the running time with valid values::
# Unlikely to be near constant time as it uses two database
# lookups for a valid client, and only one for an invalid.
from your_datastore import RequestTokenSecret
if RequestTokenSecret.has(client_key):
return RequestTokenSecret.get((client_key, request_token))
else:
return 'dummy'
# Aim to mimic number of latency inducing operations no matter
# whether the client is valid or not.
from your_datastore import RequestTokenSecret
            return RequestTokenSecret.get((client_key, request_token), 'dummy')
Note that the returned key must be in plaintext.
This method is used by
* AccessTokenEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def get_access_token_secret(self, client_key, token, request):
"""Retrieves the shared secret associated with the access token.
:param client_key: The client/consumer key.
:param token: The access token string.
:param request: An oauthlib.common.Request object.
:returns: The token secret as a string.
        This method must allow the use of dummy values, and the running time
        must be roughly equivalent to the running time with valid values::
# Unlikely to be near constant time as it uses two database
# lookups for a valid client, and only one for an invalid.
from your_datastore import AccessTokenSecret
if AccessTokenSecret.has(client_key):
return AccessTokenSecret.get((client_key, request_token))
else:
return 'dummy'
# Aim to mimic number of latency inducing operations no matter
# whether the client is valid or not.
from your_datastore import AccessTokenSecret
            return AccessTokenSecret.get((client_key, request_token), 'dummy')
Note that the returned key must be in plaintext.
This method is used by
* ResourceEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def get_default_realms(self, client_key, request):
"""Get the default realms for a client.
:param client_key: The client/consumer key.
:param request: An oauthlib.common.Request object.
:returns: The list of default realms associated with the client.
The list of default realms will be set during client registration and
is outside the scope of OAuthLib.
This method is used by
* RequestTokenEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def get_realms(self, token, request):
"""Get realms associated with a request token.
:param token: The request token string.
:param request: An oauthlib.common.Request object.
:returns: The list of realms associated with the request token.
This method is used by
* AuthorizationEndpoint
* AccessTokenEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def get_redirect_uri(self, token, request):
"""Get the redirect URI associated with a request token.
:param token: The request token string.
:param request: An oauthlib.common.Request object.
:returns: The redirect URI associated with the request token.
It may be desirable to return a custom URI if the redirect is set to "oob".
In this case, the user will be redirected to the returned URI and at that
endpoint the verifier can be displayed.
This method is used by
* AuthorizationEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def get_rsa_key(self, client_key, request):
"""Retrieves a previously stored client provided RSA key.
:param client_key: The client/consumer key.
:param request: An oauthlib.common.Request object.
:returns: The rsa public key as a string.
This method must allow the use of a dummy client_key value. Fetching
the rsa key using the dummy key must take the same amount of time
as fetching a key for a valid client. The dummy key must also be of
the same bit length as client keys.
Note that the key must be returned in plaintext.
This method is used by
* AccessTokenEndpoint
* RequestTokenEndpoint
* ResourceEndpoint
* SignatureOnlyEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def invalidate_request_token(self, client_key, request_token, request):
"""Invalidates a used request token.
:param client_key: The client/consumer key.
:param request_token: The request token string.
:param request: An oauthlib.common.Request object.
:returns: None
        Per `Section 2.3`_ of the spec:
"The server MUST (...) ensure that the temporary
credentials have not expired or been used before."
.. _`Section 2.3`: http://tools.ietf.org/html/rfc5849#section-2.3
        This method should ensure that the provided token won't validate
        anymore. This can be as simple as removing the RequestToken from
        storage or setting a specific flag that marks it as invalid (note
        that such a flag must then also be checked during request token
        validation).
This method is used by
* AccessTokenEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def validate_client_key(self, client_key, request):
"""Validates that supplied client key is a registered and valid client.
:param client_key: The client/consumer key.
:param request: An oauthlib.common.Request object.
:returns: True or False
        Note that if the dummy client is supplied it should validate in the
        same or nearly the same amount of time as a valid one.
        Ensure latency-inducing tasks are mimicked even for dummy clients.
For example, use::
from your_datastore import Client
try:
return Client.exists(client_key, access_token)
except DoesNotExist:
return False
Rather than::
from your_datastore import Client
if access_token == self.dummy_access_token:
return False
else:
return Client.exists(client_key, access_token)
This method is used by
* AccessTokenEndpoint
* RequestTokenEndpoint
* ResourceEndpoint
* SignatureOnlyEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def validate_request_token(self, client_key, token, request):
"""Validates that supplied request token is registered and valid.
:param client_key: The client/consumer key.
:param token: The request token string.
:param request: An oauthlib.common.Request object.
:returns: True or False
        Note that if the dummy request_token is supplied it should validate
        in the same or nearly the same amount of time as a valid one.
        Ensure latency-inducing tasks are mimicked even for dummy clients.
For example, use::
from your_datastore import RequestToken
try:
return RequestToken.exists(client_key, access_token)
except DoesNotExist:
return False
Rather than::
from your_datastore import RequestToken
if access_token == self.dummy_access_token:
return False
else:
return RequestToken.exists(client_key, access_token)
This method is used by
* AccessTokenEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def validate_access_token(self, client_key, token, request):
"""Validates that supplied access token is registered and valid.
:param client_key: The client/consumer key.
:param token: The access token string.
:param request: An oauthlib.common.Request object.
:returns: True or False
Note that if the dummy access token is supplied it should validate in
the same or nearly the same amount of time as a valid one.
        Ensure latency-inducing tasks are mimicked even for dummy clients.
For example, use::
from your_datastore import AccessToken
try:
return AccessToken.exists(client_key, access_token)
except DoesNotExist:
return False
Rather than::
from your_datastore import AccessToken
if access_token == self.dummy_access_token:
return False
else:
return AccessToken.exists(client_key, access_token)
This method is used by
* ResourceEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def validate_timestamp_and_nonce(self, client_key, timestamp, nonce,
request, request_token=None, access_token=None):
"""Validates that the nonce has not been used before.
:param client_key: The client/consumer key.
:param timestamp: The ``oauth_timestamp`` parameter.
:param nonce: The ``oauth_nonce`` parameter.
:param request_token: Request token string, if any.
:param access_token: Access token string, if any.
:param request: An oauthlib.common.Request object.
:returns: True or False
Per `Section 3.3`_ of the spec.
"A nonce is a random string, uniquely generated by the client to allow
the server to verify that a request has never been made before and
helps prevent replay attacks when requests are made over a non-secure
channel. The nonce value MUST be unique across all requests with the
same timestamp, client credentials, and token combinations."
.. _`Section 3.3`: http://tools.ietf.org/html/rfc5849#section-3.3
One of the first validation checks that will be made is for the validity
of the nonce and timestamp, which are associated with a client key and
        possibly a token. If invalid, immediately fail the request
        by returning False. If the nonce/timestamp pair has been used before,
        you may just have detected a replay attack. It is therefore an
        essential part of OAuth security that you not allow nonce/timestamp
        reuse.
Note that this validation check is done before checking the validity of
the client and token.::
nonces_and_timestamps_database = [
(u'foo', 1234567890, u'rannoMstrInghere', u'bar')
]
def validate_timestamp_and_nonce(self, client_key, timestamp, nonce,
request_token=None, access_token=None):
return ((client_key, timestamp, nonce, request_token or access_token)
not in self.nonces_and_timestamps_database)
This method is used by
* AccessTokenEndpoint
* RequestTokenEndpoint
* ResourceEndpoint
* SignatureOnlyEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def validate_redirect_uri(self, client_key, redirect_uri, request):
"""Validates the client supplied redirection URI.
:param client_key: The client/consumer key.
        :param redirect_uri: The URI the client wishes to redirect back to after
authorization is successful.
:param request: An oauthlib.common.Request object.
:returns: True or False
It is highly recommended that OAuth providers require their clients
to register all redirection URIs prior to using them in requests and
register them as absolute URIs. See `CWE-601`_ for more information
about open redirection attacks.
By requiring registration of all redirection URIs it should be
straightforward for the provider to verify whether the supplied
redirect_uri is valid or not.
Alternatively per `Section 2.1`_ of the spec:
"If the client is unable to receive callbacks or a callback URI has
been established via other means, the parameter value MUST be set to
"oob" (case sensitive), to indicate an out-of-band configuration."
.. _`CWE-601`: http://cwe.mitre.org/top25/index.html#CWE-601
.. _`Section 2.1`: https://tools.ietf.org/html/rfc5849#section-2.1
This method is used by
* RequestTokenEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def validate_requested_realms(self, client_key, realms, request):
"""Validates that the client may request access to the realm.
:param client_key: The client/consumer key.
:param realms: The list of realms that client is requesting access to.
:param request: An oauthlib.common.Request object.
:returns: True or False
This method is invoked when obtaining a request token and should
tie a realm to the request token and after user authorization
this realm restriction should transfer to the access token.
This method is used by
* RequestTokenEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def validate_realms(self, client_key, token, request, uri=None,
realms=None):
"""Validates access to the request realm.
:param client_key: The client/consumer key.
:param token: A request token string.
:param request: An oauthlib.common.Request object.
:param uri: The URI the realms is protecting.
:param realms: A list of realms that must have been granted to
the access token.
:returns: True or False
How providers choose to use the realm parameter is outside the OAuth
specification but it is commonly used to restrict access to a subset
of protected resources such as "photos".
        realms is a convenience parameter which can be used to provide
        a per-view-method, pre-defined list of allowed realms.
Can be as simple as::
from your_datastore import RequestToken
request_token = RequestToken.get(token, None)
if not request_token:
return False
return set(request_token.realms).issuperset(set(realms))
This method is used by
* ResourceEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def validate_verifier(self, client_key, token, verifier, request):
"""Validates a verification code.
:param client_key: The client/consumer key.
:param token: A request token string.
:param verifier: The authorization verifier string.
:param request: An oauthlib.common.Request object.
:returns: True or False
OAuth providers issue a verification code to clients after the
resource owner authorizes access. This code is used by the client to
obtain token credentials and the provider must verify that the
verifier is valid and associated with the client as well as the
resource owner.
Verifier validation should be done in near constant time
(to avoid verifier enumeration). To achieve this we need a
constant time string comparison which is provided by OAuthLib
in ``oauthlib.common.safe_string_equals``::
from your_datastore import Verifier
correct_verifier = Verifier.get(client_key, request_token)
from oauthlib.common import safe_string_equals
return safe_string_equals(verifier, correct_verifier)
This method is used by
* AccessTokenEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def verify_request_token(self, token, request):
"""Verify that the given OAuth1 request token is valid.
:param token: A request token string.
:param request: An oauthlib.common.Request object.
:returns: True or False
This method is used only in AuthorizationEndpoint to check whether the
oauth_token given in the authorization URL is valid or not.
        This request is not signed and thus the similar
        ``validate_request_token`` method can not be used.
This method is used by
* AuthorizationEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def verify_realms(self, token, realms, request):
"""Verify authorized realms to see if they match those given to token.
:param token: An access token string.
:param realms: A list of realms the client attempts to access.
:param request: An oauthlib.common.Request object.
:returns: True or False
        This prevents the list of authorized realms sent by the client during
        the authorization step from being altered to include realms outside
        what was bound with the request token.
Can be as simple as::
valid_realms = self.get_realms(token)
return all((r in valid_realms for r in realms))
This method is used by
* AuthorizationEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def save_access_token(self, token, request):
"""Save an OAuth1 access token.
:param token: A dict with token credentials.
:param request: An oauthlib.common.Request object.
The token dictionary will at minimum include
* ``oauth_token`` the access token string.
* ``oauth_token_secret`` the token specific secret used in signing.
* ``oauth_authorized_realms`` a space separated list of realms.
Client key can be obtained from ``request.client_key``.
The list of realms (not joined string) can be obtained from
``request.realm``.
This method is used by
* AccessTokenEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def save_request_token(self, token, request):
"""Save an OAuth1 request token.
:param token: A dict with token credentials.
:param request: An oauthlib.common.Request object.
The token dictionary will at minimum include
* ``oauth_token`` the request token string.
* ``oauth_token_secret`` the token specific secret used in signing.
* ``oauth_callback_confirmed`` the string ``true``.
Client key can be obtained from ``request.client_key``.
This method is used by
* RequestTokenEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.")
def save_verifier(self, token, verifier, request):
"""Associate an authorization verifier with a request token.
:param token: A request token string.
        :param verifier: A dictionary containing the oauth_verifier and
                         oauth_token.
:param request: An oauthlib.common.Request object.
We need to associate verifiers with tokens for validation during the
access token request.
        Note that unlike the ``save_*_token`` methods, ``token`` here is the
        ``oauth_token`` string from the request token saved previously.
This method is used by
* AuthorizationEndpoint
"""
raise NotImplementedError("Subclasses must implement this function.") | unknown | codeparrot/codeparrot-clean | ||
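The verifier docstring above insists on near-constant-time comparison. A minimal standalone sketch of that pattern (the in-memory `_verifiers` store and the `hmac.compare_digest` stand-in are illustrative assumptions; real code would use `oauthlib.common.safe_string_equals` and a proper datastore):

```python
import hmac

def safe_string_equals(a, b):
    # Compare without short-circuiting on the first mismatched byte,
    # so response timing does not leak how much of the verifier matched.
    return hmac.compare_digest(a.encode(), b.encode())

# Hypothetical in-memory store mapping (client_key, request_token) -> verifier.
_verifiers = {('client', 'request_token'): 'abc123'}

def validate_verifier(client_key, token, verifier):
    correct = _verifiers.get((client_key, token), '')
    return safe_string_equals(verifier, correct)
```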
"""distutils.command.bdist
Implements the Distutils 'bdist' command (create a built [binary]
distribution)."""
import os
from distutils.core import Command
from distutils.errors import DistutilsOptionError, DistutilsPlatformError
from distutils.util import get_platform
def show_formats():
"""Print list of available formats (arguments to "--format" option).
"""
from distutils.fancy_getopt import FancyGetopt
formats = []
for format in bdist.format_commands:
formats.append(("formats=" + format, None,
bdist.format_command[format][1]))
pretty_printer = FancyGetopt(formats)
pretty_printer.print_help("List of available distribution formats:")
class bdist(Command):
description = "create a built (binary) distribution"
user_options = [('bdist-base=', 'b',
"temporary directory for creating built distributions"),
('plat-name=', 'p',
"platform name to embed in generated filenames "
"(default: %s)" % get_platform()),
('formats=', None,
"formats for distribution (comma-separated list)"),
('dist-dir=', 'd',
"directory to put final built distributions in "
"[default: dist]"),
('skip-build', None,
"skip rebuilding everything (for testing/debugging)"),
('owner=', 'u',
"Owner name used when creating a tar file"
" [default: current user]"),
('group=', 'g',
"Group name used when creating a tar file"
" [default: current group]"),
]
boolean_options = ['skip-build']
help_options = [
('help-formats', None,
"lists available distribution formats", show_formats),
]
# The following commands do not take a format option from bdist
no_format_option = ('bdist_rpm',)
# This won't do in reality: will need to distinguish RPM-ish Linux,
# Debian-ish Linux, Solaris, FreeBSD, ..., Windows, Mac OS.
default_format = {'posix': 'gztar',
'nt': 'zip'}
# Establish the preferred order (for the --help-formats option).
format_commands = ['rpm', 'gztar', 'bztar', 'ztar', 'tar',
'wininst', 'zip', 'msi']
# And the real information.
format_command = {'rpm': ('bdist_rpm', "RPM distribution"),
'gztar': ('bdist_dumb', "gzip'ed tar file"),
'bztar': ('bdist_dumb', "bzip2'ed tar file"),
'ztar': ('bdist_dumb', "compressed tar file"),
'tar': ('bdist_dumb', "tar file"),
'wininst': ('bdist_wininst',
"Windows executable installer"),
'zip': ('bdist_dumb', "ZIP file"),
'msi': ('bdist_msi', "Microsoft Installer")
}
def initialize_options(self):
self.bdist_base = None
self.plat_name = None
self.formats = None
self.dist_dir = None
self.skip_build = 0
self.group = None
self.owner = None
def finalize_options(self):
# have to finalize 'plat_name' before 'bdist_base'
if self.plat_name is None:
if self.skip_build:
self.plat_name = get_platform()
else:
self.plat_name = self.get_finalized_command('build').plat_name
# 'bdist_base' -- parent of per-built-distribution-format
# temporary directories (eg. we'll probably have
# "build/bdist.<plat>/dumb", "build/bdist.<plat>/rpm", etc.)
if self.bdist_base is None:
build_base = self.get_finalized_command('build').build_base
self.bdist_base = os.path.join(build_base,
'bdist.' + self.plat_name)
self.ensure_string_list('formats')
if self.formats is None:
try:
self.formats = [self.default_format[os.name]]
except KeyError:
raise DistutilsPlatformError(
"don't know how to create built distributions "
"on platform %s" % os.name)
if self.dist_dir is None:
self.dist_dir = "dist"
def run(self):
# Figure out which sub-commands we need to run.
commands = []
for format in self.formats:
try:
commands.append(self.format_command[format][0])
except KeyError:
raise DistutilsOptionError("invalid format '%s'" % format)
# Reinitialize and run each command.
for i in range(len(self.formats)):
cmd_name = commands[i]
sub_cmd = self.reinitialize_command(cmd_name)
if cmd_name not in self.no_format_option:
sub_cmd.format = self.formats[i]
# passing the owner and group names for tar archiving
if cmd_name == 'bdist_dumb':
sub_cmd.owner = self.owner
sub_cmd.group = self.group
# If we're going to need to run this command again, tell it to
# keep its temporary files around so subsequent runs go faster.
if cmd_name in commands[i+1:]:
sub_cmd.keep_temp = 1
self.run_command(cmd_name) | unknown | codeparrot/codeparrot-clean | ||
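The `run` method resolves each requested format through `format_command` and rejects unknown ones; the shape of that lookup can be sketched on its own (a trimmed table and a plain `ValueError` stand in for the real class attributes and `DistutilsOptionError`):

```python
# Trimmed copy of the bdist format table: format name -> (command, description).
format_command = {
    'gztar': ('bdist_dumb', "gzip'ed tar file"),
    'zip': ('bdist_dumb', 'ZIP file'),
}

def resolve_commands(formats):
    commands = []
    for fmt in formats:
        try:
            commands.append(format_command[fmt][0])
        except KeyError:
            raise ValueError("invalid format '%s'" % fmt)
    return commands
```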
use rustc_data_structures::fx::FxIndexMap;
use rustc_hir::def::DefKind;
use rustc_hir::def_id::DefId;
use rustc_middle::ty::{self, GenericArg, GenericArgKind, Ty, TyCtxt};
use rustc_span::Span;
use tracing::debug;
use super::explicit::ExplicitPredicatesMap;
use super::utils::*;
/// Infer predicates for the items in the crate.
///
/// `global_inferred_outlives`: this is initially the empty map that
/// was generated by walking the items in the crate. This will
/// now be filled with inferred predicates.
pub(super) fn infer_predicates(
tcx: TyCtxt<'_>,
) -> FxIndexMap<DefId, ty::EarlyBinder<'_, RequiredPredicates<'_>>> {
debug!("infer_predicates");
let mut explicit_map = ExplicitPredicatesMap::new();
let mut global_inferred_outlives = FxIndexMap::default();
// If new predicates were added then we need to re-calculate
// all crates since there could be new implied predicates.
for i in 0.. {
let mut predicates_added = vec![];
// Visit all the crates and infer predicates
for id in tcx.hir_free_items() {
let item_did = id.owner_id;
debug!("InferVisitor::visit_item(item={:?})", item_did);
let mut item_required_predicates = RequiredPredicates::default();
match tcx.def_kind(item_did) {
DefKind::Union | DefKind::Enum | DefKind::Struct => {
let adt_def = tcx.adt_def(item_did.to_def_id());
// Iterate over all fields in item_did
for field_def in adt_def.all_fields() {
// Calculating the predicate requirements necessary
// for item_did.
//
// For field of type &'a T (reference) or Adt
// (struct/enum/union) there will be outlive
// requirements for adt_def.
let field_ty = tcx.type_of(field_def.did).instantiate_identity();
let field_span = tcx.def_span(field_def.did);
insert_required_predicates_to_be_wf(
tcx,
field_ty,
field_span,
&global_inferred_outlives,
&mut item_required_predicates,
&mut explicit_map,
);
}
}
DefKind::TyAlias if tcx.type_alias_is_lazy(item_did) => {
insert_required_predicates_to_be_wf(
tcx,
tcx.type_of(item_did).instantiate_identity(),
tcx.def_span(item_did),
&global_inferred_outlives,
&mut item_required_predicates,
&mut explicit_map,
);
}
_ => {}
};
// If new predicates were added (`local_predicate_map` has more
// predicates than the `global_inferred_outlives`), the new predicates
// might result in implied predicates for their parent types.
// Therefore mark `predicates_added` as true and which will ensure
// we walk the crates again and re-calculate predicates for all
// items.
let item_predicates_len: usize = global_inferred_outlives
.get(&item_did.to_def_id())
.map_or(0, |p| p.as_ref().skip_binder().len());
if item_required_predicates.len() > item_predicates_len {
predicates_added.push(item_did);
global_inferred_outlives
.insert(item_did.to_def_id(), ty::EarlyBinder::bind(item_required_predicates));
}
}
if predicates_added.is_empty() {
// We've reached a fixed point.
break;
} else if !tcx.recursion_limit().value_within_limit(i) {
let msg = if let &[id] = &predicates_added[..] {
format!("overflow computing implied lifetime bounds for `{}`", tcx.def_path_str(id),)
} else {
"overflow computing implied lifetime bounds".to_string()
};
tcx.dcx()
.struct_span_fatal(
predicates_added.iter().map(|id| tcx.def_span(*id)).collect::<Vec<_>>(),
msg,
)
.emit();
}
}
global_inferred_outlives
}
fn insert_required_predicates_to_be_wf<'tcx>(
tcx: TyCtxt<'tcx>,
ty: Ty<'tcx>,
span: Span,
global_inferred_outlives: &FxIndexMap<DefId, ty::EarlyBinder<'tcx, RequiredPredicates<'tcx>>>,
required_predicates: &mut RequiredPredicates<'tcx>,
explicit_map: &mut ExplicitPredicatesMap<'tcx>,
) {
for arg in ty.walk() {
let leaf_ty = match arg.kind() {
GenericArgKind::Type(ty) => ty,
// No predicates from lifetimes or constants, except potentially
// constants' types, but `walk` will get to them as well.
GenericArgKind::Lifetime(_) | GenericArgKind::Const(_) => continue,
};
match *leaf_ty.kind() {
ty::Ref(region, rty, _) => {
// The type is `&'a T` which means that we will have
// a predicate requirement of `T: 'a` (`T` outlives `'a`).
//
// We also want to calculate potential predicates for the `T`.
debug!("Ref");
insert_outlives_predicate(tcx, rty.into(), region, span, required_predicates);
}
ty::Adt(def, args) => {
// For ADTs (structs/enums/unions), we check inferred and explicit predicates.
debug!("Adt");
check_inferred_predicates(
tcx,
def.did(),
args,
global_inferred_outlives,
required_predicates,
);
check_explicit_predicates(
tcx,
def.did(),
args,
required_predicates,
explicit_map,
None,
);
}
ty::Alias(ty::Free, alias) => {
// This corresponds to a type like `Type<'a, T>`.
// We check inferred and explicit predicates.
debug!("Free");
check_inferred_predicates(
tcx,
alias.def_id,
alias.args,
global_inferred_outlives,
required_predicates,
);
check_explicit_predicates(
tcx,
alias.def_id,
alias.args,
required_predicates,
explicit_map,
None,
);
}
ty::Dynamic(obj, ..) => {
// This corresponds to `dyn Trait<..>`. In this case, we should
// use the explicit predicates as well.
debug!("Dynamic");
if let Some(ex_trait_ref) = obj.principal() {
// Here, we are passing the type `usize` as a
// placeholder value with the function
// `with_self_ty`, since there is no concrete type
// `Self` for a `dyn Trait` at this
// stage. Therefore when checking explicit
// predicates in `check_explicit_predicates` we
// need to ignore checking the explicit_map for
// Self type.
let args = ex_trait_ref.with_self_ty(tcx, tcx.types.usize).skip_binder().args;
check_explicit_predicates(
tcx,
ex_trait_ref.skip_binder().def_id,
args,
required_predicates,
explicit_map,
Some(tcx.types.self_param),
);
}
}
ty::Alias(ty::Projection, alias) => {
// This corresponds to a type like `<() as Trait<'a, T>>::Type`.
// We only use the explicit predicates of the trait but
// not the ones of the associated type itself.
debug!("Projection");
check_explicit_predicates(
tcx,
tcx.parent(alias.def_id),
alias.args,
required_predicates,
explicit_map,
None,
);
}
// FIXME(inherent_associated_types): Use the explicit predicates from the parent impl.
ty::Alias(ty::Inherent, _) => {}
_ => {}
}
}
}
/// Check the explicit predicates declared on the type.
///
/// ### Example
///
/// ```ignore (illustrative)
/// struct Outer<'a, T> {
/// field: Inner<T>,
/// }
///
/// struct Inner<U> where U: 'static, U: Outer {
/// // ...
/// }
/// ```
/// Here, we should fetch the explicit predicates, which
/// will give us `U: 'static` and `U: Outer`. The latter we
/// can ignore, but we will want to process `U: 'static`,
/// applying the instantiation as above.
fn check_explicit_predicates<'tcx>(
tcx: TyCtxt<'tcx>,
def_id: DefId,
args: &[GenericArg<'tcx>],
required_predicates: &mut RequiredPredicates<'tcx>,
explicit_map: &mut ExplicitPredicatesMap<'tcx>,
ignored_self_ty: Option<Ty<'tcx>>,
) {
debug!(
"check_explicit_predicates(def_id={:?}, \
args={:?}, \
explicit_map={:?}, \
required_predicates={:?}, \
ignored_self_ty={:?})",
def_id, args, explicit_map, required_predicates, ignored_self_ty,
);
let explicit_predicates = explicit_map.explicit_predicates_of(tcx, def_id);
for (outlives_predicate, &span) in explicit_predicates.as_ref().skip_binder() {
debug!("outlives_predicate = {outlives_predicate:?}");
// Careful: If we are inferring the effects of a `dyn Trait<..>`
// type, then when we look up the predicates for `Trait`,
// we may find some that reference `Self`. e.g., perhaps the
// definition of `Trait` was:
//
// ```
// trait Trait<'a, T> where Self: 'a { .. }
// ```
//
// we want to ignore such predicates here, because
// there is no type parameter for them to affect. Consider
// a struct containing `dyn Trait`:
//
// ```
// struct MyStruct<'x, X> { field: Box<dyn Trait<'x, X>> }
// ```
//
// The `where Self: 'a` predicate refers to the *existential, hidden type*
// that is represented by the `dyn Trait`, not to the `X` type parameter
// (or any other generic parameter) declared on `MyStruct`.
//
// Note that we do this check for self **before** applying `args`. In the
// case that `args` come from a `dyn Trait` type, our caller will have
// included `Self = usize` as the value for `Self`. If we were
// to apply the args, and not filter this predicate, we might then falsely
// conclude that e.g., `X: 'x` was a reasonable inferred requirement.
//
// Another similar case is where we have an inferred
// requirement like `<Self as Trait>::Foo: 'b`. We presently
// ignore such requirements as well (cc #54467)-- though
// conceivably it might be better if we could extract the `Foo
// = X` binding from the object type (there must be such a
// binding) and thus infer an outlives requirement that `X:
// 'b`.
if let Some(self_ty) = ignored_self_ty
&& let GenericArgKind::Type(ty) = outlives_predicate.0.kind()
&& ty.walk().any(|arg| arg == self_ty.into())
{
debug!("skipping self ty = {ty:?}");
continue;
}
let predicate = explicit_predicates.rebind(*outlives_predicate).instantiate(tcx, args);
debug!("predicate = {predicate:?}");
insert_outlives_predicate(tcx, predicate.0, predicate.1, span, required_predicates);
}
}
/// Check the inferred predicates declared on the type.
///
/// ### Example
///
/// ```ignore (illustrative)
/// struct Outer<'a, T> {
/// outer: Inner<'a, T>,
/// }
///
/// struct Inner<'b, U> {
/// inner: &'b U,
/// }
/// ```
///
/// Here, when processing the type of field `outer`, we would request the
/// set of implicit predicates computed for `Inner` thus far. This will
/// initially come back empty, but in next round we will get `U: 'b`.
/// We then apply the instantiation `['b => 'a, U => T]` and thus get the
/// requirement that `T: 'a` holds for `Outer`.
fn check_inferred_predicates<'tcx>(
tcx: TyCtxt<'tcx>,
def_id: DefId,
args: ty::GenericArgsRef<'tcx>,
global_inferred_outlives: &FxIndexMap<DefId, ty::EarlyBinder<'tcx, RequiredPredicates<'tcx>>>,
required_predicates: &mut RequiredPredicates<'tcx>,
) {
// Load the current set of inferred and explicit predicates from `global_inferred_outlives`
// and filter the ones that are `TypeOutlives`.
let Some(predicates) = global_inferred_outlives.get(&def_id) else {
return;
};
for (&predicate, &span) in predicates.as_ref().skip_binder() {
// `predicate` is `U: 'b` in the example above.
// So apply the instantiation to get `T: 'a`.
let ty::OutlivesPredicate(arg, region) =
predicates.rebind(predicate).instantiate(tcx, args);
insert_outlives_predicate(tcx, arg, region, span, required_predicates);
}
} | rust | github | https://github.com/rust-lang/rust | compiler/rustc_hir_analysis/src/outlives/implicit_infer.rs |
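`infer_predicates` above is a classic fixed-point loop: re-run inference over every item until one full pass adds nothing, and abort via the recursion limit if the facts keep growing. The loop shape, stripped of the rustc types, looks like this (hypothetical Python sketch, not the compiler's code):

```python
def fixed_point(items, infer_one, limit=128):
    # facts maps each item to the set of predicates inferred for it so far.
    facts = {item: set() for item in items}
    for _ in range(limit):
        changed = False
        for item in items:
            new = infer_one(item, facts)
            # Mirror the length comparison used above: only a strictly
            # larger set counts as progress.
            if len(new) > len(facts[item]):
                facts[item] = new
                changed = True
        if not changed:
            return facts  # fixed point reached
    raise RuntimeError('overflow computing fixed point')
```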
import os
_proc_status = '/proc/%d/status' % os.getpid()
#_scale = {'kB': 1024.0, 'mB': 1024.0*1024.0,
# 'KB': 1024.0, 'MB': 1024.0*1024.0}
_scale = {'kB': 1, 'mB': 1024, 'gB': 1024 * 1024,
'KB': 1, 'MB': 1024, 'GB': 1024 * 1024}
def _VmB(VmKey):
'''Private.
'''
global _proc_status, _scale
# get pseudo file /proc/<pid>/status
    try:
        with open(_proc_status) as t:
            v = t.read()
    except IOError:
        return float('nan') # non-Linux?
# get VmKey line e.g. 'VmRSS: 9999 kB\n ...'
i = v.index(VmKey)
v = v[i:].split(None, 3) # whitespace
if len(v) < 3:
return float('nan') # invalid format?
# convert Vm value to bytes
# return float(v[1]) * _scale[v[2]]
return int(v[1]) * _scale[v[2]]
def memory(since=0):
'''Return memory usage in kilobytes.
'''
return _VmB('VmSize:') - since
def resident(since=0):
'''Return resident memory usage in kilobytes.
'''
return _VmB('VmRSS:') - since
def memorypeak(since=0):
'''Return memory usage peak in kilobytes.
'''
try:
return _VmB('VmPeak:') - since
    except ValueError:
return float('nan') # old Linux?
def residentpeak(since=0):
'''Return resident memory usage peak in kilobytes.
'''
try:
return _VmB('VmHWM:') - since
    except ValueError:
return float('nan') # old Linux?
def stacksize(since=0):
'''Return stack size in kilobytes.
'''
return _VmB('VmStk:') - since | unknown | codeparrot/codeparrot-clean | ||
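The `/proc` parsing inside `_VmB` can be exercised against a canned status snippet; this standalone sketch duplicates the parsing so it runs anywhere (the real module reads `/proc/<pid>/status`):

```python
_scale = {'kB': 1, 'mB': 1024, 'gB': 1024 * 1024,
          'KB': 1, 'MB': 1024, 'GB': 1024 * 1024}

def vm_value(key, status_text):
    # Find e.g. 'VmRSS:  10240 kB' and convert the value to kilobytes.
    i = status_text.index(key)
    fields = status_text[i:].split(None, 3)
    if len(fields) < 3:
        return float('nan')  # invalid format?
    return int(fields[1]) * _scale[fields[2]]

sample = 'Name: python\nVmPeak:  204800 kB\nVmRSS:  10240 kB\n'
vm_value('VmRSS:', sample)  # -> 10240
```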
""" tests for otupdate.buildroot.file_actions
Checks functionality and error cases for the update utility functions there
"""
import binascii
import hashlib
import os
import subprocess
from unittest import mock
import zipfile
import pytest
from otupdate.buildroot import file_actions
def test_unzip(downloaded_update_file):
cb = mock.Mock()
paths, sizes = file_actions.unzip_update(downloaded_update_file, cb,
file_actions.UPDATE_FILES,
file_actions.UPDATE_FILES)
assert sorted(list(paths.keys())) == sorted(file_actions.UPDATE_FILES)
for filename, path in paths.items():
assert os.path.dirname(path) == os.path.dirname(downloaded_update_file)
with zipfile.ZipFile(downloaded_update_file) as zf:
for filename, size in sizes.items():
assert zf.getinfo(filename).file_size == size
for filename, path in paths.items():
assert zf.read(filename) == open(path, 'rb').read()
# We should have callback calls for
# - every chunk (including the fractional one at the end) of rootfs
calls = sizes[file_actions.ROOTFS_NAME] // 1024
if calls * 1024 != sizes[file_actions.ROOTFS_NAME]:
calls += 1
# - the two files that are less than a chunk
calls += 2
assert cb.call_count == calls
@pytest.mark.exclude_rootfs_ext4
def test_unzip_requires_rootfs(downloaded_update_file):
cb = mock.Mock()
with pytest.raises(file_actions.FileMissing):
file_actions.unzip_update(downloaded_update_file, cb,
file_actions.UPDATE_FILES,
file_actions.UPDATE_FILES)
@pytest.mark.exclude_rootfs_ext4_hash
def test_unzip_requires_hash(downloaded_update_file):
cb = mock.Mock()
with pytest.raises(file_actions.FileMissing):
file_actions.unzip_update(downloaded_update_file, cb,
file_actions.UPDATE_FILES,
file_actions.UPDATE_FILES)
@pytest.mark.exclude_rootfs_ext4_hash_sig
def test_unzip_does_not_require_sig(downloaded_update_file):
cb = mock.Mock()
file_actions.unzip_update(downloaded_update_file, cb,
file_actions.UPDATE_FILES,
[file_actions.ROOTFS_NAME,
file_actions.ROOTFS_HASH_NAME])
@pytest.mark.exclude_rootfs_ext4_hash_sig
def test_unzip_requires_sig(downloaded_update_file):
cb = mock.Mock()
with pytest.raises(file_actions.FileMissing):
file_actions.unzip_update(downloaded_update_file, cb,
file_actions.UPDATE_FILES,
file_actions.UPDATE_FILES)
def test_hash(extracted_update_file):
cb = mock.Mock()
hash_output = file_actions.hash_file(
os.path.join(extracted_update_file, 'rootfs.ext4'),
cb)
assert hash_output == open(
os.path.join(extracted_update_file, 'rootfs.ext4.hash'), 'rb').read()
cb.assert_called()
def test_verify_signature_ok(extracted_update_file, testing_cert):
file_actions.verify_signature(os.path.join(extracted_update_file,
'rootfs.ext4.hash'),
os.path.join(extracted_update_file,
'rootfs.ext4.hash.sig'),
testing_cert)
@pytest.mark.bad_sig
def test_verify_signature_catches_bad_sig(
extracted_update_file, testing_cert):
with pytest.raises(file_actions.SignatureMismatch):
file_actions.verify_signature(os.path.join(extracted_update_file,
'rootfs.ext4.hash'),
os.path.join(extracted_update_file,
'rootfs.ext4.hash.sig'),
testing_cert)
@pytest.mark.exclude_rootfs_ext4_hash_sig
def test_validate_hash_only(downloaded_update_file):
cb = mock.Mock()
assert file_actions.validate_update(downloaded_update_file,
cb,
cert_path=None)
# We should have a callback call for
# - the unzips (see test_unzip for calculation)
with zipfile.ZipFile(downloaded_update_file) as zf:
rootfs_size = zf.getinfo(file_actions.ROOTFS_NAME).file_size
rootfs_calls = rootfs_size // 1024
if rootfs_calls * 1024 != rootfs_size:
rootfs_calls += 1
# only adding 1 extra call because we don’t have a signature file
calls = rootfs_calls + 1
# - the hashes, one for each chunk, the same chunks as unzip
calls += rootfs_calls
assert cb.call_count == calls
def test_validate(downloaded_update_file, testing_cert):
cb = mock.Mock()
assert file_actions.validate_update(downloaded_update_file,
cb,
cert_path=testing_cert)
# We should have a callback call for
# - the unzips (see test_unzip for calculation)
with zipfile.ZipFile(downloaded_update_file) as zf:
rootfs_size = zf.getinfo(file_actions.ROOTFS_NAME).file_size
rootfs_calls = rootfs_size // 1024
if rootfs_calls * 1024 != rootfs_size:
rootfs_calls += 1
calls = rootfs_calls + 2
# - the hashes, one for each chunk, the same chunks as unzip
calls += rootfs_calls
assert cb.call_count == calls
@pytest.mark.bad_hash
def test_validate_catches_bad_hash(downloaded_update_file):
cb = mock.Mock()
with pytest.raises(file_actions.HashMismatch):
file_actions.validate_update(downloaded_update_file, cb, None)
@pytest.mark.bad_sig
def test_validate_catches_bad_sig(downloaded_update_file, testing_cert):
cb = mock.Mock()
with pytest.raises(file_actions.SignatureMismatch):
file_actions.validate_update(downloaded_update_file, cb,
testing_cert)
@pytest.mark.exclude_rootfs_ext4_hash_sig
def test_validate_catches_missing_sig(downloaded_update_file, testing_cert):
cb = mock.Mock()
with pytest.raises(file_actions.FileMissing):
file_actions.validate_update(downloaded_update_file, cb, testing_cert)
@pytest.mark.exclude_rootfs_ext4_hash
def test_validate_catches_missing_hash(downloaded_update_file, testing_cert):
cb = mock.Mock()
with pytest.raises(file_actions.FileMissing):
file_actions.validate_update(downloaded_update_file, cb, testing_cert)
@pytest.mark.exclude_rootfs_ext4
def test_validate_catches_missing_image(downloaded_update_file, testing_cert):
cb = mock.Mock()
with pytest.raises(file_actions.FileMissing):
file_actions.validate_update(downloaded_update_file, cb, testing_cert)
def test_write_update(extracted_update_file, testing_partition):
img = os.path.join(extracted_update_file, 'rootfs.ext4')
cb = mock.Mock()
file_actions.write_update(img, cb)
    filesize = os.path.getsize(testing_partition)
call_count = filesize // 1024
if call_count * 1024 != filesize:
call_count += 1
assert cb.call_count == call_count
hasher = hashlib.sha256()
hasher.update(open(testing_partition, 'rb').read())
hash_val = binascii.hexlify(hasher.digest())
assert hash_val\
== open(
os.path.join(extracted_update_file, 'rootfs.ext4.hash'),
'rb').read().strip()
def test_commit_update(monkeypatch):
unused = file_actions.RootPartitions.TWO
new = file_actions.RootPartitions.TWO
monkeypatch.setattr(file_actions, '_find_unused_partition',
lambda: unused)
monkeypatch.setattr(file_actions, '_switch_partition',
lambda: new)
file_actions.commit_update()
def test_commit_mismatch(monkeypatch):
unused = file_actions.RootPartitions.THREE
new = file_actions.RootPartitions.TWO
monkeypatch.setattr(file_actions, '_find_unused_partition',
lambda: unused)
monkeypatch.setattr(file_actions, '_switch_partition',
lambda: new)
with pytest.raises(RuntimeError):
file_actions.commit_update()
def test_mount_update(monkeypatch, tmpdir):
subprocess_mock = mock.Mock()
subprocess_mock.return_value = 0
monkeypatch.setattr(subprocess, 'check_output', subprocess_mock)
unused = file_actions.RootPartitions.THREE
monkeypatch.setattr(file_actions, '_find_unused_partition',
lambda: unused)
monkeypatch.setattr(file_actions, '_mountpoint_root',
lambda: tmpdir)
mountpoint = None
with file_actions.mount_update() as mount:
subprocess_mock.assert_called_once_with(
['mount', unused.value.path, mount])
subprocess_mock.reset_mock()
subprocess_mock.return_value = 0
mountpoint = mount
subprocess_mock.assert_called_once_with(
['umount', mountpoint])
def test_write_machine_id(monkeypatch, tmpdir):
new = os.path.join(tmpdir, 'new_root')
old = os.path.join(tmpdir, 'old_root')
os.makedirs(os.path.join(new, 'etc'))
os.makedirs(os.path.join(old, 'etc'))
mid = '78a59366e08f4650bd2212afd7777eab\n'
open(os.path.join(old, 'etc', 'machine-id'), 'w').write(mid)
file_actions.write_machine_id(old, new)
assert open(os.path.join(new, 'etc', 'machine-id')).read() == mid | unknown | codeparrot/codeparrot-clean | ||
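Several tests above derive their expected callback counts from 1024-byte chunking (`size // 1024`, rounded up). A standalone sketch of that hash-in-chunks pattern (hypothetical `hash_stream` helper, not the otupdate API) makes the arithmetic concrete:

```python
import hashlib
import io

def hash_stream(stream, progress_cb, chunk_size=1024):
    # SHA-256 the stream chunk by chunk, invoking the callback once per
    # chunk read -- including the final, possibly fractional one.
    hasher = hashlib.sha256()
    total = 0
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        hasher.update(chunk)
        total += len(chunk)
        progress_cb(total)
    return hasher.hexdigest()

# 2500 bytes -> chunks of 1024, 1024, 452 -> three callback invocations.
calls = []
hash_stream(io.BytesIO(b'x' * 2500), calls.append)
```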
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division)
__metaclass__ = type
import json
import difflib
import warnings
from copy import deepcopy
from six import string_types
from ansible import constants as C
from ansible.utils.unicode import to_unicode
__all__ = ["CallbackBase"]
class CallbackBase:
'''
This is a base ansible callback class that does nothing. New callbacks should
use this class as a base and override any callback methods they wish to execute
custom actions.
'''
# FIXME: the list of functions here needs to be updated once we have
# finalized the list of callback methods used in the default callback
def __init__(self, display):
self._display = display
if self._display.verbosity >= 4:
name = getattr(self, 'CALLBACK_NAME', 'unnamed')
ctype = getattr(self, 'CALLBACK_TYPE', 'old')
version = getattr(self, 'CALLBACK_VERSION', '1.0')
self._display.vvvv('Loaded callback %s of type %s, v%s' % (name, ctype, version))
def _dump_results(self, result, indent=None, sort_keys=True):
if result.get('_ansible_no_log', False):
return json.dumps(dict(censored="the output has been hidden due to the fact that 'no_log: true' was specified for this result"))
if not indent and '_ansible_verbose_always' in result and result['_ansible_verbose_always']:
indent = 4
# All result keys stating with _ansible_ are internal, so remove them from the result before we output anything.
        for k in list(result.keys()):
if isinstance(k, string_types) and k.startswith('_ansible_'):
del result[k]
return json.dumps(result, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
def _handle_warnings(self, res):
''' display warnings, if enabled and any exist in the result '''
if C.COMMAND_WARNINGS and 'warnings' in res and res['warnings']:
for warning in res['warnings']:
self._display.warning(warning)
def _get_diff(self, difflist):
if not isinstance(difflist, list):
difflist = [difflist]
ret = []
for diff in difflist:
try:
with warnings.catch_warnings():
warnings.simplefilter('ignore')
ret = []
if 'dst_binary' in diff:
ret.append("diff skipped: destination file appears to be binary\n")
if 'src_binary' in diff:
ret.append("diff skipped: source file appears to be binary\n")
if 'dst_larger' in diff:
ret.append("diff skipped: destination file size is greater than %d\n" % diff['dst_larger'])
if 'src_larger' in diff:
ret.append("diff skipped: source file size is greater than %d\n" % diff['src_larger'])
if 'before' in diff and 'after' in diff:
if 'before_header' in diff:
before_header = "before: %s" % diff['before_header']
else:
before_header = 'before'
if 'after_header' in diff:
after_header = "after: %s" % diff['after_header']
else:
after_header = 'after'
differ = difflib.unified_diff(to_unicode(diff['before']).splitlines(True), to_unicode(diff['after']).splitlines(True), before_header, after_header, '', '', 10)
ret.extend(list(differ))
ret.append('\n')
return u"".join(ret)
            except UnicodeDecodeError:
                ret.append(">> the files are different, but the diff library cannot compare unicode strings\n\n")
                return u"".join(ret)
def _process_items(self, result):
for res in result._result['results']:
newres = deepcopy(result)
newres._result = res
if 'failed' in res and res['failed']:
self.v2_playbook_item_on_failed(newres)
elif 'skipped' in res and res['skipped']:
self.v2_playbook_item_on_skipped(newres)
else:
self.v2_playbook_item_on_ok(newres)
#del result._result['results']
def set_play_context(self, play_context):
pass
def on_any(self, *args, **kwargs):
pass
def runner_on_failed(self, host, res, ignore_errors=False):
pass
def runner_on_ok(self, host, res):
pass
def runner_on_skipped(self, host, item=None):
pass
def runner_on_unreachable(self, host, res):
pass
def runner_on_no_hosts(self):
pass
def runner_on_async_poll(self, host, res, jid, clock):
pass
def runner_on_async_ok(self, host, res, jid):
pass
def runner_on_async_failed(self, host, res, jid):
pass
def playbook_on_start(self):
pass
def playbook_on_notify(self, host, handler):
pass
def playbook_on_no_hosts_matched(self):
pass
def playbook_on_no_hosts_remaining(self):
pass
def playbook_on_task_start(self, name, is_conditional):
pass
def playbook_on_vars_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None):
pass
def playbook_on_setup(self):
pass
def playbook_on_import_for_host(self, host, imported_file):
pass
def playbook_on_not_import_for_host(self, host, missing_file):
pass
def playbook_on_play_start(self, name):
pass
def playbook_on_stats(self, stats):
pass
def on_file_diff(self, host, diff):
pass
####### V2 METHODS, by default they call v1 counterparts if possible ######
def v2_on_any(self, *args, **kwargs):
self.on_any(args, kwargs)
def v2_runner_on_failed(self, result, ignore_errors=False):
host = result._host.get_name()
self.runner_on_failed(host, result._result, ignore_errors)
def v2_runner_on_ok(self, result):
host = result._host.get_name()
self.runner_on_ok(host, result._result)
def v2_runner_on_skipped(self, result):
if C.DISPLAY_SKIPPED_HOSTS:
host = result._host.get_name()
#FIXME, get item to pass through
item = None
self.runner_on_skipped(host, item)
def v2_runner_on_unreachable(self, result):
host = result._host.get_name()
self.runner_on_unreachable(host, result._result)
def v2_runner_on_no_hosts(self, task):
self.runner_on_no_hosts()
def v2_runner_on_async_poll(self, result):
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
#FIXME, get real clock
clock = 0
self.runner_on_async_poll(host, result._result, jid, clock)
def v2_runner_on_async_ok(self, result):
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
self.runner_on_async_ok(host, result._result, jid)
def v2_runner_on_async_failed(self, result):
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
self.runner_on_async_failed(host, result._result, jid)
def v2_runner_on_file_diff(self, result, diff):
        pass  # no v1 correspondence
def v2_playbook_on_start(self):
self.playbook_on_start()
def v2_playbook_on_notify(self, result, handler):
host = result._host.get_name()
self.playbook_on_notify(host, handler)
def v2_playbook_on_no_hosts_matched(self):
self.playbook_on_no_hosts_matched()
def v2_playbook_on_no_hosts_remaining(self):
self.playbook_on_no_hosts_remaining()
def v2_playbook_on_task_start(self, task, is_conditional):
self.playbook_on_task_start(task, is_conditional)
def v2_playbook_on_cleanup_task_start(self, task):
        pass  # no v1 correspondence
def v2_playbook_on_handler_task_start(self, task):
        pass  # no v1 correspondence
def v2_playbook_on_vars_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None):
self.playbook_on_vars_prompt(varname, private, prompt, encrypt, confirm, salt_size, salt, default)
def v2_playbook_on_setup(self):
self.playbook_on_setup()
def v2_playbook_on_import_for_host(self, result, imported_file):
host = result._host.get_name()
self.playbook_on_import_for_host(host, imported_file)
def v2_playbook_on_not_import_for_host(self, result, missing_file):
host = result._host.get_name()
self.playbook_on_not_import_for_host(host, missing_file)
def v2_playbook_on_play_start(self, play):
self.playbook_on_play_start(play.name)
def v2_playbook_on_stats(self, stats):
self.playbook_on_stats(stats)
def v2_on_file_diff(self, result):
host = result._host.get_name()
if 'diff' in result._result:
self.on_file_diff(host, result._result['diff'])
def v2_playbook_on_item_ok(self, result):
pass # no v1
def v2_playbook_on_item_failed(self, result):
pass # no v1
def v2_playbook_on_item_skipped(self, result):
pass # no v1 | unknown | codeparrot/codeparrot-clean | ||
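The v2 methods above all follow the same shape: unwrap the task result object, then delegate to the v1-era hook. A standalone sketch of that delegation pattern (the `FakeHost`/`FakeResult` classes are simplified stand-ins for illustration, not the real Ansible API):

```python
class FakeHost:
    """Stand-in for an Ansible host object (illustrative only)."""
    def __init__(self, name):
        self._name = name

    def get_name(self):
        return self._name


class FakeResult:
    """Stand-in for a v2 task result wrapping a host and a result dict."""
    def __init__(self, host, result):
        self._host = host
        self._result = result


class CallbackBase:
    """v1-style hooks receive (host, res); v2 hooks receive a result object."""
    def runner_on_ok(self, host, res):
        pass

    # The v2 method unwraps the result object and delegates to the v1 hook.
    def v2_runner_on_ok(self, result):
        host = result._host.get_name()
        self.runner_on_ok(host, result._result)


class RecordingCallback(CallbackBase):
    def __init__(self):
        self.calls = []

    def runner_on_ok(self, host, res):
        self.calls.append((host, res))


cb = RecordingCallback()
cb.v2_runner_on_ok(FakeResult(FakeHost("web01"), {"changed": False}))
print(cb.calls)  # [('web01', {'changed': False})]
```

Subclasses that only implement v1 hooks keep working because every v2 entry point funnels into its v1 counterpart this way.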
/*
* Copyright 2002-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.jmx.access;
import java.io.IOException;
import java.net.MalformedURLException;
import java.util.Arrays;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXServiceURL;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.jspecify.annotations.Nullable;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.jmx.JmxException;
import org.springframework.jmx.MBeanServerNotFoundException;
import org.springframework.jmx.support.NotificationListenerHolder;
import org.springframework.util.CollectionUtils;
/**
* Registrar object that associates a specific {@link javax.management.NotificationListener}
* with one or more MBeans in an {@link javax.management.MBeanServer}
* (typically via a {@link javax.management.MBeanServerConnection}).
*
* @author Juergen Hoeller
* @since 2.5.2
* @see #setServer
* @see #setMappedObjectNames
* @see #setNotificationListener
*/
public class NotificationListenerRegistrar extends NotificationListenerHolder
implements InitializingBean, DisposableBean {
/** Logger available to subclasses. */
protected final Log logger = LogFactory.getLog(getClass());
private final ConnectorDelegate connector = new ConnectorDelegate();
private @Nullable MBeanServerConnection server;
private @Nullable JMXServiceURL serviceUrl;
private @Nullable Map<String, ?> environment;
private @Nullable String agentId;
private ObjectName @Nullable [] actualObjectNames;
/**
* Set the {@code MBeanServerConnection} used to connect to the
* MBean which all invocations are routed to.
*/
public void setServer(MBeanServerConnection server) {
this.server = server;
}
/**
* Specify the environment for the JMX connector.
* @see javax.management.remote.JMXConnectorFactory#connect(javax.management.remote.JMXServiceURL, java.util.Map)
*/
public void setEnvironment(@Nullable Map<String, ?> environment) {
this.environment = environment;
}
/**
* Allow {@code Map} access to the environment to be set for the connector,
* with the option to add or override specific entries.
* <p>Useful for specifying entries directly, for example via
* {@code environment[myKey]}. This is particularly useful for
* adding or overriding entries in child bean definitions.
*/
public @Nullable Map<String, ?> getEnvironment() {
return this.environment;
}
/**
* Set the service URL of the remote {@code MBeanServer}.
*/
public void setServiceUrl(String url) throws MalformedURLException {
this.serviceUrl = new JMXServiceURL(url);
}
/**
* Set the agent id of the {@code MBeanServer} to locate.
* <p>Default is none. If specified, this will result in an
* attempt being made to locate the attendant MBeanServer, unless
* the {@link #setServiceUrl "serviceUrl"} property has been set.
	 * <p>Specifying the empty String indicates the platform MBeanServer.
	 * @see javax.management.MBeanServerFactory#findMBeanServer(String)
*/
public void setAgentId(String agentId) {
this.agentId = agentId;
}
@Override
public void afterPropertiesSet() {
if (getNotificationListener() == null) {
throw new IllegalArgumentException("Property 'notificationListener' is required");
}
if (CollectionUtils.isEmpty(this.mappedObjectNames)) {
throw new IllegalArgumentException("Property 'mappedObjectName' is required");
}
prepare();
}
/**
* Registers the specified {@code NotificationListener}.
* <p>Ensures that an {@code MBeanServerConnection} is configured and attempts
* to detect a local connection if one is not supplied.
*/
public void prepare() {
if (this.server == null) {
this.server = this.connector.connect(this.serviceUrl, this.environment, this.agentId);
}
try {
this.actualObjectNames = getResolvedObjectNames();
if (this.actualObjectNames != null) {
if (logger.isDebugEnabled()) {
logger.debug("Registering NotificationListener for MBeans " + Arrays.toString(this.actualObjectNames));
}
for (ObjectName actualObjectName : this.actualObjectNames) {
this.server.addNotificationListener(
actualObjectName, getNotificationListener(), getNotificationFilter(), getHandback());
}
}
}
catch (IOException ex) {
throw new MBeanServerNotFoundException(
"Could not connect to remote MBeanServer at URL [" + this.serviceUrl + "]", ex);
}
catch (Exception ex) {
throw new JmxException("Unable to register NotificationListener", ex);
}
}
/**
* Unregisters the specified {@code NotificationListener}.
*/
@Override
public void destroy() {
try {
if (this.server != null && this.actualObjectNames != null) {
for (ObjectName actualObjectName : this.actualObjectNames) {
try {
this.server.removeNotificationListener(
actualObjectName, getNotificationListener(), getNotificationFilter(), getHandback());
}
catch (Exception ex) {
if (logger.isDebugEnabled()) {
logger.debug("Unable to unregister NotificationListener", ex);
}
}
}
}
}
finally {
this.connector.close();
}
}
} | java | github | https://github.com/spring-projects/spring-framework | spring-context/src/main/java/org/springframework/jmx/access/NotificationListenerRegistrar.java |
#!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""64-bit fingerprint support for strings.
Usage:
  from extern import FP
  print 'Fingerprint is %ld' % FP.FingerPrint('Hello world!')
"""
try:
  import hashlib
  _new_md5 = hashlib.md5
except ImportError:
  import md5
  _new_md5 = md5.new
def _UnsignedFingerPrintImpl(str, encoding='utf-8'):
"""Generate a 64-bit fingerprint by taking the first half of the md5
of the string.
"""
hex128 = _new_md5(str).hexdigest()
int64 = long(hex128[:16], 16)
return int64
def UnsignedFingerPrint(str, encoding='utf-8'):
"""Generate a 64-bit fingerprint.
The default implementation uses _UnsignedFingerPrintImpl, which
takes the first half of the md5 of the string, but the
implementation may be switched using SetUnsignedFingerPrintImpl.
"""
return _UnsignedFingerPrintImpl(str, encoding)
def FingerPrint(str, encoding='utf-8'):
fp = UnsignedFingerPrint(str, encoding=encoding)
# interpret fingerprint as signed longs
if fp & 0x8000000000000000L:
fp = - ((~fp & 0xFFFFFFFFFFFFFFFFL) + 1)
return fp
def UseUnsignedFingerPrintFromModule(module_name):
"""Imports module_name and replaces UnsignedFingerPrint in the
current module with the function of the same name from the imported
module.
Returns the function object previously known as
grit.extern.FP.UnsignedFingerPrint.
"""
hash_module = __import__(module_name, fromlist=[module_name])
return SetUnsignedFingerPrint(hash_module.UnsignedFingerPrint)
def SetUnsignedFingerPrint(function_object):
"""Sets grit.extern.FP.UnsignedFingerPrint to point to
function_object.
Returns the function object previously known as
grit.extern.FP.UnsignedFingerPrint.
"""
global UnsignedFingerPrint
original_function_object = UnsignedFingerPrint
UnsignedFingerPrint = function_object
return original_function_object | unknown | codeparrot/codeparrot-clean | ||
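The module above targets Python 2 (`long`, byte-string-only hashing). For reference, a Python 3 sketch of the same fingerprint algorithm — take the first 64 bits of the md5 digest, then reinterpret them as a signed two's-complement integer (function names here are illustrative, not part of grit):

```python
import hashlib


def unsigned_fingerprint(data: bytes) -> int:
    """First 64 bits of the md5 digest, as an unsigned integer."""
    return int(hashlib.md5(data).hexdigest()[:16], 16)


def fingerprint(data: bytes) -> int:
    """Reinterpret the unsigned 64-bit fingerprint as a signed long."""
    fp = unsigned_fingerprint(data)
    if fp & 0x8000000000000000:
        fp = -((~fp & 0xFFFFFFFFFFFFFFFF) + 1)
    return fp


# The signed reinterpretation is equivalent to two's-complement decoding
# of the first 8 digest bytes.
assert fingerprint(b'Hello world!') == int.from_bytes(
    hashlib.md5(b'Hello world!').digest()[:8], 'big', signed=True)
```

`int.from_bytes(..., signed=True)` makes the intent explicit and avoids the manual bit-twiddling the Python 2 code needed.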
{
"title": "V18 Gauge Options Migration Test Dashboard",
"schemaVersion": 17,
"panels": [
{
"id": 1,
"type": "gauge",
"title": "Complete Gauge Panel",
"options-gauge": {
"unit": "ms",
"stat": "last",
"decimals": 2,
"prefix": "Value: ",
"suffix": " ms",
"thresholds": [
{"color": "green", "value": 0},
{"color": "yellow", "value": 50},
{"color": "red", "value": 100}
]
}
},
{
"id": 2,
"type": "gauge",
"title": "Partial Gauge Panel",
"options-gauge": {
"unit": "percent",
"decimals": 1
}
},
{
"id": 3,
"type": "gauge",
"title": "Buggy Gauge Panel",
"options-gauge": {
"unit": "bytes",
"options": "this should be deleted",
"stat": "avg",
"decimals": 0
}
},
{
"id": 4,
"type": "gauge",
"title": "Custom Properties Gauge Panel",
"options-gauge": {
"unit": "short",
"customProperty": "customValue",
"anotherProp": 42,
"thresholds": [
{"color": "blue", "value": 10}
]
}
},
{
"id": 5,
"type": "graph",
"title": "Non-Gauge Panel",
"options": {
"legend": {
"show": true
}
}
}
]
} | json | github | https://github.com/grafana/grafana | apps/dashboard/pkg/migration/testdata/input/v18.gauge_options.json |
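This fixture exercises a gauge-options migration; judging from the panel data, the migration renames `options-gauge` to `options` and drops the buggy nested `options` key ("this should be deleted") while leaving non-gauge panels alone. A hypothetical Python sketch of that transformation — inferred from the fixture, not Grafana's actual migration code:

```python
def migrate_gauge_panel(panel: dict) -> dict:
    """Rename 'options-gauge' -> 'options' on gauge panels, dropping the
    stray nested 'options' key seen in the buggy fixture panel."""
    if panel.get("type") == "gauge" and "options-gauge" in panel:
        options = panel.pop("options-gauge")
        options.pop("options", None)  # remove the buggy leftover key
        panel["options"] = options
    return panel


buggy = {
    "id": 3,
    "type": "gauge",
    "options-gauge": {"unit": "bytes",
                      "options": "this should be deleted",
                      "stat": "avg"},
}
migrated = migrate_gauge_panel(buggy)
print(sorted(migrated["options"]))  # ['stat', 'unit']
```

Custom properties (as in panel 4) survive unchanged because the whole `options-gauge` dict is moved, not rebuilt field by field.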
#!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import os
import sys
import re
import tarfile
import tempfile
import unittest
from sdktools_test import SdkToolsTestCase
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
BUILD_TOOLS_DIR = os.path.dirname(SCRIPT_DIR)
TOOLS_DIR = os.path.join(os.path.dirname(BUILD_TOOLS_DIR), 'tools')
sys.path.extend([BUILD_TOOLS_DIR, TOOLS_DIR])
import manifest_util
import oshelpers
class TestCommands(SdkToolsTestCase):
def setUp(self):
self.SetupDefault()
def _AddDummyBundle(self, manifest, bundle_name):
bundle = manifest_util.Bundle(bundle_name)
bundle.revision = 1337
bundle.version = 23
bundle.description = bundle_name
bundle.stability = 'beta'
bundle.recommended = 'no'
bundle.repath = bundle_name
archive = self._MakeDummyArchive(bundle_name)
bundle.AddArchive(archive)
manifest.SetBundle(bundle)
# Need to get the bundle from the manifest -- it doesn't use the one we
# gave it.
return manifest.GetBundle(bundle_name)
def _MakeDummyArchive(self, bundle_name, tarname=None, filename='dummy.txt'):
tarname = (tarname or bundle_name) + '.tar.bz2'
temp_dir = tempfile.mkdtemp(prefix='archive')
try:
dummy_path = os.path.join(temp_dir, filename)
with open(dummy_path, 'w') as stream:
stream.write('Dummy stuff for %s' % bundle_name)
# Build the tarfile directly into the server's directory.
tar_path = os.path.join(self.basedir, tarname)
tarstream = tarfile.open(tar_path, 'w:bz2')
try:
tarstream.add(dummy_path, os.path.join(bundle_name, filename))
finally:
tarstream.close()
with open(tar_path, 'rb') as archive_stream:
sha1, size = manifest_util.DownloadAndComputeHash(archive_stream)
archive = manifest_util.Archive(manifest_util.GetHostOS())
archive.url = self.server.GetURL(os.path.basename(tar_path))
archive.size = size
archive.checksum = sha1
return archive
finally:
oshelpers.Remove(['-rf', temp_dir])
def testInfoBasic(self):
"""The info command should display information about the given bundle."""
self._WriteManifest()
output = self._Run(['info', 'sdk_tools'])
# Make sure basic information is there
bundle = self.manifest.GetBundle('sdk_tools')
    archive = bundle.GetHostOSArchive()
self.assertTrue(bundle.name in output)
self.assertTrue(bundle.description in output)
self.assertTrue(str(bundle.revision) in output)
self.assertTrue(str(archive.size) in output)
self.assertTrue(archive.checksum in output)
self.assertTrue(bundle.stability in output)
def testInfoUnknownBundle(self):
"""The info command should notify the user of unknown bundles."""
self._WriteManifest()
bogus_bundle = 'foobar'
output = self._Run(['info', bogus_bundle])
self.assertTrue(re.search(r'[uU]nknown', output))
self.assertTrue(bogus_bundle in output)
def testInfoMultipleBundles(self):
"""The info command should support listing multiple bundles."""
self._AddDummyBundle(self.manifest, 'pepper_23')
self._AddDummyBundle(self.manifest, 'pepper_24')
self._WriteManifest()
output = self._Run(['info', 'pepper_23', 'pepper_24'])
self.assertTrue('pepper_23' in output)
self.assertTrue('pepper_24' in output)
self.assertFalse(re.search(r'[uU]nknown', output))
def testInfoMultipleArchives(self):
"""The info command should display multiple archives."""
bundle = self._AddDummyBundle(self.manifest, 'pepper_26')
archive2 = self._MakeDummyArchive('pepper_26', tarname='pepper_26_more',
filename='dummy2.txt')
archive2.host_os = 'all'
bundle.AddArchive(archive2)
self._WriteManifest()
output = self._Run(['info', 'pepper_26'])
self.assertTrue('pepper_26' in output)
self.assertTrue('pepper_26_more' in output)
def testListBasic(self):
"""The list command should display basic information about remote
bundles."""
self._WriteManifest()
output = self._Run(['list'])
self.assertTrue(re.search('I.*?sdk_tools.*?stable', output, re.MULTILINE))
# This line is important (it's used by the updater to determine if the
# sdk_tools bundle needs to be updated), so let's be explicit.
    self.assertTrue('All installed bundles are up-to-date.' in output)
def testListMultiple(self):
"""The list command should display multiple bundles."""
self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteManifest()
output = self._Run(['list'])
# Added pepper_23 to the remote manifest not the local manifest, so it
# shouldn't be installed.
self.assertTrue(re.search('^[^I]*pepper_23', output, re.MULTILINE))
self.assertTrue('sdk_tools' in output)
def testListWithRevision(self):
"""The list command should display the revision, if desired."""
self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteManifest()
output = self._Run(['list', '-r'])
self.assertTrue(re.search('pepper_23.*?r1337', output))
def testListWithUpdatedRevision(self):
"""The list command should display when there is an update available."""
p23bundle = self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteCacheManifest(self.manifest)
# Modify the remote manifest to have a newer revision.
p23bundle.revision += 1
self._WriteManifest()
output = self._Run(['list', '-r'])
# We should see a display like this: I* pepper_23 (r1337 -> r1338)
# The star indicates the bundle has an update.
    self.assertTrue(re.search(r'I\*\s+pepper_23.*?r1337.*?r1338', output))
def testListLocalVersionNotOnRemote(self):
"""The list command should tell the user if they have a bundle installed
that doesn't exist in the remote manifest."""
self._WriteManifest()
p23bundle = self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteCacheManifest(self.manifest)
output = self._Run(['list', '-r'])
message = 'Bundles installed locally that are not available remotely:'
message_loc = output.find(message)
self.assertNotEqual(message_loc, -1)
# Make sure pepper_23 is listed after the message above.
self.assertTrue('pepper_23' in output[message_loc:])
def testSources(self):
"""The sources command should allow adding/listing/removing of sources.
When a source is added, it will provide an additional set of bundles."""
other_manifest = manifest_util.SDKManifest()
self._AddDummyBundle(other_manifest, 'naclmono_23')
with open(os.path.join(self.basedir, 'source.json'), 'w') as stream:
stream.write(other_manifest.GetDataAsString())
source_json_url = self.server.GetURL('source.json')
self._WriteManifest()
output = self._Run(['sources', '--list'])
self.assertTrue('No external sources installed.' in output)
output = self._Run(['sources', '--add', source_json_url])
output = self._Run(['sources', '--list'])
self.assertTrue(source_json_url in output)
# Should be able to get info about that bundle.
output = self._Run(['info', 'naclmono_23'])
self.assertTrue('Unknown bundle' not in output)
self._Run(['sources', '--remove', source_json_url])
output = self._Run(['sources', '--list'])
self.assertTrue('No external sources installed.' in output)
def testUpdateBasic(self):
"""The update command should install the contents of a bundle to the SDK."""
self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteManifest()
self._Run(['update', 'pepper_23'])
self.assertTrue(os.path.exists(
os.path.join(self.basedir, 'nacl_sdk', 'pepper_23', 'dummy.txt')))
def testUpdateInCacheButDirectoryRemoved(self):
"""The update command should update if the bundle directory does not exist,
even if the bundle is already in the cache manifest."""
self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteCacheManifest(self.manifest)
self._WriteManifest()
self._Run(['update', 'pepper_23'])
self.assertTrue(os.path.exists(
os.path.join(self.basedir, 'nacl_sdk', 'pepper_23', 'dummy.txt')))
def testUpdateNoNewVersion(self):
"""The update command should do nothing if the bundle is already up-to-date.
"""
self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteManifest()
self._Run(['update', 'pepper_23'])
output = self._Run(['update', 'pepper_23'])
self.assertTrue('is already up-to-date.' in output)
def testUpdateWithNewVersion(self):
"""The update command should update to a new version if it exists."""
bundle = self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteManifest()
self._Run(['update', 'pepper_23'])
bundle.revision += 1
self._WriteManifest()
output = self._Run(['update', 'pepper_23'])
self.assertTrue('already exists, but has an update available' in output)
# Now update using --force.
output = self._Run(['update', 'pepper_23', '--force'])
self.assertTrue('Updating bundle' in output)
cache_manifest = self._ReadCacheManifest()
num_archives = len(cache_manifest.GetBundle('pepper_23').GetArchives())
self.assertEqual(num_archives, 1)
def testUpdateUnknownBundles(self):
"""The update command should ignore unknown bundles and notify the user."""
self._WriteManifest()
output = self._Run(['update', 'foobar'])
self.assertTrue('unknown bundle' in output)
def testUpdateRecommended(self):
"""The update command should update only recommended bundles when run
without args.
"""
bundle_25 = self._AddDummyBundle(self.manifest, 'pepper_25')
bundle_25.recommended = 'no'
bundle_26 = self._AddDummyBundle(self.manifest, 'pepper_26')
bundle_26.recommended = 'yes'
self._WriteManifest()
output = self._Run(['update'])
# Should not try to update sdk_tools (even though it is recommended)
self.assertTrue('Ignoring manual update request.' not in output)
self.assertFalse(os.path.exists(
os.path.join(self.basedir, 'nacl_sdk', 'pepper_25')))
self.assertTrue(os.path.exists(
os.path.join(self.basedir, 'nacl_sdk', 'pepper_26', 'dummy.txt')))
def testUpdateCanary(self):
"""The update command should create the correct directory name for repath'd
bundles.
"""
bundle = self._AddDummyBundle(self.manifest, 'pepper_26')
bundle.name = 'pepper_canary'
self._WriteManifest()
output = self._Run(['update', 'pepper_canary'])
self.assertTrue(os.path.exists(
os.path.join(self.basedir, 'nacl_sdk', 'pepper_canary', 'dummy.txt')))
def testUpdateMultiArchive(self):
"""The update command should include download/untar multiple archives
specified in the bundle.
"""
bundle = self._AddDummyBundle(self.manifest, 'pepper_26')
archive2 = self._MakeDummyArchive('pepper_26', tarname='pepper_26_more',
filename='dummy2.txt')
archive2.host_os = 'all'
bundle.AddArchive(archive2)
self._WriteManifest()
output = self._Run(['update', 'pepper_26'])
self.assertTrue(os.path.exists(
os.path.join(self.basedir, 'nacl_sdk', 'pepper_26', 'dummy.txt')))
self.assertTrue(os.path.exists(
os.path.join(self.basedir, 'nacl_sdk', 'pepper_26', 'dummy2.txt')))
def testUninstall(self):
"""The uninstall command should remove the installed bundle, if it
exists.
"""
# First install the bundle.
self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteManifest()
output = self._Run(['update', 'pepper_23'])
self.assertTrue(os.path.exists(
os.path.join(self.basedir, 'nacl_sdk', 'pepper_23', 'dummy.txt')))
# Now remove it.
self._Run(['uninstall', 'pepper_23'])
self.assertFalse(os.path.exists(
os.path.join(self.basedir, 'nacl_sdk', 'pepper_23')))
# The bundle should not be marked as installed.
output = self._Run(['list'])
self.assertTrue(re.search('^[^I]*pepper_23', output, re.MULTILINE))
def testReinstall(self):
"""The reinstall command should remove, then install, the specified
bundles.
"""
# First install the bundle.
self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteManifest()
output = self._Run(['update', 'pepper_23'])
dummy_txt = os.path.join(self.basedir, 'nacl_sdk', 'pepper_23', 'dummy.txt')
self.assertTrue(os.path.exists(dummy_txt))
with open(dummy_txt) as f:
self.assertEqual(f.read(), 'Dummy stuff for pepper_23')
# Change some files.
foo_txt = os.path.join(self.basedir, 'nacl_sdk', 'pepper_23', 'foo.txt')
with open(foo_txt, 'w') as f:
f.write('Another dummy file. This one is not part of the bundle.')
with open(dummy_txt, 'w') as f:
f.write('changed dummy.txt')
# Reinstall the bundle.
self._Run(['reinstall', 'pepper_23'])
self.assertFalse(os.path.exists(foo_txt))
self.assertTrue(os.path.exists(dummy_txt))
with open(dummy_txt) as f:
self.assertEqual(f.read(), 'Dummy stuff for pepper_23')
cache_manifest = self._ReadCacheManifest()
num_archives = len(cache_manifest.GetBundle('pepper_23').GetArchives())
self.assertEqual(num_archives, 1)
def testReinstallWithDuplicatedArchives(self):
"""The reinstall command should only use the most recent archive if there
are duplicated archives.
NOTE: There was a bug where the sdk_cache/naclsdk_manifest2.json file was
duplicating archives from different revisions. Make sure that reinstall
ignores old archives in the bundle.
"""
# First install the bundle.
self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteManifest()
self._Run(['update', 'pepper_23'])
manifest = self._ReadCacheManifest()
bundle = manifest.GetBundle('pepper_23')
self.assertEqual(len(bundle.GetArchives()), 1)
# Now add a bogus duplicate archive
archive2 = self._MakeDummyArchive('pepper_23', tarname='pepper_23',
filename='dummy2.txt')
bundle.AddArchive(archive2)
self._WriteCacheManifest(manifest)
output = self._Run(['reinstall', 'pepper_23'])
# When updating just one file, there is no (file 1/2 - "...") output.
self.assertFalse('file 1/' in output)
# Should be using the last archive.
self.assertFalse(os.path.exists(
os.path.join(self.basedir, 'nacl_sdk', 'pepper_23', 'dummy.txt')))
self.assertTrue(os.path.exists(
os.path.join(self.basedir, 'nacl_sdk', 'pepper_23', 'dummy2.txt')))
def testReinstallDoesntUpdate(self):
"""The reinstall command should not update a bundle that has an update."""
# First install the bundle.
bundle = self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteManifest()
self._Run(['update', 'pepper_23'])
dummy_txt = os.path.join(self.basedir, 'nacl_sdk', 'pepper_23', 'dummy.txt')
self.assertTrue(os.path.exists(dummy_txt))
with open(dummy_txt) as f:
self.assertEqual(f.read(), 'Dummy stuff for pepper_23')
# Update the revision.
bundle.revision += 1
self._WriteManifest()
# Change the file.
foo_txt = os.path.join(self.basedir, 'nacl_sdk', 'pepper_23', 'foo.txt')
with open(dummy_txt, 'w') as f:
f.write('changed dummy.txt')
# Reinstall.
self._Run(['reinstall', 'pepper_23'])
# The data has been reinstalled.
self.assertTrue(os.path.exists(dummy_txt))
with open(dummy_txt) as f:
self.assertEqual(f.read(), 'Dummy stuff for pepper_23')
# ... but the version hasn't been updated.
output = self._Run(['list', '-r'])
    self.assertTrue(re.search(r'I\*\s+pepper_23.*?r1337.*?r1338', output))
def testArchiveCacheBasic(self):
"""Downloaded archives should be stored in the cache by default."""
self._AddDummyBundle(self.manifest, 'pepper_23')
self._WriteManifest()
self._Run(['update', 'pepper_23'])
archive_cache = os.path.join(self.cache_dir, 'archives')
cache_contents = os.listdir(archive_cache)
self.assertEqual(cache_contents, ['pepper_23'])
cache_contents = os.listdir(os.path.join(archive_cache, 'pepper_23'))
self.assertEqual(cache_contents, ['pepper_23.tar.bz2'])
def testArchiveCacheEviction(self):
archive_cache = os.path.join(self.cache_dir, 'archives')
self._AddDummyBundle(self.manifest, 'pepper_23')
self._AddDummyBundle(self.manifest, 'pepper_22')
self._WriteManifest()
# First install pepper_23
self._Run(['update', 'pepper_23'])
archive = os.path.join(archive_cache, 'pepper_23', 'pepper_23.tar.bz2')
archive_size = os.path.getsize(archive)
# Set the mtime on the pepper_23 bundle to be a few seconds in the past.
# This is needed so that the two bundles don't end up with the same
# timestamp which can happen on systems that don't report sub-second
# timestamps.
atime = os.path.getatime(archive)
mtime = os.path.getmtime(archive)
os.utime(archive, (atime, mtime-10))
# Set cache limit to size of pepper archive * 1.5
self._WriteConfig('{ "cache_max": %d }' % int(archive_size * 1.5))
# Now install pepper_22, which should cause pepper_23 to be evicted
self._Run(['update', 'pepper_22'])
cache_contents = os.listdir(archive_cache)
self.assertEqual(cache_contents, ['pepper_22'])
def testArchiveCacheZero(self):
"""Archives should not be cached when cache_max is zero."""
    self._AddDummyBundle(self.manifest, 'pepper_23')
    self._WriteConfig('{ "cache_max": 0 }')
self._WriteManifest()
self._Run(['update', 'pepper_23'])
archive_cache = os.path.join(self.cache_dir, 'archives')
    # Archive folder should be completely removed by cache cleanup
self.assertFalse(os.path.exists(archive_cache))
if __name__ == '__main__':
unittest.main() | unknown | codeparrot/codeparrot-clean | ||
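`testArchiveCacheEviction` pins down an mtime-ordered eviction policy: when the cache exceeds `cache_max`, the oldest archives are deleted first until the cache fits. A standalone sketch of that policy (the real sdk_update implementation may differ in details such as per-bundle directories):

```python
import os
import tempfile
import time


def evict_archives(cache_dir: str, cache_max: int) -> None:
    """Delete oldest-mtime files until the cache fits under cache_max bytes."""
    entries = []
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        entries.append((os.path.getmtime(path), os.path.getsize(path), path))
    total = sum(size for _, size, _ in entries)
    for _, size, path in sorted(entries):  # oldest mtime first
        if total <= cache_max:
            break
        os.remove(path)
        total -= size


cache = tempfile.mkdtemp()
for i, name in enumerate(['pepper_23', 'pepper_22']):
    path = os.path.join(cache, name)
    with open(path, 'w') as f:
        f.write('x' * 100)
    # Give pepper_23 the older timestamp, mirroring the test's os.utime trick.
    os.utime(path, (time.time(), time.time() - 10 + i * 10))

evict_archives(cache, 150)  # cache_max = 1.5x one archive, as in the test
print(os.listdir(cache))  # ['pepper_22']
```

Backdating mtimes explicitly matters here for the same reason the test notes: two writes in quick succession can otherwise land on the same timestamp on coarse-resolution filesystems.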
# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import difflib
import os
import re
import subprocess
import textwrap
from collections.abc import Callable
from datetime import date
from pathlib import Path
from typing import Annotated, Any
import typer
from ..utils import is_libcst_available
from .add_fast_image_processor import add_fast_image_processor
# We protect this import to avoid requiring it for all `transformers` CLI commands - however it is actually
# strictly required for this one (we need it both for modular and for the following Visitor)
if is_libcst_available():
import libcst as cst
from libcst import CSTVisitor
from libcst import matchers as m
class ClassFinder(CSTVisitor):
"""
A visitor to find all classes in a python module.
"""
def __init__(self):
self.classes: list = []
self.public_classes: list = []
self.is_in_class = False
def visit_ClassDef(self, node: cst.ClassDef) -> None:
"""Record class names. We assume classes always only appear at top-level (i.e. no class definition in function or similar)"""
self.classes.append(node.name.value)
self.is_in_class = True
def leave_ClassDef(self, node: cst.ClassDef):
self.is_in_class = False
def visit_SimpleStatementLine(self, node: cst.SimpleStatementLine):
"""Record all public classes inside the `__all__` assignment."""
simple_top_level_assign_structure = m.SimpleStatementLine(
body=[m.Assign(targets=[m.AssignTarget(target=m.Name())])]
)
if not self.is_in_class and m.matches(node, simple_top_level_assign_structure):
assigned_variable = node.body[0].targets[0].target.value
if assigned_variable == "__all__":
elements = node.body[0].value.elements
self.public_classes = [element.value.value for element in elements]
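The same discovery — top-level class names plus the public names listed in `__all__` — can be sketched with the stdlib `ast` module instead of libcst (illustrative only; the tool itself uses the `ClassFinder` visitor above because the rest of the pipeline is built on libcst):

```python
import ast


def find_classes(source: str) -> tuple[list[str], list[str]]:
    """Return (top-level class names, names listed in __all__)."""
    tree = ast.parse(source)
    classes = [n.name for n in tree.body if isinstance(n, ast.ClassDef)]
    public = []
    for node in tree.body:
        # Only simple top-level assignments of the form `__all__ = [...]`.
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id == "__all__":
                    public = [elt.value for elt in node.value.elts]
    return classes, public


src = '''
class DummyConfig: ...
class DummyModel: ...
__all__ = ["DummyModel"]
'''
print(find_classes(src))  # (['DummyConfig', 'DummyModel'], ['DummyModel'])
```

Unlike libcst, `ast` discards formatting and comments, which is fine for discovery but not for the code-rewriting steps later in this command.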
CURRENT_YEAR = date.today().year
REPO_PATH = Path(__file__).parents[3]
COPYRIGHT = f"""
# coding=utf-8
# Copyright {CURRENT_YEAR} the HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""".lstrip()
### Entrypoint
def add_new_model_like(
repo_path: Annotated[
str | None, typer.Argument(help="When not using an editable install, the path to the Transformers repo.")
] = None,
):
"""
Add a new model to the library, based on an existing one.
"""
(
old_model_infos,
new_lowercase_name,
new_model_paper_name,
filenames_to_add,
create_fast_image_processor,
) = get_user_input()
_add_new_model_like_internal(
repo_path=Path(repo_path) if repo_path is not None else REPO_PATH,
old_model_infos=old_model_infos,
new_lowercase_name=new_lowercase_name,
new_model_paper_name=new_model_paper_name,
filenames_to_add=filenames_to_add,
create_fast_image_processor=create_fast_image_processor,
)
### Core logic
class ModelInfos:
"""
    Retrieve the basic information about an existing model's classes.
"""
def __init__(self, lowercase_name: str):
from ..models.auto.configuration_auto import CONFIG_MAPPING_NAMES, MODEL_NAMES_MAPPING
from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING_NAMES
from ..models.auto.image_processing_auto import IMAGE_PROCESSOR_MAPPING_NAMES
from ..models.auto.processing_auto import PROCESSOR_MAPPING_NAMES
from ..models.auto.tokenization_auto import TOKENIZER_MAPPING_NAMES
from ..models.auto.video_processing_auto import VIDEO_PROCESSOR_MAPPING_NAMES
# Just to make sure it's indeed lowercase
self.lowercase_name = lowercase_name.lower().replace(" ", "_").replace("-", "_")
if self.lowercase_name not in CONFIG_MAPPING_NAMES:
            self.lowercase_name = self.lowercase_name.replace("_", "-")
if self.lowercase_name not in CONFIG_MAPPING_NAMES:
raise ValueError(f"{lowercase_name} is not a valid model name")
self.paper_name = MODEL_NAMES_MAPPING[self.lowercase_name]
self.config_class = CONFIG_MAPPING_NAMES[self.lowercase_name]
self.camelcase_name = self.config_class.replace("Config", "")
# Get tokenizer class
if self.lowercase_name in TOKENIZER_MAPPING_NAMES:
            self.tokenizer_class, self.fast_tokenizer_class = TOKENIZER_MAPPING_NAMES[self.lowercase_name]
self.fast_tokenizer_class = (
None if self.fast_tokenizer_class == "PreTrainedTokenizerFast" else self.fast_tokenizer_class
)
else:
self.tokenizer_class, self.fast_tokenizer_class = None, None
self.image_processor_class, self.fast_image_processor_class = IMAGE_PROCESSOR_MAPPING_NAMES.get(
self.lowercase_name, (None, None)
)
self.video_processor_class = VIDEO_PROCESSOR_MAPPING_NAMES.get(self.lowercase_name, None)
self.feature_extractor_class = FEATURE_EXTRACTOR_MAPPING_NAMES.get(self.lowercase_name, None)
self.processor_class = PROCESSOR_MAPPING_NAMES.get(self.lowercase_name, None)
def add_content_to_file(file_name: str | os.PathLike, new_content: str, add_after: str):
"""
A utility to add some content inside a given file.
Args:
file_name (`str` or `os.PathLike`):
The name of the file in which we want to insert some content.
new_content (`str`):
The content to add.
add_after (`str`):
The new content is added just after the first instance matching it.
"""
with open(file_name, "r", encoding="utf-8") as f:
old_content = f.read()
before, after = old_content.split(add_after, 1)
new_content = before + add_after + new_content + after
with open(file_name, "w", encoding="utf-8") as f:
f.write(new_content)
def add_model_to_auto_mappings(
repo_path: Path,
old_model_infos: ModelInfos,
new_lowercase_name: str,
new_model_paper_name: str,
filenames_to_add: list[tuple[str, bool]],
):
"""
Add a model to all the relevant mappings in the auto module.
Args:
old_model_infos (`ModelInfos`):
The structure containing the class information of the old model.
new_lowercase_name (`str`):
The new lowercase model name.
new_model_paper_name (`str`):
The fully cased name (as in the official paper name) of the new model.
filenames_to_add (`list[tuple[str, bool]]`):
            A list of tuples of all potential filenames to add for a new model, along with a boolean flag describing
            whether we should add this file or not. For example, [(`modeling_xxx.py`, True), (`configuration_xxx.py`, True), (`tokenization_xxx.py`, False),...]
"""
new_cased_name = "".join(x.title() for x in new_lowercase_name.replace("-", "_").split("_"))
old_lowercase_name = old_model_infos.lowercase_name
old_cased_name = old_model_infos.camelcase_name
filenames_to_add = [
(filename.replace(old_lowercase_name, "auto"), to_add) for filename, to_add in filenames_to_add[1:]
]
# fast tokenizer/image processor have the same auto mappings as normal ones
corrected_filenames_to_add = []
for file, to_add in filenames_to_add:
        if re.search(r"(?:tokenization|image_processing)_auto_fast\.py", file):
previous_file, previous_to_add = corrected_filenames_to_add[-1]
corrected_filenames_to_add[-1] = (previous_file, previous_to_add or to_add)
else:
corrected_filenames_to_add.append((file, to_add))
# Add the config mappings directly as the handling for config is a bit different
add_content_to_file(
repo_path / "src" / "transformers" / "models" / "auto" / "configuration_auto.py",
new_content=f' ("{new_lowercase_name}", "{new_cased_name}Config"),\n',
add_after="CONFIG_MAPPING_NAMES = OrderedDict[str, str](\n [\n # Add configs here\n",
)
add_content_to_file(
repo_path / "src" / "transformers" / "models" / "auto" / "configuration_auto.py",
new_content=f' ("{new_lowercase_name}", "{new_model_paper_name}"),\n',
add_after="MODEL_NAMES_MAPPING = OrderedDict[str, str](\n [\n # Add full (and cased) model names here\n",
)
for filename, to_add in corrected_filenames_to_add:
if to_add:
# The auto mapping
filename = filename.replace("_fast.py", ".py")
file = (repo_path / "src" / "transformers" / "models" / "auto" / filename).read_text()
# The regex has to be a bit complex like this as the tokenizer mapping has new lines everywhere
matching_lines = re.findall(
rf'( {{8,12}}\(\s*"{old_lowercase_name}",.*?\),\n)(?: {{4,12}}\(|\])', file, re.DOTALL
)
for match in matching_lines:
add_content_to_file(
repo_path / "src" / "transformers" / "models" / "auto" / filename,
new_content=match.replace(old_lowercase_name, new_lowercase_name).replace(
old_cased_name, new_cased_name
),
add_after=match,
)
def create_doc_file(new_paper_name: str, public_classes: list[str]):
"""
Create a new doc file to fill for the new model.
Args:
new_paper_name (`str`):
The fully cased name (as in the official paper name) of the new model.
public_classes (`list[str]`):
A list of all the public classes that the model will have in the library.
"""
added_note = (
"\n\n⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that "
"may not be rendered properly in your Markdown viewer.\n\n-->\n\n"
)
copyright_for_markdown = re.sub(r"# ?", "", COPYRIGHT).replace("coding=utf-8\n", "<!--") + added_note
doc_template = textwrap.dedent(
f"""
# {new_paper_name}
## Overview
The {new_paper_name} model was proposed in [<INSERT PAPER NAME HERE>](<INSERT PAPER LINK HERE>) by <INSERT AUTHORS HERE>.
<INSERT SHORT SUMMARY HERE>
The abstract from the paper is the following:
<INSERT PAPER ABSTRACT HERE>
Tips:
<INSERT TIPS ABOUT MODEL HERE>
This model was contributed by [INSERT YOUR HF USERNAME HERE](https://huggingface.co/<INSERT YOUR HF USERNAME HERE>).
The original code can be found [here](<INSERT LINK TO GITHUB REPO HERE>).
## Usage examples
<INSERT SOME NICE EXAMPLES HERE>
"""
)
# Add public classes doc
doc_for_classes = []
for class_ in public_classes:
doc = f"## {class_}\n\n[[autodoc]] {class_}"
if "Model" in class_:
doc += "\n - forward"
doc_for_classes.append(doc)
class_doc = "\n\n".join(doc_for_classes)
return copyright_for_markdown + doc_template + class_doc
def insert_model_in_doc_toc(
repo_path: Path, old_lowercase_name: str, new_lowercase_name: str, new_model_paper_name: str
):
"""
Insert the new model in the doc `_toctree.yaml`, in the same section as the old model.
Args:
old_lowercase_name (`str`):
The old lowercase model name.
new_lowercase_name (`str`):
            The new lowercase model name.
new_model_paper_name (`str`):
The fully cased name (as in the official paper name) of the new model.
"""
toc_file = repo_path / "docs" / "source" / "en" / "_toctree.yml"
with open(toc_file, "r") as f:
content = f.read()
old_model_toc = re.search(rf"- local: model_doc/{old_lowercase_name}\n {{8}}title: .*?\n", content).group(0)
new_toc = f" - local: model_doc/{new_lowercase_name}\n title: {new_model_paper_name}\n"
add_content_to_file(
repo_path / "docs" / "source" / "en" / "_toctree.yml", new_content=new_toc, add_after=old_model_toc
)
def create_init_file(old_lowercase_name: str, new_lowercase_name: str, filenames_to_add: list[tuple[str, bool]]):
"""
Create the `__init__.py` file to add in the new model folder.
Args:
old_lowercase_name (`str`):
The old lowercase model name.
new_lowercase_name (`str`):
The new lowercase model name.
filenames_to_add (`list[tuple[str, bool]]`):
            A list of tuples of all potential filenames to add for a new model, along with a boolean flag describing
            whether we should add this file or not. For example, [(`modeling_xxx.py`, True), (`configuration_xxx.py`, True), (`tokenization_xxx.py`, False),...]
"""
filenames_to_add = [
(filename.replace(old_lowercase_name, new_lowercase_name).replace(".py", ""), to_add)
for filename, to_add in filenames_to_add
]
imports = "\n ".join(f"from .{file} import *" for file, to_add in filenames_to_add if to_add)
init_file = COPYRIGHT + textwrap.dedent(
f"""
from typing import TYPE_CHECKING
from ...utils import _LazyModule
from ...utils.import_utils import define_import_structure
if TYPE_CHECKING:
{imports}
else:
import sys
_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
"""
)
return init_file
def find_all_classes_from_file(module_name: str) -> tuple[list, list]:
"""
Find the name of all classes defined in `module_name`, including public ones (defined in `__all__`).
Args:
module_name (`str`):
The full path to the python module from which to extract classes.
"""
with open(module_name, "r", encoding="utf-8") as file:
source_code = file.read()
module = cst.parse_module(source_code)
visitor = ClassFinder()
module.visit(visitor)
return visitor.classes, visitor.public_classes
def find_modular_structure(
    module_name: Path, old_model_infos: ModelInfos, new_cased_name: str
) -> tuple[str, str, list]:
"""
Extract the modular structure that will be needed to copy a file `module_name` using modular.
Args:
module_name (`str`):
The full path to the python module to copy with modular.
old_model_infos (`ModelInfos`):
The structure containing the class information of the old model.
new_cased_name (`str`):
The new cased model name.
"""
all_classes, public_classes = find_all_classes_from_file(module_name)
import_location = ".".join(module_name.parts[-2:]).replace(".py", "")
old_cased_name = old_model_infos.camelcase_name
imports = f"from ..{import_location} import {', '.join(class_ for class_ in all_classes)}"
modular_classes = "\n\n".join(
f"class {class_.replace(old_cased_name, new_cased_name)}({class_}):\n pass" for class_ in all_classes
)
public_classes = [class_.replace(old_cased_name, new_cased_name) for class_ in public_classes]
return imports, modular_classes, public_classes
def create_modular_file(
repo_path: Path,
old_model_infos: ModelInfos,
new_lowercase_name: str,
filenames_to_add: list[tuple[str, bool]],
) -> tuple[str, list]:
"""
Create a new modular file which will copy the old model, based on the new name and the different filenames
(modules) to add.
Args:
old_model_infos (`ModelInfos`):
The structure containing the class information of the old model.
new_lowercase_name (`str`):
The new lowercase model name.
filenames_to_add (`list[tuple[str, bool]]`):
            A list of tuples of all potential filenames to add for a new model, along with a boolean flag describing
            whether we should add this file or not. For example, [(`modeling_xxx.py`, True), (`configuration_xxx.py`, True), (`tokenization_xxx.py`, False),...]
"""
new_cased_name = "".join(x.title() for x in new_lowercase_name.replace("-", "_").split("_"))
old_lowercase_name = old_model_infos.lowercase_name
old_folder_root = repo_path / "src" / "transformers" / "models" / old_lowercase_name
# Construct the modular file from the original (old) model, by subclassing each class
all_imports = ""
all_bodies = ""
all_public_classes = []
for filename, to_add in filenames_to_add:
if to_add:
imports, body, public_classes = find_modular_structure(
old_folder_root / filename, old_model_infos, new_cased_name
)
all_imports += f"\n{imports}"
all_bodies += f"\n\n{body}"
all_public_classes.extend(public_classes)
# Create the __all__ assignment
public_classes_formatted = "\n ".join(f"{public_class}," for public_class in all_public_classes)
all_statement = textwrap.dedent(
f"""
__all__ = [
{public_classes_formatted}
]
"""
)
# Create the whole modular file
modular_file = COPYRIGHT + all_imports + all_bodies + all_statement
# Remove outer explicit quotes "" around the public class names before returning them
all_public_classes = [public_class.replace('"', "") for public_class in all_public_classes]
return modular_file, all_public_classes
def create_test_files(
repo_path: Path, old_model_infos: ModelInfos, new_lowercase_name, filenames_to_add: list[tuple[str, bool]]
):
"""
Create the test files for the new model. It basically copies over the old test files and adjust the class names.
Args:
old_model_infos (`ModelInfos`):
The structure containing the class information of the old model.
new_lowercase_name (`str`):
The new lowercase model name.
filenames_to_add (`list[tuple[str, bool]]`):
            A list of tuples of all potential filenames to add for a new model, along with a boolean flag describing
            whether we should add this file or not. For example, [(`modeling_xxx.py`, True), (`configuration_xxx.py`, True), (`tokenization_xxx.py`, False),...]
"""
new_cased_name = "".join(x.title() for x in new_lowercase_name.replace("-", "_").split("_"))
old_lowercase_name = old_model_infos.lowercase_name
old_cased_name = old_model_infos.camelcase_name
filenames_to_add = [
("test_" + filename.replace(old_lowercase_name, new_lowercase_name), to_add)
for filename, to_add in filenames_to_add[1:]
]
# fast tokenizer/image processor have the same test files as normal ones
corrected_filenames_to_add = []
for file, to_add in filenames_to_add:
        if re.search(rf"test_(?:tokenization|image_processing)_{new_lowercase_name}_fast\.py", file):
previous_file, previous_to_add = corrected_filenames_to_add[-1]
corrected_filenames_to_add[-1] = (previous_file, previous_to_add or to_add)
else:
corrected_filenames_to_add.append((file, to_add))
test_files = {}
for new_file, to_add in corrected_filenames_to_add:
if to_add:
original_test_file = new_file.replace(new_lowercase_name, old_lowercase_name)
original_test_path = repo_path / "tests" / "models" / old_lowercase_name / original_test_file
# Sometimes, tests may not exist
if not original_test_path.is_file():
continue
with open(original_test_path, "r") as f:
test_code = f.read()
# Remove old copyright and add new one
test_lines = test_code.split("\n")
idx = 0
while test_lines[idx].startswith("#"):
idx += 1
test_code = COPYRIGHT + "\n".join(test_lines[idx:])
test_files[new_file] = test_code.replace(old_cased_name, new_cased_name)
return test_files
def _add_new_model_like_internal(
repo_path: Path,
old_model_infos: ModelInfos,
new_lowercase_name: str,
new_model_paper_name: str,
filenames_to_add: list[tuple[str, bool]],
create_fast_image_processor: bool,
):
"""
Creates a new model module like a given model of the Transformers library.
Args:
repo_path (`Path`):
The path to the root of the Transformers repository.
old_model_infos (`ModelInfos`):
The structure containing the class information of the old model.
new_lowercase_name (`str`):
The new lowercase model name.
new_model_paper_name (`str`):
The fully cased name (as in the official paper name) of the new model.
filenames_to_add (`list[tuple[str, bool]]`):
            A list of tuples of all potential filenames to add for a new model, along with a boolean flag describing
            whether we should add this file or not. For example, [(`modeling_xxx.py`, True), (`configuration_xxx.py`, True), (`tokenization_xxx.py`, False),...]
create_fast_image_processor (`bool`):
            Whether to add a fast image processor as well, even if the old model does not have one (only relevant
            when it makes sense for the model).
"""
# As the import was protected, raise if not present (as it's actually a hard dependency for this command)
if not is_libcst_available():
raise ValueError("You need to install `libcst` to run this command -> `pip install libcst`")
old_lowercase_name = old_model_infos.lowercase_name
# 1. We create the folder for our new model
new_module_folder = repo_path / "src" / "transformers" / "models" / new_lowercase_name
os.makedirs(new_module_folder, exist_ok=True)
# 2. Create and add the modular file
modular_file, public_classes = create_modular_file(
repo_path, old_model_infos, new_lowercase_name, filenames_to_add
)
with open(new_module_folder / f"modular_{new_lowercase_name}.py", "w") as f:
f.write(modular_file)
# 3. Create and add the __init__.py
init_file = create_init_file(old_lowercase_name, new_lowercase_name, filenames_to_add)
with open(new_module_folder / "__init__.py", "w") as f:
f.write(init_file)
# 4. Add new model to the models init
add_content_to_file(
repo_path / "src" / "transformers" / "models" / "__init__.py",
new_content=f" from .{new_lowercase_name} import *\n",
add_after="if TYPE_CHECKING:\n",
)
# 5. Add model to auto mappings
add_model_to_auto_mappings(repo_path, old_model_infos, new_lowercase_name, new_model_paper_name, filenames_to_add)
# 6. Add test files
tests_folder = repo_path / "tests" / "models" / new_lowercase_name
os.makedirs(tests_folder, exist_ok=True)
# Add empty __init__.py
with open(tests_folder / "__init__.py", "w"):
pass
test_files = create_test_files(repo_path, old_model_infos, new_lowercase_name, filenames_to_add)
for filename, content in test_files.items():
with open(tests_folder / filename, "w") as f:
f.write(content)
# 7. Add doc file
doc_file = create_doc_file(new_model_paper_name, public_classes)
with open(repo_path / "docs" / "source" / "en" / "model_doc" / f"{new_lowercase_name}.md", "w") as f:
f.write(doc_file)
insert_model_in_doc_toc(repo_path, old_lowercase_name, new_lowercase_name, new_model_paper_name)
# 8. Add additional fast image processor if necessary
if create_fast_image_processor:
add_fast_image_processor(model_name=new_lowercase_name)
# 9. Run linters
model_init_file = repo_path / "src" / "transformers" / "models" / "__init__.py"
subprocess.run(
["ruff", "check", new_module_folder, tests_folder, model_init_file, "--fix"],
cwd=repo_path,
stdout=subprocess.DEVNULL,
)
subprocess.run(
["ruff", "format", new_module_folder, tests_folder, model_init_file],
cwd=repo_path,
stdout=subprocess.DEVNULL,
)
subprocess.run(
["python", "utils/check_doc_toc.py", "--fix_and_overwrite"], cwd=repo_path, stdout=subprocess.DEVNULL
)
subprocess.run(["python", "utils/sort_auto_mappings.py"], cwd=repo_path, stdout=subprocess.DEVNULL)
# 10. Run the modular conversion
subprocess.run(
["python", "utils/modular_model_converter.py", new_lowercase_name], cwd=repo_path, stdout=subprocess.DEVNULL
)
def get_user_field(
question: str,
default_value: str | None = None,
convert_to: Callable | None = None,
fallback_message: str | None = None,
) -> Any:
"""
A utility function that asks a question to the user to get an answer, potentially looping until it gets a valid
answer.
Args:
question (`str`):
The question to ask the user.
default_value (`str`, *optional*):
A potential default value that will be used when the answer is empty.
convert_to (`Callable`, *optional*):
If set, the answer will be passed to this function. If this function raises an error on the provided
answer, the question will be asked again.
fallback_message (`str`, *optional*):
A message that will be displayed each time the question is asked again to the user.
Returns:
`Any`: The answer provided by the user (or the default), passed through the potential conversion function.
"""
if not question.endswith(" "):
question = question + " "
if default_value is not None:
question = f"{question} [{default_value}] "
valid_answer = False
while not valid_answer:
answer = input(question)
if default_value is not None and len(answer) == 0:
answer = default_value
if convert_to is not None:
try:
answer = convert_to(answer)
valid_answer = True
except Exception:
valid_answer = False
else:
valid_answer = True
if not valid_answer:
print(fallback_message)
return answer
def convert_to_bool(x: str) -> bool:
"""
Converts a string to a bool.
"""
if x.lower() in ["1", "y", "yes", "true"]:
return True
if x.lower() in ["0", "n", "no", "false"]:
return False
raise ValueError(f"{x} is not a value that can be converted to a bool.")
def get_user_input():
"""
Ask the user for the necessary inputs to add the new model.
"""
from transformers.models.auto.configuration_auto import MODEL_NAMES_MAPPING
model_types = list(MODEL_NAMES_MAPPING.keys())
# Get old model type
valid_model_type = False
while not valid_model_type:
old_model_type = input(
"What model would you like to duplicate? Please provide it as lowercase, e.g. `llama`): "
)
if old_model_type in model_types:
valid_model_type = True
else:
print(f"{old_model_type} is not a valid model type.")
near_choices = difflib.get_close_matches(old_model_type, model_types)
            if len(near_choices) >= 1:
                print(f"Did you mean {' or '.join(near_choices)}?")
old_model_infos = ModelInfos(old_model_type)
# Ask for the new model name
new_lowercase_name = get_user_field(
"What is the new model name? Please provide it as snake lowercase, e.g. `new_model`?"
)
new_model_paper_name = get_user_field(
"What is the fully cased name you would like to appear in the doc (e.g. `NeW ModEl`)? ",
default_value="".join(x.title() for x in new_lowercase_name.split("_")),
)
# Ask if we want to add individual processor classes as well
add_tokenizer = False
add_fast_tokenizer = False
add_image_processor = False
add_fast_image_processor = False
add_video_processor = False
add_feature_extractor = False
add_processor = False
if old_model_infos.tokenizer_class is not None:
add_tokenizer = get_user_field(
f"Do you want to create a new tokenizer? If `no`, it will use the same as {old_model_type} (y/n)?",
convert_to=convert_to_bool,
fallback_message="Please answer yes/no, y/n, true/false or 1/0. ",
)
if old_model_infos.fast_tokenizer_class is not None:
add_fast_tokenizer = get_user_field(
f"Do you want to create a new fast tokenizer? If `no`, it will use the same as {old_model_type} (y/n)?",
convert_to=convert_to_bool,
fallback_message="Please answer yes/no, y/n, true/false or 1/0. ",
)
if old_model_infos.image_processor_class is not None:
add_image_processor = get_user_field(
f"Do you want to create a new image processor? If `no`, it will use the same as {old_model_type} (y/n)?",
convert_to=convert_to_bool,
fallback_message="Please answer yes/no, y/n, true/false or 1/0. ",
)
if old_model_infos.fast_image_processor_class is not None:
add_fast_image_processor = get_user_field(
f"Do you want to create a new fast image processor? If `no`, it will use the same as {old_model_type} (y/n)?",
convert_to=convert_to_bool,
fallback_message="Please answer yes/no, y/n, true/false or 1/0. ",
)
if old_model_infos.video_processor_class is not None:
add_video_processor = get_user_field(
f"Do you want to create a new video processor? If `no`, it will use the same as {old_model_type} (y/n)?",
convert_to=convert_to_bool,
fallback_message="Please answer yes/no, y/n, true/false or 1/0. ",
)
if old_model_infos.feature_extractor_class is not None:
add_feature_extractor = get_user_field(
f"Do you want to create a new feature extractor? If `no`, it will use the same as {old_model_type} (y/n)?",
convert_to=convert_to_bool,
fallback_message="Please answer yes/no, y/n, true/false or 1/0. ",
)
if old_model_infos.processor_class is not None:
add_processor = get_user_field(
f"Do you want to create a new processor? If `no`, it will use the same as {old_model_type} (y/n)?",
convert_to=convert_to_bool,
fallback_message="Please answer yes/no, y/n, true/false or 1/0. ",
)
old_lowercase_name = old_model_infos.lowercase_name
# A list of the old filenames, along whether we should copy them or not
filenames_to_add = (
(f"configuration_{old_lowercase_name}.py", True),
(f"modeling_{old_lowercase_name}.py", True),
(f"tokenization_{old_lowercase_name}.py", add_tokenizer),
(f"tokenization_{old_lowercase_name}_fast.py", add_fast_tokenizer),
(f"image_processing_{old_lowercase_name}.py", add_image_processor),
(f"image_processing_{old_lowercase_name}_fast.py", add_fast_image_processor),
(f"video_processing_{old_lowercase_name}.py", add_video_processor),
(f"feature_extraction_{old_lowercase_name}.py", add_feature_extractor),
(f"processing_{old_lowercase_name}.py", add_processor),
)
create_fast_image_processor = False
if add_image_processor and not add_fast_image_processor:
create_fast_image_processor = get_user_field(
"A fast image processor can be created from the slow one, but modifications might be needed. "
"Should we add a fast image processor class for this model (recommended) (y/n)? ",
convert_to=convert_to_bool,
default_value="y",
fallback_message="Please answer yes/no, y/n, true/false or 1/0.",
)
return old_model_infos, new_lowercase_name, new_model_paper_name, filenames_to_add, create_fast_image_processor | python | github | https://github.com/huggingface/transformers | src/transformers/cli/add_new_model_like.py |
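The `add_content_to_file` helper above works by splitting the file on the first occurrence of an anchor string and re-joining with the new content spliced in right after it. A minimal standalone sketch of that idea, operating on strings instead of files (the `insert_after` name and the sample mapping text are illustrative, not part of the module):

```python
# Sketch of the anchor-based insertion used by add_content_to_file:
# split on the FIRST occurrence of the anchor, then re-join with the new
# content placed immediately after it.
def insert_after(text: str, anchor: str, new_content: str) -> str:
    before, after = text.split(anchor, 1)  # raises ValueError if anchor is absent
    return before + anchor + new_content + after

# Invented sample resembling an auto-mapping with an "Add configs here" anchor.
mapping = "NAMES = [\n    # Add configs here\n    ('llama', 'LlamaConfig'),\n]\n"
updated = insert_after(mapping, "# Add configs here\n", "    ('new_model', 'NewModelConfig'),\n")
```

Because only the first match is used, a comment anchor such as `# Add configs here` keeps new entries grouped at the top of the mapping.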
/*
* Copyright (c) 2007 Mockito contributors
* This program is made available under the terms of the MIT License.
*/
package org.concurrentmockito;
import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;
import org.junit.Test;
import org.mockito.InOrder;
import org.mockitoutil.TestBase;
public class VerificationInOrderFromMultipleThreadsTest extends TestBase {
@Test
public void shouldVerifyInOrderWhenMultipleThreadsInteractWithMock() throws Exception {
final Foo testInf = mock(Foo.class);
Thread threadOne =
new Thread(
new Runnable() {
public void run() {
testInf.methodOne();
}
});
threadOne.start();
threadOne.join();
Thread threadTwo =
new Thread(
new Runnable() {
public void run() {
testInf.methodTwo();
}
});
threadTwo.start();
threadTwo.join();
InOrder inOrder = inOrder(testInf);
inOrder.verify(testInf).methodOne();
inOrder.verify(testInf).methodTwo();
}
public interface Foo {
void methodOne();
void methodTwo();
}
} | java | github | https://github.com/mockito/mockito | mockito-core/src/test/java/org/concurrentmockito/VerificationInOrderFromMultipleThreadsTest.java |
from __future__ import unicode_literals
import re
from .common import InfoExtractor
class SexuIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?sexu\.com/(?P<id>\d+)'
_TEST = {
'url': 'http://sexu.com/961791/',
'md5': 'ff615aca9691053c94f8f10d96cd7884',
'info_dict': {
'id': '961791',
'ext': 'mp4',
'title': 'md5:4d05a19a5fc049a63dbbaf05fb71d91b',
'description': 'md5:c5ed8625eb386855d5a7967bd7b77a54',
'categories': list, # NSFW
            'thumbnail': r're:https?://.*\.jpg$',
'age_limit': 18,
}
}
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
quality_arr = self._search_regex(
            r'sources:\s*\[([^\]]+)\]', webpage, 'format string')
formats = [{
'url': fmt[0].replace('\\', ''),
'format_id': fmt[1],
'height': int(fmt[1][:3]),
} for fmt in re.findall(r'"file":"([^"]+)","label":"([^"]+)"', quality_arr)]
self._sort_formats(formats)
title = self._html_search_regex(
r'<title>([^<]+)\s*-\s*Sexu\.Com</title>', webpage, 'title')
description = self._html_search_meta(
'description', webpage, 'description')
thumbnail = self._html_search_regex(
r'image:\s*"([^"]+)"',
webpage, 'thumbnail', fatal=False)
categories_str = self._html_search_meta(
'keywords', webpage, 'categories')
categories = (
None if categories_str is None
else categories_str.split(','))
return {
'id': video_id,
'title': title,
'description': description,
'thumbnail': thumbnail,
'categories': categories,
'formats': formats,
'age_limit': 18,
} | unknown | codeparrot/codeparrot-clean | ||
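The extractor above pulls `{file, label}` pairs out of a JavaScript `sources: [...]` array with two regexes and derives the height from the label. A self-contained sketch of that parsing step, using an invented sample of the player markup (the `webpage` string is illustrative only):

```python
import re

# Invented sample of the kind of player markup the extractor scrapes.
webpage = 'jwplayer({sources: [{"file":"http:\\/\\/cdn\\/v_720.mp4","label":"720p"},{"file":"http:\\/\\/cdn\\/v_480.mp4","label":"480p"}]});'

# First regex grabs everything inside the sources array.
quality_arr = re.search(r'sources:\s*\[([^\]]+)\]', webpage).group(1)
# Second regex pairs each file URL with its quality label.
formats = [{
    'url': fmt[0].replace('\\', ''),  # un-escape the JS-style \/ slashes
    'format_id': fmt[1],
    'height': int(fmt[1][:3]),        # "720p" -> 720
} for fmt in re.findall(r'"file":"([^"]+)","label":"([^"]+)"', quality_arr)]
```

This only works while labels are exactly three digits plus a suffix; a sturdier version would use `int(re.match(r'\d+', fmt[1]).group(0))`.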
###############################################################################
##
## Copyright 2011-2013 Tavendo GmbH
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
##
###############################################################################
from _version import __version__
version = __version__ # backward compat.
import util
import useragent
import flashpolicy
import httpstatus
import utf8validator
import xormasker
import websocket
import resource
import prefixmap
import wamp | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2014, Max Riveiro, <kavu13@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: rollbar_deployment
version_added: 1.6
author: "Max Riveiro (@kavu)"
short_description: Notify Rollbar about app deployments
description:
- Notify Rollbar about app deployments
(see https://rollbar.com/docs/deploys_other/)
options:
token:
description:
- Your project access token.
required: true
environment:
description:
- Name of the environment being deployed, e.g. 'production'.
required: true
revision:
description:
- Revision number/sha being deployed.
required: true
user:
description:
- User who deployed.
required: false
rollbar_user:
description:
- Rollbar username of the user who deployed.
required: false
comment:
description:
- Deploy comment (e.g. what is being deployed).
required: false
url:
description:
- Optional URL to submit the notification to.
required: false
default: 'https://api.rollbar.com/api/1/deploy/'
validate_certs:
description:
- If C(no), SSL certificates for the target url will not be validated.
This should only be used on personally controlled sites using
self-signed certificates.
required: false
default: 'yes'
choices: ['yes', 'no']
'''
EXAMPLES = '''
- rollbar_deployment:
token: AAAAAA
environment: staging
user: ansible
revision: '4.2'
rollbar_user: admin
comment: Test Deploy
'''
import urllib
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.pycompat24 import get_exception
from ansible.module_utils.urls import fetch_url
def main():
module = AnsibleModule(
argument_spec=dict(
token=dict(required=True),
environment=dict(required=True),
revision=dict(required=True),
user=dict(required=False),
rollbar_user=dict(required=False),
comment=dict(required=False),
url=dict(
required=False,
default='https://api.rollbar.com/api/1/deploy/'
),
validate_certs=dict(default='yes', type='bool'),
),
supports_check_mode=True
)
if module.check_mode:
module.exit_json(changed=True)
params = dict(
access_token=module.params['token'],
environment=module.params['environment'],
revision=module.params['revision']
)
if module.params['user']:
params['local_username'] = module.params['user']
if module.params['rollbar_user']:
params['rollbar_username'] = module.params['rollbar_user']
if module.params['comment']:
params['comment'] = module.params['comment']
url = module.params.get('url')
try:
data = urllib.urlencode(params)
response, info = fetch_url(module, url, data=data)
except Exception:
e = get_exception()
module.fail_json(msg='Unable to notify Rollbar: %s' % e)
else:
if info['status'] == 200:
module.exit_json(changed=True)
else:
module.fail_json(msg='HTTP result code: %d connecting to %s' % (info['status'], url))
if __name__ == '__main__':
main() | unknown | codeparrot/codeparrot-clean | ||
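The module's `main()` above assembles its POST payload from required and optional parameters before handing it to `fetch_url`. A standalone sketch of that assembly, using Python 3's `urllib.parse.urlencode` instead of the Python 2 `urllib.urlencode` the module imports (function name and sample values are illustrative only):

```python
from urllib.parse import urlencode

def build_rollbar_params(token, environment, revision,
                         user=None, rollbar_user=None, comment=None):
    # Required fields are always present; optional fields are added
    # only when supplied, mirroring the checks in main() above.
    params = {
        'access_token': token,
        'environment': environment,
        'revision': revision,
    }
    if user:
        params['local_username'] = user
    if rollbar_user:
        params['rollbar_username'] = rollbar_user
    if comment:
        params['comment'] = comment
    return urlencode(params)

data = build_rollbar_params('AAAAAA', 'staging', '4.2', user='ansible')
```

The resulting string is exactly what the module posts to `https://api.rollbar.com/api/1/deploy/`.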
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2013 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystone import config
from keystone.tests import test_sql_upgrade
CONF = config.CONF
class PostgresqlMigrateTests(test_sql_upgrade.SqlUpgradeTests):
def config_files(self):
files = (test_sql_upgrade.SqlUpgradeTests.
_config_file_list[:])
files.append("backend_postgresql.conf")
return files
class MysqlMigrateTests(test_sql_upgrade.SqlUpgradeTests):
def config_files(self):
files = (test_sql_upgrade.SqlUpgradeTests.
_config_file_list[:])
files.append("backend_mysql.conf")
return files
class Db2MigrateTests(test_sql_upgrade.SqlUpgradeTests):
def config_files(self):
files = (test_sql_upgrade.SqlUpgradeTests.
_config_file_list[:])
files.append("backend_db2.conf")
return files | unknown | codeparrot/codeparrot-clean | ||
"""
Skyrock OAuth1 backend, docs at:
http://psa.matiasaguirre.net/docs/backends/skyrock.html
"""
from social.backends.oauth import BaseOAuth1
class SkyrockOAuth(BaseOAuth1):
"""Skyrock OAuth authentication backend"""
name = 'skyrock'
ID_KEY = 'id_user'
AUTHORIZATION_URL = 'https://api.skyrock.com/v2/oauth/authenticate'
REQUEST_TOKEN_URL = 'https://api.skyrock.com/v2/oauth/initiate'
ACCESS_TOKEN_URL = 'https://api.skyrock.com/v2/oauth/token'
EXTRA_DATA = [('id', 'id')]
def get_user_details(self, response):
"""Return user details from Skyrock account"""
fullname, first_name, last_name = self.get_user_names(
first_name=response['firstname'],
last_name=response['name']
)
return {'username': response['username'],
'email': response['email'],
'fullname': fullname,
'first_name': first_name,
'last_name': last_name}
def user_data(self, access_token):
"""Return user data provided"""
return self.get_json('https://api.skyrock.com/v2/user/get.json',
auth=self.oauth_auth(access_token)) | unknown | codeparrot/codeparrot-clean | ||
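`get_user_details` above maps Skyrock's API field names (`firstname`, `name`) onto the standard user-detail dict that social-auth pipelines expect. A minimal sketch of that mapping, with a simplified stand-in for `get_user_names` (the real method lives on the `BaseAuth` parent class; this version only joins the two parts):

```python
def get_user_names(first_name='', last_name=''):
    # Simplified stand-in for social.backends.base.BaseAuth.get_user_names
    fullname = ' '.join(n for n in (first_name, last_name) if n).strip()
    return fullname, first_name, last_name

def skyrock_user_details(response):
    # Skyrock's API calls the last name "name" and the first name
    # "firstname"; everything else maps through directly.
    fullname, first_name, last_name = get_user_names(
        first_name=response['firstname'],
        last_name=response['name'],
    )
    return {'username': response['username'],
            'email': response['email'],
            'fullname': fullname,
            'first_name': first_name,
            'last_name': last_name}

details = skyrock_user_details({
    'firstname': 'Ada', 'name': 'Lovelace',
    'username': 'ada', 'email': 'ada@example.com',
})
```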
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.connect.runtime.distributed;
import org.apache.kafka.clients.ApiVersions;
import org.apache.kafka.clients.ClientUtils;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.GroupRebalanceConfig;
import org.apache.kafka.clients.Metadata;
import org.apache.kafka.clients.MetadataRecoveryStrategy;
import org.apache.kafka.clients.NetworkClient;
import org.apache.kafka.clients.consumer.CloseOptions;
import org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.internals.ClusterResourceListeners;
import org.apache.kafka.common.metrics.KafkaMetricsContext;
import org.apache.kafka.common.metrics.MetricConfig;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.MetricsContext;
import org.apache.kafka.common.metrics.MetricsReporter;
import org.apache.kafka.common.network.ChannelBuilder;
import org.apache.kafka.common.network.Selector;
import org.apache.kafka.common.utils.AppInfoParser;
import org.apache.kafka.common.utils.LogContext;
import org.apache.kafka.common.utils.Time;
import org.apache.kafka.common.utils.Utils;
import org.apache.kafka.connect.runtime.WorkerConfig;
import org.apache.kafka.connect.storage.ConfigBackingStore;
import org.apache.kafka.connect.util.ConnectorTaskId;
import org.slf4j.Logger;
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;
import static org.apache.kafka.common.utils.Utils.UncheckedCloseable;
/**
* This class manages the coordination process with brokers for the Connect cluster group membership. It ties together
* the Coordinator, which implements the group member protocol, with all the other pieces needed to drive the connection
* to the group coordinator broker. This isolates all the networking to a single thread managed by this class, with
* higher level operations in response to group membership events being handled by the herder.
*/
public class WorkerGroupMember {
private static final String JMX_PREFIX = "kafka.connect";
private final Logger log;
private final String clientId;
private final ConsumerNetworkClient client;
private final Metrics metrics;
private final WorkerCoordinator coordinator;
private boolean stopped = false;
public WorkerGroupMember(DistributedConfig config,
String restUrl,
ConfigBackingStore configStorage,
WorkerRebalanceListener listener,
Time time,
String clientId,
LogContext logContext) {
try {
this.clientId = clientId;
this.log = logContext.logger(WorkerGroupMember.class);
Map<String, String> metricsTags = new LinkedHashMap<>();
metricsTags.put("client-id", clientId);
MetricConfig metricConfig = new MetricConfig().samples(config.getInt(CommonClientConfigs.METRICS_NUM_SAMPLES_CONFIG))
.timeWindow(config.getLong(CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_CONFIG), TimeUnit.MILLISECONDS)
.tags(metricsTags);
List<MetricsReporter> reporters = CommonClientConfigs.metricsReporters(clientId, config);
Map<String, Object> contextLabels = new HashMap<>(config.originalsWithPrefix(CommonClientConfigs.METRICS_CONTEXT_PREFIX));
contextLabels.put(WorkerConfig.CONNECT_KAFKA_CLUSTER_ID, config.kafkaClusterId());
contextLabels.put(WorkerConfig.CONNECT_GROUP_ID, config.getString(DistributedConfig.GROUP_ID_CONFIG));
MetricsContext metricsContext = new KafkaMetricsContext(JMX_PREFIX, contextLabels);
this.metrics = new Metrics(metricConfig, reporters, time, metricsContext);
long retryBackoffMs = config.getLong(CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG);
long retryBackoffMaxMs = config.getLong(CommonClientConfigs.RETRY_BACKOFF_MAX_MS_CONFIG);
Metadata metadata = new Metadata(retryBackoffMs, retryBackoffMaxMs, config.getLong(CommonClientConfigs.METADATA_MAX_AGE_CONFIG),
logContext, new ClusterResourceListeners());
List<InetSocketAddress> addresses = ClientUtils.parseAndValidateAddresses(
config.getList(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG),
config.getString(CommonClientConfigs.CLIENT_DNS_LOOKUP_CONFIG));
metadata.bootstrap(addresses);
String metricGrpPrefix = "connect";
ChannelBuilder channelBuilder = ClientUtils.createChannelBuilder(config, time, logContext);
NetworkClient netClient = new NetworkClient(
new Selector(config.getLong(CommonClientConfigs.CONNECTIONS_MAX_IDLE_MS_CONFIG), metrics, time, metricGrpPrefix, channelBuilder, logContext),
metadata,
clientId,
100, // a fixed large enough value will suffice
config.getLong(CommonClientConfigs.RECONNECT_BACKOFF_MS_CONFIG),
config.getLong(CommonClientConfigs.RECONNECT_BACKOFF_MAX_MS_CONFIG),
config.getInt(CommonClientConfigs.SEND_BUFFER_CONFIG),
config.getInt(CommonClientConfigs.RECEIVE_BUFFER_CONFIG),
config.getInt(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG),
config.getLong(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG),
config.getLong(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG),
time,
true,
new ApiVersions(),
logContext,
config.getLong(CommonClientConfigs.METADATA_RECOVERY_REBOOTSTRAP_TRIGGER_MS_CONFIG),
MetadataRecoveryStrategy.forName(config.getString(CommonClientConfigs.METADATA_RECOVERY_STRATEGY_CONFIG))
);
this.client = new ConsumerNetworkClient(
logContext,
netClient,
metadata,
time,
retryBackoffMs,
config.getInt(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG),
Integer.MAX_VALUE);
this.coordinator = new WorkerCoordinator(
new GroupRebalanceConfig(config, GroupRebalanceConfig.ProtocolType.CONNECT),
logContext,
this.client,
metrics,
metricGrpPrefix,
time,
restUrl,
configStorage,
listener,
ConnectProtocolCompatibility.compatibility(config.getString(DistributedConfig.CONNECT_PROTOCOL_CONFIG)),
config.getInt(DistributedConfig.SCHEDULED_REBALANCE_MAX_DELAY_MS_CONFIG));
AppInfoParser.registerAppInfo(JMX_PREFIX, clientId, metrics, time.milliseconds());
log.debug("Connect group member created");
} catch (Throwable t) {
// call close methods if internal objects are already constructed
// this is to prevent resource leak. see KAFKA-2121
stop(true);
// now propagate the exception
throw new KafkaException("Failed to construct kafka consumer", t);
}
}
public void stop() {
if (stopped) return;
stop(false);
}
/**
* Ensure that the connection to the broker coordinator is up and that the worker is an
* active member of the group.
*/
public void ensureActive(Supplier<UncheckedCloseable> onPoll) {
coordinator.poll(0, onPoll);
}
public void poll(long timeout, Supplier<UncheckedCloseable> onPoll) {
if (timeout < 0)
throw new IllegalArgumentException("Timeout must not be negative");
coordinator.poll(timeout, onPoll);
}
/**
* Interrupt any running poll() calls, causing a WakeupException to be thrown in the thread invoking that method.
*/
public void wakeup() {
this.client.wakeup();
}
/**
* Get the member ID of this worker in the group of workers.
* <p>
* This ID is the unique member ID automatically generated.
*
* @return the member ID
*/
public String memberId() {
return coordinator.memberId();
}
public void requestRejoin() {
coordinator.requestRejoin("connect worker requested rejoin");
}
public void maybeLeaveGroup(String leaveReason) {
coordinator.maybeLeaveGroup(CloseOptions.GroupMembershipOperation.LEAVE_GROUP, leaveReason);
}
public String ownerUrl(String connector) {
return coordinator.ownerUrl(connector);
}
public String ownerUrl(ConnectorTaskId task) {
return coordinator.ownerUrl(task);
}
/**
* Get the version of the connect protocol that is currently active in the group of workers.
*
* @return the current connect protocol version
*/
public short currentProtocolVersion() {
return coordinator.currentProtocolVersion();
}
public void revokeAssignment(ExtendedAssignment assignment) {
coordinator.revokeAssignment(assignment);
}
private void stop(boolean swallowException) {
log.trace("Stopping the Connect group member.");
AtomicReference<Throwable> firstException = new AtomicReference<>();
this.stopped = true;
Utils.closeQuietly(coordinator, "coordinator", firstException);
Utils.closeQuietly(metrics, "consumer metrics", firstException);
Utils.closeQuietly(client, "consumer network client", firstException);
AppInfoParser.unregisterAppInfo(JMX_PREFIX, clientId, metrics);
if (firstException.get() != null && !swallowException)
throw new KafkaException("Failed to stop the Connect group member", firstException.get());
else
log.debug("The Connect group member has stopped.");
}
// Visible for testing
Metrics metrics() {
return this.metrics;
}
} | java | github | https://github.com/apache/kafka | connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/WorkerGroupMember.java |
# Example app with next-sass
This example demonstrates how to use Next.js' built-in Global Sass/Scss imports and Component-Level Sass/Scss modules support.
## Deploy your own
[](https://vercel.com/new/clone?repository-url=https://github.com/vercel/next.js/tree/canary/examples/with-sass&project-name=with-sass&repository-name=with-sass)
## How to use
Execute [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app) with [npm](https://docs.npmjs.com/cli/init), [Yarn](https://yarnpkg.com/lang/en/docs/cli/create/), or [pnpm](https://pnpm.io) to bootstrap the example:
```bash
npx create-next-app --example with-sass with-sass-app
```
```bash
yarn create next-app --example with-sass with-sass-app
```
```bash
pnpm create next-app --example with-sass with-sass-app
```
Run production build with:
```bash
npm run build
npm run start
# or
yarn build
yarn start
# or
pnpm build
pnpm start
```
Deploy it to the cloud with [Vercel](https://vercel.com/new?utm_source=github&utm_medium=readme&utm_campaign=next-example) ([Documentation](https://nextjs.org/docs/deployment)). | unknown | github | https://github.com/vercel/next.js | examples/with-sass/README.md |
/*
@Copyright Barrett Adair 2015-2017
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE.md or copy at http://boost.org/LICENSE_1_0.txt)
*/
#ifndef BOOST_CLBL_TRTS_IS_CONST_MEMBER_HPP
#define BOOST_CLBL_TRTS_IS_CONST_MEMBER_HPP
#include <boost/callable_traits/detail/core.hpp>
namespace boost { namespace callable_traits {
//[ is_const_member_hpp
/*`[section:ref_is_const_member is_const_member]
[heading Header]
``#include <boost/callable_traits/is_const_member.hpp>``
[heading Definition]
*/
// inherits from either std::true_type or std::false_type
template<typename T>
struct is_const_member;
//<-
template<typename T>
struct is_const_member
: detail::traits<detail::shallow_decay<T>>::is_const_member {
using type = typename detail::traits<
detail::shallow_decay<T>>::is_const_member;
};
#ifdef BOOST_CLBL_TRTS_DISABLE_VARIABLE_TEMPLATES
template<typename T>
struct is_const_member_v {
static_assert(std::is_same<T, detail::dummy>::value,
"Variable templates not supported on this compiler.");
};
#else
//->
// only available when variable templates are supported
template<typename T>
//<-
BOOST_CLBL_TRAITS_INLINE_VAR
//->
constexpr bool is_const_member_v = //see below
//<-
detail::traits<detail::shallow_decay<T>>::is_const_member::value;
#endif
}} // namespace boost::callable_traits
//->
/*`
[heading Constraints]
* none
[heading Behavior]
* `is_const_member<T>::value` is `true` when either:
* `T` is a function type with a `const` member qualifier
* `T` is a pointer to a member function with a `const` member qualifier
* `T` is a function object with a non-overloaded `operator()`, where the `operator()` has a `const` member qualifier
* On compilers that support variable templates, `is_const_member_v<T>` is equivalent to `is_const_member<T>::value`.
[heading Input/Output Examples]
[table
[[`T`] [`is_const_member_v<T>`]]
[[`int() const`] [`true`]]
[[`int() const volatile`] [`true`]]
[[`int() const & transaction_safe`] [`true`]]
[[`int() const &&`] [`true`]]
[[`int(foo::*&)() const`] [`true`]]
[[`int(foo::*)() const volatile`] [`true`]]
[[`int(foo::*)() const volatile &&`][`true`]]
[[`int(foo::* const)() const`] [`true`]]
[[`int()`] [`false`]]
[[`int() volatile`] [`false`]]
[[`int() &&`] [`false`]]
[[`int(*)()`] [`false`]]
[[`int`] [`false`]]
[[`int foo::*`] [`false`]]
[[`const int foo::*`] [`false`]]
]
[heading Example Program]
[import ../example/is_const_member.cpp]
[is_const_member]
[endsect]
*/
//]
#endif // #ifndef BOOST_CLBL_TRTS_IS_CONST_MEMBER_HPP | unknown | github | https://github.com/mysql/mysql-server | extra/boost/boost_1_87_0/boost/callable_traits/is_const_member.hpp |
# vim: sts=4 sw=4 et
# GladeVcp Widgets
#
# Copyright (c) 2010 Pavel Shramov <shramov@mexmat.net>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
import gtk
import gobject
import cairo
import math
import gtk.glade
import time
from collections import deque
from hal_widgets import _HalWidgetBase, hal
MAX_INT = 0x7fffffff
def gdk_color_tuple(c):
if not c:
return 0, 0, 0
return c.red_float, c.green_float, c.blue_float
def mround(v, m):
vm = v % m
if vm == 0:
if v > 0: return v - m
if v < 0: return v + m
return 0
if v > 0: return v - vm
if v < 0: return v - vm + m
return 0
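The axis-drawing code below relies on `mround` to find the first tick multiple inside the visible range. Its behavior is easiest to see in isolation; note the quirk that exact multiples also step one unit toward zero (this is a copy of the helper above, shown with sample values for illustration):

```python
def mround(v, m):
    # Round v toward zero to a multiple of m; values that are already
    # exact multiples step one unit further inward (20 -> 10 for m=10).
    vm = v % m
    if vm == 0:
        if v > 0: return v - m
        if v < 0: return v + m
        return 0
    if v > 0: return v - vm
    if v < 0: return v - vm + m
    return 0

# 17 and -17 both land on the nearest inward multiple of 10:
samples = [mround(17, 10), mround(-17, 10), mround(20, 10), mround(0, 10)]
```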
class HAL_Graph(gtk.DrawingArea, _HalWidgetBase):
__gtype_name__ = 'HAL_Graph'
__gproperties__ = {
'min' : ( gobject.TYPE_FLOAT, 'Min', 'Minimum value',
-MAX_INT, MAX_INT, 0, gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT),
'max' : ( gobject.TYPE_FLOAT, 'Max', 'Maximum value',
-MAX_INT, MAX_INT, 100, gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT),
'autoscale' : ( gobject.TYPE_BOOLEAN, 'Autoscale', 'Autoscale Y axis',
False, gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT),
'period' : ( gobject.TYPE_FLOAT, 'Period', 'TIme period to display',
-MAX_INT, MAX_INT, 60, gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT),
        'tick'     : ( gobject.TYPE_INT, 'Tick period', 'Data acquisition period in ms',
                100, 10000, 500, gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT),

'zero' : ( gobject.TYPE_FLOAT, 'Zero', 'Zero value',
-MAX_INT, MAX_INT, 0, gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT),
'value' : ( gobject.TYPE_FLOAT, 'Value', 'Current meter value (for glade testing)',
-MAX_INT, MAX_INT, 0, gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT),
'yticks' : ( gobject.TYPE_FLOAT, 'Y Tick scale', 'Ticks on Y scale',
0, MAX_INT, 10, gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT),
'xticks' : ( gobject.TYPE_FLOAT, 'X Tick scale', 'Ticks on X scale (in seconds)',
0, MAX_INT, 10, gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT),
'fg_color' : ( gtk.gdk.Color.__gtype__, 'Graph color', "Set graphing color",
gobject.PARAM_READWRITE),
'bg_color' : ( gtk.gdk.Color.__gtype__, 'Background', "Choose background color",
gobject.PARAM_READWRITE),
'fg_fill' : ( gobject.TYPE_BOOLEAN, 'Fill graph', 'Fill area covered with graph',
False, gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT),
'force_width' : ( gobject.TYPE_INT, 'Forced width', 'Force bar width not dependent on widget size. -1 to disable',
-1, MAX_INT, -1, gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT),
'force_height' : ( gobject.TYPE_INT, 'Forced height', 'Force bar height not dependent on widget size. -1 to disable',
-1, MAX_INT, -1, gobject.PARAM_READWRITE | gobject.PARAM_CONSTRUCT),
'time_format' : ( gobject.TYPE_STRING, 'Time format',
'Time format to display. Use any strftime capable formatting',
"%M:%S", gobject.PARAM_READWRITE|gobject.PARAM_CONSTRUCT),
'label' : ( gobject.TYPE_STRING, 'Graph label', 'Label to display',
"", gobject.PARAM_READWRITE|gobject.PARAM_CONSTRUCT),
'sublabel' : ( gobject.TYPE_STRING, 'Graph sub label', 'Sub text to display',
"", gobject.PARAM_READWRITE|gobject.PARAM_CONSTRUCT),
}
__gproperties = __gproperties__
def __init__(self):
super(HAL_Graph, self).__init__()
self.bg_color = gtk.gdk.Color('white')
self.fg_color = gtk.gdk.Color('red')
self.force_radius = None
self.ticks = deque()
self.ticks_saved = []
self.time_strings = {}
self.tick_period = 0.1
self.connect("button-press-event", self.snapshot)
self.connect("expose-event", self.expose)
self.add_events(gtk.gdk.BUTTON_PRESS_MASK)
self.tick = 500
self.tick_idx = 0
self.hal_pin = 0
gobject.timeout_add(self.tick, self.tick_poll, self.tick_idx)
def _hal_init(self):
_HalWidgetBase._hal_init(self)
self.hal_pin = self.hal.newpin(self.hal_name, hal.HAL_FLOAT, hal.HAL_IN)
def tick_poll(self, idx):
if self.tick_idx != idx:
return False
v = self.hal_pin and self.hal_pin.get()
now = time.time()
for (t,_) in list(self.ticks):
if t >= now - self.period:
break
self.ticks.popleft()
self.ticks.append((now, v))
self.queue_draw()
return True
def snapshot(self, widget, event):
if event.button != 1:
return
if self.ticks_saved:
self.ticks_saved = []
else:
self.ticks_saved = list(self.ticks)
def expose(self, widget, event):
w = self.allocation.width
h = self.allocation.height
fw = self.force_width
fh = self.force_height
aw = max(w, fw)
ah = max(h, fh)
#self.set_size_request(aw, ah)
if fw != -1: w = fw
if fh != -1: h = fh
cr = widget.window.cairo_create()
cr.set_line_width(2)
cr.set_source_rgb(0, 0, 0)
#print w, h, aw, ah, fw, fh
cr.set_line_width(2)
cr.translate((aw - w) / 2, (ah - h) / 2)
cr.rectangle(0, 0, w, h)
cr.clip_preserve()
cr.stroke()
cr.translate(1, 1)
w, h = w - 2, h - 2
cr.set_line_width(1)
cr.set_source_color(self.bg_color)
cr.rectangle(0, 0, w, h)
cr.stroke_preserve()
cr.fill()
#tw = self.tick_period * w / self.period
tnow = now = time.time()
if self.ticks_saved:
now = max(map(lambda x: x[0], self.ticks_saved))
cr.set_source_rgb(0, 0, 0)
def t2x(t, n=now):
p = (t - n + self.period) / self.period
if p < 0 or p > 1:
return None
return w * p
font_small = max(h/20, 10)
font_large = max(h/10, 20)
cr.set_font_size(font_small)
ymin, ymax = self.min, self.max
yticks = self.yticks
if self.autoscale:
tv = map(lambda x: x[1], self.ticks_saved + list(self.ticks))
if tv:
ymin, ymax = min(tv), max(tv)
ymin -= abs(ymin) * 0.1
ymax += abs(ymax) * 0.1
else:
ymin, ymax = -1.1, 1.1
yticks = 0
if not yticks:
if ymin == ymax:
ymin -= max(1, abs(ymin) * 0.1)
ymax += max(1, abs(ymax) * 0.1)
#print ymax, ymin, ymax - ymin
yround = 10 ** math.floor(math.log10((ymax - ymin) / 10))
yticks = mround((ymax - ymin) / 10, yround)
self.draw_xticks(cr, w, h, self.xticks, now, t2x)
self.draw_yticks(cr, w, h, ymin, ymax, yticks)
cr.set_source_rgb(0, 0, 0)
cr.set_font_size(font_large)
self.text_at(cr, self.label, w/2, font_large, yalign='top')
cr.set_font_size(font_small)
self.text_at(cr, self.sublabel, w/2, 2.5 * font_large, yalign='top')
cr.set_source_color(self.fg_color)
if self.ticks_saved:
self.draw_graph(cr, w, h, ymin, ymax, self.ticks_saved, t2x)
cr.set_source_rgba(*(gdk_color_tuple(self.fg_color) + (0.3,)))
self.draw_graph(cr, w, h, ymin, ymax, self.ticks, lambda t: t2x(t, tnow))
if not (self.flags() & gtk.PARENT_SENSITIVE):
cr.set_source_rgba(0, 0, 0, 0.3)
cr.set_operator(cairo.OPERATOR_DEST_OUT)
cr.rectangle(0, 0, w, h)
cr.stroke_preserve()
cr.fill()
return True
def text_at(self, cr, text, x, y, xalign='center', yalign='center'):
xbearing, ybearing, width, height, xadvance, yadvance = cr.text_extents(text)
#print xbearing, ybearing, width, height, xadvance, yadvance
if xalign == 'center':
x = x - width/2
elif xalign == 'right':
x = x - width
if yalign == 'center':
y = y + height/2
elif yalign == 'top':
y = y + height
cr.move_to(x, y)
cr.show_text(text)
def draw_graph(self, cr, w, h, ymin, ymax, ticks, t2x):
move = True
for (t, v) in ticks:
if v is None:
move = True
continue
v = min(max(v, ymin), ymax)
x = t2x(t)
if x is None:
move = True
continue
y = h * (1 - (v - ymin)/(ymax - ymin))
if move:
cr.move_to(x, y)
move = False
cr.line_to(x, y)
cr.stroke()
def draw_xticks(self, cr, w, h, xticks, now, t2x):
rnow = mround(now, xticks)
for t in range(0, int(self.period / xticks)):
ts = int(rnow - t * xticks)
x = t2x(ts)
if x is None: continue
cr.move_to(x, h)
cr.line_to(x, 0.98 * h)
s = self.time_string(ts)
self.text_at(cr, s, x, 0.96 * h, yalign='bottom')
cr.stroke()
def draw_yticks(self, cr, w, h, ymin, ymax, yticks):
ysize = ymax - ymin
rmax = mround(ymax, yticks)
rmin = mround(ymin, yticks)
rsize = rmax - rmin
for t in range(0, int(rsize / yticks) + 1):
v = rmin + yticks * t
y = h * (1 - (v - ymin)/ ysize)
cr.move_to(0, y)
cr.line_to(w/100, y)
cr.move_to(w, y)
cr.line_to(w - w/100, y)
self.text_at(cr, str(v), 0.02 * w, y, xalign='left', yalign='center')
cr.stroke()
cr.set_source_rgba(0.5, 0.5, 0.5, 0.5)
for t in range(0, int(rsize / yticks) + 1):
v = rmin + yticks * t
y = h * (1 - (v - ymin)/ ysize)
cr.move_to(0.1*w, y)
cr.line_to(0.9*w, y)
cr.stroke()
def time_string(self, ts):
if ts in self.time_strings:
return self.time_strings[ts]
s = time.strftime(self.time_format, time.localtime(ts))
self.time_strings[ts] = s
return s
    def time_strings_clean(self, now):
        # filter() was missing its iterable; materialize the stale keys
        # first so the dict is not mutated while being iterated.
        for k in [k for k in self.time_strings if k < now]:
            del self.time_strings[k]
def set_value(self, v):
self.value = v
self.queue_draw()
def do_get_property(self, property):
name = property.name.replace('-', '_')
if name in self.__gproperties.keys():
return getattr(self, name)
else:
raise AttributeError('unknown property %s' % property.name)
def do_set_property(self, property, value):
name = property.name.replace('-', '_')
if name == 'tick':
self.tick_idx += 1
gobject.timeout_add(value, self.tick_poll, self.tick_idx)
if name in ['bg_color', 'fg_color']:
if not value:
return False
if name in self.__gproperties.keys():
setattr(self, name, value)
self.queue_draw()
else:
raise AttributeError('unknown property %s' % property.name)
        if name in ['force_width', 'force_height']:
            #print "Forcing size request %s" % name
            self.set_size_request(self.force_width, self.force_height)
self.queue_draw()
return True | unknown | codeparrot/codeparrot-clean | ||
"""
Helper functions for loading environment settings.
"""
from __future__ import print_function
import os
import sys
import json
from lazy import lazy
from path import Path as path
import memcache
class Env(object):
"""
Load information about the execution environment.
"""
# Root of the git repository (edx-platform)
REPO_ROOT = path(__file__).abspath().parent.parent.parent
# Reports Directory
REPORT_DIR = REPO_ROOT / 'reports'
METRICS_DIR = REPORT_DIR / 'metrics'
# Python unittest dirs
PYTHON_COVERAGERC = REPO_ROOT / ".coveragerc"
# Bok_choy dirs
BOK_CHOY_DIR = REPO_ROOT / "common" / "test" / "acceptance"
BOK_CHOY_LOG_DIR = REPO_ROOT / "test_root" / "log"
BOK_CHOY_REPORT_DIR = REPORT_DIR / "bok_choy"
BOK_CHOY_A11Y_REPORT_DIR = REPORT_DIR / "a11y"
BOK_CHOY_COVERAGERC = BOK_CHOY_DIR / ".coveragerc"
BOK_CHOY_A11Y_COVERAGERC = BOK_CHOY_DIR / ".a11ycoveragerc"
BOK_CHOY_A11Y_CUSTOM_RULES_FILE = (
REPO_ROOT / "node_modules" / "edx-custom-a11y-rules" /
"lib" / "custom_a11y_rules.js"
)
PA11YCRAWLER_REPORT_DIR = REPORT_DIR / "pa11ycrawler"
PA11YCRAWLER_COVERAGERC = BOK_CHOY_DIR / ".pa11ycrawlercoveragerc"
# If set, put reports for run in "unique" directories.
# The main purpose of this is to ensure that the reports can be 'slurped'
# in the main jenkins flow job without overwriting the reports from other
# build steps. For local development/testing, this shouldn't be needed.
if os.environ.get("SHARD", None):
shard_str = "shard_{}".format(os.environ.get("SHARD"))
BOK_CHOY_REPORT_DIR = BOK_CHOY_REPORT_DIR / shard_str
BOK_CHOY_LOG_DIR = BOK_CHOY_LOG_DIR / shard_str
# For the time being, stubs are used by both the bok-choy and lettuce acceptance tests
# For this reason, the stubs package is currently located in the Django app called "terrain"
# where other lettuce configuration is stored.
BOK_CHOY_STUB_DIR = REPO_ROOT / "common" / "djangoapps" / "terrain"
# Directory that videos are served from
VIDEO_SOURCE_DIR = REPO_ROOT / "test_root" / "data" / "video"
BOK_CHOY_SERVERS = {
'lms': {
'port': 8003,
'log': BOK_CHOY_LOG_DIR / "bok_choy_lms.log"
},
'cms': {
'port': 8031,
'log': BOK_CHOY_LOG_DIR / "bok_choy_studio.log"
}
}
BOK_CHOY_STUBS = {
'xqueue': {
'port': 8040,
'log': BOK_CHOY_LOG_DIR / "bok_choy_xqueue.log",
'config': 'register_submission_url=http://0.0.0.0:8041/test/register_submission',
},
'ora': {
'port': 8041,
'log': BOK_CHOY_LOG_DIR / "bok_choy_ora.log",
'config': '',
},
'comments': {
'port': 4567,
'log': BOK_CHOY_LOG_DIR / "bok_choy_comments.log",
},
'video': {
'port': 8777,
'log': BOK_CHOY_LOG_DIR / "bok_choy_video_sources.log",
'config': "root_dir={}".format(VIDEO_SOURCE_DIR),
},
'youtube': {
'port': 9080,
'log': BOK_CHOY_LOG_DIR / "bok_choy_youtube.log",
},
'edxnotes': {
'port': 8042,
'log': BOK_CHOY_LOG_DIR / "bok_choy_edxnotes.log",
},
'ecommerce': {
'port': 8043,
'log': BOK_CHOY_LOG_DIR / "bok_choy_ecommerce.log",
},
'programs': {
'port': 8090,
'log': BOK_CHOY_LOG_DIR / "bok_choy_programs.log",
},
'catalog': {
'port': 8091,
'log': BOK_CHOY_LOG_DIR / "bok_choy_catalog.log",
},
}
# Mongo databases that will be dropped before/after the tests run
BOK_CHOY_MONGO_DATABASE = "test"
BOK_CHOY_CACHE = memcache.Client(['0.0.0.0:11211'], debug=0)
# Test Ids Directory
TEST_DIR = REPO_ROOT / ".testids"
# Files used to run each of the js test suites
# TODO: Store this as a dict. Order seems to matter for some
# reason. See issue TE-415.
KARMA_CONFIG_FILES = [
REPO_ROOT / 'cms/static/karma_cms.conf.js',
REPO_ROOT / 'cms/static/karma_cms_squire.conf.js',
REPO_ROOT / 'lms/static/karma_lms.conf.js',
REPO_ROOT / 'lms/static/karma_lms_coffee.conf.js',
REPO_ROOT / 'common/lib/xmodule/xmodule/js/karma_xmodule.conf.js',
REPO_ROOT / 'common/static/karma_common.conf.js',
REPO_ROOT / 'common/static/karma_common_requirejs.conf.js',
]
JS_TEST_ID_KEYS = [
'cms',
'cms-squire',
'lms',
'lms-coffee',
'xmodule',
'common',
'common-requirejs'
]
JS_REPORT_DIR = REPORT_DIR / 'javascript'
# Directories used for common/lib/ tests
LIB_TEST_DIRS = []
for item in (REPO_ROOT / "common/lib").listdir():
if (REPO_ROOT / 'common/lib' / item).isdir():
LIB_TEST_DIRS.append(path("common/lib") / item.basename())
LIB_TEST_DIRS.append(path("pavelib/paver_tests"))
# Directory for i18n test reports
I18N_REPORT_DIR = REPORT_DIR / 'i18n'
# Service variant (lms, cms, etc.) configured with an environment variable
# We use this to determine which envs.json file to load.
SERVICE_VARIANT = os.environ.get('SERVICE_VARIANT', None)
# If service variant not configured in env, then pass the correct
# environment for lms / cms
    if not SERVICE_VARIANT:  # this will intentionally catch ""
if any(i in sys.argv[1:] for i in ('cms', 'studio')):
SERVICE_VARIANT = 'cms'
else:
SERVICE_VARIANT = 'lms'
@lazy
def env_tokens(self):
"""
Return a dict of environment settings.
If we couldn't find the JSON file, issue a warning and return an empty dict.
"""
# Find the env JSON file
if self.SERVICE_VARIANT:
env_path = self.REPO_ROOT.parent / "{service}.env.json".format(service=self.SERVICE_VARIANT)
else:
env_path = path("env.json").abspath()
# If the file does not exist, here or one level up,
# issue a warning and return an empty dict
if not env_path.isfile():
env_path = env_path.parent.parent / env_path.basename()
if not env_path.isfile():
print(
"Warning: could not find environment JSON file "
"at '{path}'".format(path=env_path),
file=sys.stderr,
)
return dict()
# Otherwise, load the file as JSON and return the resulting dict
try:
with open(env_path) as env_file:
return json.load(env_file)
except ValueError:
print(
"Error: Could not parse JSON "
"in {path}".format(path=env_path),
file=sys.stderr,
)
sys.exit(1)
@lazy
def feature_flags(self):
"""
Return a dictionary of feature flags configured by the environment.
"""
return self.env_tokens.get('FEATURES', dict()) | unknown | codeparrot/codeparrot-clean | ||
import array
import codecs
import locale
import os
import pickle
import sys
import threading
import time
import unittest
import warnings
import weakref
from collections import UserList
from test import support
from test.support import os_helper, threading_helper
from test.support.script_helper import assert_python_ok
from .utils import CTestCase, PyTestCase
import io # C implementation of io
import _pyio as pyio # Python implementation of io
def _default_chunk_size():
"""Get the default TextIOWrapper chunk size"""
with open(__file__, "r", encoding="latin-1") as f:
return f._CHUNK_SIZE
class BadIndex:
def __index__(self):
1/0
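BadIndex is used by later tests to verify that an exception raised inside `__index__` propagates out of the coercing operation unchanged. A minimal sketch of that propagation (the `_DemoBadIndex` name is illustrative, not part of the suite):

```python
class _DemoBadIndex:
    def __index__(self):
        # Any exception raised here surfaces from the coercing operation.
        raise ValueError("boom")

try:
    b"0123456789"[_DemoBadIndex()]  # indexing coerces via __index__
except ValueError as exc:
    result = str(exc)
else:
    raise AssertionError("expected ValueError from __index__")
```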
# To fully exercise seek/tell, the StatefulIncrementalDecoder has these
# properties:
# - A single output character can correspond to many bytes of input.
# - The number of input bytes to complete the character can be
# undetermined until the last input byte is received.
# - The number of input bytes can vary depending on previous input.
# - A single input byte can correspond to many characters of output.
# - The number of output characters can be undetermined until the
# last input byte is received.
# - The number of output characters can vary depending on previous input.
class StatefulIncrementalDecoder(codecs.IncrementalDecoder):
"""
For testing seek/tell behavior with a stateful, buffering decoder.
Input is a sequence of words. Words may be fixed-length (length set
by input) or variable-length (period-terminated). In variable-length
mode, extra periods are ignored. Possible words are:
- 'i' followed by a number sets the input length, I (maximum 99).
When I is set to 0, words are space-terminated.
- 'o' followed by a number sets the output length, O (maximum 99).
- Any other word is converted into a word followed by a period on
the output. The output word consists of the input word truncated
or padded out with hyphens to make its length equal to O. If O
is 0, the word is output verbatim without truncating or padding.
I and O are initially set to 1. When I changes, any buffered input is
re-scanned according to the new I. EOF also terminates the last word.
"""
def __init__(self, errors='strict'):
codecs.IncrementalDecoder.__init__(self, errors)
self.reset()
def __repr__(self):
return '<SID %x>' % id(self)
def reset(self):
self.i = 1
self.o = 1
self.buffer = bytearray()
def getstate(self):
i, o = self.i ^ 1, self.o ^ 1 # so that flags = 0 after reset()
return bytes(self.buffer), i*100 + o
def setstate(self, state):
buffer, io = state
self.buffer = bytearray(buffer)
i, o = divmod(io, 100)
self.i, self.o = i ^ 1, o ^ 1
def decode(self, input, final=False):
output = ''
for b in input:
if self.i == 0: # variable-length, terminated with period
if b == ord('.'):
if self.buffer:
output += self.process_word()
else:
self.buffer.append(b)
else: # fixed-length, terminate after self.i bytes
self.buffer.append(b)
if len(self.buffer) == self.i:
output += self.process_word()
if final and self.buffer: # EOF terminates the last word
output += self.process_word()
return output
def process_word(self):
output = ''
if self.buffer[0] == ord('i'):
self.i = min(99, int(self.buffer[1:] or 0)) # set input length
elif self.buffer[0] == ord('o'):
self.o = min(99, int(self.buffer[1:] or 0)) # set output length
else:
output = self.buffer.decode('ascii')
if len(output) < self.o:
output += '-'*self.o # pad out with hyphens
if self.o:
output = output[:self.o] # truncate to output length
output += '.'
self.buffer = bytearray()
return output
codecEnabled = False
# bpo-41919: This function is separated from StatefulIncrementalDecoder to avoid
# a resource leak when registering codecs and cleanup functions.
def lookupTestDecoder(name):
if StatefulIncrementalDecoder.codecEnabled and name == 'test_decoder':
latin1 = codecs.lookup('latin-1')
return codecs.CodecInfo(
name='test_decoder', encode=latin1.encode, decode=None,
incrementalencoder=None,
streamreader=None, streamwriter=None,
incrementaldecoder=StatefulIncrementalDecoder)
class StatefulIncrementalDecoderTest(unittest.TestCase):
"""
Make sure the StatefulIncrementalDecoder actually works.
"""
test_cases = [
# I=1, O=1 (fixed-length input == fixed-length output)
(b'abcd', False, 'a.b.c.d.'),
# I=0, O=0 (variable-length input, variable-length output)
(b'oiabcd', True, 'abcd.'),
# I=0, O=0 (should ignore extra periods)
(b'oi...abcd...', True, 'abcd.'),
# I=0, O=6 (variable-length input, fixed-length output)
(b'i.o6.x.xyz.toolongtofit.', False, 'x-----.xyz---.toolon.'),
# I=2, O=6 (fixed-length input < fixed-length output)
(b'i.i2.o6xyz', True, 'xy----.z-----.'),
# I=6, O=3 (fixed-length input > fixed-length output)
(b'i.o3.i6.abcdefghijklmnop', True, 'abc.ghi.mno.'),
# I=0, then 3; O=29, then 15 (with longer output)
(b'i.o29.a.b.cde.o15.abcdefghijabcdefghij.i3.a.b.c.d.ei00k.l.m', True,
'a----------------------------.' +
'b----------------------------.' +
'cde--------------------------.' +
'abcdefghijabcde.' +
'a.b------------.' +
'.c.------------.' +
'd.e------------.' +
'k--------------.' +
'l--------------.' +
'm--------------.')
]
def test_decoder(self):
# Try a few one-shot test cases.
for input, eof, output in self.test_cases:
d = StatefulIncrementalDecoder()
self.assertEqual(d.decode(input, eof), output)
# Also test an unfinished decode, followed by forcing EOF.
d = StatefulIncrementalDecoder()
self.assertEqual(d.decode(b'oiabcd'), '')
self.assertEqual(d.decode(b'', 1), 'abcd.')
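The seek/tell machinery that StatefulIncrementalDecoder exercises rests on incremental decoders exposing getstate()/setstate(): TextIOWrapper snapshots the decoder state together with the byte position so a later seek can resume in the middle of a multi-byte sequence. The same round-trip can be seen on the stdlib UTF-8 incremental decoder (a sketch, independent of the test classes):

```python
import codecs

dec = codecs.getincrementaldecoder("utf-8")()
# Feed only the first byte of the two-byte sequence for '\xe9'.
assert dec.decode(b"\xc3") == ""   # nothing decodable yet
state = dec.getstate()             # (buffered bytes, integer flags)
assert state[0] == b"\xc3"         # the partial sequence is buffered

# A fresh decoder restored from that state resumes exactly where
# the first one left off.
dec2 = codecs.getincrementaldecoder("utf-8")()
dec2.setstate(state)
out = dec2.decode(b"\xa9")
```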
class TextIOWrapperTest:
def setUp(self):
self.testdata = b"AAA\r\nBBB\rCCC\r\nDDD\nEEE\r\n"
self.normalized = b"AAA\nBBB\nCCC\nDDD\nEEE\n".decode("ascii")
os_helper.unlink(os_helper.TESTFN)
codecs.register(lookupTestDecoder)
self.addCleanup(codecs.unregister, lookupTestDecoder)
def tearDown(self):
os_helper.unlink(os_helper.TESTFN)
def test_constructor(self):
r = self.BytesIO(b"\xc3\xa9\n\n")
b = self.BufferedReader(r, 1000)
t = self.TextIOWrapper(b, encoding="utf-8")
t.__init__(b, encoding="latin-1", newline="\r\n")
self.assertEqual(t.encoding, "latin-1")
self.assertEqual(t.line_buffering, False)
t.__init__(b, encoding="utf-8", line_buffering=True)
self.assertEqual(t.encoding, "utf-8")
self.assertEqual(t.line_buffering, True)
self.assertEqual("\xe9\n", t.readline())
invalid_type = TypeError if self.is_C else ValueError
with self.assertRaises(invalid_type):
t.__init__(b, encoding=42)
with self.assertRaises(UnicodeEncodeError):
t.__init__(b, encoding='\udcfe')
with self.assertRaises(ValueError):
t.__init__(b, encoding='utf-8\0')
with self.assertRaises(invalid_type):
t.__init__(b, encoding="utf-8", errors=42)
if support.Py_DEBUG or sys.flags.dev_mode or self.is_C:
with self.assertRaises(UnicodeEncodeError):
t.__init__(b, encoding="utf-8", errors='\udcfe')
if support.Py_DEBUG or sys.flags.dev_mode or self.is_C:
with self.assertRaises(ValueError):
t.__init__(b, encoding="utf-8", errors='replace\0')
with self.assertRaises(TypeError):
t.__init__(b, encoding="utf-8", newline=42)
with self.assertRaises(ValueError):
t.__init__(b, encoding="utf-8", newline='\udcfe')
with self.assertRaises(ValueError):
t.__init__(b, encoding="utf-8", newline='\n\0')
with self.assertRaises(ValueError):
t.__init__(b, encoding="utf-8", newline='xyzzy')
def test_uninitialized(self):
t = self.TextIOWrapper.__new__(self.TextIOWrapper)
del t
t = self.TextIOWrapper.__new__(self.TextIOWrapper)
self.assertRaises(Exception, repr, t)
self.assertRaisesRegex((ValueError, AttributeError),
'uninitialized|has no attribute',
t.read, 0)
t.__init__(self.MockRawIO(), encoding="utf-8")
self.assertEqual(t.read(0), '')
def test_non_text_encoding_codecs_are_rejected(self):
# Ensure the constructor complains if passed a codec that isn't
# marked as a text encoding
# http://bugs.python.org/issue20404
r = self.BytesIO()
b = self.BufferedWriter(r)
with self.assertRaisesRegex(LookupError, "is not a text encoding"):
self.TextIOWrapper(b, encoding="hex")
def test_detach(self):
r = self.BytesIO()
b = self.BufferedWriter(r)
t = self.TextIOWrapper(b, encoding="ascii")
self.assertIs(t.detach(), b)
t = self.TextIOWrapper(b, encoding="ascii")
t.write("howdy")
self.assertFalse(r.getvalue())
t.detach()
self.assertEqual(r.getvalue(), b"howdy")
self.assertRaises(ValueError, t.detach)
# Operations independent of the detached stream should still work
repr(t)
self.assertEqual(t.encoding, "ascii")
self.assertEqual(t.errors, "strict")
self.assertFalse(t.line_buffering)
self.assertFalse(t.write_through)
def test_repr(self):
raw = self.BytesIO("hello".encode("utf-8"))
b = self.BufferedReader(raw)
t = self.TextIOWrapper(b, encoding="utf-8")
modname = self.TextIOWrapper.__module__
self.assertRegex(repr(t),
r"<(%s\.)?TextIOWrapper encoding='utf-8'>" % modname)
raw.name = "dummy"
self.assertRegex(repr(t),
r"<(%s\.)?TextIOWrapper name='dummy' encoding='utf-8'>" % modname)
t.mode = "r"
self.assertRegex(repr(t),
r"<(%s\.)?TextIOWrapper name='dummy' mode='r' encoding='utf-8'>" % modname)
raw.name = b"dummy"
self.assertRegex(repr(t),
r"<(%s\.)?TextIOWrapper name=b'dummy' mode='r' encoding='utf-8'>" % modname)
t.buffer.detach()
repr(t) # Should not raise an exception
def test_recursive_repr(self):
# Issue #25455
raw = self.BytesIO()
t = self.TextIOWrapper(raw, encoding="utf-8")
with support.swap_attr(raw, 'name', t), support.infinite_recursion(25):
with self.assertRaises(RuntimeError):
repr(t) # Should not crash
def test_subclass_repr(self):
class TestSubclass(self.TextIOWrapper):
pass
f = TestSubclass(self.StringIO())
self.assertIn(TestSubclass.__name__, repr(f))
def test_line_buffering(self):
r = self.BytesIO()
b = self.BufferedWriter(r, 1000)
t = self.TextIOWrapper(b, encoding="utf-8", newline="\n", line_buffering=True)
t.write("X")
self.assertEqual(r.getvalue(), b"") # No flush happened
t.write("Y\nZ")
self.assertEqual(r.getvalue(), b"XY\nZ") # All got flushed
t.write("A\rB")
self.assertEqual(r.getvalue(), b"XY\nZA\rB")
def test_reconfigure_line_buffering(self):
r = self.BytesIO()
b = self.BufferedWriter(r, 1000)
t = self.TextIOWrapper(b, encoding="utf-8", newline="\n", line_buffering=False)
t.write("AB\nC")
self.assertEqual(r.getvalue(), b"")
t.reconfigure(line_buffering=True) # implicit flush
self.assertEqual(r.getvalue(), b"AB\nC")
t.write("DEF\nG")
self.assertEqual(r.getvalue(), b"AB\nCDEF\nG")
t.write("H")
self.assertEqual(r.getvalue(), b"AB\nCDEF\nG")
t.reconfigure(line_buffering=False) # implicit flush
self.assertEqual(r.getvalue(), b"AB\nCDEF\nGH")
t.write("IJ")
self.assertEqual(r.getvalue(), b"AB\nCDEF\nGH")
# Keeping default value
t.reconfigure()
t.reconfigure(line_buffering=None)
self.assertEqual(t.line_buffering, False)
t.reconfigure(line_buffering=True)
t.reconfigure()
t.reconfigure(line_buffering=None)
self.assertEqual(t.line_buffering, True)
@unittest.skipIf(sys.flags.utf8_mode, "utf-8 mode is enabled")
def test_default_encoding(self):
with os_helper.EnvironmentVarGuard() as env:
# try to get a user preferred encoding different than the current
# locale encoding to check that TextIOWrapper() uses the current
# locale encoding and not the user preferred encoding
env.unset('LC_ALL', 'LANG', 'LC_CTYPE')
current_locale_encoding = locale.getencoding()
b = self.BytesIO()
with warnings.catch_warnings():
warnings.simplefilter("ignore", EncodingWarning)
t = self.TextIOWrapper(b)
self.assertEqual(t.encoding, current_locale_encoding)
def test_encoding(self):
# Check the encoding attribute is always set, and valid
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="utf-8")
self.assertEqual(t.encoding, "utf-8")
with warnings.catch_warnings():
warnings.simplefilter("ignore", EncodingWarning)
t = self.TextIOWrapper(b)
self.assertIsNotNone(t.encoding)
codecs.lookup(t.encoding)
def test_encoding_errors_reading(self):
# (1) default
b = self.BytesIO(b"abc\n\xff\n")
t = self.TextIOWrapper(b, encoding="ascii")
self.assertRaises(UnicodeError, t.read)
# (2) explicit strict
b = self.BytesIO(b"abc\n\xff\n")
t = self.TextIOWrapper(b, encoding="ascii", errors="strict")
self.assertRaises(UnicodeError, t.read)
# (3) ignore
b = self.BytesIO(b"abc\n\xff\n")
t = self.TextIOWrapper(b, encoding="ascii", errors="ignore")
self.assertEqual(t.read(), "abc\n\n")
# (4) replace
b = self.BytesIO(b"abc\n\xff\n")
t = self.TextIOWrapper(b, encoding="ascii", errors="replace")
self.assertEqual(t.read(), "abc\n\ufffd\n")
def test_encoding_errors_writing(self):
# (1) default
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="ascii")
self.assertRaises(UnicodeError, t.write, "\xff")
# (2) explicit strict
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="ascii", errors="strict")
self.assertRaises(UnicodeError, t.write, "\xff")
# (3) ignore
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="ascii", errors="ignore",
newline="\n")
t.write("abc\xffdef\n")
t.flush()
self.assertEqual(b.getvalue(), b"abcdef\n")
# (4) replace
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="ascii", errors="replace",
newline="\n")
t.write("abc\xffdef\n")
t.flush()
self.assertEqual(b.getvalue(), b"abc?def\n")
def test_newlines(self):
input_lines = [ "unix\n", "windows\r\n", "os9\r", "last\n", "nonl" ]
tests = [
[ None, [ 'unix\n', 'windows\n', 'os9\n', 'last\n', 'nonl' ] ],
[ '', input_lines ],
[ '\n', [ "unix\n", "windows\r\n", "os9\rlast\n", "nonl" ] ],
[ '\r\n', [ "unix\nwindows\r\n", "os9\rlast\nnonl" ] ],
[ '\r', [ "unix\nwindows\r", "\nos9\r", "last\nnonl" ] ],
]
encodings = (
'utf-8', 'latin-1',
'utf-16', 'utf-16-le', 'utf-16-be',
'utf-32', 'utf-32-le', 'utf-32-be',
)
# Try a range of buffer sizes to test the case where \r is the last
# character in TextIOWrapper._pending_line.
for encoding in encodings:
            data = ''.join(input_lines).encode(encoding)
for do_reads in (False, True):
for bufsize in range(1, 10):
for newline, exp_lines in tests:
bufio = self.BufferedReader(self.BytesIO(data), bufsize)
textio = self.TextIOWrapper(bufio, newline=newline,
encoding=encoding)
if do_reads:
got_lines = []
while True:
c2 = textio.read(2)
if c2 == '':
break
self.assertEqual(len(c2), 2)
got_lines.append(c2 + textio.readline())
else:
got_lines = list(textio)
for got_line, exp_line in zip(got_lines, exp_lines):
self.assertEqual(got_line, exp_line)
self.assertEqual(len(got_lines), len(exp_lines))
def test_newlines_input(self):
testdata = b"AAA\nBB\x00B\nCCC\rDDD\rEEE\r\nFFF\r\nGGG"
normalized = testdata.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
for newline, expected in [
(None, normalized.decode("ascii").splitlines(keepends=True)),
("", testdata.decode("ascii").splitlines(keepends=True)),
("\n", ["AAA\n", "BB\x00B\n", "CCC\rDDD\rEEE\r\n", "FFF\r\n", "GGG"]),
("\r\n", ["AAA\nBB\x00B\nCCC\rDDD\rEEE\r\n", "FFF\r\n", "GGG"]),
("\r", ["AAA\nBB\x00B\nCCC\r", "DDD\r", "EEE\r", "\nFFF\r", "\nGGG"]),
]:
buf = self.BytesIO(testdata)
txt = self.TextIOWrapper(buf, encoding="ascii", newline=newline)
self.assertEqual(txt.readlines(), expected)
txt.seek(0)
self.assertEqual(txt.read(), "".join(expected))
def test_newlines_output(self):
testdict = {
"": b"AAA\nBBB\nCCC\nX\rY\r\nZ",
"\n": b"AAA\nBBB\nCCC\nX\rY\r\nZ",
"\r": b"AAA\rBBB\rCCC\rX\rY\r\rZ",
"\r\n": b"AAA\r\nBBB\r\nCCC\r\nX\rY\r\r\nZ",
}
tests = [(None, testdict[os.linesep])] + sorted(testdict.items())
for newline, expected in tests:
buf = self.BytesIO()
txt = self.TextIOWrapper(buf, encoding="ascii", newline=newline)
txt.write("AAA\nB")
txt.write("BB\nCCC\n")
txt.write("X\rY\r\nZ")
txt.flush()
self.assertEqual(buf.closed, False)
self.assertEqual(buf.getvalue(), expected)
def test_destructor(self):
l = []
base = self.BytesIO
class MyBytesIO(base):
def close(self):
l.append(self.getvalue())
base.close(self)
b = MyBytesIO()
t = self.TextIOWrapper(b, encoding="ascii")
t.write("abc")
del t
support.gc_collect()
self.assertEqual([b"abc"], l)
def test_override_destructor(self):
record = []
class MyTextIO(self.TextIOWrapper):
def __del__(self):
record.append(1)
try:
f = super().__del__
except AttributeError:
pass
else:
f()
def close(self):
record.append(2)
super().close()
def flush(self):
record.append(3)
super().flush()
b = self.BytesIO()
t = MyTextIO(b, encoding="ascii")
del t
support.gc_collect()
self.assertEqual(record, [1, 2, 3])
def test_error_through_destructor(self):
# Test that the exception state is not modified by a destructor,
# even if close() fails.
rawio = self.CloseFailureIO()
with support.catch_unraisable_exception() as cm:
with self.assertRaises(AttributeError):
self.TextIOWrapper(rawio, encoding="utf-8").xyzzy
self.assertEqual(cm.unraisable.exc_type, OSError)
# Systematic tests of the text I/O API
def test_basic_io(self):
for chunksize in (1, 2, 3, 4, 5, 15, 16, 17, 31, 32, 33, 63, 64, 65):
            for enc in "ascii", "latin-1", "utf-8":  # , "utf-16-be", "utf-16-le"
f = self.open(os_helper.TESTFN, "w+", encoding=enc)
f._CHUNK_SIZE = chunksize
self.assertEqual(f.write("abc"), 3)
f.close()
f = self.open(os_helper.TESTFN, "r+", encoding=enc)
f._CHUNK_SIZE = chunksize
self.assertEqual(f.tell(), 0)
self.assertEqual(f.read(), "abc")
cookie = f.tell()
self.assertEqual(f.seek(0), 0)
self.assertEqual(f.read(None), "abc")
f.seek(0)
self.assertEqual(f.read(2), "ab")
self.assertEqual(f.read(1), "c")
self.assertEqual(f.read(1), "")
self.assertEqual(f.read(), "")
self.assertEqual(f.tell(), cookie)
self.assertEqual(f.seek(0), 0)
self.assertEqual(f.seek(0, 2), cookie)
self.assertEqual(f.write("def"), 3)
self.assertEqual(f.seek(cookie), cookie)
self.assertEqual(f.read(), "def")
if enc.startswith("utf"):
self.multi_line_test(f, enc)
f.close()
def multi_line_test(self, f, enc):
f.seek(0)
f.truncate()
sample = "s\xff\u0fff\uffff"
wlines = []
for size in (0, 1, 2, 3, 4, 5, 30, 31, 32, 33, 62, 63, 64, 65, 1000):
chars = []
for i in range(size):
chars.append(sample[i % len(sample)])
line = "".join(chars) + "\n"
wlines.append((f.tell(), line))
f.write(line)
f.seek(0)
rlines = []
while True:
pos = f.tell()
line = f.readline()
if not line:
break
rlines.append((pos, line))
self.assertEqual(rlines, wlines)
def test_telling(self):
f = self.open(os_helper.TESTFN, "w+", encoding="utf-8")
p0 = f.tell()
f.write("\xff\n")
p1 = f.tell()
f.write("\xff\n")
p2 = f.tell()
f.seek(0)
self.assertEqual(f.tell(), p0)
self.assertEqual(f.readline(), "\xff\n")
self.assertEqual(f.tell(), p1)
self.assertEqual(f.readline(), "\xff\n")
self.assertEqual(f.tell(), p2)
f.seek(0)
for line in f:
self.assertEqual(line, "\xff\n")
self.assertRaises(OSError, f.tell)
self.assertEqual(f.tell(), p2)
f.close()
def test_seeking(self):
chunk_size = _default_chunk_size()
prefix_size = chunk_size - 2
u_prefix = "a" * prefix_size
        prefix = u_prefix.encode("utf-8")
        self.assertEqual(len(u_prefix), len(prefix))
        u_suffix = "\u8888\n"
        suffix = u_suffix.encode("utf-8")
line = prefix + suffix
with self.open(os_helper.TESTFN, "wb") as f:
f.write(line*2)
with self.open(os_helper.TESTFN, "r", encoding="utf-8") as f:
s = f.read(prefix_size)
self.assertEqual(s, str(prefix, "ascii"))
self.assertEqual(f.tell(), prefix_size)
self.assertEqual(f.readline(), u_suffix)
def test_seeking_too(self):
# Regression test for a specific bug
data = b'\xe0\xbf\xbf\n'
with self.open(os_helper.TESTFN, "wb") as f:
f.write(data)
with self.open(os_helper.TESTFN, "r", encoding="utf-8") as f:
f._CHUNK_SIZE # Just test that it exists
f._CHUNK_SIZE = 2
f.readline()
f.tell()
def test_seek_and_tell(self):
        # Test seek/tell using the StatefulIncrementalDecoder.
# Make test faster by doing smaller seeks
CHUNK_SIZE = 128
def test_seek_and_tell_with_data(data, min_pos=0):
"""Tell/seek to various points within a data stream and ensure
that the decoded data returned by read() is consistent."""
f = self.open(os_helper.TESTFN, 'wb')
f.write(data)
f.close()
f = self.open(os_helper.TESTFN, encoding='test_decoder')
f._CHUNK_SIZE = CHUNK_SIZE
decoded = f.read()
f.close()
for i in range(min_pos, len(decoded) + 1): # seek positions
for j in [1, 5, len(decoded) - i]: # read lengths
f = self.open(os_helper.TESTFN, encoding='test_decoder')
self.assertEqual(f.read(i), decoded[:i])
cookie = f.tell()
self.assertEqual(f.read(j), decoded[i:i + j])
f.seek(cookie)
self.assertEqual(f.read(), decoded[i:])
f.close()
# Enable the test decoder.
StatefulIncrementalDecoder.codecEnabled = 1
# Run the tests.
try:
# Try each test case.
for input, _, _ in StatefulIncrementalDecoderTest.test_cases:
test_seek_and_tell_with_data(input)
# Position each test case so that it crosses a chunk boundary.
for input, _, _ in StatefulIncrementalDecoderTest.test_cases:
offset = CHUNK_SIZE - len(input)//2
prefix = b'.'*offset
# Don't bother seeking into the prefix (takes too long).
min_pos = offset*2
test_seek_and_tell_with_data(prefix + input, min_pos)
# Ensure our test decoder won't interfere with subsequent tests.
finally:
StatefulIncrementalDecoder.codecEnabled = 0
def test_multibyte_seek_and_tell(self):
f = self.open(os_helper.TESTFN, "w", encoding="euc_jp")
f.write("AB\n\u3046\u3048\n")
f.close()
f = self.open(os_helper.TESTFN, "r", encoding="euc_jp")
self.assertEqual(f.readline(), "AB\n")
p0 = f.tell()
self.assertEqual(f.readline(), "\u3046\u3048\n")
p1 = f.tell()
f.seek(p0)
self.assertEqual(f.readline(), "\u3046\u3048\n")
self.assertEqual(f.tell(), p1)
f.close()
def test_tell_after_readline_with_cr(self):
# Test for gh-141314: TextIOWrapper.tell() assertion failure
# when dealing with standalone carriage returns
data = b'line1\r'
with self.open(os_helper.TESTFN, "wb") as f:
f.write(data)
        with self.open(os_helper.TESTFN, "r", encoding="utf-8") as f:
# Read line that ends with \r
line = f.readline()
self.assertEqual(line, "line1\n")
# This should not cause an assertion failure
pos = f.tell()
# Verify we can seek back to this position
f.seek(pos)
remaining = f.read()
self.assertEqual(remaining, "")
def test_seek_with_encoder_state(self):
f = self.open(os_helper.TESTFN, "w", encoding="euc_jis_2004")
f.write("\u00e6\u0300")
p0 = f.tell()
f.write("\u00e6")
f.seek(p0)
f.write("\u0300")
f.close()
f = self.open(os_helper.TESTFN, "r", encoding="euc_jis_2004")
self.assertEqual(f.readline(), "\u00e6\u0300\u0300")
f.close()
def test_encoded_writes(self):
data = "1234567890"
tests = ("utf-16",
"utf-16-le",
"utf-16-be",
"utf-32",
"utf-32-le",
"utf-32-be")
for encoding in tests:
buf = self.BytesIO()
f = self.TextIOWrapper(buf, encoding=encoding)
# Check if the BOM is written only once (see issue1753).
f.write(data)
f.write(data)
f.seek(0)
self.assertEqual(f.read(), data * 2)
f.seek(0)
self.assertEqual(f.read(), data * 2)
self.assertEqual(buf.getvalue(), (data * 2).encode(encoding))
def test_unreadable(self):
class UnReadable(self.BytesIO):
def readable(self):
return False
txt = self.TextIOWrapper(UnReadable(), encoding="utf-8")
self.assertRaises(OSError, txt.read)
def test_read_one_by_one(self):
txt = self.TextIOWrapper(self.BytesIO(b"AA\r\nBB"), encoding="utf-8")
reads = ""
while True:
c = txt.read(1)
if not c:
break
reads += c
self.assertEqual(reads, "AA\nBB")
def test_readlines(self):
txt = self.TextIOWrapper(self.BytesIO(b"AA\nBB\nCC"), encoding="utf-8")
self.assertEqual(txt.readlines(), ["AA\n", "BB\n", "CC"])
txt.seek(0)
self.assertEqual(txt.readlines(None), ["AA\n", "BB\n", "CC"])
txt.seek(0)
self.assertEqual(txt.readlines(5), ["AA\n", "BB\n"])
# read in amounts equal to TextIOWrapper._CHUNK_SIZE which is 128.
def test_read_by_chunk(self):
# make sure "\r\n" straddles 128 char boundary.
txt = self.TextIOWrapper(self.BytesIO(b"A" * 127 + b"\r\nB"), encoding="utf-8")
reads = ""
while True:
c = txt.read(128)
if not c:
break
reads += c
self.assertEqual(reads, "A"*127+"\nB")
def test_writelines(self):
l = ['ab', 'cd', 'ef']
buf = self.BytesIO()
txt = self.TextIOWrapper(buf, encoding="utf-8")
txt.writelines(l)
txt.flush()
self.assertEqual(buf.getvalue(), b'abcdef')
def test_writelines_userlist(self):
l = UserList(['ab', 'cd', 'ef'])
buf = self.BytesIO()
txt = self.TextIOWrapper(buf, encoding="utf-8")
txt.writelines(l)
txt.flush()
self.assertEqual(buf.getvalue(), b'abcdef')
def test_writelines_error(self):
txt = self.TextIOWrapper(self.BytesIO(), encoding="utf-8")
self.assertRaises(TypeError, txt.writelines, [1, 2, 3])
self.assertRaises(TypeError, txt.writelines, None)
self.assertRaises(TypeError, txt.writelines, b'abc')
def test_issue1395_1(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
# read one char at a time
reads = ""
while True:
c = txt.read(1)
if not c:
break
reads += c
self.assertEqual(reads, self.normalized)
def test_issue1395_2(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt._CHUNK_SIZE = 4
reads = ""
while True:
c = txt.read(4)
if not c:
break
reads += c
self.assertEqual(reads, self.normalized)
def test_issue1395_3(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt._CHUNK_SIZE = 4
reads = txt.read(4)
reads += txt.read(4)
reads += txt.readline()
reads += txt.readline()
reads += txt.readline()
self.assertEqual(reads, self.normalized)
def test_issue1395_4(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt._CHUNK_SIZE = 4
reads = txt.read(4)
reads += txt.read()
self.assertEqual(reads, self.normalized)
def test_issue1395_5(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt._CHUNK_SIZE = 4
reads = txt.read(4)
pos = txt.tell()
txt.seek(0)
txt.seek(pos)
self.assertEqual(txt.read(4), "BBB\n")
def test_issue2282(self):
buffer = self.BytesIO(self.testdata)
txt = self.TextIOWrapper(buffer, encoding="ascii")
self.assertEqual(buffer.seekable(), txt.seekable())
def test_append_bom(self):
# The BOM is not written again when appending to a non-empty file
filename = os_helper.TESTFN
for charset in ('utf-8-sig', 'utf-16', 'utf-32'):
with self.open(filename, 'w', encoding=charset) as f:
f.write('aaa')
pos = f.tell()
with self.open(filename, 'rb') as f:
self.assertEqual(f.read(), 'aaa'.encode(charset))
with self.open(filename, 'a', encoding=charset) as f:
f.write('xxx')
with self.open(filename, 'rb') as f:
self.assertEqual(f.read(), 'aaaxxx'.encode(charset))
def test_seek_bom(self):
# Same test, but when seeking manually
filename = os_helper.TESTFN
for charset in ('utf-8-sig', 'utf-16', 'utf-32'):
with self.open(filename, 'w', encoding=charset) as f:
f.write('aaa')
pos = f.tell()
with self.open(filename, 'r+', encoding=charset) as f:
f.seek(pos)
f.write('zzz')
f.seek(0)
f.write('bbb')
with self.open(filename, 'rb') as f:
self.assertEqual(f.read(), 'bbbzzz'.encode(charset))
def test_seek_append_bom(self):
# Same test, but first seek to the start and then to the end
filename = os_helper.TESTFN
for charset in ('utf-8-sig', 'utf-16', 'utf-32'):
with self.open(filename, 'w', encoding=charset) as f:
f.write('aaa')
with self.open(filename, 'a', encoding=charset) as f:
f.seek(0)
f.seek(0, self.SEEK_END)
f.write('xxx')
with self.open(filename, 'rb') as f:
self.assertEqual(f.read(), 'aaaxxx'.encode(charset))
def test_errors_property(self):
with self.open(os_helper.TESTFN, "w", encoding="utf-8") as f:
self.assertEqual(f.errors, "strict")
with self.open(os_helper.TESTFN, "w", encoding="utf-8", errors="replace") as f:
self.assertEqual(f.errors, "replace")
@support.no_tracing
@threading_helper.requires_working_threading()
def test_threads_write(self):
# Issue6750: concurrent writes could duplicate data
event = threading.Event()
with self.open(os_helper.TESTFN, "w", encoding="utf-8", buffering=1) as f:
def run(n):
text = "Thread%03d\n" % n
event.wait()
f.write(text)
threads = [threading.Thread(target=run, args=(x,))
for x in range(20)]
with threading_helper.start_threads(threads, event.set):
time.sleep(0.02)
with self.open(os_helper.TESTFN, encoding="utf-8") as f:
content = f.read()
for n in range(20):
self.assertEqual(content.count("Thread%03d\n" % n), 1)
def test_flush_error_on_close(self):
# Test that text file is closed despite failed flush
# and that flush() is called before file closed.
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
closed = []
def bad_flush():
closed[:] = [txt.closed, txt.buffer.closed]
raise OSError()
txt.flush = bad_flush
self.assertRaises(OSError, txt.close) # exception not swallowed
self.assertTrue(txt.closed)
self.assertTrue(txt.buffer.closed)
self.assertTrue(closed) # flush() called
self.assertFalse(closed[0]) # flush() called before file closed
self.assertFalse(closed[1])
txt.flush = lambda: None # break reference loop
def test_close_error_on_close(self):
buffer = self.BytesIO(self.testdata)
def bad_flush():
raise OSError('flush')
def bad_close():
raise OSError('close')
buffer.close = bad_close
txt = self.TextIOWrapper(buffer, encoding="ascii")
txt.flush = bad_flush
with self.assertRaises(OSError) as err: # exception not swallowed
txt.close()
self.assertEqual(err.exception.args, ('close',))
self.assertIsInstance(err.exception.__context__, OSError)
self.assertEqual(err.exception.__context__.args, ('flush',))
self.assertFalse(txt.closed)
# Silence destructor error
buffer.close = lambda: None
txt.flush = lambda: None
def test_nonnormalized_close_error_on_close(self):
# Issue #21677
buffer = self.BytesIO(self.testdata)
def bad_flush():
raise non_existing_flush
def bad_close():
raise non_existing_close
buffer.close = bad_close
txt = self.TextIOWrapper(buffer, encoding="ascii")
txt.flush = bad_flush
with self.assertRaises(NameError) as err: # exception not swallowed
txt.close()
self.assertIn('non_existing_close', str(err.exception))
self.assertIsInstance(err.exception.__context__, NameError)
self.assertIn('non_existing_flush', str(err.exception.__context__))
self.assertFalse(txt.closed)
# Silence destructor error
buffer.close = lambda: None
txt.flush = lambda: None
def test_multi_close(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt.close()
txt.close()
txt.close()
self.assertRaises(ValueError, txt.flush)
def test_unseekable(self):
txt = self.TextIOWrapper(self.MockUnseekableIO(self.testdata), encoding="utf-8")
self.assertRaises(self.UnsupportedOperation, txt.tell)
self.assertRaises(self.UnsupportedOperation, txt.seek, 0)
def test_readonly_attributes(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
buf = self.BytesIO(self.testdata)
with self.assertRaises(AttributeError):
txt.buffer = buf
def test_rawio(self):
# Issue #12591: TextIOWrapper must work with raw I/O objects, so
# that subprocess.Popen() can have the required unbuffered
# semantics with universal_newlines=True.
raw = self.MockRawIO([b'abc', b'def', b'ghi\njkl\nopq\n'])
txt = self.TextIOWrapper(raw, encoding='ascii', newline='\n')
# Reads
self.assertEqual(txt.read(4), 'abcd')
self.assertEqual(txt.readline(), 'efghi\n')
self.assertEqual(list(txt), ['jkl\n', 'opq\n'])
def test_rawio_write_through(self):
# Issue #12591: with write_through=True, writes don't need a flush
raw = self.MockRawIO([b'abc', b'def', b'ghi\njkl\nopq\n'])
txt = self.TextIOWrapper(raw, encoding='ascii', newline='\n',
write_through=True)
txt.write('1')
txt.write('23\n4')
txt.write('5')
self.assertEqual(b''.join(raw._write_stack), b'123\n45')
def test_bufio_write_through(self):
# Issue #21396: write_through=True doesn't force a flush()
# on the underlying binary buffered object.
flush_called, write_called = [], []
class BufferedWriter(self.BufferedWriter):
def flush(self, *args, **kwargs):
flush_called.append(True)
return super().flush(*args, **kwargs)
def write(self, *args, **kwargs):
write_called.append(True)
return super().write(*args, **kwargs)
rawio = self.BytesIO()
data = b"a"
bufio = BufferedWriter(rawio, len(data)*2)
textio = self.TextIOWrapper(bufio, encoding='ascii',
write_through=True)
# write to the buffered io but don't overflow the buffer
text = data.decode('ascii')
textio.write(text)
# buffer.flush is not called with write_through=True
self.assertFalse(flush_called)
# buffer.write *is* called with write_through=True
self.assertTrue(write_called)
self.assertEqual(rawio.getvalue(), b"") # no flush
write_called = [] # reset
textio.write(text * 10) # total content is larger than bufio buffer
self.assertTrue(write_called)
self.assertEqual(rawio.getvalue(), data * 11) # all flushed
def test_reconfigure_write_through(self):
raw = self.MockRawIO([])
t = self.TextIOWrapper(raw, encoding='ascii', newline='\n')
t.write('1')
t.reconfigure(write_through=True) # implied flush
self.assertEqual(t.write_through, True)
self.assertEqual(b''.join(raw._write_stack), b'1')
t.write('23')
self.assertEqual(b''.join(raw._write_stack), b'123')
t.reconfigure(write_through=False)
self.assertEqual(t.write_through, False)
t.write('45')
t.flush()
self.assertEqual(b''.join(raw._write_stack), b'12345')
# Keeping default value
t.reconfigure()
t.reconfigure(write_through=None)
self.assertEqual(t.write_through, False)
t.reconfigure(write_through=True)
t.reconfigure()
t.reconfigure(write_through=None)
self.assertEqual(t.write_through, True)
def test_read_nonbytes(self):
# Issue #17106
# Crash when underlying read() returns non-bytes
t = self.TextIOWrapper(self.StringIO('a'), encoding="utf-8")
self.assertRaises(TypeError, t.read, 1)
t = self.TextIOWrapper(self.StringIO('a'), encoding="utf-8")
self.assertRaises(TypeError, t.readline)
t = self.TextIOWrapper(self.StringIO('a'), encoding="utf-8")
self.assertRaises(TypeError, t.read)
def test_illegal_encoder(self):
# Issue 31271: Calling write() while the return value of encoder's
# encode() is invalid shouldn't cause an assertion failure.
rot13 = codecs.lookup("rot13")
with support.swap_attr(rot13, '_is_text_encoding', True):
t = self.TextIOWrapper(self.BytesIO(b'foo'), encoding="rot13")
self.assertRaises(TypeError, t.write, 'bar')
def test_illegal_decoder(self):
# Issue #17106
# Bypass the early encoding check added in issue 20404
def _make_illegal_wrapper():
quopri = codecs.lookup("quopri")
quopri._is_text_encoding = True
try:
t = self.TextIOWrapper(self.BytesIO(b'aaaaaa'),
newline='\n', encoding="quopri")
finally:
quopri._is_text_encoding = False
return t
# Crash when decoder returns non-string
t = _make_illegal_wrapper()
self.assertRaises(TypeError, t.read, 1)
t = _make_illegal_wrapper()
self.assertRaises(TypeError, t.readline)
t = _make_illegal_wrapper()
self.assertRaises(TypeError, t.read)
# Issue 31243: calling read() while the return value of decoder's
# getstate() is invalid should neither crash the interpreter nor
# raise a SystemError.
def _make_very_illegal_wrapper(getstate_ret_val):
class BadDecoder:
def getstate(self):
return getstate_ret_val
def _get_bad_decoder(dummy):
return BadDecoder()
quopri = codecs.lookup("quopri")
with support.swap_attr(quopri, 'incrementaldecoder',
_get_bad_decoder):
return _make_illegal_wrapper()
t = _make_very_illegal_wrapper(42)
self.assertRaises(TypeError, t.read, 42)
t = _make_very_illegal_wrapper(())
self.assertRaises(TypeError, t.read, 42)
t = _make_very_illegal_wrapper((1, 2))
self.assertRaises(TypeError, t.read, 42)
def _check_create_at_shutdown(self, **kwargs):
# Issue #20037: creating a TextIOWrapper at shutdown
# shouldn't crash the interpreter.
iomod = self.io.__name__
code = """if 1:
import codecs
import {iomod} as io
# Avoid looking up codecs at shutdown
codecs.lookup('utf-8')
class C:
def __del__(self):
io.TextIOWrapper(io.BytesIO(), **{kwargs})
print("ok")
c = C()
""".format(iomod=iomod, kwargs=kwargs)
return assert_python_ok("-c", code)
def test_create_at_shutdown_without_encoding(self):
rc, out, err = self._check_create_at_shutdown()
if err:
# Can error out with a RuntimeError if the module state
# isn't found.
self.assertIn(self.shutdown_error, err.decode())
else:
self.assertEqual("ok", out.decode().strip())
def test_create_at_shutdown_with_encoding(self):
rc, out, err = self._check_create_at_shutdown(encoding='utf-8',
errors='strict')
self.assertFalse(err)
self.assertEqual("ok", out.decode().strip())
def test_read_byteslike(self):
r = MemviewBytesIO(b'Just some random string\n')
t = self.TextIOWrapper(r, 'utf-8')
# TextIOwrapper will not read the full string, because
# we truncate it to a multiple of the native int size
# so that we can construct a more complex memoryview.
bytes_val = _to_memoryview(r.getvalue()).tobytes()
self.assertEqual(t.read(200), bytes_val.decode('utf-8'))
def test_issue22849(self):
class F(object):
def readable(self): return True
def writable(self): return True
def seekable(self): return True
for i in range(10):
try:
self.TextIOWrapper(F(), encoding='utf-8')
except Exception:
pass
F.tell = lambda x: 0
t = self.TextIOWrapper(F(), encoding='utf-8')
def test_reconfigure_locale(self):
wrapper = self.TextIOWrapper(self.BytesIO(b"test"))
wrapper.reconfigure(encoding="locale")
def test_reconfigure_encoding_read(self):
# latin1 -> utf8
# (latin1 can decode utf-8 encoded string)
data = 'abc\xe9\n'.encode('latin1') + 'd\xe9f\n'.encode('utf8')
raw = self.BytesIO(data)
txt = self.TextIOWrapper(raw, encoding='latin1', newline='\n')
self.assertEqual(txt.readline(), 'abc\xe9\n')
with self.assertRaises(self.UnsupportedOperation):
txt.reconfigure(encoding='utf-8')
with self.assertRaises(self.UnsupportedOperation):
txt.reconfigure(newline=None)
def test_reconfigure_write_fromascii(self):
# ascii has a specific encodefunc in the C implementation,
# but utf-8-sig has not. Make sure that we get rid of the
# cached encodefunc when we switch encoders.
raw = self.BytesIO()
txt = self.TextIOWrapper(raw, encoding='ascii', newline='\n')
txt.write('foo\n')
txt.reconfigure(encoding='utf-8-sig')
txt.write('\xe9\n')
txt.flush()
self.assertEqual(raw.getvalue(), b'foo\n\xc3\xa9\n')
def test_reconfigure_write(self):
# latin -> utf8
raw = self.BytesIO()
txt = self.TextIOWrapper(raw, encoding='latin1', newline='\n')
txt.write('abc\xe9\n')
txt.reconfigure(encoding='utf-8')
self.assertEqual(raw.getvalue(), b'abc\xe9\n')
txt.write('d\xe9f\n')
txt.flush()
self.assertEqual(raw.getvalue(), b'abc\xe9\nd\xc3\xa9f\n')
# ascii -> utf-8-sig: ensure that no BOM is written in the middle of
# the file
raw = self.BytesIO()
txt = self.TextIOWrapper(raw, encoding='ascii', newline='\n')
txt.write('abc\n')
txt.reconfigure(encoding='utf-8-sig')
txt.write('d\xe9f\n')
txt.flush()
self.assertEqual(raw.getvalue(), b'abc\nd\xc3\xa9f\n')
def test_reconfigure_write_non_seekable(self):
raw = self.BytesIO()
raw.seekable = lambda: False
raw.seek = None
txt = self.TextIOWrapper(raw, encoding='ascii', newline='\n')
txt.write('abc\n')
txt.reconfigure(encoding='utf-8-sig')
txt.write('d\xe9f\n')
txt.flush()
# If the raw stream is not seekable, there'll be a BOM
self.assertEqual(raw.getvalue(), b'abc\n\xef\xbb\xbfd\xc3\xa9f\n')
def test_reconfigure_defaults(self):
txt = self.TextIOWrapper(self.BytesIO(), 'ascii', 'replace', '\n')
txt.reconfigure(encoding=None)
self.assertEqual(txt.encoding, 'ascii')
self.assertEqual(txt.errors, 'replace')
txt.write('LF\n')
txt.reconfigure(newline='\r\n')
self.assertEqual(txt.encoding, 'ascii')
self.assertEqual(txt.errors, 'replace')
txt.reconfigure(errors='ignore')
self.assertEqual(txt.encoding, 'ascii')
self.assertEqual(txt.errors, 'ignore')
txt.write('CRLF\n')
txt.reconfigure(encoding='utf-8', newline=None)
self.assertEqual(txt.errors, 'strict')
txt.seek(0)
self.assertEqual(txt.read(), 'LF\nCRLF\n')
self.assertEqual(txt.detach().getvalue(), b'LF\nCRLF\r\n')
def test_reconfigure_errors(self):
txt = self.TextIOWrapper(self.BytesIO(), 'ascii', 'replace', '\r')
with self.assertRaises(TypeError): # there was a crash
txt.reconfigure(encoding=42)
if self.is_C:
with self.assertRaises(UnicodeEncodeError):
txt.reconfigure(encoding='\udcfe')
with self.assertRaises(LookupError):
txt.reconfigure(encoding='locale\0')
# TODO: txt.reconfigure(encoding='utf-8\0')
# TODO: txt.reconfigure(encoding='nonexisting')
with self.assertRaises(TypeError):
txt.reconfigure(errors=42)
if self.is_C:
with self.assertRaises(UnicodeEncodeError):
txt.reconfigure(errors='\udcfe')
# TODO: txt.reconfigure(errors='ignore\0')
# TODO: txt.reconfigure(errors='nonexisting')
with self.assertRaises(TypeError):
txt.reconfigure(newline=42)
with self.assertRaises(ValueError):
txt.reconfigure(newline='\udcfe')
with self.assertRaises(ValueError):
txt.reconfigure(newline='xyz')
if not self.is_C:
# TODO: Should fail in C too.
with self.assertRaises(ValueError):
txt.reconfigure(newline='\n\0')
if self.is_C:
# TODO: Use __bool__(), not __index__().
with self.assertRaises(ZeroDivisionError):
txt.reconfigure(line_buffering=BadIndex())
with self.assertRaises(OverflowError):
txt.reconfigure(line_buffering=2**1000)
with self.assertRaises(ZeroDivisionError):
txt.reconfigure(write_through=BadIndex())
with self.assertRaises(OverflowError):
txt.reconfigure(write_through=2**1000)
with self.assertRaises(ZeroDivisionError): # there was a crash
txt.reconfigure(line_buffering=BadIndex(),
write_through=BadIndex())
self.assertEqual(txt.encoding, 'ascii')
self.assertEqual(txt.errors, 'replace')
self.assertIs(txt.line_buffering, False)
self.assertIs(txt.write_through, False)
txt.reconfigure(encoding='latin1', errors='ignore', newline='\r\n',
line_buffering=True, write_through=True)
self.assertEqual(txt.encoding, 'latin1')
self.assertEqual(txt.errors, 'ignore')
self.assertIs(txt.line_buffering, True)
self.assertIs(txt.write_through, True)
def test_reconfigure_newline(self):
raw = self.BytesIO(b'CR\rEOF')
txt = self.TextIOWrapper(raw, 'ascii', newline='\n')
txt.reconfigure(newline=None)
self.assertEqual(txt.readline(), 'CR\n')
raw = self.BytesIO(b'CR\rEOF')
txt = self.TextIOWrapper(raw, 'ascii', newline='\n')
txt.reconfigure(newline='')
self.assertEqual(txt.readline(), 'CR\r')
raw = self.BytesIO(b'CR\rLF\nEOF')
txt = self.TextIOWrapper(raw, 'ascii', newline='\r')
txt.reconfigure(newline='\n')
self.assertEqual(txt.readline(), 'CR\rLF\n')
raw = self.BytesIO(b'LF\nCR\rEOF')
txt = self.TextIOWrapper(raw, 'ascii', newline='\n')
txt.reconfigure(newline='\r')
self.assertEqual(txt.readline(), 'LF\nCR\r')
raw = self.BytesIO(b'CR\rCRLF\r\nEOF')
txt = self.TextIOWrapper(raw, 'ascii', newline='\r')
txt.reconfigure(newline='\r\n')
self.assertEqual(txt.readline(), 'CR\rCRLF\r\n')
txt = self.TextIOWrapper(self.BytesIO(), 'ascii', newline='\r')
txt.reconfigure(newline=None)
txt.write('linesep\n')
txt.reconfigure(newline='')
txt.write('LF\n')
txt.reconfigure(newline='\n')
txt.write('LF\n')
txt.reconfigure(newline='\r')
txt.write('CR\n')
txt.reconfigure(newline='\r\n')
txt.write('CRLF\n')
expected = 'linesep' + os.linesep + 'LF\nLF\nCR\rCRLF\r\n'
self.assertEqual(txt.detach().getvalue().decode('ascii'), expected)
def test_issue25862(self):
# Assertion failures occurred in tell() after read() and write().
t = self.TextIOWrapper(self.BytesIO(b'test'), encoding='ascii')
t.read(1)
t.read()
t.tell()
t = self.TextIOWrapper(self.BytesIO(b'test'), encoding='ascii')
t.read(1)
t.write('x')
t.tell()
def test_issue35928(self):
p = self.BufferedRWPair(self.BytesIO(b'foo\nbar\n'), self.BytesIO())
f = self.TextIOWrapper(p)
res = f.readline()
self.assertEqual(res, 'foo\n')
f.write(res)
self.assertEqual(res + f.readline(), 'foo\nbar\n')
def test_pickling_subclass(self):
global MyTextIO
class MyTextIO(self.TextIOWrapper):
def __init__(self, raw, tag):
super().__init__(raw)
self.tag = tag
def __getstate__(self):
return self.tag, self.buffer.getvalue()
def __setstate__(slf, state):
tag, value = state
slf.__init__(self.BytesIO(value), tag)
raw = self.BytesIO(b'data')
txt = MyTextIO(raw, 'ham')
for proto in range(pickle.HIGHEST_PROTOCOL + 1):
with self.subTest(protocol=proto):
pickled = pickle.dumps(txt, proto)
newtxt = pickle.loads(pickled)
self.assertEqual(newtxt.buffer.getvalue(), b'data')
self.assertEqual(newtxt.tag, 'ham')
del MyTextIO
@unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()")
def test_read_non_blocking(self):
import os
r, w = os.pipe()
try:
os.set_blocking(r, False)
with self.io.open(r, 'rt') as textfile:
r = None
# Nothing has been written so a non-blocking read raises a BlockingIOError exception.
with self.assertRaises(BlockingIOError):
textfile.read()
finally:
if r is not None:
os.close(r)
os.close(w)
class MemviewBytesIO(io.BytesIO):
'''A BytesIO object whose read method returns memoryviews
rather than bytes'''
def read1(self, len_):
return _to_memoryview(super().read1(len_))
def read(self, len_):
return _to_memoryview(super().read(len_))
def _to_memoryview(buf):
'''Convert bytes-object *buf* to a non-trivial memoryview'''
arr = array.array('i')
idx = len(buf) - len(buf) % arr.itemsize
arr.frombytes(buf[:idx])
return memoryview(arr)
class CTextIOWrapperTest(TextIOWrapperTest, CTestCase):
shutdown_error = "LookupError: unknown encoding: ascii"
def test_initialization(self):
r = self.BytesIO(b"\xc3\xa9\n\n")
b = self.BufferedReader(r, 1000)
t = self.TextIOWrapper(b, encoding="utf-8")
self.assertRaises(ValueError, t.__init__, b, encoding="utf-8", newline='xyzzy')
self.assertRaises(ValueError, t.read)
t = self.TextIOWrapper.__new__(self.TextIOWrapper)
self.assertRaises(Exception, repr, t)
def test_garbage_collection(self):
# C TextIOWrapper objects are collected, and collecting them flushes
# all data to disk.
# The Python version has __del__, so it ends in gc.garbage instead.
with warnings.catch_warnings():
warnings.simplefilter("ignore", ResourceWarning)
rawio = self.FileIO(os_helper.TESTFN, "wb")
b = self.BufferedWriter(rawio)
t = self.TextIOWrapper(b, encoding="ascii")
t.write("456def")
t.x = t
wr = weakref.ref(t)
del t
support.gc_collect()
self.assertIsNone(wr(), wr)
with self.open(os_helper.TESTFN, "rb") as f:
self.assertEqual(f.read(), b"456def")
def test_rwpair_cleared_before_textio(self):
# Issue 13070: TextIOWrapper's finalization would crash when called
# after the reference to the underlying BufferedRWPair's writer got
# cleared by the GC.
for i in range(1000):
b1 = self.BufferedRWPair(self.MockRawIO(), self.MockRawIO())
t1 = self.TextIOWrapper(b1, encoding="ascii")
b2 = self.BufferedRWPair(self.MockRawIO(), self.MockRawIO())
t2 = self.TextIOWrapper(b2, encoding="ascii")
# circular references
t1.buddy = t2
t2.buddy = t1
support.gc_collect()
def test_del__CHUNK_SIZE_SystemError(self):
t = self.TextIOWrapper(self.BytesIO(), encoding='ascii')
with self.assertRaises(AttributeError):
del t._CHUNK_SIZE
def test_internal_buffer_size(self):
# bpo-43260: TextIOWrapper's internal buffer should not store
# data larger than chunk size.
chunk_size = 8192 # default chunk size, updated later
class MockIO(self.MockRawIO):
def write(self, data):
if len(data) > chunk_size:
raise RuntimeError
return super().write(data)
buf = MockIO()
t = self.TextIOWrapper(buf, encoding="ascii")
chunk_size = t._CHUNK_SIZE
t.write("abc")
t.write("def")
# default chunk size is 8192 bytes so t don't write data to buf.
self.assertEqual([], buf._write_stack)
with self.assertRaises(RuntimeError):
t.write("x"*(chunk_size+1))
self.assertEqual([b"abcdef"], buf._write_stack)
t.write("ghi")
t.write("x"*chunk_size)
self.assertEqual([b"abcdef", b"ghi", b"x"*chunk_size], buf._write_stack)
def test_issue119506(self):
chunk_size = 8192
class MockIO(self.MockRawIO):
written = False
def write(self, data):
if not self.written:
self.written = True
t.write("middle")
return super().write(data)
buf = MockIO()
t = self.TextIOWrapper(buf)
t.write("abc")
t.write("def")
# writing data which size >= chunk_size cause flushing buffer before write.
t.write("g" * chunk_size)
t.flush()
self.assertEqual([b"abcdef", b"middle", b"g"*chunk_size],
buf._write_stack)
def test_issue142594(self):
wrapper = None
detached = False
class ReentrantRawIO(self.RawIOBase):
@property
def closed(self):
nonlocal detached
if wrapper is not None and not detached:
detached = True
wrapper.detach()
return False
raw = ReentrantRawIO()
wrapper = self.TextIOWrapper(raw)
wrapper.close() # should not crash
class PyTextIOWrapperTest(TextIOWrapperTest, PyTestCase):
shutdown_error = "LookupError: unknown encoding: ascii"
class IncrementalNewlineDecoderTest:
def check_newline_decoding_utf8(self, decoder):
# UTF-8 specific tests for a newline decoder
def _check_decode(b, s, **kwargs):
# We exercise getstate() / setstate() as well as decode()
state = decoder.getstate()
self.assertEqual(decoder.decode(b, **kwargs), s)
decoder.setstate(state)
self.assertEqual(decoder.decode(b, **kwargs), s)
_check_decode(b'\xe8\xa2\x88', "\u8888")
_check_decode(b'\xe8', "")
_check_decode(b'\xa2', "")
_check_decode(b'\x88', "\u8888")
_check_decode(b'\xe8', "")
_check_decode(b'\xa2', "")
_check_decode(b'\x88', "\u8888")
_check_decode(b'\xe8', "")
self.assertRaises(UnicodeDecodeError, decoder.decode, b'', final=True)
decoder.reset()
_check_decode(b'\n', "\n")
_check_decode(b'\r', "")
_check_decode(b'', "\n", final=True)
_check_decode(b'\r', "\n", final=True)
_check_decode(b'\r', "")
_check_decode(b'a', "\na")
_check_decode(b'\r\r\n', "\n\n")
_check_decode(b'\r', "")
_check_decode(b'\r', "\n")
_check_decode(b'\na', "\na")
_check_decode(b'\xe8\xa2\x88\r\n', "\u8888\n")
_check_decode(b'\xe8\xa2\x88', "\u8888")
_check_decode(b'\n', "\n")
_check_decode(b'\xe8\xa2\x88\r', "\u8888")
_check_decode(b'\n', "\n")
def check_newline_decoding(self, decoder, encoding):
result = []
if encoding is not None:
encoder = codecs.getincrementalencoder(encoding)()
def _decode_bytewise(s):
# Decode one byte at a time
for b in encoder.encode(s):
result.append(decoder.decode(bytes([b])))
else:
encoder = None
def _decode_bytewise(s):
# Decode one char at a time
for c in s:
result.append(decoder.decode(c))
self.assertEqual(decoder.newlines, None)
_decode_bytewise("abc\n\r")
self.assertEqual(decoder.newlines, '\n')
_decode_bytewise("\nabc")
self.assertEqual(decoder.newlines, ('\n', '\r\n'))
_decode_bytewise("abc\r")
self.assertEqual(decoder.newlines, ('\n', '\r\n'))
_decode_bytewise("abc")
self.assertEqual(decoder.newlines, ('\r', '\n', '\r\n'))
_decode_bytewise("abc\r")
self.assertEqual("".join(result), "abc\n\nabcabc\nabcabc")
decoder.reset()
input = "abc"
if encoder is not None:
encoder.reset()
input = encoder.encode(input)
self.assertEqual(decoder.decode(input), "abc")
self.assertEqual(decoder.newlines, None)
def test_newline_decoder(self):
encodings = (
# None meaning the IncrementalNewlineDecoder takes unicode input
# rather than bytes input
None, 'utf-8', 'latin-1',
'utf-16', 'utf-16-le', 'utf-16-be',
'utf-32', 'utf-32-le', 'utf-32-be',
)
for enc in encodings:
decoder = enc and codecs.getincrementaldecoder(enc)()
decoder = self.IncrementalNewlineDecoder(decoder, translate=True)
self.check_newline_decoding(decoder, enc)
decoder = codecs.getincrementaldecoder("utf-8")()
decoder = self.IncrementalNewlineDecoder(decoder, translate=True)
self.check_newline_decoding_utf8(decoder)
self.assertRaises(TypeError, decoder.setstate, 42)
def test_newline_bytes(self):
# Issue 5433: Excessive optimization in IncrementalNewlineDecoder
def _check(dec):
self.assertEqual(dec.newlines, None)
self.assertEqual(dec.decode("\u0D00"), "\u0D00")
self.assertEqual(dec.newlines, None)
self.assertEqual(dec.decode("\u0A00"), "\u0A00")
self.assertEqual(dec.newlines, None)
dec = self.IncrementalNewlineDecoder(None, translate=False)
_check(dec)
dec = self.IncrementalNewlineDecoder(None, translate=True)
_check(dec)
def test_translate(self):
# issue 35062
for translate in (-2, -1, 1, 2):
decoder = codecs.getincrementaldecoder("utf-8")()
decoder = self.IncrementalNewlineDecoder(decoder, translate)
self.check_newline_decoding_utf8(decoder)
decoder = codecs.getincrementaldecoder("utf-8")()
decoder = self.IncrementalNewlineDecoder(decoder, translate=0)
self.assertEqual(decoder.decode(b"\r\r\n"), "\r\r\n")
class CIncrementalNewlineDecoderTest(IncrementalNewlineDecoderTest, unittest.TestCase):
IncrementalNewlineDecoder = io.IncrementalNewlineDecoder
@support.cpython_only
def test_uninitialized(self):
uninitialized = self.IncrementalNewlineDecoder.__new__(
self.IncrementalNewlineDecoder)
self.assertRaises(ValueError, uninitialized.decode, b'bar')
self.assertRaises(ValueError, uninitialized.getstate)
self.assertRaises(ValueError, uninitialized.setstate, (b'foo', 0))
self.assertRaises(ValueError, uninitialized.reset)
class PyIncrementalNewlineDecoderTest(IncrementalNewlineDecoderTest, unittest.TestCase):
IncrementalNewlineDecoder = pyio.IncrementalNewlineDecoder | python | github | https://github.com/python/cpython | Lib/test/test_io/test_textio.py |
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import sys
import codecs
# we do this first, since BitTorrent/__init__.py installs a stderr proxy.
# py2exe'd Blackholes don't have encoding
encoding = getattr(sys.stdout, "encoding", None)
# and sometimes encoding is None anyway
if encoding is not None:
stdout_writer = codecs.getwriter(encoding)
    # don't fail if we can't write a value in the stdout encoding
sys.stdout = stdout_writer(sys.stdout, 'replace')
stderr_writer = codecs.getwriter(encoding)
sys.stderr = stderr_writer(sys.stderr, 'replace')
from BitTorrent.platform import install_translation
install_translation(unicode=True)
_ = _ # not a typo | unknown | codeparrot/codeparrot-clean | ||
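The stream-wrapping above can be exercised in isolation. This is a minimal sketch, using an in-memory byte stream in place of a real console, of how `codecs.getwriter` with the `'replace'` error handler swallows characters the target encoding cannot represent:

```python
import codecs
import io

# Wrap a byte stream the same way the script wraps sys.stdout:
# un-encodable characters are replaced instead of raising UnicodeEncodeError.
buf = io.BytesIO()
writer = codecs.getwriter('ascii')(buf, 'replace')
writer.write(u'caf\xe9')   # u'\xe9' has no ASCII encoding
print(buf.getvalue())      # the accented character comes out as '?'
```

The same pattern applies to any encoding reported by `sys.stdout.encoding`; `'replace'` trades fidelity for never crashing on output.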
#!/usr/bin/python2
#Dumps the memory of a seagate hd using the jumper pin serial interface
import serial
import sys,os,re,argparse
import time
from wgetstyle import *
debug=0
#automagically set to 115200 baud
fast=1
#1 is slow, 0 is fastest, 0.1 is the sweet spot
timeout=0.2
benchmark=1
writing=0
try:
device=sys.argv[1]
dumpfile=sys.argv[2]
baud=sys.argv[3]
memf=open(dumpfile,'r+')
fileend=len(memf.read())
except:
print 'Usage:'
print sys.argv[0]+' device dumpfile baud\n'
print 'Default baud should be 38400 maximum is 115200\n'
quit()
def send(ser,command):
ser.write(command+"\n")
inco=""
while 1:
try:
arr=ser.readline()
if arr!="":
inco=inco+arr
else:
if debug>=2:
print inco
modus=re.findall('F3 (.)>',inco)
break
#print 'Next command'
except:
print 'Exception! (in send)'
if writing==1:
memf.close()
break
return inco,modus
def get_modus(ser):
inco,modus=send(ser,"")
return modus[0]
def set_baud(ser,baud):
modus=get_modus(ser)
print 'Setting baud to '+str(baud)
if modus!="T":
print 'Changing mode to T'
send(ser,"/T")
send(ser,"B"+str(baud))
ser = serial.Serial(port=device, baudrate=baud, bytesize=8,parity='N',stopbits=1,timeout=timeout)
send(ser,"/"+modus)
return ser
def init(device,baud,fast=fast):
ser = serial.Serial(port=device, baudrate=baud, bytesize=8,parity='N',stopbits=1,timeout=timeout)
#Initialize the command line
# print ser.inWaiting()
# print dir(ser)
# if send(ser,"\n")[1]==[]:
# print 'Got no modus bad'
# quit()
print send(ser,"\n")
print send(ser,"\x1A")[1]
if baud=="38400" and fast==1:
baud=115200
try:
set_baud(ser,baud)
ser = serial.Serial(port=device, baudrate=baud, bytesize=8,parity='N',stopbits=1,timeout=timeout)
except:
        print 'You probably already are on 115200 baud'
send(ser,"/1")
return ser
def parse(buff):
hex=""
bin=""
    fooR=re.compile(r'[0-9A-F]{8}\s+(.+)\r')
parsed=fooR.findall(buff)
for line in parsed:
hex=hex+re.sub(' ','',line)
try:
bin=hex.decode("hex")
except:
print 'Parse failed'
return hex,bin
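The address-stripping done by `parse()` can be sketched standalone. The dump line below is hypothetical, and `binascii.unhexlify` stands in for the Python 2 `str.decode("hex")` used above:

```python
import re
from binascii import unhexlify

# Each dump line is "AAAAAAAA  xx xx xx ...\r": drop the 8-hex-digit
# address, join the byte columns, then turn the hex string into raw bytes.
dump = "0000F000  DE AD BE EF\r\n0000F004  CA FE\r\n"
line_re = re.compile(r'[0-9A-F]{8}\s+(.+)\r')
hexstr = ''.join(line.replace(' ', '') for line in line_re.findall(dump))
print(hexstr)              # DEADBEEFCAFE
print(unhexlify(hexstr))   # the six raw bytes
```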
def display_buffer(ser,num):
#num xxxx
foo,bar=send(ser,'B'+str(num))
return parse(foo)
def display_memory(ser,num1,num2,looped=0):
    #num1 xx, num2 yyyy
    if debug>=1:
        print 'D'+str(num1)+","+str(num2)+" - "
    foo,bar=send(ser,'D'+str(num1)+","+str(num2))
    parsed=parse(foo)
    if len(parsed[1])==0:
        print 'Got nothing trying again :/'
        if looped>10:
            print "Seems like we're stuck - quitting"
            memf.close()
            quit()
        # thread the retry count through the recursion so the guard can fire
        parsed=display_memory(ser,num1,num2,looped+1)
if len(parsed[1])!=512:
        print 'Got the wrong size (expected 512 bytes) - retrying'
parsed=display_memory(ser,num1,num2)
return parsed
def dump_memory(ser,dumpfile):
    global writing  # module-level flag checked by send()'s exception handler
    writing=1
k=0
total=(64*128*512)/1024.0
stime=time.time() #start time
print 'Continuing memory dump'
for j in range(0,64):
for i in range(0,128):
k=k+1
zz=time.time()
if k>fileend/0x200:
mem=display_memory(ser,hex(j)[2:],hex(i*0x200)[2:])[1]
if benchmark==1:
size=(k*512)/1024.0
# speed=round(512/(time.time()-zz),2)
percentage=round(100.0/total*size,2)
minleft=round((time.time()-stime)/k*(247*128-k)/60,2)
if debug==0:
progress_bar(time.time()-stime,size*1024,total*1024)
elif debug>0:
print 'time:'+str(time.time()-stime)
print 'size:'+str(size)
print 'total:'+str(total)
if k>fileend/0x200:
memf.write(mem)
memf.close()
writing=0
def dump_buffer(ser,dumpfile):
    global writing
    writing=1
k=0
total=(65535*512)/1024.0
stime=time.time() #start time
for i in range(0,65535):
k=k+1
zz=time.time()
if k>fileend/0x200:
mem=display_buffer(ser,hex(i)[2:])[1]
size=(k*512)/1024.0
if benchmark==1:
progress_bar(time.time()-stime,size*1024,total*1024)
if k>fileend/0x200:
memf.write(mem)
memf.close()
writing=0
ser=init(device,baud)
#print send(ser,'')
#print display_buffer(ser,00)[0]
try:
modus=get_modus(ser)
except:
print "Couldn't even get the modus - quitting"
if debug<2:
print 'Try it using debug=2'
quit()
if modus!="1":
    print "Something's not right here"
quit()
#print display_memory(ser,'15','C000')
dump_buffer(ser,dumpfile)
#dump_buffer(ser,dumpfile)
print | unknown | codeparrot/codeparrot-clean | ||
# Copyright (c) 2012 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
CellState Manager
"""
import copy
import datetime
import functools
import time
from oslo.config import cfg
from oslo.db import exception as db_exc
from nova.cells import rpc_driver
from nova import context
from nova.db import base
from nova import exception
from nova.i18n import _
from nova.openstack.common import fileutils
from nova.openstack.common import jsonutils
from nova.openstack.common import log as logging
from nova.openstack.common import timeutils
from nova.openstack.common import units
from nova import rpc
from nova import utils
cell_state_manager_opts = [
cfg.IntOpt('db_check_interval',
default=60,
help='Interval, in seconds, for getting fresh cell '
'information from the database.'),
cfg.StrOpt('cells_config',
help='Configuration file from which to read cells '
'configuration. If given, overrides reading cells '
'from the database.'),
]
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('name', 'nova.cells.opts', group='cells')
CONF.import_opt('reserve_percent', 'nova.cells.opts', group='cells')
CONF.import_opt('mute_child_interval', 'nova.cells.opts', group='cells')
CONF.register_opts(cell_state_manager_opts, group='cells')
class CellState(object):
"""Holds information for a particular cell."""
def __init__(self, cell_name, is_me=False):
self.name = cell_name
self.is_me = is_me
self.last_seen = datetime.datetime.min
self.capabilities = {}
self.capacities = {}
self.db_info = {}
# TODO(comstud): The DB will specify the driver to use to talk
# to this cell, but there's no column for this yet. The only
# available driver is the rpc driver.
self.driver = rpc_driver.CellsRPCDriver()
def update_db_info(self, cell_db_info):
"""Update cell credentials from db."""
self.db_info = dict(
[(k, v) for k, v in cell_db_info.iteritems()
if k != 'name'])
def update_capabilities(self, cell_metadata):
"""Update cell capabilities for a cell."""
self.last_seen = timeutils.utcnow()
self.capabilities = cell_metadata
def update_capacities(self, capacities):
"""Update capacity information for a cell."""
self.last_seen = timeutils.utcnow()
self.capacities = capacities
def get_cell_info(self):
"""Return subset of cell information for OS API use."""
db_fields_to_return = ['is_parent', 'weight_scale', 'weight_offset']
url_fields_to_return = {
'username': 'username',
'hostname': 'rpc_host',
'port': 'rpc_port',
}
cell_info = dict(name=self.name, capabilities=self.capabilities)
if self.db_info:
for field in db_fields_to_return:
cell_info[field] = self.db_info[field]
url = rpc.get_transport_url(self.db_info['transport_url'])
if url.hosts:
for field, canonical in url_fields_to_return.items():
cell_info[canonical] = getattr(url.hosts[0], field)
return cell_info
def send_message(self, message):
"""Send a message to a cell. Just forward this to the driver,
passing ourselves and the message as arguments.
"""
self.driver.send_message_to_cell(self, message)
def __repr__(self):
me = "me" if self.is_me else "not_me"
return "Cell '%s' (%s)" % (self.name, me)
def sync_before(f):
"""Use as a decorator to wrap methods that use cell information to
make sure they sync the latest information from the DB periodically.
"""
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
self._cell_data_sync()
return f(self, *args, **kwargs)
return wrapper
def sync_after(f):
"""Use as a decorator to wrap methods that update cell information
in the database to make sure the data is synchronized immediately.
"""
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
result = f(self, *args, **kwargs)
self._cell_data_sync(force=True)
return result
return wrapper
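The paired decorators above can be demonstrated with a toy class; `Demo` here is hypothetical and only counts sync calls, standing in for the real database refresh:

```python
import functools

def sync_before(f):
    # Same shape as the decorator above: refresh state, then run the method.
    @functools.wraps(f)
    def wrapper(self, *args, **kwargs):
        self._cell_data_sync()
        return f(self, *args, **kwargs)
    return wrapper

class Demo(object):
    def __init__(self):
        self.syncs = 0

    def _cell_data_sync(self, force=False):
        self.syncs += 1

    @sync_before
    def list_cells(self):
        return ['parent', 'child']

d = Demo()
d.list_cells()
d.list_cells()
print(d.syncs)  # 2 -- one sync per decorated call
```

`functools.wraps` keeps the wrapped method's name and docstring, which matters when many methods in one class are decorated this way.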
_unset = object()
class CellStateManager(base.Base):
def __new__(cls, cell_state_cls=None, cells_config=_unset):
if cls is not CellStateManager:
return super(CellStateManager, cls).__new__(cls)
if cells_config is _unset:
cells_config = CONF.cells.cells_config
if cells_config:
return CellStateManagerFile(cell_state_cls)
return CellStateManagerDB(cell_state_cls)
def __init__(self, cell_state_cls=None):
super(CellStateManager, self).__init__()
if not cell_state_cls:
cell_state_cls = CellState
self.cell_state_cls = cell_state_cls
self.my_cell_state = cell_state_cls(CONF.cells.name, is_me=True)
self.parent_cells = {}
self.child_cells = {}
self.last_cell_db_check = datetime.datetime.min
attempts = 0
while True:
try:
self._cell_data_sync(force=True)
break
except db_exc.DBError as e:
attempts += 1
if attempts > 120:
raise
LOG.exception(_('DB error: %s') % e)
time.sleep(30)
my_cell_capabs = {}
for cap in CONF.cells.capabilities:
name, value = cap.split('=', 1)
if ';' in value:
values = set(value.split(';'))
else:
values = set([value])
my_cell_capabs[name] = values
self.my_cell_state.update_capabilities(my_cell_capabs)
def _refresh_cells_from_dict(self, db_cells_dict):
"""Make our cell info map match the db."""
# Update current cells. Delete ones that disappeared
for cells_dict in (self.parent_cells, self.child_cells):
for cell_name, cell_info in cells_dict.items():
is_parent = cell_info.db_info['is_parent']
db_dict = db_cells_dict.get(cell_name)
if db_dict and is_parent == db_dict['is_parent']:
cell_info.update_db_info(db_dict)
else:
del cells_dict[cell_name]
# Add new cells
for cell_name, db_info in db_cells_dict.items():
if db_info['is_parent']:
cells_dict = self.parent_cells
else:
cells_dict = self.child_cells
if cell_name not in cells_dict:
cells_dict[cell_name] = self.cell_state_cls(cell_name)
cells_dict[cell_name].update_db_info(db_info)
def _time_to_sync(self):
"""Is it time to sync the DB against our memory cache?"""
diff = timeutils.utcnow() - self.last_cell_db_check
return diff.seconds >= CONF.cells.db_check_interval
def _update_our_capacity(self, ctxt=None):
"""Update our capacity in the self.my_cell_state CellState.
This will add/update 2 entries in our CellState.capacities,
'ram_free' and 'disk_free'.
The values of these are both dictionaries with the following
format:
{'total_mb': <total_memory_free_in_the_cell>,
'units_by_mb': <units_dictionary>}
<units_dictionary> contains the number of units that we can build for
every distinct memory or disk requirement that we have based on
instance types. This number is computed by looking at room available
on every compute_node.
Take the following instance_types as an example:
[{'memory_mb': 1024, 'root_gb': 10, 'ephemeral_gb': 100},
{'memory_mb': 2048, 'root_gb': 20, 'ephemeral_gb': 200}]
capacities['ram_free']['units_by_mb'] would contain the following:
{'1024': <number_of_instances_that_will_fit>,
'2048': <number_of_instances_that_will_fit>}
capacities['disk_free']['units_by_mb'] would contain the following:
{'112640': <number_of_instances_that_will_fit>,
'225280': <number_of_instances_that_will_fit>}
Units are in MB, so 112640 = (10 + 100) * 1024.
NOTE(comstud): Perhaps we should only report a single number
available per instance_type.
"""
if not ctxt:
ctxt = context.get_admin_context()
reserve_level = CONF.cells.reserve_percent / 100.0
compute_hosts = {}
def _get_compute_hosts():
compute_nodes = self.db.compute_node_get_all(ctxt)
for compute in compute_nodes:
service = compute['service']
if not service or service['disabled']:
continue
host = service['host']
compute_hosts[host] = {
'free_ram_mb': compute['free_ram_mb'],
'free_disk_mb': compute['free_disk_gb'] * 1024,
'total_ram_mb': compute['memory_mb'],
'total_disk_mb': compute['local_gb'] * 1024}
_get_compute_hosts()
if not compute_hosts:
self.my_cell_state.update_capacities({})
return
ram_mb_free_units = {}
disk_mb_free_units = {}
total_ram_mb_free = 0
total_disk_mb_free = 0
def _free_units(total, free, per_inst):
if per_inst:
min_free = total * reserve_level
free = max(0, free - min_free)
return int(free / per_inst)
else:
return 0
instance_types = self.db.flavor_get_all(ctxt)
memory_mb_slots = frozenset(
[inst_type['memory_mb'] for inst_type in instance_types])
disk_mb_slots = frozenset(
[(inst_type['root_gb'] + inst_type['ephemeral_gb']) * units.Ki
for inst_type in instance_types])
for compute_values in compute_hosts.values():
total_ram_mb_free += compute_values['free_ram_mb']
total_disk_mb_free += compute_values['free_disk_mb']
for memory_mb_slot in memory_mb_slots:
ram_mb_free_units.setdefault(str(memory_mb_slot), 0)
free_units = _free_units(compute_values['total_ram_mb'],
compute_values['free_ram_mb'], memory_mb_slot)
ram_mb_free_units[str(memory_mb_slot)] += free_units
for disk_mb_slot in disk_mb_slots:
disk_mb_free_units.setdefault(str(disk_mb_slot), 0)
free_units = _free_units(compute_values['total_disk_mb'],
compute_values['free_disk_mb'], disk_mb_slot)
disk_mb_free_units[str(disk_mb_slot)] += free_units
capacities = {'ram_free': {'total_mb': total_ram_mb_free,
'units_by_mb': ram_mb_free_units},
'disk_free': {'total_mb': total_disk_mb_free,
'units_by_mb': disk_mb_free_units}}
self.my_cell_state.update_capacities(capacities)
@sync_before
def get_cell_info_for_neighbors(self):
"""Return cell information for all neighbor cells."""
cell_list = [cell.get_cell_info()
for cell in self.child_cells.itervalues()]
cell_list.extend([cell.get_cell_info()
for cell in self.parent_cells.itervalues()])
return cell_list
@sync_before
def get_my_state(self):
"""Return information for my (this) cell."""
return self.my_cell_state
@sync_before
def get_child_cells(self):
"""Return list of child cell_infos."""
return self.child_cells.values()
@sync_before
def get_parent_cells(self):
"""Return list of parent cell_infos."""
return self.parent_cells.values()
@sync_before
def get_parent_cell(self, cell_name):
return self.parent_cells.get(cell_name)
@sync_before
def get_child_cell(self, cell_name):
return self.child_cells.get(cell_name)
@sync_before
def update_cell_capabilities(self, cell_name, capabilities):
"""Update capabilities for a cell."""
cell = (self.child_cells.get(cell_name) or
self.parent_cells.get(cell_name))
if not cell:
LOG.error(_("Unknown cell '%(cell_name)s' when trying to "
"update capabilities"),
{'cell_name': cell_name})
return
# Make sure capabilities are sets.
for capab_name, values in capabilities.items():
capabilities[capab_name] = set(values)
cell.update_capabilities(capabilities)
@sync_before
def update_cell_capacities(self, cell_name, capacities):
"""Update capacities for a cell."""
cell = (self.child_cells.get(cell_name) or
self.parent_cells.get(cell_name))
if not cell:
LOG.error(_("Unknown cell '%(cell_name)s' when trying to "
"update capacities"),
{'cell_name': cell_name})
return
cell.update_capacities(capacities)
@sync_before
def get_our_capabilities(self, include_children=True):
capabs = copy.deepcopy(self.my_cell_state.capabilities)
if include_children:
for cell in self.child_cells.values():
if timeutils.is_older_than(cell.last_seen,
CONF.cells.mute_child_interval):
continue
for capab_name, values in cell.capabilities.items():
if capab_name not in capabs:
capabs[capab_name] = set([])
capabs[capab_name] |= values
return capabs
def _add_to_dict(self, target, src):
for key, value in src.items():
if isinstance(value, dict):
target.setdefault(key, {})
self._add_to_dict(target[key], value)
continue
target.setdefault(key, 0)
target[key] += value
@sync_before
def get_our_capacities(self, include_children=True):
capacities = copy.deepcopy(self.my_cell_state.capacities)
if include_children:
for cell in self.child_cells.values():
self._add_to_dict(capacities, cell.capacities)
return capacities
@sync_before
def get_capacities(self, cell_name=None):
if not cell_name or cell_name == self.my_cell_state.name:
return self.get_our_capacities()
if cell_name in self.child_cells:
return self.child_cells[cell_name].capacities
raise exception.CellNotFound(cell_name=cell_name)
@sync_before
def cell_get(self, ctxt, cell_name):
for cells_dict in (self.parent_cells, self.child_cells):
if cell_name in cells_dict:
return cells_dict[cell_name]
raise exception.CellNotFound(cell_name=cell_name)
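# The per-flavor free-units computation described in the
# _update_our_capacity docstring can be sketched standalone. This is a
# hedged illustration: the helper name and the concrete numbers are
# hypothetical; only the reserve/floor-division logic mirrors the original.

```python
def free_units(total_mb, free_mb, per_instance_mb, reserve_level=0.0):
    """Instances of size per_instance_mb that fit into free_mb while
    keeping reserve_level * total_mb in reserve (illustrative sketch)."""
    if not per_instance_mb:
        # Zero-sized slot: nothing to count, matching _free_units above.
        return 0
    min_free = total_mb * reserve_level
    usable = max(0, free_mb - min_free)
    return int(usable / per_instance_mb)

# Disk slot for a flavor with root_gb=10, ephemeral_gb=100, in MB:
disk_slot_mb = (10 + 100) * 1024  # 112640
```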
class CellStateManagerDB(CellStateManager):
@utils.synchronized('cell-db-sync')
def _cell_data_sync(self, force=False):
"""Update cell status for all cells from the backing data store
when necessary.
:param force: If True, cell status will be updated regardless
of whether it's time to do so.
"""
if force or self._time_to_sync():
LOG.debug("Updating cell cache from db.")
self.last_cell_db_check = timeutils.utcnow()
ctxt = context.get_admin_context()
db_cells = self.db.cell_get_all(ctxt)
db_cells_dict = dict((cell['name'], cell) for cell in db_cells)
self._refresh_cells_from_dict(db_cells_dict)
self._update_our_capacity(ctxt)
@sync_after
def cell_create(self, ctxt, values):
return self.db.cell_create(ctxt, values)
@sync_after
def cell_update(self, ctxt, cell_name, values):
return self.db.cell_update(ctxt, cell_name, values)
@sync_after
def cell_delete(self, ctxt, cell_name):
return self.db.cell_delete(ctxt, cell_name)
class CellStateManagerFile(CellStateManager):
def __init__(self, cell_state_cls=None):
cells_config = CONF.cells.cells_config
self.cells_config_path = CONF.find_file(cells_config)
if not self.cells_config_path:
raise cfg.ConfigFilesNotFoundError(config_files=[cells_config])
super(CellStateManagerFile, self).__init__(cell_state_cls)
def _cell_data_sync(self, force=False):
"""Update cell status for all cells from the backing data store
when necessary.
:param force: If True, cell status will be updated regardless
of whether it's time to do so.
"""
reloaded, data = fileutils.read_cached_file(self.cells_config_path,
force_reload=force)
if reloaded:
LOG.debug("Updating cell cache from config file.")
self.cells_config_data = jsonutils.loads(data)
self._refresh_cells_from_dict(self.cells_config_data)
if force or self._time_to_sync():
self.last_cell_db_check = timeutils.utcnow()
self._update_our_capacity()
def cell_create(self, ctxt, values):
raise exception.CellsUpdateUnsupported()
def cell_update(self, ctxt, cell_name, values):
raise exception.CellsUpdateUnsupported()
def cell_delete(self, ctxt, cell_name):
raise exception.CellsUpdateUnsupported() | unknown | codeparrot/codeparrot-clean | ||
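# get_our_capacities merges child-cell capacity dicts through the
# recursive _add_to_dict helper above. A minimal standalone sketch of
# that accumulation (plain dicts, illustrative values only):

```python
def add_to_dict(target, src):
    # Recursively accumulate the numeric leaves of `src` into `target`,
    # creating nested dicts on demand (mirrors CellStateManager._add_to_dict).
    for key, value in src.items():
        if isinstance(value, dict):
            target.setdefault(key, {})
            add_to_dict(target[key], value)
            continue
        target.setdefault(key, 0)
        target[key] += value

caps = {'ram_free': {'total_mb': 512, 'units_by_mb': {'1024': 1}}}
add_to_dict(caps, {'ram_free': {'total_mb': 1024, 'units_by_mb': {'1024': 2}}})
# caps now holds the element-wise sums of both capacity dicts.
```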
from script import (
QUERY_TITLES_GET, get_connector, copy_titles, load_config,
update_data, update_many_data, to_chunks
)
from mock import patch
import os
conns = {}
QUERY_TITLES_GET_ORDER = "{} {}".format(
QUERY_TITLES_GET,
'ORDER BY emp_no DESC'
)
def setup_function(function):
conf_db_in, conf_db_out = test_load_config()
conns['in'] = get_connector(conf_db_in)
conns['out'] = get_connector(conf_db_out)
for conn in conns.values():
conn.query("DROP TABLE IF EXISTS titles")
conn.query("""
CREATE TABLE titles (
emp_no INT NOT NULL,
title VARCHAR(50) NOT NULL,
from_date DATE NOT NULL,
to_date DATE,
PRIMARY KEY (emp_no,title, from_date)
)""")
def teardown_function(function):
for conn in conns.values():
conn.close()
def test_load_config():
test_config_path = os.environ.get('TEST_CONFIG')
if test_config_path is None:
dirname = os.path.dirname(__file__)
test_config_path = os.path.join(dirname, 'test_config.ini')
with open(test_config_path) as f:
config = load_config(f)
conf_db_in = dict(config.items('database_in'))
conf_db_out = dict(config.items('database_out'))
return conf_db_in, conf_db_out
def test_update_data():
cursor = conns['in'].cursor()
assert update_data(cursor, (0, 'test', 0, 0)) == 1
assert update_data(cursor, (1, 'another_test', 0, 0)) == 1
cursor.execute(QUERY_TITLES_GET_ORDER)
data = list(cursor)
assert len(data) == 2
assert data[0] == (1, 'another_test', None, None)
assert data[1] == (0, 'test', None, None)
@patch('script.update_data')
def test_copy_titles(mock_update_data):
cursor = conns['in'].cursor()
update_data(cursor, (0, 'test', 0, 0))
update_data(cursor, (1, 'another_test', 0, 0))
copy_titles(conns['in'], conns['out'])
assert mock_update_data.call_count == 2
def test_many_updates():
cursor = conns['in'].cursor()
update_many_data(cursor, [
(i, 'test %d' % i, 0, 0)
for i in range(30)
])
cursor.execute(QUERY_TITLES_GET_ORDER)
data = list(cursor)
assert len(data) == 30
def test_to_chunks():
data = [1, 2, 1, 2, 3, 4]
chunks = list(to_chunks(data, 2))
assert len(chunks) == 3
assert chunks[0] == [1, 2]
assert chunks[1] == [1, 2]
assert chunks[2] == [3, 4]
@patch('script.update_many_data')
def test_copy_titles_chunks(mock_update_many_data):
cursor = conns['in'].cursor()
update_many_data(cursor, [
(i, 'test %d' % i, 0, 0)
for i in range(30)
])
copy_titles(conns['in'], conns['out'], chunk_size=10)
assert mock_update_many_data.call_count == 3 | unknown | codeparrot/codeparrot-clean | ||
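# test_to_chunks pins down the chunking semantics: consecutive lists of
# at most chunk_size items, with a possibly shorter final chunk. A
# minimal implementation consistent with those tests might look like
# this (a sketch, not necessarily the project's actual `to_chunks`):

```python
def to_chunks(items, chunk_size):
    # Yield consecutive lists of at most chunk_size items; the final
    # chunk may be shorter when len(items) is not a multiple.
    chunk = []
    for item in items:
        chunk.append(item)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk
```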
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Deleting field 'Source.storage'
db.delete_column('easy_thumbnails_source', 'storage_id')
# Deleting field 'Thumbnail.storage'
db.delete_column('easy_thumbnails_thumbnail', 'storage_id')
def backwards(self, orm):
# Adding field 'Source.storage'
db.add_column('easy_thumbnails_source', 'storage', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['easy_thumbnails.Storage'], null=True), keep_default=False)
# Adding field 'Thumbnail.storage'
db.add_column('easy_thumbnails_thumbnail', 'storage', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['easy_thumbnails.Storage'], null=True), keep_default=False)
models = {
'easy_thumbnails.source': {
'Meta': {'object_name': 'Source'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime(2010, 7, 21, 4, 30, 33, 144413)'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}),
'storage_new': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['easy_thumbnails.StorageNew']"})
},
'easy_thumbnails.storage': {
'Meta': {'object_name': 'Storage'},
'hash': ('django.db.models.fields.CharField', [], {'max_length': '40', 'primary_key': 'True', 'db_index': 'True'}),
'pickle': ('django.db.models.fields.TextField', [], {})
},
'easy_thumbnails.storagenew': {
'Meta': {'object_name': 'StorageNew'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'hash': ('django.db.models.fields.CharField', [], {'max_length': '40', 'db_index': 'True'}),
'pickle': ('django.db.models.fields.TextField', [], {})
},
'easy_thumbnails.thumbnail': {
'Meta': {'object_name': 'Thumbnail'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime(2010, 7, 21, 4, 30, 33, 144413)'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}),
'source': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'thumbnails'", 'to': "orm['easy_thumbnails.Source']"}),
'storage_new': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['easy_thumbnails.StorageNew']"})
}
}
complete_apps = ['easy_thumbnails'] | unknown | codeparrot/codeparrot-clean | ||
/*
* Copyright (C) 2015 The Guava Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.google.common.collect;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkNotNull;
import static com.google.common.collect.NullnessCasts.uncheckedCastNullableTToT;
import static java.lang.Math.max;
import com.google.common.annotations.GwtCompatible;
import com.google.j2objc.annotations.Weak;
import java.util.Comparator;
import java.util.Spliterator;
import java.util.function.Consumer;
import java.util.function.DoubleConsumer;
import java.util.function.Function;
import java.util.function.IntConsumer;
import java.util.function.IntFunction;
import java.util.function.LongConsumer;
import java.util.function.Predicate;
import java.util.stream.IntStream;
import org.jspecify.annotations.Nullable;
/** Spliterator utilities for {@code common.collect} internals. */
@GwtCompatible
@IgnoreJRERequirement // used only from APIs that work with Stream
final class CollectSpliterators {
private CollectSpliterators() {}
static <T extends @Nullable Object> Spliterator<T> indexed(
int size, int extraCharacteristics, IntFunction<T> function) {
return indexed(size, extraCharacteristics, function, null);
}
static <T extends @Nullable Object> Spliterator<T> indexed(
int size,
int extraCharacteristics,
IntFunction<T> function,
@Nullable Comparator<? super T> comparator) {
if (comparator != null) {
checkArgument((extraCharacteristics & Spliterator.SORTED) != 0);
}
/*
* @IgnoreJRERequirement should be redundant with the one on Streams itself, but it's necessary
* as of Animal Sniffer 1.24. Maybe Animal Sniffer processes this nested class before it
* processes Streams and thus hasn't had a chance to see Streams's annotation?
*/
@IgnoreJRERequirement
final class WithCharacteristics implements Spliterator<T> {
private final Spliterator.OfInt delegate;
WithCharacteristics(Spliterator.OfInt delegate) {
this.delegate = delegate;
}
@Override
public boolean tryAdvance(Consumer<? super T> action) {
return delegate.tryAdvance((IntConsumer) i -> action.accept(function.apply(i)));
}
@Override
public void forEachRemaining(Consumer<? super T> action) {
delegate.forEachRemaining((IntConsumer) i -> action.accept(function.apply(i)));
}
@Override
public @Nullable Spliterator<T> trySplit() {
Spliterator.OfInt split = delegate.trySplit();
return (split == null) ? null : new WithCharacteristics(split);
}
@Override
public long estimateSize() {
return delegate.estimateSize();
}
@Override
public int characteristics() {
return Spliterator.ORDERED
| Spliterator.SIZED
| Spliterator.SUBSIZED
| extraCharacteristics;
}
@Override
public @Nullable Comparator<? super T> getComparator() {
if (hasCharacteristics(Spliterator.SORTED)) {
return comparator;
} else {
throw new IllegalStateException();
}
}
}
return new WithCharacteristics(IntStream.range(0, size).spliterator());
}
/**
* Returns a {@code Spliterator} over the elements of {@code fromSpliterator} mapped by {@code
* function}.
*/
static <InElementT extends @Nullable Object, OutElementT extends @Nullable Object>
Spliterator<OutElementT> map(
Spliterator<InElementT> fromSpliterator,
Function<? super InElementT, ? extends OutElementT> function) {
checkNotNull(fromSpliterator);
checkNotNull(function);
return new Spliterator<OutElementT>() {
@Override
public boolean tryAdvance(Consumer<? super OutElementT> action) {
return fromSpliterator.tryAdvance(
fromElement -> action.accept(function.apply(fromElement)));
}
@Override
public void forEachRemaining(Consumer<? super OutElementT> action) {
fromSpliterator.forEachRemaining(fromElement -> action.accept(function.apply(fromElement)));
}
@Override
public @Nullable Spliterator<OutElementT> trySplit() {
Spliterator<InElementT> fromSplit = fromSpliterator.trySplit();
return (fromSplit != null) ? map(fromSplit, function) : null;
}
@Override
public long estimateSize() {
return fromSpliterator.estimateSize();
}
@Override
public int characteristics() {
return fromSpliterator.characteristics()
& ~(Spliterator.DISTINCT | Spliterator.NONNULL | Spliterator.SORTED);
}
};
}
/** Returns a {@code Spliterator} filtered by the specified predicate. */
static <T extends @Nullable Object> Spliterator<T> filter(
Spliterator<T> fromSpliterator, Predicate<? super T> predicate) {
checkNotNull(fromSpliterator);
checkNotNull(predicate);
@IgnoreJRERequirement // see earlier comment about redundancy
final class Splitr implements Spliterator<T>, Consumer<T> {
@Nullable T holder = null;
@Override
public void accept(@ParametricNullness T t) {
this.holder = t;
}
@Override
public boolean tryAdvance(Consumer<? super T> action) {
while (fromSpliterator.tryAdvance(this)) {
try {
// The cast is safe because tryAdvance puts a T into `holder`.
T next = uncheckedCastNullableTToT(holder);
if (predicate.test(next)) {
action.accept(next);
return true;
}
} finally {
holder = null;
}
}
return false;
}
@Override
public @Nullable Spliterator<T> trySplit() {
Spliterator<T> fromSplit = fromSpliterator.trySplit();
return (fromSplit == null) ? null : filter(fromSplit, predicate);
}
@Override
public long estimateSize() {
return fromSpliterator.estimateSize() / 2;
}
@Override
public @Nullable Comparator<? super T> getComparator() {
return fromSpliterator.getComparator();
}
@Override
public int characteristics() {
return fromSpliterator.characteristics()
& (Spliterator.DISTINCT
| Spliterator.NONNULL
| Spliterator.ORDERED
| Spliterator.SORTED);
}
}
return new Splitr();
}
/**
* Returns a {@code Spliterator} that iterates over the elements of the spliterators generated by
* applying {@code function} to the elements of {@code fromSpliterator}.
*/
static <InElementT extends @Nullable Object, OutElementT extends @Nullable Object>
Spliterator<OutElementT> flatMap(
Spliterator<InElementT> fromSpliterator,
Function<? super InElementT, @Nullable Spliterator<OutElementT>> function,
int topCharacteristics,
long topSize) {
checkArgument(
(topCharacteristics & Spliterator.SUBSIZED) == 0,
"flatMap does not support SUBSIZED characteristic");
checkArgument(
(topCharacteristics & Spliterator.SORTED) == 0,
"flatMap does not support SORTED characteristic");
checkNotNull(fromSpliterator);
checkNotNull(function);
return new FlatMapSpliteratorOfObject<>(
null, fromSpliterator, function, topCharacteristics, topSize);
}
/**
* Returns a {@code Spliterator.OfInt} that iterates over the elements of the spliterators
* generated by applying {@code function} to the elements of {@code fromSpliterator}. (If {@code
* function} returns {@code null} for an input, it is replaced with an empty stream.)
*/
static <InElementT extends @Nullable Object> Spliterator.OfInt flatMapToInt(
Spliterator<InElementT> fromSpliterator,
Function<? super InElementT, Spliterator.@Nullable OfInt> function,
int topCharacteristics,
long topSize) {
checkArgument(
(topCharacteristics & Spliterator.SUBSIZED) == 0,
"flatMap does not support SUBSIZED characteristic");
checkArgument(
(topCharacteristics & Spliterator.SORTED) == 0,
"flatMap does not support SORTED characteristic");
checkNotNull(fromSpliterator);
checkNotNull(function);
return new FlatMapSpliteratorOfInt<>(
null, fromSpliterator, function, topCharacteristics, topSize);
}
/**
* Returns a {@code Spliterator.OfLong} that iterates over the elements of the spliterators
* generated by applying {@code function} to the elements of {@code fromSpliterator}. (If {@code
* function} returns {@code null} for an input, it is replaced with an empty stream.)
*/
static <InElementT extends @Nullable Object> Spliterator.OfLong flatMapToLong(
Spliterator<InElementT> fromSpliterator,
Function<? super InElementT, Spliterator.@Nullable OfLong> function,
int topCharacteristics,
long topSize) {
checkArgument(
(topCharacteristics & Spliterator.SUBSIZED) == 0,
"flatMap does not support SUBSIZED characteristic");
checkArgument(
(topCharacteristics & Spliterator.SORTED) == 0,
"flatMap does not support SORTED characteristic");
checkNotNull(fromSpliterator);
checkNotNull(function);
return new FlatMapSpliteratorOfLong<>(
null, fromSpliterator, function, topCharacteristics, topSize);
}
/**
* Returns a {@code Spliterator.OfDouble} that iterates over the elements of the spliterators
* generated by applying {@code function} to the elements of {@code fromSpliterator}. (If {@code
* function} returns {@code null} for an input, it is replaced with an empty stream.)
*/
static <InElementT extends @Nullable Object> Spliterator.OfDouble flatMapToDouble(
Spliterator<InElementT> fromSpliterator,
Function<? super InElementT, Spliterator.@Nullable OfDouble> function,
int topCharacteristics,
long topSize) {
checkArgument(
(topCharacteristics & Spliterator.SUBSIZED) == 0,
"flatMap does not support SUBSIZED characteristic");
checkArgument(
(topCharacteristics & Spliterator.SORTED) == 0,
"flatMap does not support SORTED characteristic");
checkNotNull(fromSpliterator);
checkNotNull(function);
return new FlatMapSpliteratorOfDouble<>(
null, fromSpliterator, function, topCharacteristics, topSize);
}
/**
* Implements the {@link Stream#flatMap} operation on spliterators.
*
* @param <InElementT> the element type of the input spliterator
* @param <OutElementT> the element type of the output spliterators
* @param <OutSpliteratorT> the type of the output spliterators
*/
@IgnoreJRERequirement // see earlier comment about redundancy
abstract static class FlatMapSpliterator<
InElementT extends @Nullable Object,
OutElementT extends @Nullable Object,
OutSpliteratorT extends Spliterator<OutElementT>>
implements Spliterator<OutElementT> {
/** Factory for constructing {@link FlatMapSpliterator} instances. */
@IgnoreJRERequirement // should be redundant with the annotations on *both* enclosing classes
interface Factory<InElementT extends @Nullable Object, OutSpliteratorT extends Spliterator<?>> {
OutSpliteratorT newFlatMapSpliterator(
@Nullable OutSpliteratorT prefix,
Spliterator<InElementT> fromSplit,
Function<? super InElementT, @Nullable OutSpliteratorT> function,
int splitCharacteristics,
long estSplitSize);
}
@Weak @Nullable OutSpliteratorT prefix;
final Spliterator<InElementT> from;
final Function<? super InElementT, @Nullable OutSpliteratorT> function;
final Factory<InElementT, OutSpliteratorT> factory;
int characteristics;
long estimatedSize;
FlatMapSpliterator(
@Nullable OutSpliteratorT prefix,
Spliterator<InElementT> from,
Function<? super InElementT, @Nullable OutSpliteratorT> function,
Factory<InElementT, OutSpliteratorT> factory,
int characteristics,
long estimatedSize) {
this.prefix = prefix;
this.from = from;
this.function = function;
this.factory = factory;
this.characteristics = characteristics;
this.estimatedSize = estimatedSize;
}
/*
* The tryAdvance and forEachRemaining in FlatMapSpliteratorOfPrimitive are overloads of these
* methods, not overrides. They are annotated @Override because they implement methods from
* Spliterator.OfPrimitive (and override default implementations from Spliterator.OfPrimitive or
* a subtype like Spliterator.OfInt).
*/
@Override
public /*non-final for J2KT*/ boolean tryAdvance(Consumer<? super OutElementT> action) {
while (true) {
if (prefix != null && prefix.tryAdvance(action)) {
if (estimatedSize != Long.MAX_VALUE) {
estimatedSize--;
}
return true;
} else {
prefix = null;
}
if (!from.tryAdvance(fromElement -> prefix = function.apply(fromElement))) {
return false;
}
}
}
@Override
public /*non-final for J2KT*/ void forEachRemaining(Consumer<? super OutElementT> action) {
if (prefix != null) {
prefix.forEachRemaining(action);
prefix = null;
}
from.forEachRemaining(
fromElement -> {
Spliterator<OutElementT> elements = function.apply(fromElement);
if (elements != null) {
elements.forEachRemaining(action);
}
});
estimatedSize = 0;
}
@Override
public final @Nullable OutSpliteratorT trySplit() {
Spliterator<InElementT> fromSplit = from.trySplit();
if (fromSplit != null) {
int splitCharacteristics = characteristics & ~Spliterator.SIZED;
long estSplitSize = estimateSize();
if (estSplitSize < Long.MAX_VALUE) {
estSplitSize /= 2;
this.estimatedSize -= estSplitSize;
this.characteristics = splitCharacteristics;
}
OutSpliteratorT result =
factory.newFlatMapSpliterator(
this.prefix, fromSplit, function, splitCharacteristics, estSplitSize);
this.prefix = null;
return result;
} else if (prefix != null) {
OutSpliteratorT result = prefix;
this.prefix = null;
return result;
} else {
return null;
}
}
@Override
public final long estimateSize() {
if (prefix != null) {
estimatedSize = max(estimatedSize, prefix.estimateSize());
}
return max(estimatedSize, 0);
}
@Override
public final int characteristics() {
return characteristics;
}
}
/**
* Implementation of {@link Stream#flatMap} with an object spliterator output type.
*
* <p>To avoid having this type, we could use {@code FlatMapSpliterator} directly. The main
* advantages to having the type are the ability to use its constructor reference below and the
* parallelism with the primitive version. In short, it makes its caller ({@code flatMap})
* simpler.
*
* @param <InElementT> the element type of the input spliterator
* @param <OutElementT> the element type of the output spliterators
*/
@IgnoreJRERequirement // see earlier comment about redundancy
static final class FlatMapSpliteratorOfObject<
InElementT extends @Nullable Object, OutElementT extends @Nullable Object>
extends FlatMapSpliterator<InElementT, OutElementT, Spliterator<OutElementT>> {
FlatMapSpliteratorOfObject(
@Nullable Spliterator<OutElementT> prefix,
Spliterator<InElementT> from,
Function<? super InElementT, @Nullable Spliterator<OutElementT>> function,
int characteristics,
long estimatedSize) {
super(
prefix, from, function, FlatMapSpliteratorOfObject::new, characteristics, estimatedSize);
}
}
/**
* Implementation of {@link Stream#flatMap} with a primitive spliterator output type.
*
* @param <InElementT> the element type of the input spliterator
* @param <OutElementT> the (boxed) element type of the output spliterators
* @param <OutConsumerT> the specialized consumer type for the primitive output type
* @param <OutSpliteratorT> the primitive spliterator type associated with {@code OutElementT}
*/
@IgnoreJRERequirement // see earlier comment about redundancy
abstract static class FlatMapSpliteratorOfPrimitive<
InElementT extends @Nullable Object,
OutElementT extends @Nullable Object,
OutConsumerT,
OutSpliteratorT extends
Spliterator.OfPrimitive<OutElementT, OutConsumerT, OutSpliteratorT>>
extends FlatMapSpliterator<InElementT, OutElementT, OutSpliteratorT>
implements Spliterator.OfPrimitive<OutElementT, OutConsumerT, OutSpliteratorT> {
FlatMapSpliteratorOfPrimitive(
@Nullable OutSpliteratorT prefix,
Spliterator<InElementT> from,
Function<? super InElementT, @Nullable OutSpliteratorT> function,
Factory<InElementT, OutSpliteratorT> factory,
int characteristics,
long estimatedSize) {
super(prefix, from, function, factory, characteristics, estimatedSize);
}
@Override
public final boolean tryAdvance(OutConsumerT action) {
while (true) {
if (prefix != null && prefix.tryAdvance(action)) {
if (estimatedSize != Long.MAX_VALUE) {
estimatedSize--;
}
return true;
} else {
prefix = null;
}
if (!from.tryAdvance(fromElement -> prefix = function.apply(fromElement))) {
return false;
}
}
}
@Override
public final void forEachRemaining(OutConsumerT action) {
if (prefix != null) {
prefix.forEachRemaining(action);
prefix = null;
}
from.forEachRemaining(
fromElement -> {
OutSpliteratorT elements = function.apply(fromElement);
if (elements != null) {
elements.forEachRemaining(action);
}
});
estimatedSize = 0;
}
}
/** Implementation of {@link #flatMapToInt}. */
@IgnoreJRERequirement // see earlier comment about redundancy
static final class FlatMapSpliteratorOfInt<InElementT extends @Nullable Object>
extends FlatMapSpliteratorOfPrimitive<InElementT, Integer, IntConsumer, Spliterator.OfInt>
implements Spliterator.OfInt {
FlatMapSpliteratorOfInt(
Spliterator.@Nullable OfInt prefix,
Spliterator<InElementT> from,
Function<? super InElementT, Spliterator.@Nullable OfInt> function,
int characteristics,
long estimatedSize) {
super(prefix, from, function, FlatMapSpliteratorOfInt::new, characteristics, estimatedSize);
}
}
/** Implementation of {@link #flatMapToLong}. */
@IgnoreJRERequirement // see earlier comment about redundancy
static final class FlatMapSpliteratorOfLong<InElementT extends @Nullable Object>
extends FlatMapSpliteratorOfPrimitive<InElementT, Long, LongConsumer, Spliterator.OfLong>
implements Spliterator.OfLong {
FlatMapSpliteratorOfLong(
Spliterator.@Nullable OfLong prefix,
Spliterator<InElementT> from,
Function<? super InElementT, Spliterator.@Nullable OfLong> function,
int characteristics,
long estimatedSize) {
super(prefix, from, function, FlatMapSpliteratorOfLong::new, characteristics, estimatedSize);
}
}
/** Implementation of {@link #flatMapToDouble}. */
@IgnoreJRERequirement // see earlier comment about redundancy
static final class FlatMapSpliteratorOfDouble<InElementT extends @Nullable Object>
extends FlatMapSpliteratorOfPrimitive<
InElementT, Double, DoubleConsumer, Spliterator.OfDouble>
implements Spliterator.OfDouble {
FlatMapSpliteratorOfDouble(
Spliterator.@Nullable OfDouble prefix,
Spliterator<InElementT> from,
Function<? super InElementT, Spliterator.@Nullable OfDouble> function,
int characteristics,
long estimatedSize) {
super(
prefix, from, function, FlatMapSpliteratorOfDouble::new, characteristics, estimatedSize);
}
}
} | java | github | https://github.com/google/guava | android/guava/src/com/google/common/collect/CollectSpliterators.java |
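# The FlatMapSpliterator machinery above keeps a `prefix` (the current
# inner spliterator), pulls a new one from `from` when it is exhausted,
# and skips null results from `function`. The same control flow reads
# naturally as a Python generator -- an analogy for the semantics, not a
# port of the splitting/size-estimation logic:

```python
def flat_map(outer, function):
    # `function` maps each outer element to an iterable or None; None
    # results are skipped, mirroring the null-spliterator handling in
    # FlatMapSpliterator.tryAdvance.
    for element in outer:
        inner = function(element)
        if inner is None:
            continue
        for item in inner:
            yield item
```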
# Copyright 2017 Capital One Services, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from boto3.dynamodb import conditions
from c7n.utils import format_event
log = logging.getLogger('sphere11.db')
class LockDb(object):
STATE_LOCKED = "locked"
STATE_UNLOCKED = "unlocked"
STATE_PENDING = "pending"
def __init__(self, session, table_name, endpoint=None):
self.client = session.client('dynamodb', endpoint_url=endpoint)
self.table = session.resource(
'dynamodb', endpoint_url=endpoint).Table(table_name)
self.table_name = table_name
def record(self, account_id, resource_id):
result = self.table.get_item(
Key={
'AccountId': account_id,
"ResourceId": resource_id},
ConsistentRead=True)
result.pop('ResponseMetadata')
if result:
return result['Item']
return None
def save(self, record):
try:
log.info("Serializing record %s", format_event(record))
except TypeError:
pass
self.table.put_item(Item=record)
def iter_pending(self, account_id):
expr = conditions.Key('AccountId').eq(account_id)
        expr = expr & conditions.Key("LockStatus").eq("pending")
results = self.table.query(
IndexName='PendingLocks', KeyConditionExpression=expr)
return results.get('Items', ())
def iter_resources(self, account_id, resource_type=None):
expr = conditions.Key('AccountId').eq(account_id)
if resource_type == "security-group":
expr = expr & conditions.Key('ResourceId').between('sg-', 'vpc-')
elif resource_type == "vpc":
expr = expr & conditions.Key('ResourceId').begins_with('vpc-')
results = self.table.scan(FilterExpression=expr)
return results['Items']
def info(self, account_id, resource_id, parent_id):
"""Check if a resource is locked.
If a resource has an explicit status we use that, else
we defer to the parent resource lock status.
"""
resource = self.record(account_id, resource_id)
if resource is None and not parent_id:
return {'ResourceId': resource_id,
'LockStatus': self.STATE_UNLOCKED}
elif resource is None:
parent = self.record(account_id, parent_id)
if parent is None:
return {'ResourceId': resource_id,
'ParentId': parent_id,
'LockStatus': self.STATE_UNLOCKED}
parent['ResourceId'] = resource_id
parent['ParentId'] = parent_id
parent['LockType'] = 'parent'
return parent
        if resource['ResourceId'].startswith(('vpc-', 'sg-')):
            return resource
def provision(self, read_capacity=5, write_capacity=1):
names = set()
for p in self.client.get_paginator('list_tables').paginate():
names.update(p['TableNames'])
if self.table_name in names:
return False
self.client.create_table(
TableName=self.table_name,
KeySchema=[
{
"AttributeName": "AccountId",
"KeyType": "HASH"
},
{
"AttributeName": "ResourceId",
"KeyType": "RANGE"
}
],
AttributeDefinitions=[
{
"AttributeName": "ResourceId",
"AttributeType": "S"
},
{
"AttributeName": "AccountId",
"AttributeType": "S"
},
{
"AttributeName": "LockStatus",
"AttributeType": "S"
}
],
LocalSecondaryIndexes=[{
'IndexName': 'PendingLocks',
'Projection': {
'ProjectionType': 'INCLUDE',
'NonKeyAttributes': ['LockDate'],
},
'KeySchema': [
{
'AttributeName': 'AccountId',
'KeyType': 'HASH'
},
{
'AttributeName': 'LockStatus',
'KeyType': 'RANGE'
}
],
}],
ProvisionedThroughput={
"ReadCapacityUnits": read_capacity,
"WriteCapacityUnits": write_capacity
},
StreamSpecification={
'StreamEnabled': True,
'StreamViewType': 'NEW_IMAGE'
}
)
return True | unknown | codeparrot/codeparrot-clean | ||
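The `info` method above resolves a resource's lock status by falling back to the parent record when the resource has no explicit entry. A minimal sketch of that fallback logic as a pure function (plain dicts keyed by account/resource id, no DynamoDB; names are illustrative, not part of the class above):

```python
def resolve_lock_status(records, account_id, resource_id, parent_id=None):
    """Return the effective lock info, inheriting from the parent if needed."""
    resource = records.get((account_id, resource_id))
    if resource is not None:
        return resource
    if parent_id:
        parent = records.get((account_id, parent_id))
        if parent is not None:
            # Inherit the parent's status, recording where it came from.
            info = dict(parent)
            info.update(ResourceId=resource_id, ParentId=parent_id,
                        LockType='parent')
            return info
    return {'ResourceId': resource_id, 'LockStatus': 'unlocked'}
```

Unlike the class above, this sketch treats a locked VPC as covering its security groups with no consistency or pending-state handling.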
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/ipmi/ssif-bmc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: SSIF IPMI BMC interface
description: SSIF IPMI BMC device bindings
maintainers:
- Quan Nguyen <quan@os.amperecomputing.com>
properties:
compatible:
enum:
- ssif-bmc
reg:
maxItems: 1
required:
- compatible
- reg
additionalProperties: false
examples:
- |
i2c {
#address-cells = <1>;
#size-cells = <0>;
ssif-bmc@10 {
compatible = "ssif-bmc";
reg = <0x10>;
};
}; | unknown | github | https://github.com/torvalds/linux | Documentation/devicetree/bindings/ipmi/ssif-bmc.yaml |
<?php
namespace Illuminate\Tests\Support;
use Illuminate\Support\ConfigurationUrlParser;
use PHPUnit\Framework\Attributes\DataProvider;
use PHPUnit\Framework\TestCase;
class ConfigurationUrlParserTest extends TestCase
{
#[DataProvider('databaseUrls')]
public function testDatabaseUrlsAreParsed($config, $expectedOutput)
{
$this->assertEquals($expectedOutput, (new ConfigurationUrlParser)->parseConfiguration($config));
}
public function testDriversAliases()
{
$this->assertEquals([
'mssql' => 'sqlsrv',
'mysql2' => 'mysql',
'postgres' => 'pgsql',
'postgresql' => 'pgsql',
'sqlite3' => 'sqlite',
'redis' => 'tcp',
'rediss' => 'tls',
], ConfigurationUrlParser::getDriverAliases());
ConfigurationUrlParser::addDriverAlias('some-particular-alias', 'mysql');
$this->assertEquals([
'mssql' => 'sqlsrv',
'mysql2' => 'mysql',
'postgres' => 'pgsql',
'postgresql' => 'pgsql',
'sqlite3' => 'sqlite',
'redis' => 'tcp',
'rediss' => 'tls',
'some-particular-alias' => 'mysql',
], ConfigurationUrlParser::getDriverAliases());
$this->assertEquals([
'driver' => 'mysql',
], (new ConfigurationUrlParser)->parseConfiguration('some-particular-alias://null'));
}
public static function databaseUrls()
{
return [
'simple URL' => [
'mysql://foo:bar@localhost/baz',
[
'driver' => 'mysql',
'username' => 'foo',
'password' => 'bar',
'host' => 'localhost',
'database' => 'baz',
],
],
'simple URL with port' => [
'mysql://foo:bar@localhost:134/baz',
[
'driver' => 'mysql',
'username' => 'foo',
'password' => 'bar',
'host' => 'localhost',
'port' => 134,
'database' => 'baz',
],
],
'sqlite relative URL with host' => [
'sqlite://localhost/foo/database.sqlite',
[
'database' => 'foo/database.sqlite',
'driver' => 'sqlite',
'host' => 'localhost',
],
],
'sqlite absolute URL with host' => [
'sqlite://localhost//tmp/database.sqlite',
[
'database' => '/tmp/database.sqlite',
'driver' => 'sqlite',
'host' => 'localhost',
],
],
'sqlite relative URL without host' => [
'sqlite:///foo/database.sqlite',
[
'database' => 'foo/database.sqlite',
'driver' => 'sqlite',
],
],
'sqlite absolute URL without host' => [
'sqlite:////tmp/database.sqlite',
[
'database' => '/tmp/database.sqlite',
'driver' => 'sqlite',
],
],
'sqlite memory' => [
'sqlite:///:memory:',
[
'database' => ':memory:',
'driver' => 'sqlite',
],
],
'params parsed from URL override individual params' => [
[
'url' => 'mysql://foo:bar@localhost/baz',
'password' => 'lulz',
'driver' => 'sqlite',
],
[
'username' => 'foo',
'password' => 'bar',
'host' => 'localhost',
'database' => 'baz',
'driver' => 'mysql',
],
],
'params not parsed from URL but individual params are preserved' => [
[
'url' => 'mysql://foo:bar@localhost/baz',
'port' => 134,
],
[
'username' => 'foo',
'password' => 'bar',
'host' => 'localhost',
'port' => 134,
'database' => 'baz',
'driver' => 'mysql',
],
],
'query params from URL are used as extra params' => [
'mysql://foo:bar@localhost/database?charset=UTF-8',
[
'driver' => 'mysql',
'database' => 'database',
'host' => 'localhost',
'username' => 'foo',
'password' => 'bar',
'charset' => 'UTF-8',
],
],
'simple URL with driver set apart' => [
[
'url' => '//foo:bar@localhost/baz',
'driver' => 'sqlsrv',
],
[
'username' => 'foo',
'password' => 'bar',
'host' => 'localhost',
'database' => 'baz',
'driver' => 'sqlsrv',
],
],
'simple URL with percent encoding' => [
'mysql://foo%3A:bar%2F@localhost/baz+baz%40',
[
'username' => 'foo:',
'password' => 'bar/',
'host' => 'localhost',
'database' => 'baz+baz@',
'driver' => 'mysql',
],
],
'simple URL with percent sign in password' => [
'mysql://foo:bar%25bar@localhost/baz',
[
'username' => 'foo',
'password' => 'bar%bar',
'host' => 'localhost',
'database' => 'baz',
'driver' => 'mysql',
],
],
'simple URL with percent encoding in query' => [
'mysql://foo:bar%25bar@localhost/baz?timezone=%2B00%3A00',
[
'username' => 'foo',
'password' => 'bar%bar',
'host' => 'localhost',
'database' => 'baz',
'driver' => 'mysql',
'timezone' => '+00:00',
],
],
'URL with mssql alias driver' => [
'mssql://null',
[
'driver' => 'sqlsrv',
],
],
'URL with sqlsrv alias driver' => [
'sqlsrv://null',
[
'driver' => 'sqlsrv',
],
],
'URL with mysql alias driver' => [
'mysql://null',
[
'driver' => 'mysql',
],
],
'URL with mysql2 alias driver' => [
'mysql2://null',
[
'driver' => 'mysql',
],
],
'URL with postgres alias driver' => [
'postgres://null',
[
'driver' => 'pgsql',
],
],
'URL with postgresql alias driver' => [
'postgresql://null',
[
'driver' => 'pgsql',
],
],
'URL with pgsql alias driver' => [
'pgsql://null',
[
'driver' => 'pgsql',
],
],
'URL with sqlite alias driver' => [
'sqlite://null',
[
'driver' => 'sqlite',
],
],
'URL with sqlite3 alias driver' => [
'sqlite3://null',
[
'driver' => 'sqlite',
],
],
'URL with unknown driver' => [
'foo://null',
[
'driver' => 'foo',
],
],
'Sqlite with foreign_key_constraints' => [
'sqlite:////absolute/path/to/database.sqlite?foreign_key_constraints=true',
[
'driver' => 'sqlite',
'database' => '/absolute/path/to/database.sqlite',
'foreign_key_constraints' => true,
],
],
'Sqlite with busy_timeout' => [
'sqlite:////absolute/path/to/database.sqlite?busy_timeout=5000',
[
'driver' => 'sqlite',
'database' => '/absolute/path/to/database.sqlite',
'busy_timeout' => 5000,
],
],
'Sqlite with journal_mode' => [
'sqlite:////absolute/path/to/database.sqlite?journal_mode=WAL',
[
'driver' => 'sqlite',
'database' => '/absolute/path/to/database.sqlite',
'journal_mode' => 'WAL',
],
],
'Sqlite with synchronous' => [
'sqlite:////absolute/path/to/database.sqlite?synchronous=NORMAL',
[
'driver' => 'sqlite',
'database' => '/absolute/path/to/database.sqlite',
'synchronous' => 'NORMAL',
],
],
'Most complex example with read and write subarrays all in string' => [
'mysql://root:@null/database?read[host][]=192.168.1.1&write[host][]=196.168.1.2&sticky=true&charset=utf8mb4&collation=utf8mb4_unicode_ci&prefix=',
[
'read' => [
'host' => ['192.168.1.1'],
],
'write' => [
'host' => ['196.168.1.2'],
],
'sticky' => true,
'driver' => 'mysql',
'database' => 'database',
'username' => 'root',
'password' => '',
'charset' => 'utf8mb4',
'collation' => 'utf8mb4_unicode_ci',
'prefix' => '',
],
],
'Full example from doc that prove that there isn\'t any Breaking Change' => [
[
'driver' => 'mysql',
'host' => '127.0.0.1',
'port' => '3306',
'database' => 'forge',
'username' => 'forge',
'password' => '',
'unix_socket' => '',
'charset' => 'utf8mb4',
'collation' => 'utf8mb4_unicode_ci',
'prefix' => '',
'prefix_indexes' => true,
'strict' => true,
'engine' => null,
'options' => ['foo' => 'bar'],
],
[
'driver' => 'mysql',
'host' => '127.0.0.1',
'port' => '3306',
'database' => 'forge',
'username' => 'forge',
'password' => '',
'unix_socket' => '',
'charset' => 'utf8mb4',
'collation' => 'utf8mb4_unicode_ci',
'prefix' => '',
'prefix_indexes' => true,
'strict' => true,
'engine' => null,
'options' => ['foo' => 'bar'],
],
],
'Full example from doc with url overwriting parameters' => [
[
'url' => 'mysql://root:pass@db/local',
'driver' => 'mysql',
'host' => '127.0.0.1',
'port' => '3306',
'database' => 'forge',
'username' => 'forge',
'password' => '',
'unix_socket' => '',
'charset' => 'utf8mb4',
'collation' => 'utf8mb4_unicode_ci',
'prefix' => '',
'prefix_indexes' => true,
'strict' => true,
'engine' => null,
'options' => ['foo' => 'bar'],
],
[
'driver' => 'mysql',
'host' => 'db',
'port' => '3306',
'database' => 'local',
'username' => 'root',
'password' => 'pass',
'unix_socket' => '',
'charset' => 'utf8mb4',
'collation' => 'utf8mb4_unicode_ci',
'prefix' => '',
'prefix_indexes' => true,
'strict' => true,
'engine' => null,
'options' => ['foo' => 'bar'],
],
],
'Redis Example' => [
[
// Coming directly from Heroku documentation
'url' => 'redis://h:asdfqwer1234asdf@ec2-111-1-1-1.compute-1.amazonaws.com:111',
'host' => '127.0.0.1',
'password' => null,
'port' => 6379,
'database' => 0,
],
[
'driver' => 'tcp',
'host' => 'ec2-111-1-1-1.compute-1.amazonaws.com',
'port' => 111,
'database' => 0,
'username' => 'h',
'password' => 'asdfqwer1234asdf',
],
],
'Redis example where URL ends with "/" and database is not present' => [
[
'url' => 'redis://h:asdfqwer1234asdf@ec2-111-1-1-1.compute-1.amazonaws.com:111/',
'host' => '127.0.0.1',
'password' => null,
'port' => 6379,
'database' => 2,
],
[
'driver' => 'tcp',
'host' => 'ec2-111-1-1-1.compute-1.amazonaws.com',
'port' => 111,
'database' => 2,
'username' => 'h',
'password' => 'asdfqwer1234asdf',
],
],
'Redis Example with tls scheme' => [
[
'url' => 'tls://h:asdfqwer1234asdf@ec2-111-1-1-1.compute-1.amazonaws.com:111',
'host' => '127.0.0.1',
'password' => null,
'port' => 6379,
'database' => 0,
],
[
'driver' => 'tls',
'host' => 'ec2-111-1-1-1.compute-1.amazonaws.com',
'port' => 111,
'database' => 0,
'username' => 'h',
'password' => 'asdfqwer1234asdf',
],
],
'Redis Example with rediss scheme' => [
[
'url' => 'rediss://h:asdfqwer1234asdf@ec2-111-1-1-1.compute-1.amazonaws.com:111',
'host' => '127.0.0.1',
'password' => null,
'port' => 6379,
'database' => 0,
],
[
'driver' => 'tls',
'host' => 'ec2-111-1-1-1.compute-1.amazonaws.com',
'port' => 111,
'database' => 0,
'username' => 'h',
'password' => 'asdfqwer1234asdf',
],
],
];
}
} | php | github | https://github.com/laravel/framework | tests/Support/ConfigurationUrlParserTest.php |
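The behaviour exercised by the tests above (a scheme mapped to a driver through an alias table, percent-decoded userinfo and path, query parameters merged in as extra config keys) can be sketched with the Python standard library. This is a simplified illustration, not Laravel's parser: it omits sub-array query params like `read[host][]`, boolean coercion, and merging with individual config keys.

```python
from urllib.parse import urlsplit, unquote, parse_qsl

# Mirrors the alias table asserted in testDriversAliases above.
DRIVER_ALIASES = {'mssql': 'sqlsrv', 'mysql2': 'mysql', 'postgres': 'pgsql',
                  'postgresql': 'pgsql', 'sqlite3': 'sqlite'}

def parse_db_url(url):
    parts = urlsplit(url)
    config = {'driver': DRIVER_ALIASES.get(parts.scheme, parts.scheme)}
    if parts.hostname:
        config['host'] = parts.hostname
    if parts.port:
        config['port'] = parts.port
    if parts.username:
        config['username'] = unquote(parts.username)
    if parts.password is not None:
        config['password'] = unquote(parts.password)
    if parts.path and parts.path != '/':
        config['database'] = unquote(parts.path.lstrip('/'))
    # Query params become extra top-level config keys (charset, timezone, ...).
    config.update(parse_qsl(parts.query))
    return config
```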
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2012, Nachi Ueno, NTT MCL, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Views for managing Neutron Routers.
"""
from django.core.urlresolvers import reverse_lazy # noqa
from django.utils.translation import ugettext_lazy as _ # noqa
from horizon import exceptions
from openstack_dashboard import api
from openstack_dashboard.dashboards.admin.networks import views as n_views
from openstack_dashboard.dashboards.project.routers import views as r_views
from openstack_dashboard.dashboards.admin.routers.ports \
import tables as ports_tables
from openstack_dashboard.dashboards.admin.routers import tables
class IndexView(r_views.IndexView, n_views.IndexView):
table_class = tables.RoutersTable
template_name = 'admin/routers/index.html'
def _get_routers(self, search_opts=None):
try:
routers = api.neutron.router_list(self.request,
search_opts=search_opts)
except Exception:
routers = []
exceptions.handle(self.request,
_('Unable to retrieve router list.'))
if routers:
tenant_dict = self._get_tenant_list()
ext_net_dict = self._list_external_networks()
for r in routers:
# Set tenant name
tenant = tenant_dict.get(r.tenant_id, None)
r.tenant_name = getattr(tenant, 'name', None)
# If name is empty use UUID as name
r.set_id_as_name_if_empty()
# Set external network name
self._set_external_network(r, ext_net_dict)
return routers
def get_data(self):
routers = self._get_routers()
return routers
class DetailView(r_views.DetailView):
table_classes = (ports_tables.PortsTable, )
template_name = 'admin/routers/detail.html'
failure_url = reverse_lazy('horizon:admin:routers:index') | unknown | codeparrot/codeparrot-clean | ||
{
"private": true,
"scripts": {
"dev": "next dev",
"build": "next build",
"start": "next start"
},
"dependencies": {
"cors": "^2.8.5",
"next": "latest",
"react": "^18.3.1",
"react-dom": "^18.3.1"
},
"devDependencies": {
"@types/cors": "^2.8.17",
"@types/node": "^22.10.1",
"@types/react": "^18.3.12",
"@types/react-dom": "^18.3.1",
"typescript": "^5.7.2"
}
} | json | github | https://github.com/vercel/next.js | examples/api-routes-cors/package.json |
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
Test the interaction between trial and errors logged during test run.
"""
from __future__ import division, absolute_import
import time
from twisted.internet import reactor, task
from twisted.python import failure, log
from twisted.trial import unittest, reporter, _synctest
def makeFailure():
"""
Return a new, realistic failure.
"""
try:
1/0
except ZeroDivisionError:
f = failure.Failure()
return f
class Mask(object):
"""
Hide C{MockTest}s from Trial's automatic test finder.
"""
class FailureLoggingMixin(object):
def test_silent(self):
"""
Don't log any errors.
"""
def test_single(self):
"""
Log a single error.
"""
log.err(makeFailure())
def test_double(self):
"""
Log two errors.
"""
log.err(makeFailure())
log.err(makeFailure())
class SynchronousFailureLogging(FailureLoggingMixin, unittest.SynchronousTestCase):
pass
class AsynchronousFailureLogging(FailureLoggingMixin, unittest.TestCase):
def test_inCallback(self):
"""
Log an error in an asynchronous callback.
"""
return task.deferLater(reactor, 0, lambda: log.err(makeFailure()))
class ObserverTests(unittest.SynchronousTestCase):
"""
Tests for L{_synctest._LogObserver}, a helper for the implementation of
L{SynchronousTestCase.flushLoggedErrors}.
"""
def setUp(self):
self.result = reporter.TestResult()
self.observer = _synctest._LogObserver()
def test_msg(self):
"""
Test that a standard log message doesn't go anywhere near the result.
"""
self.observer.gotEvent({'message': ('some message',),
'time': time.time(), 'isError': 0,
'system': '-'})
self.assertEqual(self.observer.getErrors(), [])
def test_error(self):
"""
Test that an observed error gets added to the result
"""
f = makeFailure()
self.observer.gotEvent({'message': (),
'time': time.time(), 'isError': 1,
'system': '-', 'failure': f,
'why': None})
self.assertEqual(self.observer.getErrors(), [f])
def test_flush(self):
"""
Check that flushing the observer with no args removes all errors.
"""
self.test_error()
flushed = self.observer.flushErrors()
self.assertEqual(self.observer.getErrors(), [])
self.assertEqual(len(flushed), 1)
self.assertTrue(flushed[0].check(ZeroDivisionError))
def _makeRuntimeFailure(self):
return failure.Failure(RuntimeError('test error'))
def test_flushByType(self):
"""
Check that flushing the observer remove all failures of the given type.
"""
self.test_error() # log a ZeroDivisionError to the observer
f = self._makeRuntimeFailure()
self.observer.gotEvent(dict(message=(), time=time.time(), isError=1,
system='-', failure=f, why=None))
flushed = self.observer.flushErrors(ZeroDivisionError)
self.assertEqual(self.observer.getErrors(), [f])
self.assertEqual(len(flushed), 1)
self.assertTrue(flushed[0].check(ZeroDivisionError))
def test_ignoreErrors(self):
"""
Check that C{_ignoreErrors} actually causes errors to be ignored.
"""
self.observer._ignoreErrors(ZeroDivisionError)
f = makeFailure()
self.observer.gotEvent({'message': (),
'time': time.time(), 'isError': 1,
'system': '-', 'failure': f,
'why': None})
self.assertEqual(self.observer.getErrors(), [])
def test_clearIgnores(self):
"""
Check that C{_clearIgnores} ensures that previously ignored errors
get captured.
"""
self.observer._ignoreErrors(ZeroDivisionError)
self.observer._clearIgnores()
f = makeFailure()
self.observer.gotEvent({'message': (),
'time': time.time(), 'isError': 1,
'system': '-', 'failure': f,
'why': None})
self.assertEqual(self.observer.getErrors(), [f])
class LogErrorsMixin(object):
"""
High-level tests demonstrating the expected behaviour of logged errors
during tests.
"""
def setUp(self):
self.result = reporter.TestResult()
def tearDown(self):
self.flushLoggedErrors(ZeroDivisionError)
def test_singleError(self):
"""
Test that a logged error gets reported as a test error.
"""
test = self.MockTest('test_single')
test(self.result)
self.assertEqual(len(self.result.errors), 1)
self.assertTrue(self.result.errors[0][1].check(ZeroDivisionError),
self.result.errors[0][1])
self.assertEqual(0, self.result.successes)
def test_twoErrors(self):
"""
Test that when two errors get logged, they both get reported as test
errors.
"""
test = self.MockTest('test_double')
test(self.result)
self.assertEqual(len(self.result.errors), 2)
self.assertEqual(0, self.result.successes)
def test_errorsIsolated(self):
"""
Check that an error logged in one test doesn't fail the next test.
"""
t1 = self.MockTest('test_single')
t2 = self.MockTest('test_silent')
t1(self.result)
t2(self.result)
self.assertEqual(len(self.result.errors), 1)
self.assertEqual(self.result.errors[0][0], t1)
self.assertEqual(1, self.result.successes)
def test_boundedObservers(self):
"""
There are no extra log observers after a test runs.
"""
# XXX trial is *all about* global log state. It should really be fixed.
observer = _synctest._LogObserver()
self.patch(_synctest, '_logObserver', observer)
observers = log.theLogPublisher.observers[:]
test = self.MockTest()
test(self.result)
self.assertEqual(observers, log.theLogPublisher.observers)
class SynchronousLogErrorsTests(LogErrorsMixin, unittest.SynchronousTestCase):
MockTest = Mask.SynchronousFailureLogging
class AsynchronousLogErrorsTests(LogErrorsMixin, unittest.TestCase):
MockTest = Mask.AsynchronousFailureLogging
def test_inCallback(self):
"""
Test that errors logged in callbacks get reported as test errors.
"""
test = self.MockTest('test_inCallback')
test(self.result)
self.assertEqual(len(self.result.errors), 1)
self.assertTrue(self.result.errors[0][1].check(ZeroDivisionError),
self.result.errors[0][1]) | unknown | codeparrot/codeparrot-clean | ||
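The `_LogObserver` contract tested above (collect only error events carrying a failure, flush everything or flush by exception type) can be sketched without Twisted. This is an illustrative reimplementation for clarity, not Twisted's actual class; it stores plain exceptions rather than `Failure` objects:

```python
class ErrorCollector:
    """Collects exceptions from log events; flushable, optionally by type."""

    def __init__(self):
        self._errors = []

    def got_event(self, event):
        # Plain messages (isError falsy) are ignored; only failures are kept.
        if event.get('isError') and 'failure' in event:
            self._errors.append(event['failure'])

    def flush_errors(self, *types):
        if types:
            flushed = [e for e in self._errors if isinstance(e, types)]
            self._errors = [e for e in self._errors
                            if not isinstance(e, types)]
        else:
            flushed, self._errors = self._errors, []
        return flushed
```

Flushing by type mirrors `flushLoggedErrors(ZeroDivisionError)` in the tearDown above: matching errors are returned and removed, and anything left unflushed would be reported as a test error.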
{
"kind": "Dashboard",
"apiVersion": "dashboard.grafana.app/v2alpha1",
"metadata": {
"name": "v18.gauge_options.v42"
},
"spec": {
"annotations": [
{
"kind": "AnnotationQuery",
"spec": {
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"query": {
"kind": "grafana",
"spec": {}
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations \u0026 Alerts",
"builtIn": true,
"legacyOptions": {
"type": "dashboard"
}
}
}
],
"cursorSync": "Off",
"editable": true,
"elements": {
"panel-1": {
"kind": "Panel",
"spec": {
"id": 1,
"title": "Complete Gauge Panel",
"description": "",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "prometheus",
"spec": {}
},
"datasource": {
"type": "prometheus",
"uid": "default-ds-uid"
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "gauge",
"spec": {
"pluginVersion": "",
"options": {
"thresholds": [
{
"color": "red",
"value": 100
},
{
"color": "yellow",
"value": 50
},
{
"color": "green",
"value": 0
}
],
"valueOptions": {
"decimals": 2,
"prefix": "Value: ",
"stat": "last",
"suffix": " ms",
"unit": "ms"
}
},
"fieldConfig": {
"defaults": {},
"overrides": []
}
}
}
}
},
"panel-2": {
"kind": "Panel",
"spec": {
"id": 2,
"title": "Partial Gauge Panel",
"description": "",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "prometheus",
"spec": {}
},
"datasource": {
"type": "prometheus",
"uid": "default-ds-uid"
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "gauge",
"spec": {
"pluginVersion": "",
"options": {
"valueOptions": {
"decimals": 1,
"unit": "percent"
}
},
"fieldConfig": {
"defaults": {},
"overrides": []
}
}
}
}
},
"panel-3": {
"kind": "Panel",
"spec": {
"id": 3,
"title": "Buggy Gauge Panel",
"description": "",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "prometheus",
"spec": {}
},
"datasource": {
"type": "prometheus",
"uid": "default-ds-uid"
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "gauge",
"spec": {
"pluginVersion": "",
"options": {
"valueOptions": {
"decimals": 0,
"stat": "avg",
"unit": "bytes"
}
},
"fieldConfig": {
"defaults": {},
"overrides": []
}
}
}
}
},
"panel-4": {
"kind": "Panel",
"spec": {
"id": 4,
"title": "Custom Properties Gauge Panel",
"description": "",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "prometheus",
"spec": {}
},
"datasource": {
"type": "prometheus",
"uid": "default-ds-uid"
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "gauge",
"spec": {
"pluginVersion": "",
"options": {
"anotherProp": 42,
"customProperty": "customValue",
"thresholds": [
{
"color": "blue",
"value": 10
}
],
"valueOptions": {
"unit": "short"
}
},
"fieldConfig": {
"defaults": {},
"overrides": []
}
}
}
}
},
"panel-5": {
"kind": "Panel",
"spec": {
"id": 5,
"title": "Non-Gauge Panel",
"description": "",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "prometheus",
"spec": {}
},
"datasource": {
"type": "prometheus",
"uid": "default-ds-uid"
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "timeseries",
"spec": {
"pluginVersion": "",
"options": {
"legend": {
"show": true,
"showLegend": true
}
},
"fieldConfig": {
"defaults": {},
"overrides": []
}
}
}
}
}
},
"layout": {
"kind": "GridLayout",
"spec": {
"items": [
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 0,
"width": 6,
"height": 3,
"element": {
"kind": "ElementReference",
"name": "panel-1"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 0,
"width": 6,
"height": 3,
"element": {
"kind": "ElementReference",
"name": "panel-2"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 0,
"width": 6,
"height": 3,
"element": {
"kind": "ElementReference",
"name": "panel-3"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 0,
"width": 6,
"height": 3,
"element": {
"kind": "ElementReference",
"name": "panel-4"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 0,
"width": 6,
"height": 3,
"element": {
"kind": "ElementReference",
"name": "panel-5"
}
}
}
]
}
},
"links": [],
"liveNow": false,
"preload": false,
"tags": [],
"timeSettings": {
"timezone": "",
"from": "now-6h",
"to": "now",
"autoRefresh": "",
"autoRefreshIntervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"hideTimepicker": false,
"fiscalYearStartMonth": 0
},
"title": "V18 Gauge Options Migration Test Dashboard",
"variables": []
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v2beta1"
}
}
} | json | github | https://github.com/grafana/grafana | apps/dashboard/pkg/migration/conversion/testdata/output/migrated_dashboards_from_v0_to_v2/v2beta1.v18.gauge_options.v2alpha1.json |
/* https://github.com/eslint/eslint/blob/v9.36.0/packages/js/src/configs/eslint-recommended.js */
/*
Copyright OpenJS Foundation and other contributors, <www.openjsf.org>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
/**
* @fileoverview Configuration applied when a user configuration extends from
* eslint:recommended.
* @author Nicholas C. Zakas
*/
"use strict";
/* eslint sort-keys: ["error", "asc"] -- Long, so make more readable */
/*
* IMPORTANT!
*
* We cannot add a "name" property to this object because it's still used in eslintrc
* which doesn't support the "name" property. If we add a "name" property, it will
* cause an error.
*/
module.exports = Object.freeze({
rules: Object.freeze({
"constructor-super": "error",
"for-direction": "error",
"getter-return": "error",
"no-async-promise-executor": "error",
"no-case-declarations": "error",
"no-class-assign": "error",
"no-compare-neg-zero": "error",
"no-cond-assign": "error",
"no-const-assign": "error",
"no-constant-binary-expression": "error",
"no-constant-condition": "error",
"no-control-regex": "error",
"no-debugger": "error",
"no-delete-var": "error",
"no-dupe-args": "error",
"no-dupe-class-members": "error",
"no-dupe-else-if": "error",
"no-dupe-keys": "error",
"no-duplicate-case": "error",
"no-empty": "error",
"no-empty-character-class": "error",
"no-empty-pattern": "error",
"no-empty-static-block": "error",
"no-ex-assign": "error",
"no-extra-boolean-cast": "error",
"no-fallthrough": "error",
"no-func-assign": "error",
"no-global-assign": "error",
"no-import-assign": "error",
"no-invalid-regexp": "error",
"no-irregular-whitespace": "error",
"no-loss-of-precision": "error",
"no-misleading-character-class": "error",
"no-new-native-nonconstructor": "error",
"no-nonoctal-decimal-escape": "error",
"no-obj-calls": "error",
"no-octal": "error",
"no-prototype-builtins": "error",
"no-redeclare": "error",
"no-regex-spaces": "error",
"no-self-assign": "error",
"no-setter-return": "error",
"no-shadow-restricted-names": "error",
"no-sparse-arrays": "error",
"no-this-before-super": "error",
"no-undef": "error",
"no-unexpected-multiline": "error",
"no-unreachable": "error",
"no-unsafe-finally": "error",
"no-unsafe-negation": "error",
"no-unsafe-optional-chaining": "error",
"no-unused-labels": "error",
"no-unused-private-class-members": "error",
"no-unused-vars": "error",
"no-useless-backreference": "error",
"no-useless-catch": "error",
"no-useless-escape": "error",
"no-with": "error",
"require-yield": "error",
"use-isnan": "error",
"valid-typeof": "error",
}),
}); | javascript | github | https://github.com/django/django | eslint-recommended.js |
from sqlalchemy.orm import create_session, relationship, mapper, \
contains_eager, joinedload, subqueryload, subqueryload_all,\
Session, aliased, with_polymorphic
from sqlalchemy import Integer, String, ForeignKey
from sqlalchemy.engine import default
from sqlalchemy.testing import AssertsCompiledSQL, fixtures
from sqlalchemy import testing
from sqlalchemy.testing.schema import Table, Column
from sqlalchemy.testing import assert_raises, eq_, is_
class Company(fixtures.ComparableEntity):
pass
class Person(fixtures.ComparableEntity):
pass
class Engineer(Person):
pass
class Manager(Person):
pass
class Boss(Manager):
pass
class Machine(fixtures.ComparableEntity):
pass
class Paperwork(fixtures.ComparableEntity):
pass
class SelfReferentialTestJoinedToBase(fixtures.MappedTest):
run_setup_mappers = 'once'
@classmethod
def define_tables(cls, metadata):
Table('people', metadata,
Column('person_id', Integer,
primary_key=True,
test_needs_autoincrement=True),
Column('name', String(50)),
Column('type', String(30)))
Table('engineers', metadata,
Column('person_id', Integer,
ForeignKey('people.person_id'),
primary_key=True),
Column('primary_language', String(50)),
Column('reports_to_id', Integer,
ForeignKey('people.person_id')))
@classmethod
def setup_mappers(cls):
engineers, people = cls.tables.engineers, cls.tables.people
mapper(Person, people,
polymorphic_on=people.c.type,
polymorphic_identity='person')
mapper(Engineer, engineers,
inherits=Person,
inherit_condition=engineers.c.person_id == people.c.person_id,
polymorphic_identity='engineer',
properties={
'reports_to':relationship(
Person,
primaryjoin=
people.c.person_id == engineers.c.reports_to_id)})
def test_has(self):
p1 = Person(name='dogbert')
e1 = Engineer(name='dilbert', primary_language='java', reports_to=p1)
sess = create_session()
sess.add(p1)
sess.add(e1)
sess.flush()
sess.expunge_all()
eq_(sess.query(Engineer)
.filter(Engineer.reports_to.has(Person.name == 'dogbert'))
.first(),
Engineer(name='dilbert'))
def test_oftype_aliases_in_exists(self):
e1 = Engineer(name='dilbert', primary_language='java')
e2 = Engineer(name='wally', primary_language='c++', reports_to=e1)
sess = create_session()
sess.add_all([e1, e2])
sess.flush()
eq_(sess.query(Engineer)
.filter(Engineer.reports_to
.of_type(Engineer)
.has(Engineer.name == 'dilbert'))
.first(),
e2)
def test_join(self):
p1 = Person(name='dogbert')
e1 = Engineer(name='dilbert', primary_language='java', reports_to=p1)
sess = create_session()
sess.add(p1)
sess.add(e1)
sess.flush()
sess.expunge_all()
eq_(sess.query(Engineer)
.join('reports_to', aliased=True)
.filter(Person.name == 'dogbert').first(),
Engineer(name='dilbert'))
class SelfReferentialJ2JTest(fixtures.MappedTest):
run_setup_mappers = 'once'
@classmethod
def define_tables(cls, metadata):
people = Table('people', metadata,
Column('person_id', Integer,
primary_key=True,
test_needs_autoincrement=True),
Column('name', String(50)),
Column('type', String(30)))
engineers = Table('engineers', metadata,
Column('person_id', Integer,
ForeignKey('people.person_id'),
primary_key=True),
Column('primary_language', String(50)),
Column('reports_to_id', Integer,
ForeignKey('managers.person_id'))
)
managers = Table('managers', metadata,
Column('person_id', Integer, ForeignKey('people.person_id'),
primary_key=True),
)
@classmethod
def setup_mappers(cls):
engineers = cls.tables.engineers
managers = cls.tables.managers
people = cls.tables.people
mapper(Person, people,
polymorphic_on=people.c.type,
polymorphic_identity='person')
mapper(Manager, managers,
inherits=Person,
polymorphic_identity='manager')
mapper(Engineer, engineers,
inherits=Person,
polymorphic_identity='engineer',
properties={
'reports_to':relationship(
Manager,
primaryjoin=
managers.c.person_id == engineers.c.reports_to_id,
backref='engineers')})
def test_has(self):
m1 = Manager(name='dogbert')
e1 = Engineer(name='dilbert', primary_language='java', reports_to=m1)
sess = create_session()
sess.add(m1)
sess.add(e1)
sess.flush()
sess.expunge_all()
eq_(sess.query(Engineer)
.filter(Engineer.reports_to.has(Manager.name == 'dogbert'))
.first(),
Engineer(name='dilbert'))
def test_join(self):
m1 = Manager(name='dogbert')
e1 = Engineer(name='dilbert', primary_language='java', reports_to=m1)
sess = create_session()
sess.add(m1)
sess.add(e1)
sess.flush()
sess.expunge_all()
eq_(sess.query(Engineer)
.join('reports_to', aliased=True)
.filter(Manager.name == 'dogbert').first(),
Engineer(name='dilbert'))
def test_filter_aliasing(self):
m1 = Manager(name='dogbert')
m2 = Manager(name='foo')
e1 = Engineer(name='wally', primary_language='java', reports_to=m1)
e2 = Engineer(name='dilbert', primary_language='c++', reports_to=m2)
e3 = Engineer(name='etc', primary_language='c++')
sess = create_session()
sess.add_all([m1, m2, e1, e2, e3])
sess.flush()
sess.expunge_all()
        # filter aliasing applied to Engineer doesn't interfere with Manager
eq_(sess.query(Manager)
.join(Manager.engineers)
.filter(Manager.name == 'dogbert').all(),
[m1])
eq_(sess.query(Manager)
.join(Manager.engineers)
.filter(Engineer.name == 'dilbert').all(),
[m2])
eq_(sess.query(Manager, Engineer)
.join(Manager.engineers)
.order_by(Manager.name.desc()).all(),
[(m2, e2), (m1, e1)])
def test_relationship_compare(self):
m1 = Manager(name='dogbert')
m2 = Manager(name='foo')
e1 = Engineer(name='dilbert', primary_language='java', reports_to=m1)
e2 = Engineer(name='wally', primary_language='c++', reports_to=m2)
e3 = Engineer(name='etc', primary_language='c++')
sess = create_session()
sess.add(m1)
sess.add(m2)
sess.add(e1)
sess.add(e2)
sess.add(e3)
sess.flush()
sess.expunge_all()
eq_(sess.query(Manager)
.join(Manager.engineers)
.filter(Engineer.reports_to == None).all(),
[])
eq_(sess.query(Manager)
.join(Manager.engineers)
.filter(Engineer.reports_to == m1).all(),
[m1])
class SelfReferentialJ2JSelfTest(fixtures.MappedTest):
run_setup_mappers = 'once'
@classmethod
def define_tables(cls, metadata):
people = Table('people', metadata,
Column('person_id', Integer,
primary_key=True,
test_needs_autoincrement=True),
Column('name', String(50)),
Column('type', String(30)))
engineers = Table('engineers', metadata,
Column('person_id', Integer,
ForeignKey('people.person_id'),
primary_key=True),
Column('reports_to_id', Integer,
ForeignKey('engineers.person_id')))
@classmethod
def setup_mappers(cls):
engineers = cls.tables.engineers
people = cls.tables.people
mapper(Person, people,
polymorphic_on=people.c.type,
polymorphic_identity='person')
mapper(Engineer, engineers,
inherits=Person,
polymorphic_identity='engineer',
properties={
'reports_to':relationship(
Engineer,
primaryjoin=
engineers.c.person_id == engineers.c.reports_to_id,
backref='engineers',
remote_side=engineers.c.person_id)})
def _two_obj_fixture(self):
e1 = Engineer(name='wally')
e2 = Engineer(name='dilbert', reports_to=e1)
sess = Session()
sess.add_all([e1, e2])
sess.commit()
return sess
def _five_obj_fixture(self):
sess = Session()
e1, e2, e3, e4, e5 = [
Engineer(name='e%d' % (i + 1)) for i in range(5)
]
e3.reports_to = e1
e4.reports_to = e2
sess.add_all([e1, e2, e3, e4, e5])
sess.commit()
return sess
def test_has(self):
sess = self._two_obj_fixture()
eq_(sess.query(Engineer)
.filter(Engineer.reports_to.has(Engineer.name == 'wally'))
.first(),
Engineer(name='dilbert'))
def test_join_explicit_alias(self):
sess = self._five_obj_fixture()
ea = aliased(Engineer)
eq_(sess.query(Engineer)
.join(ea, Engineer.engineers)
.filter(Engineer.name == 'e1').all(),
[Engineer(name='e1')])
def test_join_aliased_flag_one(self):
sess = self._two_obj_fixture()
eq_(sess.query(Engineer)
.join('reports_to', aliased=True)
.filter(Engineer.name == 'wally').first(),
Engineer(name='dilbert'))
def test_join_aliased_flag_two(self):
sess = self._five_obj_fixture()
eq_(sess.query(Engineer)
.join(Engineer.engineers, aliased=True)
.filter(Engineer.name == 'e4').all(),
[Engineer(name='e2')])
def test_relationship_compare(self):
sess = self._five_obj_fixture()
e1 = sess.query(Engineer).filter_by(name='e1').one()
eq_(sess.query(Engineer)
.join(Engineer.engineers, aliased=True)
.filter(Engineer.reports_to == None).all(),
[])
eq_(sess.query(Engineer)
.join(Engineer.engineers, aliased=True)
.filter(Engineer.reports_to == e1).all(),
[e1])
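# The self-referential pattern exercised above -- each Engineer optionally
# pointing at another Engineer via reports_to, with the reverse collection
# provided by backref='engineers' -- can be sketched without the ORM as a
# plain adjacency list. The class and helper below are hypothetical
# stand-ins for illustration only, not part of the test suite.

```python
class PlainEngineer:
    """Minimal stand-in for the mapped Engineer above (illustration only)."""

    def __init__(self, name, reports_to=None):
        self.name = name
        # mirrors the reports_to_id FK with remote_side=person_id
        self.reports_to = reports_to


def direct_reports(engineers, manager):
    # the inverse of reports_to, i.e. what backref='engineers' provides
    return [e for e in engineers if e.reports_to is manager]


# same shape as _five_obj_fixture: e3 reports to e1, e4 reports to e2
e1 = PlainEngineer('e1')
e2 = PlainEngineer('e2')
e3 = PlainEngineer('e3', reports_to=e1)
e4 = PlainEngineer('e4', reports_to=e2)
e5 = PlainEngineer('e5')
all_engineers = [e1, e2, e3, e4, e5]
```

In the mapped version, remote_side=engineers.c.person_id is what tells the
ORM which side of the single-table self-join plays the "manager" role.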
class M2MFilterTest(fixtures.MappedTest):
run_setup_mappers = 'once'
run_inserts = 'once'
run_deletes = None
@classmethod
def define_tables(cls, metadata):
organizations = Table('organizations', metadata,
Column('id', Integer,
primary_key=True,
test_needs_autoincrement=True),
Column('name', String(50)))
engineers_to_org = Table('engineers_to_org', metadata,
Column('org_id', Integer,
ForeignKey('organizations.id')),
Column('engineer_id', Integer,
ForeignKey('engineers.person_id')))
people = Table('people', metadata,
Column('person_id', Integer,
primary_key=True,
test_needs_autoincrement=True),
Column('name', String(50)),
Column('type', String(30)))
engineers = Table('engineers', metadata,
Column('person_id', Integer,
ForeignKey('people.person_id'),
primary_key=True),
Column('primary_language', String(50)))
@classmethod
def setup_mappers(cls):
organizations = cls.tables.organizations
people = cls.tables.people
engineers = cls.tables.engineers
engineers_to_org = cls.tables.engineers_to_org
class Organization(cls.Comparable):
pass
mapper(Organization, organizations,
properties={
'engineers':relationship(
Engineer,
secondary=engineers_to_org,
backref='organizations')})
mapper(Person, people,
polymorphic_on=people.c.type,
polymorphic_identity='person')
mapper(Engineer, engineers,
inherits=Person,
polymorphic_identity='engineer')
@classmethod
def insert_data(cls):
Organization = cls.classes.Organization
e1 = Engineer(name='e1')
e2 = Engineer(name='e2')
e3 = Engineer(name='e3')
e4 = Engineer(name='e4')
org1 = Organization(name='org1', engineers=[e1, e2])
org2 = Organization(name='org2', engineers=[e3, e4])
sess = create_session()
sess.add(org1)
sess.add(org2)
sess.flush()
def test_not_contains(self):
Organization = self.classes.Organization
sess = create_session()
e1 = sess.query(Person).filter(Engineer.name == 'e1').one()
eq_(sess.query(Organization)
.filter(~Organization.engineers
.of_type(Engineer)
.contains(e1))
.all(),
[Organization(name='org2')])
        # the same query without of_type() previously had a bug;
        # ensure it now produces the same result
eq_(sess.query(Organization)
.filter(~Organization.engineers
.contains(e1))
.all(),
[Organization(name='org2')])
def test_any(self):
sess = create_session()
Organization = self.classes.Organization
eq_(sess.query(Organization)
.filter(Organization.engineers
.of_type(Engineer)
.any(Engineer.name == 'e1'))
.all(),
[Organization(name='org1')])
eq_(sess.query(Organization)
.filter(Organization.engineers
.any(Engineer.name == 'e1'))
.all(),
[Organization(name='org1')])
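# The Organization.engineers.any(...) queries above filter organizations by a
# predicate over their related engineers across the many-to-many secondary
# table. Stripped of the ORM, the equivalent over an in-memory
# many-to-many looks like the sketch below; the names are hypothetical
# stand-ins for the mapped classes, for illustration only.

```python
class PlainOrganization:
    def __init__(self, name, engineers=()):
        self.name = name
        # the in-memory list stands in for the engineers_to_org secondary table
        self.engineers = list(engineers)


class PlainEngineer:
    def __init__(self, name):
        self.name = name


def orgs_where_any(orgs, predicate):
    # analogous to query(Organization)
    #     .filter(Organization.engineers.any(<predicate>))
    return [o for o in orgs if any(predicate(e) for e in o.engineers)]


org1 = PlainOrganization('org1', [PlainEngineer('e1'), PlainEngineer('e2')])
org2 = PlainOrganization('org2', [PlainEngineer('e3'), PlainEngineer('e4')])
```

The negated form, ~Organization.engineers.contains(e1), corresponds to
keeping the organizations for which the predicate holds for no member.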
class SelfReferentialM2MTest(fixtures.MappedTest, AssertsCompiledSQL):
__dialect__ = "default"
@classmethod
def define_tables(cls, metadata):
Table('secondary', metadata,
Column('left_id', Integer,
ForeignKey('parent.id'),
nullable=False),
Column('right_id', Integer,
ForeignKey('parent.id'),
nullable=False))
Table('parent', metadata,
Column('id', Integer,
primary_key=True,
test_needs_autoincrement=True),
Column('cls', String(50)))
Table('child1', metadata,
Column('id', Integer,
ForeignKey('parent.id'),
primary_key=True))
Table('child2', metadata,
Column('id', Integer,
ForeignKey('parent.id'),
primary_key=True))
@classmethod
def setup_classes(cls):
class Parent(cls.Basic):
pass
class Child1(Parent):
pass
class Child2(Parent):
pass
@classmethod
def setup_mappers(cls):
child1 = cls.tables.child1
child2 = cls.tables.child2
Parent = cls.classes.Parent
parent = cls.tables.parent
Child1 = cls.classes.Child1
Child2 = cls.classes.Child2
secondary = cls.tables.secondary
mapper(Parent, parent,
polymorphic_on=parent.c.cls)
mapper(Child1, child1,
inherits=Parent,
polymorphic_identity='child1',
properties={
'left_child2':relationship(
Child2,
secondary=secondary,
primaryjoin=parent.c.id == secondary.c.right_id,
secondaryjoin=parent.c.id == secondary.c.left_id,
uselist=False,
backref="right_children")})
mapper(Child2, child2,
inherits=Parent,
polymorphic_identity='child2')
def test_query_crit(self):
Child1, Child2 = self.classes.Child1, self.classes.Child2
sess = create_session()
c11, c12, c13 = Child1(), Child1(), Child1()
c21, c22, c23 = Child2(), Child2(), Child2()
c11.left_child2 = c22
c12.left_child2 = c22
c13.left_child2 = c23
sess.add_all([c11, c12, c13, c21, c22, c23])
sess.flush()
# test that the join to Child2 doesn't alias Child1 in the select
eq_(set(sess.query(Child1).join(Child1.left_child2)),
set([c11, c12, c13]))
eq_(set(sess.query(Child1, Child2).join(Child1.left_child2)),
set([(c11, c22), (c12, c22), (c13, c23)]))
# test __eq__() on property is annotating correctly
eq_(set(sess.query(Child2)
.join(Child2.right_children)
.filter(Child1.left_child2 == c22)),
set([c22]))
        # assert the same query compiles to the expected SQL
self.assert_compile(
sess.query(Child2)
.join(Child2.right_children)
.filter(Child1.left_child2 == c22)
.with_labels().statement,
"SELECT child2.id AS child2_id, parent.id AS parent_id, "
"parent.cls AS parent_cls FROM secondary AS secondary_1, "
"parent JOIN child2 ON parent.id = child2.id JOIN secondary AS "
"secondary_2 ON parent.id = secondary_2.left_id JOIN "
"(parent AS parent_1 JOIN child1 AS child1_1 ON parent_1.id = child1_1.id) "
"ON parent_1.id = secondary_2.right_id WHERE "
"parent_1.id = secondary_1.right_id AND :param_1 = "
"secondary_1.left_id"
)
def test_eager_join(self):
Child1, Child2 = self.classes.Child1, self.classes.Child2
sess = create_session()
c1 = Child1()
c1.left_child2 = Child2()
sess.add(c1)
sess.flush()
        # test that splicing the eager join in works here and doesn't
        # break in the middle of "parent join child1"
q = sess.query(Child1).options(joinedload('left_child2'))
self.assert_compile(q.limit(1).with_labels().statement,
"SELECT anon_1.child1_id AS anon_1_child1_id, anon_1.parent_id "
"AS anon_1_parent_id, anon_1.parent_cls AS anon_1_parent_cls, "
"child2_1.id AS child2_1_id, parent_1.id AS "
"parent_1_id, parent_1.cls AS parent_1_cls FROM "
"(SELECT child1.id AS child1_id, parent.id AS parent_id, "
"parent.cls AS parent_cls "
"FROM parent JOIN child1 ON parent.id = child1.id "
"LIMIT :param_1) AS anon_1 LEFT OUTER JOIN "
"(secondary AS secondary_1 JOIN "
"(parent AS parent_1 JOIN child2 AS child2_1 "
"ON parent_1.id = child2_1.id) ON parent_1.id = secondary_1.left_id) "
"ON anon_1.parent_id = secondary_1.right_id",
{'param_1':1})
# another way to check
assert q.limit(1).with_labels().subquery().count().scalar() == 1
assert q.first() is c1
def test_subquery_load(self):
Child1, Child2 = self.classes.Child1, self.classes.Child2
sess = create_session()
c1 = Child1()
c1.left_child2 = Child2()
sess.add(c1)
sess.flush()
sess.expunge_all()
query_ = sess.query(Child1).options(subqueryload('left_child2'))
for row in query_.all():
assert row.left_child2
class EagerToSubclassTest(fixtures.MappedTest):
"""Test eager loads to subclass mappers"""
run_setup_classes = 'once'
run_setup_mappers = 'once'
run_inserts = 'once'
run_deletes = None
@classmethod
def define_tables(cls, metadata):
Table('parent', metadata,
Column('id', Integer,
primary_key=True,
test_needs_autoincrement=True),
Column('data', String(10)))
Table('base', metadata,
Column('id', Integer,
primary_key=True,
test_needs_autoincrement=True),
Column('type', String(10)),
Column('related_id', Integer,
ForeignKey('related.id')))
Table('sub', metadata,
Column('id', Integer,
ForeignKey('base.id'),
primary_key=True),
Column('data', String(10)),
Column('parent_id', Integer,
ForeignKey('parent.id'),
nullable=False))
Table('related', metadata,
Column('id', Integer,
primary_key=True,
test_needs_autoincrement=True),
Column('data', String(10)))
@classmethod
def setup_classes(cls):
class Parent(cls.Comparable):
pass
class Base(cls.Comparable):
pass
class Sub(Base):
pass
class Related(cls.Comparable):
pass
@classmethod
def setup_mappers(cls):
sub = cls.tables.sub
Sub = cls.classes.Sub
base = cls.tables.base
Base = cls.classes.Base
parent = cls.tables.parent
Parent = cls.classes.Parent
related = cls.tables.related
Related = cls.classes.Related
mapper(Parent, parent,
properties={'children':relationship(Sub, order_by=sub.c.data)})
mapper(Base, base,
polymorphic_on=base.c.type,
polymorphic_identity='b',
properties={'related':relationship(Related)})
mapper(Sub, sub,
inherits=Base,
polymorphic_identity='s')
mapper(Related, related)
@classmethod
def insert_data(cls):
global p1, p2
Parent = cls.classes.Parent
Sub = cls.classes.Sub
Related = cls.classes.Related
sess = Session()
r1, r2 = Related(data='r1'), Related(data='r2')
s1 = Sub(data='s1', related=r1)
s2 = Sub(data='s2', related=r2)
s3 = Sub(data='s3')
s4 = Sub(data='s4', related=r2)
s5 = Sub(data='s5')
p1 = Parent(data='p1', children=[s1, s2, s3])
p2 = Parent(data='p2', children=[s4, s5])
sess.add(p1)
sess.add(p2)
sess.commit()
def test_joinedload(self):
Parent = self.classes.Parent
sess = Session()
def go():
eq_(sess.query(Parent)
.options(joinedload(Parent.children)).all(),
[p1, p2])
self.assert_sql_count(testing.db, go, 1)
def test_contains_eager(self):
Parent = self.classes.Parent
Sub = self.classes.Sub
sess = Session()
def go():
eq_(sess.query(Parent)
.join(Parent.children)
.options(contains_eager(Parent.children))
.order_by(Parent.data, Sub.data).all(),
[p1, p2])
self.assert_sql_count(testing.db, go, 1)
def test_subq_through_related(self):
Parent = self.classes.Parent
Base = self.classes.Base
sess = Session()
def go():
eq_(sess.query(Parent)
.options(subqueryload_all(Parent.children, Base.related))
.order_by(Parent.data).all(),
[p1, p2])
self.assert_sql_count(testing.db, go, 3)
def test_subq_through_related_aliased(self):
Parent = self.classes.Parent
Base = self.classes.Base
pa = aliased(Parent)
sess = Session()
def go():
eq_(sess.query(pa)
.options(subqueryload_all(pa.children, Base.related))
.order_by(pa.data).all(),
[p1, p2])
self.assert_sql_count(testing.db, go, 3)
class SubClassEagerToSubClassTest(fixtures.MappedTest):
"""Test joinedloads from subclass to subclass mappers"""
run_setup_classes = 'once'
run_setup_mappers = 'once'
run_inserts = 'once'
run_deletes = None
@classmethod
def define_tables(cls, metadata):
Table('parent', metadata,
Column('id', Integer,
primary_key=True,
test_needs_autoincrement=True),
Column('type', String(10)),
)
Table('subparent', metadata,
Column('id', Integer,
ForeignKey('parent.id'),
primary_key=True),
Column('data', String(10)),
)
Table('base', metadata,
Column('id', Integer,
primary_key=True,
test_needs_autoincrement=True),
Column('type', String(10)),
)
Table('sub', metadata,
Column('id', Integer,
ForeignKey('base.id'),
primary_key=True),
Column('data', String(10)),
Column('subparent_id', Integer,
ForeignKey('subparent.id'),
nullable=False)
)
@classmethod
def setup_classes(cls):
class Parent(cls.Comparable):
pass
class Subparent(Parent):
pass
class Base(cls.Comparable):
pass
class Sub(Base):
pass
@classmethod
def setup_mappers(cls):
sub = cls.tables.sub
Sub = cls.classes.Sub
base = cls.tables.base
Base = cls.classes.Base
parent = cls.tables.parent
Parent = cls.classes.Parent
subparent = cls.tables.subparent
Subparent = cls.classes.Subparent
mapper(Parent, parent,
polymorphic_on=parent.c.type,
polymorphic_identity='b')
mapper(Subparent, subparent,
inherits=Parent,
polymorphic_identity='s',
properties={
'children':relationship(Sub, order_by=base.c.id)})
mapper(Base, base,
polymorphic_on=base.c.type,
polymorphic_identity='b')
mapper(Sub, sub,
inherits=Base,
polymorphic_identity='s')
@classmethod
def insert_data(cls):
global p1, p2
Sub, Subparent = cls.classes.Sub, cls.classes.Subparent
sess = create_session()
p1 = Subparent(
data='p1',
children=[Sub(data='s1'), Sub(data='s2'), Sub(data='s3')])
p2 = Subparent(
data='p2',
children=[Sub(data='s4'), Sub(data='s5')])
sess.add(p1)
sess.add(p2)
sess.flush()
def test_joinedload(self):
Subparent = self.classes.Subparent
sess = create_session()
def go():
eq_(sess.query(Subparent)
.options(joinedload(Subparent.children)).all(),
[p1, p2])
self.assert_sql_count(testing.db, go, 1)
sess.expunge_all()
def go():
eq_(sess.query(Subparent)
.options(joinedload("children")).all(),
[p1, p2])
self.assert_sql_count(testing.db, go, 1)
def test_contains_eager(self):
Subparent = self.classes.Subparent
sess = create_session()
def go():
eq_(sess.query(Subparent)
.join(Subparent.children)
.options(contains_eager(Subparent.children)).all(),
[p1, p2])
self.assert_sql_count(testing.db, go, 1)
sess.expunge_all()
def go():
eq_(sess.query(Subparent)
.join(Subparent.children)
.options(contains_eager("children")).all(),
[p1, p2])
self.assert_sql_count(testing.db, go, 1)
def test_subqueryload(self):
Subparent = self.classes.Subparent
sess = create_session()
def go():
eq_(sess.query(Subparent)
.options(subqueryload(Subparent.children)).all(),
[p1, p2])
self.assert_sql_count(testing.db, go, 2)
sess.expunge_all()
def go():
eq_(sess.query(Subparent)
.options(subqueryload("children")).all(),
[p1, p2])
self.assert_sql_count(testing.db, go, 2)
class SameNamedPropTwoPolymorphicSubClassesTest(fixtures.MappedTest):
"""test pathing when two subclasses contain a different property
for the same name, and polymorphic loading is used.
#2614
"""
run_setup_classes = 'once'
run_setup_mappers = 'once'
run_inserts = 'once'
run_deletes = None
@classmethod
def define_tables(cls, metadata):
Table('a', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('type', String(10))
)
Table('b', metadata,
Column('id', Integer, ForeignKey('a.id'), primary_key=True)
)
Table('btod', metadata,
Column('bid', Integer, ForeignKey('b.id'), nullable=False),
Column('did', Integer, ForeignKey('d.id'), nullable=False)
)
Table('c', metadata,
Column('id', Integer, ForeignKey('a.id'), primary_key=True)
)
Table('ctod', metadata,
Column('cid', Integer, ForeignKey('c.id'), nullable=False),
Column('did', Integer, ForeignKey('d.id'), nullable=False)
)
Table('d', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True)
)
@classmethod
def setup_classes(cls):
class A(cls.Comparable):
pass
class B(A):
pass
class C(A):
pass
class D(cls.Comparable):
pass
@classmethod
def setup_mappers(cls):
A = cls.classes.A
B = cls.classes.B
C = cls.classes.C
D = cls.classes.D
mapper(A, cls.tables.a, polymorphic_on=cls.tables.a.c.type)
mapper(B, cls.tables.b, inherits=A, polymorphic_identity='b',
properties={
'related': relationship(D, secondary=cls.tables.btod)
})
mapper(C, cls.tables.c, inherits=A, polymorphic_identity='c',
properties={
'related': relationship(D, secondary=cls.tables.ctod)
})
mapper(D, cls.tables.d)
@classmethod
def insert_data(cls):
B = cls.classes.B
C = cls.classes.C
D = cls.classes.D
session = Session()
d = D()
session.add_all([
B(related=[d]),
C(related=[d])
])
session.commit()
def test_free_w_poly_subquery(self):
A = self.classes.A
B = self.classes.B
C = self.classes.C
D = self.classes.D
session = Session()
d = session.query(D).one()
a_poly = with_polymorphic(A, [B, C])
def go():
for a in session.query(a_poly).\
options(
subqueryload(a_poly.B.related),
subqueryload(a_poly.C.related)):
eq_(a.related, [d])
self.assert_sql_count(testing.db, go, 3)
def test_fixed_w_poly_subquery(self):
A = self.classes.A
B = self.classes.B
C = self.classes.C
D = self.classes.D
session = Session()
d = session.query(D).one()
def go():
for a in session.query(A).with_polymorphic([B, C]).\
options(subqueryload(B.related), subqueryload(C.related)):
eq_(a.related, [d])
self.assert_sql_count(testing.db, go, 3)
def test_free_w_poly_joined(self):
A = self.classes.A
B = self.classes.B
C = self.classes.C
D = self.classes.D
session = Session()
d = session.query(D).one()
a_poly = with_polymorphic(A, [B, C])
def go():
for a in session.query(a_poly).\
options(
joinedload(a_poly.B.related),
joinedload(a_poly.C.related)):
eq_(a.related, [d])
self.assert_sql_count(testing.db, go, 1)
def test_fixed_w_poly_joined(self):
A = self.classes.A
B = self.classes.B
C = self.classes.C
D = self.classes.D
session = Session()
d = session.query(D).one()
def go():
for a in session.query(A).with_polymorphic([B, C]).\
options(joinedload(B.related), joinedload(C.related)):
eq_(a.related, [d])
self.assert_sql_count(testing.db, go, 1)
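# The polymorphic_on / polymorphic_identity configuration used throughout
# these tests maps a discriminator value in the 'type' column to a subclass.
# Stripped of the ORM, that dispatch amounts to a lookup table, as in the
# sketch below; the map and loader names are invented for illustration and
# do not correspond to real SQLAlchemy internals.

```python
class BaseA:
    pass


class SubB(BaseA):
    pass


class SubC(BaseA):
    pass


# analogous to the polymorphic_identity registrations on the mappers
POLYMORPHIC_MAP = {'b': SubB, 'c': SubC}


def load_row(row):
    # pick the subclass from the discriminator column, falling back
    # to the base class for unregistered identities
    cls = POLYMORPHIC_MAP.get(row['type'], BaseA)
    obj = cls()
    obj.id = row['id']
    return obj


rows = [{'id': 1, 'type': 'b'}, {'id': 2, 'type': 'c'}, {'id': 3, 'type': 'x'}]
objs = [load_row(r) for r in rows]
```

with_polymorphic(A, [B, C]) then corresponds to eagerly selecting the
subclass tables up front so each dispatched instance is fully populated.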
class SubClassToSubClassFromParentTest(fixtures.MappedTest):
"""test #2617
"""
run_setup_classes = 'once'
run_setup_mappers = 'once'
run_inserts = 'once'
run_deletes = None
@classmethod
def define_tables(cls, metadata):
Table('z', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True)
)
Table('a', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('type', String(10)),
Column('z_id', Integer, ForeignKey('z.id'))
)
Table('b', metadata,
Column('id', Integer, ForeignKey('a.id'), primary_key=True)
)
Table('d', metadata,
Column('id', Integer, ForeignKey('a.id'), primary_key=True),
Column('b_id', Integer, ForeignKey('b.id'))
)
@classmethod
def setup_classes(cls):
class Z(cls.Comparable):
pass
class A(cls.Comparable):
pass
class B(A):
pass
class D(A):
pass
@classmethod
def setup_mappers(cls):
Z = cls.classes.Z
A = cls.classes.A
B = cls.classes.B
D = cls.classes.D
mapper(Z, cls.tables.z)
mapper(A, cls.tables.a, polymorphic_on=cls.tables.a.c.type,
with_polymorphic='*',
properties={
'zs': relationship(Z, lazy="subquery")
})
mapper(B, cls.tables.b, inherits=A, polymorphic_identity='b',
properties={
'related': relationship(D, lazy="subquery",
primaryjoin=cls.tables.d.c.b_id ==
cls.tables.b.c.id)
})
mapper(D, cls.tables.d, inherits=A, polymorphic_identity='d')
@classmethod
def insert_data(cls):
B = cls.classes.B
session = Session()
session.add(B())
session.commit()
def test_2617(self):
A = self.classes.A
session = Session()
def go():
a1 = session.query(A).first()
eq_(a1.related, [])
self.assert_sql_count(testing.db, go, 3)
class SubClassToSubClassMultiTest(AssertsCompiledSQL, fixtures.MappedTest):
"""
Two different joined-inh subclasses, led by a
parent, with two distinct endpoints:
parent -> subcl1 -> subcl2 -> (ep1, ep2)
the join to ep2 indicates we need to join
from the middle of the joinpoint, skipping ep1
"""
run_create_tables = None
run_deletes = None
__dialect__ = 'default'
@classmethod
def define_tables(cls, metadata):
Table('parent', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('data', String(30))
)
Table('base1', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('data', String(30))
)
Table('sub1', metadata,
Column('id', Integer, ForeignKey('base1.id'), primary_key=True),
Column('parent_id', ForeignKey('parent.id')),
Column('subdata', String(30))
)
Table('base2', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('base1_id', ForeignKey('base1.id')),
Column('data', String(30))
)
Table('sub2', metadata,
Column('id', Integer, ForeignKey('base2.id'), primary_key=True),
Column('subdata', String(30))
)
Table('ep1', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('base2_id', Integer, ForeignKey('base2.id')),
Column('data', String(30))
)
Table('ep2', metadata,
Column('id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('base2_id', Integer, ForeignKey('base2.id')),
Column('data', String(30))
)
@classmethod
def setup_classes(cls):
class Parent(cls.Comparable):
pass
class Base1(cls.Comparable):
pass
class Sub1(Base1):
pass
class Base2(cls.Comparable):
pass
class Sub2(Base2):
pass
class EP1(cls.Comparable):
pass
class EP2(cls.Comparable):
pass
@classmethod
def _classes(cls):
return cls.classes.Parent, cls.classes.Base1,\
cls.classes.Base2, cls.classes.Sub1,\
cls.classes.Sub2, cls.classes.EP1,\
cls.classes.EP2
@classmethod
def setup_mappers(cls):
Parent, Base1, Base2, Sub1, Sub2, EP1, EP2 = cls._classes()
mapper(Parent, cls.tables.parent, properties={
'sub1': relationship(Sub1)
})
mapper(Base1, cls.tables.base1, properties={
'sub2': relationship(Sub2)
})
mapper(Sub1, cls.tables.sub1, inherits=Base1)
mapper(Base2, cls.tables.base2, properties={
'ep1': relationship(EP1),
'ep2': relationship(EP2)
})
mapper(Sub2, cls.tables.sub2, inherits=Base2)
mapper(EP1, cls.tables.ep1)
mapper(EP2, cls.tables.ep2)
def test_one(self):
Parent, Base1, Base2, Sub1, Sub2, EP1, EP2 = self._classes()
s = Session()
self.assert_compile(
s.query(Parent).join(Parent.sub1, Sub1.sub2).
join(Sub2.ep1).
join(Sub2.ep2),
"SELECT parent.id AS parent_id, parent.data AS parent_data "
"FROM parent JOIN (base1 JOIN sub1 ON base1.id = sub1.id) "
"ON parent.id = sub1.parent_id JOIN "
"(base2 JOIN sub2 "
"ON base2.id = sub2.id) "
"ON base1.id = base2.base1_id "
"JOIN ep1 ON base2.id = ep1.base2_id "
"JOIN ep2 ON base2.id = ep2.base2_id"
)
def test_two(self):
Parent, Base1, Base2, Sub1, Sub2, EP1, EP2 = self._classes()
s2a = aliased(Sub2, flat=True)
s = Session()
self.assert_compile(
s.query(Parent).join(Parent.sub1).
join(s2a, Sub1.sub2),
"SELECT parent.id AS parent_id, parent.data AS parent_data "
"FROM parent JOIN (base1 JOIN sub1 ON base1.id = sub1.id) "
"ON parent.id = sub1.parent_id JOIN "
"(base2 AS base2_1 JOIN sub2 AS sub2_1 "
"ON base2_1.id = sub2_1.id) "
"ON base1.id = base2_1.base1_id"
)
def test_three(self):
Parent, Base1, Base2, Sub1, Sub2, EP1, EP2 = self._classes()
s = Session()
self.assert_compile(
s.query(Base1).join(Base1.sub2).
join(Sub2.ep1).\
join(Sub2.ep2),
"SELECT base1.id AS base1_id, base1.data AS base1_data "
"FROM base1 JOIN (base2 JOIN sub2 "
"ON base2.id = sub2.id) ON base1.id = "
"base2.base1_id "
"JOIN ep1 ON base2.id = ep1.base2_id "
"JOIN ep2 ON base2.id = ep2.base2_id"
)
def test_four(self):
Parent, Base1, Base2, Sub1, Sub2, EP1, EP2 = self._classes()
s = Session()
self.assert_compile(
s.query(Sub2).join(Base1, Base1.id == Sub2.base1_id).
join(Sub2.ep1).\
join(Sub2.ep2),
"SELECT sub2.id AS sub2_id, base2.id AS base2_id, "
"base2.base1_id AS base2_base1_id, base2.data AS base2_data, "
"sub2.subdata AS sub2_subdata "
"FROM base2 JOIN sub2 ON base2.id = sub2.id "
"JOIN base1 ON base1.id = base2.base1_id "
"JOIN ep1 ON base2.id = ep1.base2_id "
"JOIN ep2 ON base2.id = ep2.base2_id"
)
def test_five(self):
Parent, Base1, Base2, Sub1, Sub2, EP1, EP2 = self._classes()
s = Session()
self.assert_compile(
s.query(Sub2).join(Sub1, Sub1.id == Sub2.base1_id).
join(Sub2.ep1).\
join(Sub2.ep2),
"SELECT sub2.id AS sub2_id, base2.id AS base2_id, "
"base2.base1_id AS base2_base1_id, base2.data AS base2_data, "
"sub2.subdata AS sub2_subdata "
"FROM base2 JOIN sub2 ON base2.id = sub2.id "
"JOIN "
"(base1 JOIN sub1 ON base1.id = sub1.id) "
"ON sub1.id = base2.base1_id "
"JOIN ep1 ON base2.id = ep1.base2_id "
"JOIN ep2 ON base2.id = ep2.base2_id"
)
def test_six(self):
Parent, Base1, Base2, Sub1, Sub2, EP1, EP2 = self._classes()
s = Session()
self.assert_compile(
s.query(Sub2).from_self().\
join(Sub2.ep1).
join(Sub2.ep2),
"SELECT anon_1.sub2_id AS anon_1_sub2_id, "
"anon_1.base2_id AS anon_1_base2_id, "
"anon_1.base2_base1_id AS anon_1_base2_base1_id, "
"anon_1.base2_data AS anon_1_base2_data, "
"anon_1.sub2_subdata AS anon_1_sub2_subdata "
"FROM (SELECT sub2.id AS sub2_id, base2.id AS base2_id, "
"base2.base1_id AS base2_base1_id, base2.data AS base2_data, "
"sub2.subdata AS sub2_subdata "
"FROM base2 JOIN sub2 ON base2.id = sub2.id) AS anon_1 "
"JOIN ep1 ON anon_1.base2_id = ep1.base2_id "
"JOIN ep2 ON anon_1.base2_id = ep2.base2_id"
)
def test_seven(self):
Parent, Base1, Base2, Sub1, Sub2, EP1, EP2 = self._classes()
s = Session()
self.assert_compile(
# adding Sub2 to the entities list helps it,
# otherwise the joins for Sub2.ep1/ep2 don't have columns
# to latch onto. Can't really make it better than this
s.query(Parent, Sub2).join(Parent.sub1).\
join(Sub1.sub2).from_self().\
join(Sub2.ep1).
join(Sub2.ep2),
"SELECT anon_1.parent_id AS anon_1_parent_id, "
"anon_1.parent_data AS anon_1_parent_data, "
"anon_1.sub2_id AS anon_1_sub2_id, "
"anon_1.base2_id AS anon_1_base2_id, "
"anon_1.base2_base1_id AS anon_1_base2_base1_id, "
"anon_1.base2_data AS anon_1_base2_data, "
"anon_1.sub2_subdata AS anon_1_sub2_subdata "
"FROM (SELECT parent.id AS parent_id, parent.data AS parent_data, "
"sub2.id AS sub2_id, "
"base2.id AS base2_id, "
"base2.base1_id AS base2_base1_id, "
"base2.data AS base2_data, "
"sub2.subdata AS sub2_subdata "
"FROM parent JOIN (base1 JOIN sub1 ON base1.id = sub1.id) "
"ON parent.id = sub1.parent_id JOIN "
"(base2 JOIN sub2 ON base2.id = sub2.id) "
"ON base1.id = base2.base1_id) AS anon_1 "
"JOIN ep1 ON anon_1.base2_id = ep1.base2_id "
"JOIN ep2 ON anon_1.base2_id = ep2.base2_id"
)
class JoinAcrossJoinedInhMultiPath(fixtures.DeclarativeMappedTest,
testing.AssertsCompiledSQL):
"""test long join paths with a joined-inh in the middle, where we go multiple
times across the same joined-inh to the same target but with other classes
in the middle. E.g. test [ticket:2908]
"""
run_setup_mappers = 'once'
__dialect__ = 'default'
@classmethod
def setup_classes(cls):
Base = cls.DeclarativeBasic
class Root(Base):
__tablename__ = 'root'
id = Column(Integer, primary_key=True)
sub1_id = Column(Integer, ForeignKey('sub1.id'))
intermediate = relationship("Intermediate")
sub1 = relationship("Sub1")
class Intermediate(Base):
__tablename__ = 'intermediate'
id = Column(Integer, primary_key=True)
sub1_id = Column(Integer, ForeignKey('sub1.id'))
root_id = Column(Integer, ForeignKey('root.id'))
sub1 = relationship("Sub1")
class Parent(Base):
__tablename__ = 'parent'
id = Column(Integer, primary_key=True)
class Sub1(Parent):
__tablename__ = 'sub1'
id = Column(Integer, ForeignKey('parent.id'),
primary_key=True)
target = relationship("Target")
class Target(Base):
__tablename__ = 'target'
id = Column(Integer, primary_key=True)
sub1_id = Column(Integer, ForeignKey('sub1.id'))
def test_join(self):
Root, Intermediate, Sub1, Target = \
self.classes.Root, self.classes.Intermediate, \
self.classes.Sub1, self.classes.Target
s1_alias = aliased(Sub1)
s2_alias = aliased(Sub1)
t1_alias = aliased(Target)
t2_alias = aliased(Target)
sess = Session()
q = sess.query(Root).\
join(s1_alias, Root.sub1).join(t1_alias, s1_alias.target).\
join(Root.intermediate).join(s2_alias, Intermediate.sub1).\
join(t2_alias, s2_alias.target)
self.assert_compile(q,
"SELECT root.id AS root_id, root.sub1_id AS root_sub1_id "
"FROM root "
"JOIN (SELECT parent.id AS parent_id, sub1.id AS sub1_id "
"FROM parent JOIN sub1 ON parent.id = sub1.id) AS anon_1 "
"ON anon_1.sub1_id = root.sub1_id "
"JOIN target AS target_1 ON anon_1.sub1_id = target_1.sub1_id "
"JOIN intermediate ON root.id = intermediate.root_id "
"JOIN (SELECT parent.id AS parent_id, sub1.id AS sub1_id "
"FROM parent JOIN sub1 ON parent.id = sub1.id) AS anon_2 "
"ON anon_2.sub1_id = intermediate.sub1_id "
"JOIN target AS target_2 ON anon_2.sub1_id = target_2.sub1_id")
def test_join_flat(self):
Root, Intermediate, Sub1, Target = \
self.classes.Root, self.classes.Intermediate, \
self.classes.Sub1, self.classes.Target
s1_alias = aliased(Sub1, flat=True)
s2_alias = aliased(Sub1, flat=True)
t1_alias = aliased(Target)
t2_alias = aliased(Target)
sess = Session()
q = sess.query(Root).\
join(s1_alias, Root.sub1).join(t1_alias, s1_alias.target).\
join(Root.intermediate).join(s2_alias, Intermediate.sub1).\
join(t2_alias, s2_alias.target)
self.assert_compile(q,
"SELECT root.id AS root_id, root.sub1_id AS root_sub1_id "
"FROM root "
"JOIN (parent AS parent_1 JOIN sub1 AS sub1_1 ON parent_1.id = sub1_1.id) "
"ON sub1_1.id = root.sub1_id "
"JOIN target AS target_1 ON sub1_1.id = target_1.sub1_id "
"JOIN intermediate ON root.id = intermediate.root_id "
"JOIN (parent AS parent_2 JOIN sub1 AS sub1_2 ON parent_2.id = sub1_2.id) "
"ON sub1_2.id = intermediate.sub1_id "
"JOIN target AS target_2 ON sub1_2.id = target_2.sub1_id"
)
def test_joinedload(self):
Root, Intermediate, Sub1, Target = \
self.classes.Root, self.classes.Intermediate, \
self.classes.Sub1, self.classes.Target
sess = Session()
q = sess.query(Root).\
options(
joinedload(Root.sub1).joinedload(Sub1.target),
joinedload(Root.intermediate).joinedload(Intermediate.sub1).\
joinedload(Sub1.target),
)
self.assert_compile(q,
"SELECT root.id AS root_id, root.sub1_id AS root_sub1_id, "
"target_1.id AS target_1_id, target_1.sub1_id AS target_1_sub1_id, "
"sub1_1.id AS sub1_1_id, parent_1.id AS parent_1_id, "
"intermediate_1.id AS intermediate_1_id, "
"intermediate_1.sub1_id AS intermediate_1_sub1_id, "
"intermediate_1.root_id AS intermediate_1_root_id, "
"target_2.id AS target_2_id, target_2.sub1_id AS target_2_sub1_id, "
"sub1_2.id AS sub1_2_id, parent_2.id AS parent_2_id "
"FROM root "
"LEFT OUTER JOIN intermediate AS intermediate_1 "
"ON root.id = intermediate_1.root_id "
"LEFT OUTER JOIN (parent AS parent_1 JOIN sub1 AS sub1_1 "
"ON parent_1.id = sub1_1.id) ON sub1_1.id = intermediate_1.sub1_id "
"LEFT OUTER JOIN target AS target_1 ON sub1_1.id = target_1.sub1_id "
"LEFT OUTER JOIN (parent AS parent_2 JOIN sub1 AS sub1_2 "
"ON parent_2.id = sub1_2.id) ON sub1_2.id = root.sub1_id "
"LEFT OUTER JOIN target AS target_2 ON sub1_2.id = target_2.sub1_id")
class MultipleAdaptUsesEntityOverTableTest(AssertsCompiledSQL, fixtures.MappedTest):
__dialect__ = 'default'
run_create_tables = None
run_deletes = None
@classmethod
def define_tables(cls, metadata):
Table('a', metadata,
Column('id', Integer, primary_key=True),
Column('name', String)
)
Table('b', metadata,
Column('id', Integer, ForeignKey('a.id'), primary_key=True)
)
Table('c', metadata,
Column('id', Integer, ForeignKey('a.id'), primary_key=True),
Column('bid', Integer, ForeignKey('b.id'))
)
Table('d', metadata,
Column('id', Integer, ForeignKey('a.id'), primary_key=True),
Column('cid', Integer, ForeignKey('c.id'))
)
@classmethod
def setup_classes(cls):
class A(cls.Comparable):
pass
class B(A):
pass
class C(A):
pass
class D(A):
pass
@classmethod
def setup_mappers(cls):
A, B, C, D = cls.classes.A, cls.classes.B, cls.classes.C, cls.classes.D
a, b, c, d = cls.tables.a, cls.tables.b, cls.tables.c, cls.tables.d
mapper(A, a)
mapper(B, b, inherits=A)
mapper(C, c, inherits=A)
mapper(D, d, inherits=A)
def _two_join_fixture(self):
A, B, C, D = self.classes.A, self.classes.B, self.classes.C, self.classes.D
s = Session()
return s.query(B.name, C.name, D.name).select_from(B).\
join(C, C.bid == B.id).\
join(D, D.cid == C.id)
def test_two_joins_adaption(self):
a, b, c, d = self.tables.a, self.tables.b, self.tables.c, self.tables.d
q = self._two_join_fixture()
btoc = q._from_obj[0].left
ac_adapted = btoc.right.element.left
c_adapted = btoc.right.element.right
is_(ac_adapted.element, a)
is_(c_adapted.element, c)
ctod = q._from_obj[0].right
ad_adapted = ctod.left
d_adapted = ctod.right
is_(ad_adapted.element, a)
is_(d_adapted.element, d)
bname, cname, dname = q._entities
b_name_adapted = bname._resolve_expr_against_query_aliases(
q, bname.column, None)
c_name_adapted = cname._resolve_expr_against_query_aliases(
q, cname.column, None)
d_name_adapted = dname._resolve_expr_against_query_aliases(
q, dname.column, None)
assert bool(b_name_adapted == a.c.name)
assert bool(c_name_adapted == ac_adapted.c.name)
assert bool(d_name_adapted == ad_adapted.c.name)
def test_two_joins_sql(self):
q = self._two_join_fixture()
self.assert_compile(q,
"SELECT a.name AS a_name, a_1.name AS a_1_name, "
"a_2.name AS a_2_name "
"FROM a JOIN b ON a.id = b.id JOIN "
"(a AS a_1 JOIN c AS c_1 ON a_1.id = c_1.id) ON c_1.bid = b.id "
"JOIN (a AS a_2 JOIN d AS d_1 ON a_2.id = d_1.id) "
"ON d_1.cid = c_1.id"
) | unknown | codeparrot/codeparrot-clean | ||
DOCUMENTATION:
name: link
author: Ansible Core
version_added: "2.5"
short_description: does the path reference existing symbolic link
aliases: [islink]
description:
- Check if the provided path maps to an existing symlink on the controller's filesystem (localhost).
options:
_input:
description: A path.
type: path
EXAMPLES: |
ismyhostsalink: "{{ '/etc/hosts' is link }}"
list_of_symlinks: "{{ list_of_paths | select('link') }}"
RETURN:
_value:
description: Returns V(True) if the path corresponds to an existing symlink on the filesystem on the controller, V(False) if otherwise.
type: boolean | unknown | github | https://github.com/ansible/ansible | lib/ansible/plugins/test/is_link.yml |
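Under the hood the `link` test reduces to a symlink check on the controller's filesystem; a hedged stdlib sketch (my own illustration, not the plugin's actual source):

```python
import os
import tempfile

# Create a regular file and a symlink to it, then check both paths the way
# the `link` test would on the controller's filesystem.
d = tempfile.mkdtemp()
target = os.path.join(d, "hosts")
open(target, "w").close()
ln = os.path.join(d, "hosts.ln")
os.symlink(target, ln)

is_link = os.path.islink(ln)       # the symlink itself -> True
is_plain = os.path.islink(target)  # the regular file   -> False
assert is_link and not is_plain
```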
"""Reproduce an input boolean state."""
import asyncio
import logging
from typing import Any, Dict, Iterable, Optional
from homeassistant.const import (
ATTR_ENTITY_ID,
SERVICE_TURN_OFF,
SERVICE_TURN_ON,
STATE_OFF,
STATE_ON,
)
from homeassistant.core import Context, State
from homeassistant.helpers.typing import HomeAssistantType
from . import DOMAIN
_LOGGER = logging.getLogger(__name__)
async def _async_reproduce_states(
hass: HomeAssistantType,
state: State,
*,
context: Optional[Context] = None,
reproduce_options: Optional[Dict[str, Any]] = None,
) -> None:
"""Reproduce input boolean states."""
cur_state = hass.states.get(state.entity_id)
if cur_state is None:
_LOGGER.warning("Unable to find entity %s", state.entity_id)
return
if state.state not in (STATE_ON, STATE_OFF):
_LOGGER.warning(
"Invalid state specified for %s: %s", state.entity_id, state.state
)
return
if cur_state.state == state.state:
return
service = SERVICE_TURN_ON if state.state == STATE_ON else SERVICE_TURN_OFF
await hass.services.async_call(
DOMAIN,
service,
{ATTR_ENTITY_ID: state.entity_id},
context=context,
blocking=True,
)
async def async_reproduce_states(
hass: HomeAssistantType,
states: Iterable[State],
*,
context: Optional[Context] = None,
reproduce_options: Optional[Dict[str, Any]] = None,
) -> None:
"""Reproduce component states."""
await asyncio.gather(
*(
_async_reproduce_states(
hass, state, context=context, reproduce_options=reproduce_options
)
for state in states
)
) | unknown | codeparrot/codeparrot-clean | ||
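The decision logic in `_async_reproduce_states` reduces to a small pure function; a standalone sketch (not Home Assistant code, the function name is mine):

```python
# Maps a (current, target) state pair to the service that reproduces the
# target: no-op when already matching, error for states other than on/off.
def pick_service(current, target):
    if target not in ("on", "off"):
        raise ValueError(f"invalid state: {target}")
    if current == target:
        return None  # already in the desired state, no service call needed
    return "turn_on" if target == "on" else "turn_off"

assert pick_service("off", "on") == "turn_on"
assert pick_service("on", "on") is None
```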
import pkgutil
import pytest
from ..csuconf import CSUConf
from ..csuconf import CSUBarModel, CSUBarModelL, CSUBarModelR
from ..csuconf import PhysicalBar, PhysicalBarL, PhysicalBarR
from ..csuconf import LogicalSlit, TargetType
from ..csuconf import create_bar_models, read_csu_from_header
from ..csuconf import merge_slits
from ..csuconf import EMIR_NBARS
import numpy
try:
import StringIO as S
except ImportError:
import io as S
def create_test_header0():
hdr = {}
for i in range(55):
hdr["CSUP{}".format(i + 1)] = -100
for i in range(55, 110):
hdr["CSUP{}".format(i + 1)] = -100
return hdr
def create_test_header1():
hdr = create_test_header0()
hdr["SLIFL12"] = 2
hdr["SLIFL13"] = 2
hdr["SLIFL33"] = 2
hdr["SLIFL44"] = 2
hdr["SLIFL45"] = 2
hdr["XRSLI12"] = 200.1
hdr["XRSLI13"] = 200.1
hdr["YRSLI12"] = 300.1
hdr["YRSLI13"] = 300.1
hdr["XRSLI33"] = 1300.1
hdr["YRSLI33"] = 1300.1
hdr["XRSLI44"] = 1900.1
hdr["YRSLI44"] = 1850.1
hdr["XRSLI45"] = 1900.1
hdr["YRSLI45"] = 1850.1
#
hdr["XVSLI12"] = 200.1
hdr["XVSLI13"] = 200.1
hdr["YVSLI12"] = 300.1
hdr["YVSLI13"] = 300.1
#
hdr["XRSLI33"] = 1300.1
hdr["YRSLI33"] = 1300.1
hdr["XRSLI44"] = 1900.1
hdr["YRSLI44"] = 1850.1
hdr["XRSLI45"] = 1900.1
hdr["YRSLI45"] = 1850.1
return hdr
@pytest.mark.parametrize("hdr, nslits",[(create_test_header0(), 55), (create_test_header1(), 53)])
def test_csubar(hdr, nslits):
dumdata = pkgutil.get_data('emirdrp.instrument.configs', 'bars_nominal_positions_test.txt')
ss = S.StringIO(dumdata.decode('utf8'))
bars_nominal_positions = numpy.loadtxt(ss)
barmodel = create_bar_models(bars_nominal_positions)
csu_conf = read_csu_from_header(barmodel, hdr)
assert len(csu_conf.slits) == nslits
@pytest.mark.parametrize("hdr, nslits",[(create_test_header0(), 55), (create_test_header1(), 53)])
def test_merge_bars(hdr, nslits):
mm = []
for idx in range(1, EMIR_NBARS + 1):
# References from header
try:
slit_t = hdr["SLIFL%d" % idx]
target_type = TargetType(slit_t)
except KeyError:
target_type = TargetType.UNKNOWN
xref = hdr.get("XRSLI%d" % idx, -100) - 1
yref = hdr.get("YRSLI%d" % idx, -100) - 1
target_coordinates = (xref, yref)
xref = hdr.get("XVSLI%d" % idx, -100) - 1
yref = hdr.get("YVSLI%d" % idx, -100) - 1
target_coordinates_v = (xref, yref)
mm.append((idx, target_type, target_coordinates, target_coordinates_v))
bag = merge_slits(mm, max_slits=3, tol=1e-2)
assert len(bag) == nslits
def test_csuconf1():
dumdata = pkgutil.get_data('emirdrp.instrument.configs', 'bars_nominal_positions_test.txt')
ss = S.StringIO(dumdata.decode('utf8'))
bars_nominal_positions = numpy.loadtxt(ss)
hdr = create_test_header1()
barmodel = create_bar_models(bars_nominal_positions)
csu_conf = read_csu_from_header(barmodel, hdr)
assert csu_conf.is_open() | unknown | codeparrot/codeparrot-clean | ||
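The per-bar header convention these tests exercise (keys like `CSUP1`..`CSUP110`, with `-100` as the sentinel for missing values) can be sketched generically; the helper name here is my own, not emirdrp API:

```python
# Read per-bar values "<PREFIX><n>" for n in 1..nbars, falling back to the
# sentinel used by the tests above when a key is absent from the header.
def read_bar_values(hdr, prefix="CSUP", nbars=4, default=-100):
    return [hdr.get(f"{prefix}{i}", default) for i in range(1, nbars + 1)]

assert read_bar_values({"CSUP1": 5, "CSUP3": 7}) == [5, -100, 7, -100]
```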
import collections
import json
import re
from functools import partial
from itertools import chain
from django.core.exceptions import EmptyResultSet, FieldError, FullResultSet
from django.db import DatabaseError, NotSupportedError
from django.db.models.constants import LOOKUP_SEP
from django.db.models.expressions import ColPairs, F, OrderBy, RawSQL, Ref, Value
from django.db.models.fields import AutoField, composite
from django.db.models.functions import Cast, Random
from django.db.models.lookups import Lookup
from django.db.models.query_utils import select_related_descend
from django.db.models.sql.constants import (
CURSOR,
GET_ITERATOR_CHUNK_SIZE,
MULTI,
NO_RESULTS,
ORDER_DIR,
ROW_COUNT,
SINGLE,
)
from django.db.models.sql.query import Query, get_order_dir
from django.db.transaction import TransactionManagementError
from django.utils.functional import cached_property
from django.utils.hashable import make_hashable
from django.utils.regex_helper import _lazy_re_compile
class PositionRef(Ref):
def __init__(self, ordinal, refs, source):
self.ordinal = ordinal
super().__init__(refs, source)
def as_sql(self, compiler, connection):
return str(self.ordinal), ()
class SQLCompiler:
# Multiline ordering SQL clause may appear from RawSQL.
ordering_parts = _lazy_re_compile(
r"^(.*)\s(?:ASC|DESC).*",
re.MULTILINE | re.DOTALL,
)
def __init__(self, query, connection, using, elide_empty=True):
self.query = query
self.connection = connection
self.using = using
# Some queries, e.g. coalesced aggregation, need to be executed even if
# they would return an empty result set.
self.elide_empty = elide_empty
self.quote_cache = {"*": "*"}
# The select, klass_info, and annotations are needed by
# QuerySet.iterator() these are set as a side-effect of executing the
# query. Note that we calculate separately a list of extra select
# columns needed for grammatical correctness of the query, but these
# columns are not included in self.select.
self.select = None
self.annotation_col_map = None
self.klass_info = None
self._meta_ordering = None
def __repr__(self):
return (
f"<{self.__class__.__qualname__} "
f"model={self.query.model.__qualname__} "
f"connection={self.connection!r} using={self.using!r}>"
)
def setup_query(self, with_col_aliases=False):
if all(self.query.alias_refcount[a] == 0 for a in self.query.alias_map):
self.query.get_initial_alias()
self.select, self.klass_info, self.annotation_col_map = self.get_select(
with_col_aliases=with_col_aliases,
)
self.col_count = len(self.select)
def pre_sql_setup(self, with_col_aliases=False):
"""
Do any necessary class setup immediately prior to producing SQL. This
is for things that can't necessarily be done in __init__ because we
might not have all the pieces in place at that time.
"""
self.setup_query(with_col_aliases=with_col_aliases)
order_by = self.get_order_by()
self.where, self.having, self.qualify = self.query.where.split_having_qualify(
must_group_by=self.query.group_by is not None
)
extra_select = self.get_extra_select(order_by, self.select)
self.has_extra_select = bool(extra_select)
group_by = self.get_group_by(self.select + extra_select, order_by)
return extra_select, order_by, group_by
def get_group_by(self, select, order_by):
"""
Return a list of 2-tuples of form (sql, params).
The logic of what exactly the GROUP BY clause contains is hard
to describe in other words than "if it passes the test suite,
then it is correct".
"""
# Some examples:
# SomeModel.objects.annotate(Count('somecol'))
# GROUP BY: all fields of the model
#
# SomeModel.objects.values('name').annotate(Count('somecol'))
# GROUP BY: name
#
# SomeModel.objects.annotate(Count('somecol')).values('name')
# GROUP BY: all cols of the model
#
# SomeModel.objects.values('name', 'pk')
# .annotate(Count('somecol')).values('pk')
# GROUP BY: name, pk
#
# SomeModel.objects.values('name').annotate(Count('somecol')).values('pk')
# GROUP BY: name, pk
#
# In fact, the self.query.group_by is the minimal set to GROUP BY. It
# can't be ever restricted to a smaller set, but additional columns in
# HAVING, ORDER BY, and SELECT clauses are added to it. Unfortunately
# the end result is that it is impossible to force the query to have
# a chosen GROUP BY clause - you can almost do this by using the form:
# .values(*wanted_cols).annotate(AnAggregate())
# but any later annotations, extra selects, values calls that
# refer some column outside of the wanted_cols, order_by, or even
# filter calls can alter the GROUP BY clause.
# The query.group_by is either None (no GROUP BY at all), True
# (group by select fields), or a list of expressions to be added
# to the group by.
if self.query.group_by is None:
return []
expressions = []
group_by_refs = set()
if self.query.group_by is not True:
# If the group by is set to a list (by .values() call most likely),
# then we need to add everything in it to the GROUP BY clause.
# Backwards compatibility hack for setting query.group_by. Remove
# when we have public API way of forcing the GROUP BY clause.
# Converts string references to expressions.
for expr in self.query.group_by:
if not hasattr(expr, "as_sql"):
expr = self.query.resolve_ref(expr)
if isinstance(expr, Ref):
if expr.refs not in group_by_refs:
group_by_refs.add(expr.refs)
expressions.append(expr.source)
else:
expressions.append(expr)
# Note that even if the group_by is set, it is only the minimal
# set to group by. So, we need to add cols in select, order_by, and
# having into the select in any case.
selected_expr_positions = {}
for ordinal, (expr, _, alias) in enumerate(select, start=1):
if alias:
selected_expr_positions[expr] = ordinal
# Skip members of the select clause that are already explicitly
# grouped against.
if alias in group_by_refs:
continue
expressions.extend(expr.get_group_by_cols())
if not self._meta_ordering:
for expr, (sql, params, is_ref) in order_by:
# Skip references to the SELECT clause, as all expressions in
# the SELECT clause are already part of the GROUP BY.
if not is_ref:
expressions.extend(expr.get_group_by_cols())
having_group_by = self.having.get_group_by_cols() if self.having else ()
for expr in having_group_by:
expressions.append(expr)
result = []
seen = set()
expressions = self.collapse_group_by(expressions, having_group_by)
allows_group_by_select_index = (
self.connection.features.allows_group_by_select_index
)
for expr in expressions:
try:
sql, params = self.compile(expr)
except (EmptyResultSet, FullResultSet):
continue
if (
allows_group_by_select_index
and (position := selected_expr_positions.get(expr)) is not None
):
sql, params = str(position), ()
else:
sql, params = expr.select_format(self, sql, params)
params_hash = make_hashable(params)
if (sql, params_hash) not in seen:
result.append((sql, params))
seen.add((sql, params_hash))
return result
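The positional GROUP BY branch above (`allows_group_by_select_index`) can be illustrated outside Django with plain `sqlite3`; this is my own sketch, not Django code:

```python
import sqlite3

# Sketch (not Django code) of the optimization get_group_by() applies when a
# backend supports grouping by select index: instead of repeating the full
# selected expression, its 1-based ordinal position is emitted.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (genre TEXT, price INTEGER)")
conn.executemany(
    "INSERT INTO book VALUES (?, ?)", [("sf", 10), ("sf", 12), ("bio", 8)]
)

# Grouping by the repeated expression...
by_expr = conn.execute(
    "SELECT UPPER(genre), COUNT(*) FROM book GROUP BY UPPER(genre) ORDER BY 1"
).fetchall()
# ...is equivalent to grouping by the position of the select column.
by_index = conn.execute(
    "SELECT UPPER(genre), COUNT(*) FROM book GROUP BY 1 ORDER BY 1"
).fetchall()
assert by_expr == by_index == [("BIO", 1), ("SF", 2)]
```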
def collapse_group_by(self, expressions, having):
# If the database supports group by functional dependence reduction,
# then the expressions can be reduced to the set of selected table
# primary keys as all other columns are functionally dependent on them.
if self.connection.features.allows_group_by_selected_pks:
# Filter out all expressions associated with a table's primary key
# present in the grouped columns. This is done by identifying all
# tables that have their primary key included in the grouped
# columns and removing non-primary key columns referring to them.
# Unmanaged models are excluded because they could be representing
# database views on which the optimization might not be allowed.
pks = {
expr
for expr in expressions
if (
hasattr(expr, "target")
and expr.target.primary_key
and self.connection.features.allows_group_by_selected_pks_on_model(
expr.target.model
)
)
}
aliases = {expr.alias for expr in pks}
expressions = [
expr
for expr in expressions
if expr in pks
or expr in having
or getattr(expr, "alias", None) not in aliases
]
return expressions
@classmethod
def get_select_from_parent(cls, klass_info):
for ki in klass_info["related_klass_infos"]:
if ki["from_parent"]:
ki["select_fields"] = klass_info["select_fields"] + ki["select_fields"]
cls.get_select_from_parent(ki)
def get_select(self, with_col_aliases=False):
"""
Return three values:
- a list of 3-tuples of (expression, (sql, params), alias)
- a klass_info structure,
- a dictionary of annotations
The (sql, params) is what the expression will produce, and alias is the
"AS alias" for the column (possibly None).
The klass_info structure contains the following information:
- The base model of the query.
- Which columns for that model are present in the query (by
position of the select clause).
- related_klass_infos: [f, klass_info] to descend into
The annotations is a dictionary of {'attname': column position} values.
"""
select = []
klass_info = None
annotations = {}
assert not (self.query.select and self.query.default_cols)
select_mask = self.query.get_select_mask()
if self.query.default_cols:
cols = self.get_default_columns(select_mask)
else:
# self.query.select is a special case. These columns never go to
# any model.
cols = self.query.select
selected = []
select_fields = None
if self.query.selected is None:
selected = [
*(
(alias, RawSQL(*args))
for alias, args in self.query.extra_select.items()
),
*((None, col) for col in cols),
*self.query.annotation_select.items(),
]
select_fields = list(
range(
len(self.query.extra_select),
len(self.query.extra_select) + len(cols),
)
)
else:
select_fields = []
for index, (alias, expression) in enumerate(self.query.selected.items()):
# Reference to an annotation.
if isinstance(expression, str):
expression = self.query.annotations[expression]
# Reference to a column.
elif isinstance(expression, int):
select_fields.append(index)
expression = cols[expression]
# ColPairs cannot be aliased.
if isinstance(expression, ColPairs):
alias = None
selected.append((alias, expression))
if select_fields:
klass_info = {"model": self.query.model, "select_fields": select_fields}
for select_idx, (alias, expression) in enumerate(selected):
if alias:
annotations[alias] = select_idx
select.append((expression, alias))
if self.query.select_related:
related_klass_infos = self.get_related_selections(select, select_mask)
klass_info["related_klass_infos"] = related_klass_infos
self.get_select_from_parent(klass_info)
ret = []
col_idx = 1
for col, alias in select:
try:
sql, params = self.compile(col)
except EmptyResultSet:
empty_result_set_value = getattr(
col, "empty_result_set_value", NotImplemented
)
if empty_result_set_value is NotImplemented:
# Select a predicate that's always False.
sql, params = "0", ()
else:
sql, params = self.compile(Value(empty_result_set_value))
except FullResultSet:
sql, params = self.compile(Value(True))
else:
sql, params = col.select_format(self, sql, params)
if alias is None and with_col_aliases:
alias = f"col{col_idx}"
col_idx += 1
ret.append((col, (sql, params), alias))
return ret, klass_info, annotations
def _order_by_pairs(self):
if self.query.extra_order_by:
ordering = self.query.extra_order_by
elif not self.query.default_ordering:
ordering = self.query.order_by
elif self.query.order_by:
ordering = self.query.order_by
elif (meta := self.query.get_meta()) and meta.ordering:
ordering = meta.ordering
self._meta_ordering = ordering
else:
ordering = []
if self.query.standard_ordering:
default_order, _ = ORDER_DIR["ASC"]
else:
default_order, _ = ORDER_DIR["DESC"]
selected_exprs = {}
# Avoid computing `selected_exprs` if there is no `ordering` as it's
# relatively expensive.
if ordering and (select := self.select):
for ordinal, (expr, _, alias) in enumerate(select, start=1):
pos_expr = PositionRef(ordinal, alias, expr)
if alias:
selected_exprs[alias] = pos_expr
selected_exprs[expr] = pos_expr
for field in ordering:
if hasattr(field, "resolve_expression"):
if isinstance(field, Value):
# output_field must be resolved for constants.
field = Cast(field, field.output_field)
if not isinstance(field, OrderBy):
field = field.asc()
if not self.query.standard_ordering:
field = field.copy()
field.reverse_ordering()
select_ref = selected_exprs.get(field.expression)
if select_ref or (
isinstance(field.expression, F)
and (select_ref := selected_exprs.get(field.expression.name))
):
# Emulation of NULLS (FIRST|LAST) cannot be combined with
# the usage of ordering by position.
if (
field.nulls_first is None and field.nulls_last is None
) or self.connection.features.supports_order_by_nulls_modifier:
field = field.copy()
field.expression = select_ref
# Alias collisions are not possible when dealing with
# combined queries so fallback to it if emulation of NULLS
# handling is required.
elif self.query.combinator:
field = field.copy()
field.expression = Ref(select_ref.refs, select_ref.source)
yield field, select_ref is not None
continue
if field == "?": # random
yield OrderBy(Random()), False
continue
col, order = get_order_dir(field, default_order)
descending = order == "DESC"
if select_ref := selected_exprs.get(col):
# Reference to expression in SELECT clause
yield (
OrderBy(
select_ref,
descending=descending,
),
True,
)
continue
if expr := self.query.annotations.get(col):
ref = col
transforms = []
else:
ref, *transforms = col.split(LOOKUP_SEP)
expr = self.query.annotations.get(ref)
if expr:
if self.query.combinator and self.select:
if transforms:
raise NotImplementedError(
"Ordering combined queries by transforms is not "
"implemented."
)
# Don't use the resolved annotation because other
# combined queries might define it differently.
expr = F(ref)
if transforms:
for name in transforms:
expr = self.query.try_transform(expr, name)
if isinstance(expr, Value):
# output_field must be resolved for constants.
expr = Cast(expr, expr.output_field)
yield OrderBy(expr, descending=descending), False
continue
if "." in field and field in self.query.extra_order_by:
# This came in through an extra(order_by=...) addition. Pass it
# on verbatim.
table, col = col.split(".", 1)
yield (
OrderBy(
RawSQL(
"%s.%s" % (self.quote_name_unless_alias(table), col), []
),
descending=descending,
),
False,
)
continue
if self.query.extra and col in self.query.extra:
if col in self.query.extra_select:
yield (
OrderBy(
Ref(col, RawSQL(*self.query.extra[col])),
descending=descending,
),
True,
)
else:
yield (
OrderBy(RawSQL(*self.query.extra[col]), descending=descending),
False,
)
else:
if self.query.combinator and self.select:
# Don't use the first model's field because other
# combined queries might define it differently.
yield OrderBy(F(col), descending=descending), False
else:
# 'col' is of the form 'field' or 'field1__field2' or
# '-field1__field2__field', etc.
yield from self.find_ordering_name(
field,
self.query.get_meta(),
default_order=default_order,
)
def get_order_by(self):
"""
Return a list of 2-tuples of the form (expr, (sql, params, is_ref)) for
the ORDER BY clause.
The order_by clause can alter the select clause (for example it can add
aliases to clauses that do not yet have one, or it can add totally new
select clauses).
"""
result = []
seen = set()
for expr, is_ref in self._order_by_pairs():
resolved = expr.resolve_expression(self.query, allow_joins=True, reuse=None)
if not is_ref and self.query.combinator and self.select:
src = resolved.expression
expr_src = expr.expression
for sel_expr, _, col_alias in self.select:
if src == sel_expr:
# When values() is used the exact alias must be used to
# reference annotations.
if (
self.query.has_select_fields
and col_alias in self.query.annotation_select
and not (
isinstance(expr_src, F) and col_alias == expr_src.name
)
):
continue
resolved.set_source_expressions(
[Ref(col_alias if col_alias else src.target.column, src)]
)
break
else:
# Add column used in ORDER BY clause to the selected
# columns and to each combined query.
order_by_idx = len(self.query.select) + 1
col_alias = f"__orderbycol{order_by_idx}"
for q in self.query.combined_queries:
# If fields were explicitly selected through values()
# combined queries cannot be augmented.
if q.has_select_fields:
raise DatabaseError(
"ORDER BY term does not match any column in "
"the result set."
)
q.add_annotation(expr_src, col_alias)
self.query.add_select_col(resolved, col_alias)
resolved.set_source_expressions([Ref(col_alias, src)])
sql, params = self.compile(resolved)
# Don't add the same column twice, but the order direction is
# not taken into account so we strip it. When this entire method
# is refactored into expressions, then we can check each part as we
# generate it.
without_ordering = self.ordering_parts.search(sql)[1]
params_hash = make_hashable(params)
if (without_ordering, params_hash) in seen:
continue
seen.add((without_ordering, params_hash))
result.append((resolved, (sql, params, is_ref)))
return result
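The `ordering_parts` regex used for de-duplication above can be exercised in isolation; a small stdlib sketch (mine, mirroring the pattern defined at the top of `SQLCompiler`):

```python
import re

# Same pattern as SQLCompiler.ordering_parts: strips the ASC/DESC direction
# (and anything after it, e.g. NULLS LAST) so ORDER BY terms differing only
# in direction de-duplicate to the same key.
ordering_parts = re.compile(r"^(.*)\s(?:ASC|DESC).*", re.MULTILINE | re.DOTALL)

assert ordering_parts.search("author.name ASC")[1] == "author.name"
assert ordering_parts.search("price DESC NULLS LAST")[1] == "price"
```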
def get_extra_select(self, order_by, select):
extra_select = []
if self.query.distinct and not self.query.distinct_fields:
select_sql = [t[1] for t in select]
for expr, (sql, params, is_ref) in order_by:
without_ordering = self.ordering_parts.search(sql)[1]
if not is_ref and (without_ordering, params) not in select_sql:
extra_select.append((expr, (without_ordering, params), None))
return extra_select
def quote_name_unless_alias(self, name):
"""
A wrapper around connection.ops.quote_name that doesn't quote aliases
for table names. This avoids problems with some SQL dialects that treat
quoted strings specially (e.g. PostgreSQL).
"""
if (
self.connection.features.prohibits_dollar_signs_in_column_aliases
and "$" in name
):
raise ValueError(
"Dollar signs are not permitted in column aliases on "
f"{self.connection.display_name}."
)
if name in self.quote_cache:
return self.quote_cache[name]
if (
(name in self.query.alias_map and name not in self.query.table_map)
or name in self.query.extra_select
or (
self.query.external_aliases.get(name)
and name not in self.query.table_map
)
):
self.quote_cache[name] = name
return name
r = self.connection.ops.quote_name(name)
self.quote_cache[name] = r
return r
def compile(self, node):
vendor_impl = getattr(node, "as_" + self.connection.vendor, None)
if vendor_impl:
sql, params = vendor_impl(self, self.connection)
else:
sql, params = node.as_sql(self, self.connection)
return sql, params
def get_combinator_sql(self, combinator, all):
features = self.connection.features
compilers = [
query.get_compiler(self.using, self.connection, self.elide_empty)
for query in self.query.combined_queries
]
if not features.supports_slicing_ordering_in_compound:
for compiler in compilers:
if compiler.query.is_sliced:
raise DatabaseError(
"LIMIT/OFFSET not allowed in subqueries of compound statements."
)
if compiler.get_order_by():
raise DatabaseError(
"ORDER BY not allowed in subqueries of compound statements."
)
parts = []
empty_compiler = None
for compiler in compilers:
try:
parts.append(self._get_combinator_part_sql(compiler))
except EmptyResultSet:
# Omit the empty queryset with UNION and with DIFFERENCE if the
# first queryset is nonempty.
if combinator == "union" or (combinator == "difference" and parts):
empty_compiler = compiler
continue
raise
if not parts:
raise EmptyResultSet
elif len(parts) == 1 and combinator == "union" and self.query.is_sliced:
# A sliced union cannot be composed of a single component because
# in the event the latter is also sliced it might result in invalid
# SQL due to the usage of multiple LIMIT clauses. Prevent that from
# happening by always including an empty resultset query to force
# the creation of an union.
empty_compiler.elide_empty = False
parts.append(self._get_combinator_part_sql(empty_compiler))
combinator_sql = self.connection.ops.set_operators[combinator]
if all and combinator == "union":
combinator_sql += " ALL"
braces = "{}"
if not self.query.subquery and features.supports_slicing_ordering_in_compound:
braces = "({})"
sql_parts, args_parts = zip(
*((braces.format(sql), args) for sql, args in parts)
)
result = [" {} ".format(combinator_sql).join(sql_parts)]
params = []
for part in args_parts:
params.extend(part)
return result, params
def _get_combinator_part_sql(self, compiler):
features = self.connection.features
# If the columns list is limited, then all combined queries
# must have the same columns list. Set the selects defined on
# the query on all combined queries, if not already set.
selected = self.query.selected
if selected is not None and compiler.query.selected is None:
compiler.query = compiler.query.clone()
compiler.query.set_values(selected)
part_sql, part_args = compiler.as_sql(with_col_aliases=True)
if compiler.query.combinator:
# Wrap in a subquery if wrapping in parentheses isn't
# supported.
if not features.supports_parentheses_in_compound:
part_sql = "SELECT * FROM ({})".format(part_sql)
# Add parentheses when combining with compound query if not
# already added for all compound queries.
elif (
self.query.subquery
or not features.supports_slicing_ordering_in_compound
):
part_sql = "({})".format(part_sql)
elif self.query.subquery and features.supports_slicing_ordering_in_compound:
part_sql = "({})".format(part_sql)
return part_sql, part_args
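A minimal stdlib illustration (mine, not Django's) of the set-operator semantics that `get_combinator_sql()` stitches together:

```python
import sqlite3

# UNION deduplicates rows across the combined parts; UNION ALL keeps every
# row — which is why the compiler appends " ALL" only when requested.
conn = sqlite3.connect(":memory:")
union = conn.execute("SELECT 1 UNION SELECT 1").fetchall()
union_all = conn.execute("SELECT 1 UNION ALL SELECT 1").fetchall()
assert union == [(1,)]
assert union_all == [(1,), (1,)]
```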
def get_qualify_sql(self):
where_parts = []
if self.where:
where_parts.append(self.where)
if self.having:
where_parts.append(self.having)
inner_query = self.query.clone()
inner_query.subquery = True
inner_query.where = inner_query.where.__class__(where_parts)
# Augment the inner query with any window function references that
# might have been masked via values() and alias(). If any masked
# aliases are added they'll be masked again to avoid fetching
# the data in the `if qual_aliases` branch below.
select = {
expr: alias for expr, _, alias in self.get_select(with_col_aliases=True)[0]
}
select_aliases = set(select.values())
qual_aliases = set()
replacements = {}
def collect_replacements(expressions):
while expressions:
expr = expressions.pop()
if expr in replacements:
continue
elif select_alias := select.get(expr):
replacements[expr] = select_alias
elif isinstance(expr, Lookup):
expressions.extend(expr.get_source_expressions())
elif isinstance(expr, Ref):
if expr.refs not in select_aliases:
expressions.extend(expr.get_source_expressions())
else:
num_qual_alias = len(qual_aliases)
select_alias = f"qual{num_qual_alias}"
qual_aliases.add(select_alias)
inner_query.add_annotation(expr, select_alias)
replacements[expr] = select_alias
collect_replacements(list(self.qualify.leaves()))
self.qualify = self.qualify.replace_expressions(
{expr: Ref(alias, expr) for expr, alias in replacements.items()}
)
order_by = []
for order_by_expr, *_ in self.get_order_by():
collect_replacements(order_by_expr.get_source_expressions())
order_by.append(
order_by_expr.replace_expressions(
{expr: Ref(alias, expr) for expr, alias in replacements.items()}
)
)
inner_query_compiler = inner_query.get_compiler(
self.using, connection=self.connection, elide_empty=self.elide_empty
)
inner_sql, inner_params = inner_query_compiler.as_sql(
# The limits must be applied to the outer query to avoid pruning
# results too eagerly.
with_limits=False,
# Force unique aliasing of selected columns to avoid collisions
# and make rhs predicates referencing easier.
with_col_aliases=True,
)
qualify_sql, qualify_params = self.compile(self.qualify)
result = [
"SELECT * FROM (",
inner_sql,
")",
self.connection.ops.quote_name("qualify"),
"WHERE",
qualify_sql,
]
if qual_aliases:
# If some select aliases were unmasked for filtering purposes they
# must be masked back.
cols = [self.connection.ops.quote_name(alias) for alias in select.values()]
result = [
"SELECT",
", ".join(cols),
"FROM (",
*result,
")",
self.connection.ops.quote_name("qualify_mask"),
]
params = list(inner_params) + qualify_params
        # As the SQL spec is unclear on whether or not a derived table's
        # ordering must propagate, it has to be explicitly repeated on the
        # outermost query to ensure it's preserved.
if order_by:
ordering_sqls = []
for ordering in order_by:
ordering_sql, ordering_params = self.compile(ordering)
ordering_sqls.append(ordering_sql)
params.extend(ordering_params)
result.extend(["ORDER BY", ", ".join(ordering_sqls)])
return result, params
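The SQL shape produced by get_qualify_sql() — the inner query wrapped as a derived table, the window predicate moved to an outer WHERE, and the ordering repeated on the outermost query — can be sketched standalone (a hypothetical helper with simplified quoting, not Django's API):

```python
def emulate_qualify(inner_sql, qualify_sql, order_by_sqls=()):
    # Window functions can't appear in WHERE, so filter on them from a
    # wrapping query instead, mimicking the QUALIFY clause.
    parts = ["SELECT * FROM (", inner_sql, ")", '"qualify"', "WHERE", qualify_sql]
    if order_by_sqls:
        # Derived-table ordering may not propagate; repeat it outside.
        parts.extend(["ORDER BY", ", ".join(order_by_sqls)])
    return " ".join(parts)
```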
def as_sql(self, with_limits=True, with_col_aliases=False):
"""
Create the SQL for this query. Return the SQL string and list of
parameters.
If 'with_limits' is False, any limit/offset information is not included
in the query.
"""
refcounts_before = self.query.alias_refcount.copy()
try:
combinator = self.query.combinator
extra_select, order_by, group_by = self.pre_sql_setup(
with_col_aliases=with_col_aliases or bool(combinator),
)
for_update_part = None
# Is a LIMIT/OFFSET clause needed?
with_limit_offset = with_limits and self.query.is_sliced
features = self.connection.features
if combinator:
if not getattr(features, "supports_select_{}".format(combinator)):
raise NotSupportedError(
"{} is not supported on this database backend.".format(
combinator
)
)
result, params = self.get_combinator_sql(
combinator, self.query.combinator_all
)
elif self.qualify:
result, params = self.get_qualify_sql()
order_by = None
else:
distinct_fields, distinct_params = self.get_distinct()
# This must come after 'select', 'ordering', and 'distinct'
# (see docstring of get_from_clause() for details).
from_, f_params = self.get_from_clause()
try:
where, w_params = (
self.compile(self.where) if self.where is not None else ("", [])
)
except EmptyResultSet:
if self.elide_empty:
raise
# Use a predicate that's always False.
where, w_params = "0 = 1", []
except FullResultSet:
where, w_params = "", []
try:
having, h_params = (
self.compile(self.having)
if self.having is not None
else ("", [])
)
except FullResultSet:
having, h_params = "", []
result = ["SELECT"]
params = []
if self.query.distinct:
distinct_result, distinct_params = self.connection.ops.distinct_sql(
distinct_fields,
distinct_params,
)
result += distinct_result
params += distinct_params
out_cols = []
for _, (s_sql, s_params), alias in self.select + extra_select:
if alias:
s_sql = "%s AS %s" % (
s_sql,
self.connection.ops.quote_name(alias),
)
params.extend(s_params)
out_cols.append(s_sql)
result += [", ".join(out_cols)]
if from_:
result += ["FROM", *from_]
elif self.connection.features.bare_select_suffix:
result += [self.connection.features.bare_select_suffix]
params.extend(f_params)
if self.query.select_for_update and features.has_select_for_update:
if (
self.connection.get_autocommit()
# Don't raise an exception when database doesn't
# support transactions, as it's a noop.
and features.supports_transactions
):
raise TransactionManagementError(
"select_for_update cannot be used outside of a transaction."
)
if (
with_limit_offset
and not features.supports_select_for_update_with_limit
):
raise NotSupportedError(
"LIMIT/OFFSET is not supported with "
"select_for_update on this database backend."
)
nowait = self.query.select_for_update_nowait
skip_locked = self.query.select_for_update_skip_locked
of = self.query.select_for_update_of
no_key = self.query.select_for_no_key_update
# If it's a NOWAIT/SKIP LOCKED/OF/NO KEY query but the
# backend doesn't support it, raise NotSupportedError to
# prevent a possible deadlock.
if nowait and not features.has_select_for_update_nowait:
raise NotSupportedError(
"NOWAIT is not supported on this database backend."
)
elif skip_locked and not features.has_select_for_update_skip_locked:
raise NotSupportedError(
"SKIP LOCKED is not supported on this database backend."
)
elif of and not features.has_select_for_update_of:
raise NotSupportedError(
"FOR UPDATE OF is not supported on this database backend."
)
elif no_key and not features.has_select_for_no_key_update:
raise NotSupportedError(
"FOR NO KEY UPDATE is not supported on this "
"database backend."
)
for_update_part = self.connection.ops.for_update_sql(
nowait=nowait,
skip_locked=skip_locked,
of=self.get_select_for_update_of_arguments(),
no_key=no_key,
)
if for_update_part and features.for_update_after_from:
result.append(for_update_part)
if where:
result.append("WHERE %s" % where)
params.extend(w_params)
grouping = []
for g_sql, g_params in group_by:
grouping.append(g_sql)
params.extend(g_params)
if grouping:
if distinct_fields:
raise NotImplementedError(
"annotate() + distinct(fields) is not implemented."
)
order_by = order_by or self.connection.ops.force_no_ordering()
result.append("GROUP BY %s" % ", ".join(grouping))
if self._meta_ordering:
order_by = None
if having:
if not grouping:
result.extend(self.connection.ops.force_group_by())
result.append("HAVING %s" % having)
params.extend(h_params)
if self.query.explain_info:
result.insert(
0,
self.connection.ops.explain_query_prefix(
self.query.explain_info.format,
**self.query.explain_info.options,
),
)
if order_by:
ordering = []
for _, (o_sql, o_params, _) in order_by:
ordering.append(o_sql)
params.extend(o_params)
order_by_sql = "ORDER BY %s" % ", ".join(ordering)
if combinator and features.requires_compound_order_by_subquery:
result = ["SELECT * FROM (", *result, ")", order_by_sql]
else:
result.append(order_by_sql)
if with_limit_offset:
result.append(
self.connection.ops.limit_offset_sql(
self.query.low_mark, self.query.high_mark
)
)
if for_update_part and not features.for_update_after_from:
result.append(for_update_part)
if self.query.subquery and extra_select:
# If the query is used as a subquery, the extra selects would
# result in more columns than the left-hand side expression is
# expecting. This can happen when a subquery uses a combination
# of order_by() and distinct(), forcing the ordering
# expressions to be selected as well. Wrap the query in another
# subquery to exclude extraneous selects.
sub_selects = []
sub_params = []
for index, (select, _, alias) in enumerate(self.select, start=1):
if alias:
sub_selects.append(
"%s.%s"
% (
self.connection.ops.quote_name("subquery"),
self.connection.ops.quote_name(alias),
)
)
else:
select_clone = select.relabeled_clone(
{select.alias: "subquery"}
)
subselect, subparams = select_clone.as_sql(
self, self.connection
)
sub_selects.append(subselect)
sub_params.extend(subparams)
return "SELECT %s FROM (%s) subquery" % (
", ".join(sub_selects),
" ".join(result),
), tuple(sub_params + params)
return " ".join(result), tuple(params)
finally:
# Finally do cleanup - get rid of the joins we created above.
self.query.reset_refcounts(refcounts_before)
def get_default_columns(
self, select_mask, start_alias=None, opts=None, from_parent=None
):
"""
Compute the default columns for selecting every field in the base
model. Will sometimes be called to pull in related models (e.g. via
select_related), in which case "opts" and "start_alias" will be given
to provide a starting point for the traversal.
Return a list of strings, quoted appropriately for use in SQL
directly, as well as a set of aliases used in the select statement (if
'as_pairs' is True, return a list of (alias, col_name) pairs instead
of strings as the first component and None as the second component).
"""
result = []
if opts is None:
if (opts := self.query.get_meta()) is None:
return result
start_alias = start_alias or self.query.get_initial_alias()
# The 'seen_models' is used to optimize checking the needed parent
# alias for a given field. This also includes None -> start_alias to
# be used by local fields.
seen_models = {None: start_alias}
select_mask_fields = set(composite.unnest(select_mask))
for field in opts.concrete_fields:
model = field.model._meta.concrete_model
# A proxy model will have a different model and concrete_model. We
# will assign None if the field belongs to this model.
if model == opts.model:
model = None
if (
from_parent
and model is not None
and issubclass(
from_parent._meta.concrete_model, model._meta.concrete_model
)
):
# Avoid loading data for already loaded parents.
# We end up here in the case select_related() resolution
# proceeds from parent model to child model. In that case the
# parent model data is already present in the SELECT clause,
# and we want to avoid reloading the same data again.
continue
if select_mask and field not in select_mask_fields:
continue
alias = self.query.join_parent_model(opts, model, start_alias, seen_models)
column = field.get_col(alias)
result.append(column)
return result
def get_distinct(self):
"""
Return a quoted list of fields to use in DISTINCT ON part of the query.
This method can alter the tables in the query, and thus it must be
called before get_from_clause().
"""
result = []
params = []
opts = self.query.get_meta()
for name in self.query.distinct_fields:
parts = name.split(LOOKUP_SEP)
_, targets, alias, joins, path, _, transform_function = self._setup_joins(
parts, opts, None
)
targets, alias, _ = self.query.trim_joins(targets, joins, path)
for target in targets:
if name in self.query.annotation_select:
result.append(self.connection.ops.quote_name(name))
else:
r, p = self.compile(transform_function(target, alias))
result.append(r)
params.append(p)
return result, params
def find_ordering_name(
self, name, opts, alias=None, default_order="ASC", already_seen=None
):
"""
Return the table alias (the name might be ambiguous, the alias will
not be) and column name for ordering by the given 'name' parameter.
The 'name' is of the form 'field1__field2__...__fieldN'.
"""
name, order = get_order_dir(name, default_order)
descending = order == "DESC"
pieces = name.split(LOOKUP_SEP)
(
field,
targets,
alias,
joins,
path,
opts,
transform_function,
) = self._setup_joins(pieces, opts, alias)
# If we get to this point and the field is a relation to another model,
# append the default ordering for that model unless it is the pk
# shortcut or the attribute name of the field that is specified or
# there are transforms to process.
if (
field.is_relation
and opts.ordering
and getattr(field, "attname", None) != pieces[-1]
and name != "pk"
and not getattr(transform_function, "has_transforms", False)
):
# Firstly, avoid infinite loops.
already_seen = already_seen or set()
join_tuple = tuple(
getattr(self.query.alias_map[j], "join_cols", None) for j in joins
)
if join_tuple in already_seen:
raise FieldError("Infinite loop caused by ordering.")
already_seen.add(join_tuple)
results = []
for item in opts.ordering:
if hasattr(item, "resolve_expression") and not isinstance(
item, OrderBy
):
item = item.desc() if descending else item.asc()
if isinstance(item, OrderBy):
results.append(
(item.prefix_references(f"{name}{LOOKUP_SEP}"), False)
)
continue
results.extend(
(expr.prefix_references(f"{name}{LOOKUP_SEP}"), is_ref)
for expr, is_ref in self.find_ordering_name(
item, opts, alias, order, already_seen
)
)
return results
targets, alias, _ = self.query.trim_joins(targets, joins, path)
return [
(OrderBy(transform_function(t, alias), descending=descending), False)
for t in targets
]
def _setup_joins(self, pieces, opts, alias):
"""
Helper method for get_order_by() and get_distinct().
        get_order_by() and get_distinct() must produce the same target
        columns for the same input, as their prefixes must match. Executing
        SQL where this is not true is an error.
"""
alias = alias or self.query.get_initial_alias()
field, targets, opts, joins, path, transform_function = self.query.setup_joins(
pieces, opts, alias
)
alias = joins[-1]
return field, targets, alias, joins, path, opts, transform_function
def get_from_clause(self):
"""
Return a list of strings that are joined together to go after the
"FROM" part of the query, as well as a list any extra parameters that
need to be included. Subclasses, can override this to create a
from-clause via a "select".
This should only be called after any SQL construction methods that
might change the tables that are needed. This means the select columns,
ordering, and distinct must be done first.
"""
result = []
params = []
# Copy alias_map to a tuple in case Join.as_sql() subclasses (objects
# in alias_map) alter compiler.query.alias_map. That would otherwise
# raise "RuntimeError: dictionary changed size during iteration".
for alias, from_clause in tuple(self.query.alias_map.items()):
if not self.query.alias_refcount[alias]:
continue
clause_sql, clause_params = self.compile(from_clause)
result.append(clause_sql)
params.extend(clause_params)
for t in self.query.extra_tables:
alias, _ = self.query.table_alias(t)
# Only add the alias if it's not already present (the table_alias()
# call increments the refcount, so an alias refcount of one means
# this is the only reference).
if (
alias not in self.query.alias_map
or self.query.alias_refcount[alias] == 1
):
result.append(", %s" % self.quote_name_unless_alias(alias))
return result, params
def get_related_selections(
self,
select,
select_mask,
opts=None,
root_alias=None,
cur_depth=1,
requested=None,
restricted=None,
):
"""
Fill in the information needed for a select_related query. The current
depth is measured as the number of connections away from the root model
(for example, cur_depth=1 means we are looking at models with direct
connections to the root model).
"""
def _get_field_choices():
direct_choices = (f.name for f in opts.fields if f.is_relation)
reverse_choices = (
f.field.related_query_name()
for f in opts.related_objects
if f.field.unique
)
return chain(
direct_choices, reverse_choices, self.query._filtered_relations
)
related_klass_infos = []
if not restricted and cur_depth > self.query.max_depth:
# We've recursed far enough; bail out.
return related_klass_infos
if not opts:
opts = self.query.get_meta()
root_alias = self.query.get_initial_alias()
# Setup for the case when only particular related fields should be
# included in the related selection.
fields_found = set()
if requested is None:
restricted = isinstance(self.query.select_related, dict)
if restricted:
requested = self.query.select_related
def get_related_klass_infos(klass_info, related_klass_infos):
klass_info["related_klass_infos"] = related_klass_infos
for f in opts.fields:
fields_found.add(f.name)
if restricted:
next = requested.get(f.name, {})
if not f.is_relation:
# If a non-related field is used like a relation,
# or if a single non-relational field is given.
if next or f.name in requested:
raise FieldError(
"Non-relational field given in select_related: '%s'. "
"Choices are: %s"
% (
f.name,
", ".join(_get_field_choices()) or "(none)",
)
)
else:
next = False
if not select_related_descend(f, restricted, requested, select_mask):
continue
related_select_mask = select_mask.get(f) or {}
klass_info = {
"model": f.remote_field.model,
"field": f,
"reverse": False,
"local_setter": f.set_cached_value,
"remote_setter": (
f.remote_field.set_cached_value if f.unique else lambda x, y: None
),
"from_parent": False,
}
related_klass_infos.append(klass_info)
select_fields = []
_, _, _, joins, _, _ = self.query.setup_joins([f.name], opts, root_alias)
alias = joins[-1]
columns = self.get_default_columns(
related_select_mask, start_alias=alias, opts=f.remote_field.model._meta
)
for col in columns:
select_fields.append(len(select))
select.append((col, None))
klass_info["select_fields"] = select_fields
next_klass_infos = self.get_related_selections(
select,
related_select_mask,
f.remote_field.model._meta,
alias,
cur_depth + 1,
next,
restricted,
)
get_related_klass_infos(klass_info, next_klass_infos)
if restricted:
related_fields = [
(o, o.field, o.related_model)
for o in opts.related_objects
if o.field.unique and not o.many_to_many
]
for related_object, related_field, model in related_fields:
if not select_related_descend(
related_object,
restricted,
requested,
select_mask,
):
continue
related_select_mask = select_mask.get(related_object) or {}
related_field_name = related_field.related_query_name()
fields_found.add(related_field_name)
join_info = self.query.setup_joins(
[related_field_name], opts, root_alias
)
alias = join_info.joins[-1]
from_parent = issubclass(model, opts.model) and model is not opts.model
klass_info = {
"model": model,
"field": related_field,
"reverse": True,
"local_setter": related_object.set_cached_value,
"remote_setter": related_field.set_cached_value,
"from_parent": from_parent,
}
related_klass_infos.append(klass_info)
select_fields = []
columns = self.get_default_columns(
related_select_mask,
start_alias=alias,
opts=model._meta,
from_parent=opts.model,
)
for col in columns:
select_fields.append(len(select))
select.append((col, None))
klass_info["select_fields"] = select_fields
next = requested.get(related_field_name, {})
next_klass_infos = self.get_related_selections(
select,
related_select_mask,
model._meta,
alias,
cur_depth + 1,
next,
restricted,
)
get_related_klass_infos(klass_info, next_klass_infos)
def local_setter(final_field, obj, from_obj):
# Set a reverse fk object when relation is non-empty.
if from_obj:
final_field.remote_field.set_cached_value(from_obj, obj)
def local_setter_noop(obj, from_obj):
pass
def remote_setter(name, obj, from_obj):
setattr(from_obj, name, obj)
for name in list(requested):
# Filtered relations work only on the topmost level.
if cur_depth > 1:
break
if name in self.query._filtered_relations:
fields_found.add(name)
final_field, _, join_opts, joins, _, _ = self.query.setup_joins(
[name], opts, root_alias
)
model = join_opts.model
alias = joins[-1]
from_parent = (
issubclass(model, opts.model) and model is not opts.model
)
klass_info = {
"model": model,
"field": final_field,
"reverse": True,
"local_setter": (
partial(local_setter, final_field)
if len(joins) <= 2
else local_setter_noop
),
"remote_setter": partial(remote_setter, name),
"from_parent": from_parent,
}
related_klass_infos.append(klass_info)
select_fields = []
field_select_mask = select_mask.get((name, final_field)) or {}
columns = self.get_default_columns(
field_select_mask,
start_alias=alias,
opts=model._meta,
from_parent=opts.model,
)
for col in columns:
select_fields.append(len(select))
select.append((col, None))
klass_info["select_fields"] = select_fields
next_requested = requested.get(name, {})
next_klass_infos = self.get_related_selections(
select,
field_select_mask,
opts=model._meta,
root_alias=alias,
cur_depth=cur_depth + 1,
requested=next_requested,
restricted=restricted,
)
get_related_klass_infos(klass_info, next_klass_infos)
fields_not_found = set(requested).difference(fields_found)
if fields_not_found:
invalid_fields = ("'%s'" % s for s in fields_not_found)
raise FieldError(
"Invalid field name(s) given in select_related: %s. "
"Choices are: %s"
% (
", ".join(invalid_fields),
", ".join(_get_field_choices()) or "(none)",
)
)
return related_klass_infos
def get_select_for_update_of_arguments(self):
"""
Return a quoted list of arguments for the SELECT FOR UPDATE OF part of
the query.
"""
def _get_parent_klass_info(klass_info):
concrete_model = klass_info["model"]._meta.concrete_model
for parent_model, parent_link in concrete_model._meta.parents.items():
all_parents = parent_model._meta.all_parents
yield {
"model": parent_model,
"field": parent_link,
"reverse": False,
"select_fields": [
select_index
for select_index in klass_info["select_fields"]
# Selected columns from a model or its parents.
if (
self.select[select_index][0].target.model == parent_model
or self.select[select_index][0].target.model in all_parents
)
],
}
def _get_first_selected_col_from_model(klass_info):
"""
            Find the first selected column from a model. If none is selected,
            the model is not locked.
select_fields is filled recursively, so it also contains fields
from the parent models.
"""
concrete_model = klass_info["model"]._meta.concrete_model
for select_index in klass_info["select_fields"]:
if self.select[select_index][0].target.model == concrete_model:
return self.select[select_index][0]
def _get_field_choices():
"""Yield all allowed field paths in breadth-first search order."""
queue = collections.deque([(None, self.klass_info)])
while queue:
parent_path, klass_info = queue.popleft()
if parent_path is None:
path = []
yield "self"
else:
field = klass_info["field"]
if klass_info["reverse"]:
field = field.remote_field
path = [*parent_path, field.name]
yield LOOKUP_SEP.join(path)
queue.extend(
(path, klass_info)
for klass_info in _get_parent_klass_info(klass_info)
)
queue.extend(
(path, klass_info)
for klass_info in klass_info.get("related_klass_infos", [])
)
if not self.klass_info:
return []
result = []
invalid_names = []
for name in self.query.select_for_update_of:
klass_info = self.klass_info
if name == "self":
col = _get_first_selected_col_from_model(klass_info)
else:
for part in name.split(LOOKUP_SEP):
klass_infos = (
*klass_info.get("related_klass_infos", []),
*_get_parent_klass_info(klass_info),
)
for related_klass_info in klass_infos:
field = related_klass_info["field"]
if related_klass_info["reverse"]:
field = field.remote_field
if field.name == part:
klass_info = related_klass_info
break
else:
klass_info = None
break
if klass_info is None:
invalid_names.append(name)
continue
col = _get_first_selected_col_from_model(klass_info)
if col is not None:
if self.connection.features.select_for_update_of_column:
result.append(self.compile(col)[0])
else:
result.append(self.quote_name_unless_alias(col.alias))
if invalid_names:
raise FieldError(
"Invalid field name(s) given in select_for_update(of=(...)): %s. "
"Only relational fields followed in the query are allowed. "
"Choices are: %s."
% (
", ".join(invalid_names),
", ".join(_get_field_choices()),
)
)
return result
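The breadth-first traversal used by _get_field_choices() can be demonstrated on plain dicts (with a hypothetical `related` key standing in for `related_klass_infos` and a plain string for the field name):

```python
import collections

def field_choices(klass_info):
    # Yield "self" for the root, then relation paths in BFS order.
    queue = collections.deque([(None, klass_info)])
    while queue:
        parent_path, info = queue.popleft()
        if parent_path is None:
            path = []
            yield "self"
        else:
            path = [*parent_path, info["field"]]
            yield "__".join(path)
        queue.extend((path, child) for child in info.get("related", []))
```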
def get_converters(self, expressions):
i = 0
converters = {}
for expression in expressions:
if isinstance(expression, ColPairs):
cols = expression.get_source_expressions()
cols_converters = self.get_converters(cols)
for j, (convs, col) in cols_converters.items():
converters[i + j] = (convs, col)
i += len(expression)
elif expression:
backend_converters = self.connection.ops.get_db_converters(expression)
field_converters = expression.get_db_converters(self.connection)
if backend_converters or field_converters:
converters[i] = (backend_converters + field_converters, expression)
i += 1
else:
i += 1
return converters
def apply_converters(self, rows, converters):
connection = self.connection
converters = list(converters.items())
for row in map(list, rows):
for pos, (convs, expression) in converters:
value = row[pos]
for converter in convs:
value = converter(value, expression, connection)
row[pos] = value
yield row
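apply_converters() threads each cell through its converter chain in order. A stripped-down sketch (hypothetical single-argument converters, versus Django's `(value, expression, connection)` signature):

```python
def apply_converters_sketch(rows, converters):
    # converters maps column position -> list of callables applied in order.
    for row in map(list, rows):
        for pos, convs in converters.items():
            value = row[pos]
            for converter in convs:
                value = converter(value)
            row[pos] = value
        yield row
```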
def has_composite_fields(self, expressions):
# Check for composite fields before calling the relatively costly
# composite_fields_to_tuples.
return any(isinstance(expression, ColPairs) for expression in expressions)
def composite_fields_to_tuples(self, rows, expressions):
col_pair_slices = [
slice(i, i + len(expression))
for i, expression in enumerate(expressions)
if isinstance(expression, ColPairs)
]
for row in map(list, rows):
for pos in col_pair_slices:
row[pos] = (tuple(row[pos]),)
yield row
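The slice assignment in composite_fields_to_tuples() relies on a Python list property: assigning a one-element tuple to a slice collapses several flat columns into a single tuple value. A minimal sketch for one composite span:

```python
def pack_composite(row, start, stop):
    # Collapse the flat columns [start:stop) into one tuple value, the way
    # a composite primary key's columns become a single attribute.
    row = list(row)
    row[start:stop] = (tuple(row[start:stop]),)
    return row
```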
def results_iter(
self,
results=None,
tuple_expected=False,
chunked_fetch=False,
chunk_size=GET_ITERATOR_CHUNK_SIZE,
):
"""Return an iterator over the results from executing this query."""
if results is None:
results = self.execute_sql(
MULTI, chunked_fetch=chunked_fetch, chunk_size=chunk_size
)
fields = [s[0] for s in self.select[0 : self.col_count]]
converters = self.get_converters(fields)
rows = chain.from_iterable(results)
if converters:
rows = self.apply_converters(rows, converters)
if self.has_composite_fields(fields):
rows = self.composite_fields_to_tuples(rows, fields)
if tuple_expected:
rows = map(tuple, rows)
return rows
def has_results(self):
"""
Backends (e.g. NoSQL) can override this in order to use optimized
versions of "query has any results."
"""
return bool(self.execute_sql(SINGLE))
def execute_sql(
self, result_type=MULTI, chunked_fetch=False, chunk_size=GET_ITERATOR_CHUNK_SIZE
):
"""
Run the query against the database and return the result(s). The
return value depends on the value of result_type.
When result_type is:
- MULTI: Retrieves all rows using fetchmany(). Wraps in an iterator for
chunked reads when supported.
- SINGLE: Retrieves a single row using fetchone().
- ROW_COUNT: Retrieves the number of rows in the result.
- CURSOR: Runs the query, and returns the cursor object. It is the
caller's responsibility to close the cursor.
"""
result_type = result_type or NO_RESULTS
try:
sql, params = self.as_sql()
if not sql:
raise EmptyResultSet
except EmptyResultSet:
if result_type == MULTI:
return iter([])
else:
return
if chunked_fetch:
cursor = self.connection.chunked_cursor()
else:
cursor = self.connection.cursor()
try:
cursor.execute(sql, params)
except Exception as e:
# Might fail for server-side cursors (e.g. connection closed)
try:
cursor.close()
except DatabaseError:
raise e from None
raise
if result_type == ROW_COUNT:
try:
return cursor.rowcount
finally:
cursor.close()
if result_type == CURSOR:
# Give the caller the cursor to process and close.
return cursor
if result_type == SINGLE:
try:
val = cursor.fetchone()
if val:
return val[0 : self.col_count]
return val
finally:
# done with the cursor
cursor.close()
if result_type == NO_RESULTS:
cursor.close()
return
result = cursor_iter(
cursor,
self.connection.features.empty_fetchmany_value,
self.col_count if self.has_extra_select else None,
chunk_size,
)
if not chunked_fetch or not self.connection.features.can_use_chunked_reads:
# If we are using non-chunked reads, we return the same data
# structure as normally, but ensure it is all read into memory
# before going any further. Use chunked_fetch if requested,
# unless the database doesn't support it.
return list(result)
return result
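The chunked-read pattern behind cursor_iter() — repeated fetchmany() calls until the backend's empty sentinel — can be reproduced with the stdlib sqlite3 module (illustrative only; Django's version also closes the cursor and trims columns):

```python
import sqlite3

def cursor_chunks(cursor, chunk_size):
    # Yield lists of rows until fetchmany() returns an empty sequence,
    # which is sqlite3's "no more rows" sentinel.
    while True:
        rows = cursor.fetchmany(chunk_size)
        if not rows:
            break
        yield rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])
cur = conn.execute("SELECT n FROM t ORDER BY n")
chunks = list(cursor_chunks(cur, 2))
```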
def explain_query(self):
result = list(self.execute_sql())
# Some backends return 1 item tuples with strings, and others return
# tuples with integers and strings. Flatten them out into strings.
format_ = self.query.explain_info.format
output_formatter = json.dumps if format_ and format_.lower() == "json" else str
for row in result:
for value in row:
if not isinstance(value, str):
yield " ".join([output_formatter(c) for c in value])
else:
yield value
class SQLInsertCompiler(SQLCompiler):
returning_fields = None
returning_params = ()
def field_as_sql(self, field, get_placeholder, val):
"""
Take a field and a value intended to be saved on that field, and
return placeholder SQL and accompanying params. Check for raw values,
expressions, and fields with get_placeholder() defined in that order.
When field is None, consider the value raw and use it as the
placeholder, with no corresponding parameters returned.
"""
if field is None:
# A field value of None means the value is raw.
sql, params = val, []
elif hasattr(val, "as_sql"):
# This is an expression, let's compile it.
sql, params = self.compile(val)
elif get_placeholder is not None:
# Some fields (e.g. geo fields) need special munging before
# they can be inserted.
sql, params = get_placeholder(val, self, self.connection), [val]
else:
# Return the common case for the placeholder
sql, params = "%s", [val]
# The following hook is only used by Oracle Spatial, which sometimes
# needs to yield 'NULL' and () as its placeholder and params instead
# of '%s' and (None,). The 'NULL' placeholder is produced earlier by
# OracleOperations.get_geom_placeholder(). The following line removes
# the corresponding None parameter. See ticket #10888.
params = self.connection.ops.modify_insert_params(sql, params)
return sql, params
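field_as_sql()'s precedence — raw value, then expression, then field-specific placeholder, then plain "%s" — in a standalone sketch (hypothetical, simplified signatures; no compiler or connection involved):

```python
def placeholder_for(field, val, get_placeholder=None):
    if field is None:
        # No field: the value is raw SQL and carries no parameters.
        return val, []
    if hasattr(val, "as_sql"):
        # Expressions compile to their own SQL and params.
        return val.as_sql()
    if get_placeholder is not None:
        # Fields like geometry columns wrap the placeholder themselves.
        return get_placeholder(val), [val]
    return "%s", [val]
```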
def prepare_value(self, field, value):
"""
Prepare a value to be used in a query by resolving it if it is an
expression and otherwise calling the field's get_db_prep_save().
"""
if hasattr(value, "resolve_expression"):
value = value.resolve_expression(
self.query, allow_joins=False, for_save=True
)
# Don't allow values containing Col expressions. They refer to
# existing columns on a row, but in the case of insert the row
# doesn't exist yet.
if value.contains_column_references:
raise ValueError(
'Failed to insert expression "%s" on %s. F() expressions '
"can only be used to update, not to insert." % (value, field)
)
if value.contains_aggregate:
raise FieldError(
"Aggregate functions are not allowed in this query "
"(%s=%r)." % (field.name, value)
)
if value.contains_over_clause:
raise FieldError(
"Window expressions are not allowed in this query (%s=%r)."
% (field.name, value)
)
return field.get_db_prep_save(value, connection=self.connection)
def pre_save_val(self, field, obj):
"""
Get the given field's value off the given obj. pre_save() is used for
things like auto_now on DateTimeField. Skip it if this is a raw query.
"""
if self.query.raw:
return getattr(obj, field.attname)
return field.pre_save(obj, add=True)
def assemble_as_sql(self, fields, value_rows):
"""
Take a sequence of N fields and a sequence of M rows of values, and
generate placeholder SQL and parameters for each field and value.
Return a pair containing:
* a sequence of M rows of N SQL placeholder strings, and
* a sequence of M rows of corresponding parameter values.
Each placeholder string may contain any number of '%s' interpolation
strings, and each parameter row will contain exactly as many params
as the total number of '%s's in the corresponding placeholder row.
"""
if not value_rows:
return [], []
# list of (sql, [params]) tuples for each object to be saved
# Shape: [n_objs][n_fields][2]
get_placeholders = [getattr(field, "get_placeholder", None) for field in fields]
rows_of_fields_as_sql = (
(
self.field_as_sql(field, get_placeholder, value)
for field, get_placeholder, value in zip(fields, get_placeholders, row)
)
for row in value_rows
)
# tuple like ([sqls], [[params]s]) for each object to be saved
# Shape: [n_objs][2][n_fields]
sql_and_param_pair_rows = (zip(*row) for row in rows_of_fields_as_sql)
# Extract separate lists for placeholders and params.
# Each of these has shape [n_objs][n_fields]
placeholder_rows, param_rows = zip(*sql_and_param_pair_rows)
# Params for each field are still lists, and need to be flattened.
param_rows = [[p for ps in row for p in ps] for row in param_rows]
return placeholder_rows, param_rows
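The (sql, params) transposition performed by `assemble_as_sql` can be sketched standalone with hypothetical placeholder and parameter values:

```python
# One row per object; each row holds one (sql_placeholder, params) pair per field.
rows_of_fields_as_sql = [
    [("%s", [1]), ("UPPER(%s)", ["a"])],  # object 1
    [("%s", [2]), ("UPPER(%s)", ["b"])],  # object 2
]
# zip(*row) turns [(sql, params), ...] into ([sqls], [params-lists]) per object.
pairs = (zip(*row) for row in rows_of_fields_as_sql)
# zip(*pairs) then separates the per-object sqls from the per-object params.
placeholder_rows, param_rows = zip(*pairs)
# Flatten each object's per-field param lists into one flat param list.
param_rows = [[p for ps in row for p in ps] for row in param_rows]
assert placeholder_rows == (("%s", "UPPER(%s)"), ("%s", "UPPER(%s)"))
assert param_rows == [[1, "a"], [2, "b"]]
```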
def as_sql(self):
# We don't need quote_name_unless_alias() here, since these are all
# going to be column names (so we can avoid the extra overhead).
qn = self.connection.ops.quote_name
opts = self.query.get_meta()
insert_statement = self.connection.ops.insert_statement(
on_conflict=self.query.on_conflict,
)
result = ["%s %s" % (insert_statement, qn(opts.db_table))]
if fields := list(self.query.fields):
from django.db.models.expressions import DatabaseDefault
supports_default_keyword_in_bulk_insert = (
self.connection.features.supports_default_keyword_in_bulk_insert
)
value_cols = []
for field in list(fields):
field_prepare = partial(self.prepare_value, field)
field_pre_save = partial(self.pre_save_val, field)
field_values = []
for obj in self.query.objs:
value = field_pre_save(obj)
if not isinstance(value, DatabaseDefault):
value = field_prepare(value)
field_values.append(value)
if not field.has_db_default():
value_cols.append(field_values)
continue
# If all values are DEFAULT don't include the field and its
# values in the query as they are redundant and could prevent
# optimizations. This cannot be done if we're dealing with the
# last field as INSERT statements require at least one.
if len(fields) > 1 and all(
isinstance(value, DatabaseDefault) for value in field_values
):
fields.remove(field)
continue
if supports_default_keyword_in_bulk_insert:
value_cols.append(field_values)
continue
# If the field cannot be excluded from the INSERT for the
# reasons listed above and the backend doesn't support the
# DEFAULT keyword, each value must be expanded into its
# underlying expression.
prepared_db_default = field_prepare(field.db_default)
field_values = [
(
prepared_db_default
if isinstance(value, DatabaseDefault)
else value
)
for value in field_values
]
value_cols.append(field_values)
value_rows = list(zip(*value_cols))
result.append("(%s)" % ", ".join(qn(f.column) for f in fields))
else:
# No fields were specified but an INSERT statement must include at
# least one column. This can only happen when the model's primary
# key is composed of a single auto-field so default to including it
# as a placeholder to generate a valid INSERT statement.
value_rows = [
[self.connection.ops.pk_default_value()] for _ in self.query.objs
]
fields = [None]
result.append("(%s)" % qn(opts.pk.column))
# Currently the backends just accept values when generating bulk
# queries and generate their own placeholders. Doing that isn't
# necessary and it should be possible to use placeholders and
# expressions in bulk inserts too.
can_bulk = (
not self.returning_fields and self.connection.features.has_bulk_insert
)
placeholder_rows, param_rows = self.assemble_as_sql(fields, value_rows)
on_conflict_suffix_sql = self.connection.ops.on_conflict_suffix_sql(
fields,
self.query.on_conflict,
(f.column for f in self.query.update_fields),
(f.column for f in self.query.unique_fields),
)
if (
self.returning_fields
and self.connection.features.can_return_columns_from_insert
):
if self.connection.features.can_return_rows_from_bulk_insert:
result.append(
self.connection.ops.bulk_insert_sql(fields, placeholder_rows)
)
params = param_rows
else:
result.append("VALUES (%s)" % ", ".join(placeholder_rows[0]))
params = [param_rows[0]]
if on_conflict_suffix_sql:
result.append(on_conflict_suffix_sql)
# Skip empty r_sql to allow subclasses to customize behavior for
# 3rd party backends. Refs #19096.
r_sql, self.returning_params = self.connection.ops.returning_columns(
self.returning_fields
)
if r_sql:
result.append(r_sql)
params += [self.returning_params]
return [(" ".join(result), tuple(chain.from_iterable(params)))]
if can_bulk:
result.append(self.connection.ops.bulk_insert_sql(fields, placeholder_rows))
if on_conflict_suffix_sql:
result.append(on_conflict_suffix_sql)
return [(" ".join(result), tuple(p for ps in param_rows for p in ps))]
else:
if on_conflict_suffix_sql:
result.append(on_conflict_suffix_sql)
return [
(" ".join([*result, "VALUES (%s)" % ", ".join(p)]), vals)
for p, vals in zip(placeholder_rows, param_rows)
]
def execute_sql(self, returning_fields=None):
assert not (
returning_fields
and len(self.query.objs) != 1
and not self.connection.features.can_return_rows_from_bulk_insert
)
opts = self.query.get_meta()
self.returning_fields = returning_fields
cols = []
with self.connection.cursor() as cursor:
for sql, params in self.as_sql():
cursor.execute(sql, params)
if not self.returning_fields:
return []
obj_len = len(self.query.objs)
if (
self.connection.features.can_return_rows_from_bulk_insert
and obj_len > 1
) or (
self.connection.features.can_return_columns_from_insert and obj_len == 1
):
rows = self.connection.ops.fetch_returned_rows(
cursor, self.returning_params
)
cols = [field.get_col(opts.db_table) for field in self.returning_fields]
elif returning_fields and isinstance(
returning_field := returning_fields[0], AutoField
):
cols = [returning_field.get_col(opts.db_table)]
rows = [
(
self.connection.ops.last_insert_id(
cursor,
opts.db_table,
returning_field.column,
),
)
]
else:
# Backend doesn't support returning fields and no auto-field
# that can be retrieved from `last_insert_id` was specified.
return []
converters = self.get_converters(cols)
if converters:
rows = self.apply_converters(rows, converters)
return list(rows)
class SQLDeleteCompiler(SQLCompiler):
@cached_property
def single_alias(self):
# Ensure base table is in aliases.
self.query.get_initial_alias()
return sum(self.query.alias_refcount[t] > 0 for t in self.query.alias_map) == 1
@classmethod
def _expr_refs_base_model(cls, expr, base_model):
if isinstance(expr, Query):
return expr.model == base_model
if not hasattr(expr, "get_source_expressions"):
return False
return any(
cls._expr_refs_base_model(source_expr, base_model)
for source_expr in expr.get_source_expressions()
)
@cached_property
def contains_self_reference_subquery(self):
return any(
self._expr_refs_base_model(expr, self.query.model)
for expr in chain(
self.query.annotations.values(), self.query.where.children
)
)
def _as_sql(self, query):
delete = "DELETE FROM %s" % self.quote_name_unless_alias(query.base_table)
try:
where, params = self.compile(query.where)
except FullResultSet:
return delete, ()
return f"{delete} WHERE {where}", tuple(params)
def as_sql(self):
"""
Create the SQL for this query. Return the SQL string and list of
parameters.
"""
if self.single_alias and (
self.connection.features.delete_can_self_reference_subquery
or not self.contains_self_reference_subquery
):
return self._as_sql(self.query)
innerq = self.query.clone()
innerq.__class__ = Query
innerq.clear_select_clause()
pk = self.query.model._meta.pk
innerq.select = [pk.get_col(self.query.get_initial_alias())]
outerq = Query(self.query.model)
if not self.connection.features.update_can_self_select:
# Force the materialization of the inner query to allow reference
# to the target table on MySQL.
sql, params = innerq.get_compiler(connection=self.connection).as_sql()
innerq = RawSQL("SELECT * FROM (%s) subquery" % sql, params)
outerq.add_filter("pk__in", innerq)
return self._as_sql(outerq)
class SQLUpdateCompiler(SQLCompiler):
returning_fields = None
returning_params = ()
def as_sql(self):
"""
Create the SQL for this query. Return the SQL string and list of
parameters.
"""
self.pre_sql_setup()
if not self.query.values:
return "", ()
qn = self.quote_name_unless_alias
values, update_params = [], []
for field, model, val in self.query.values:
if hasattr(val, "resolve_expression"):
val = val.resolve_expression(
self.query, allow_joins=False, for_save=True
)
if val.contains_aggregate:
raise FieldError(
"Aggregate functions are not allowed in this query "
"(%s=%r)." % (field.name, val)
)
if val.contains_over_clause:
raise FieldError(
"Window expressions are not allowed in this query "
"(%s=%r)." % (field.name, val)
)
if isinstance(val, ColPairs):
raise FieldError(
"Composite primary keys expressions are not allowed "
"in this query (%s=F('pk'))." % field.name
)
elif hasattr(val, "prepare_database_save"):
if field.remote_field:
val = val.prepare_database_save(field)
else:
raise TypeError(
"Tried to update field %s with a model instance, %r. "
"Use a value compatible with %s."
% (field, val, field.__class__.__name__)
)
val = field.get_db_prep_save(val, connection=self.connection)
# Getting the placeholder for the field.
if hasattr(field, "get_placeholder"):
placeholder = field.get_placeholder(val, self, self.connection)
else:
placeholder = "%s"
name = field.column
if hasattr(val, "as_sql"):
sql, params = self.compile(val)
values.append("%s = %s" % (qn(name), placeholder % sql))
update_params.extend(params)
elif val is not None:
values.append("%s = %s" % (qn(name), placeholder))
update_params.append(val)
else:
values.append("%s = NULL" % qn(name))
table = self.query.base_table
result = [
"UPDATE %s SET" % qn(table),
", ".join(values),
]
try:
where, params = self.compile(self.query.where)
except FullResultSet:
params = []
else:
result.append("WHERE %s" % where)
if self.returning_fields:
# Skip empty r_sql to allow subclasses to customize behavior for
# 3rd party backends. Refs #19096.
r_sql, self.returning_params = self.connection.ops.returning_columns(
self.returning_fields
)
if r_sql:
result.append(r_sql)
params.extend(self.returning_params)
return " ".join(result), tuple(update_params + params)
def execute_sql(self, result_type):
"""
Execute the specified update. Return the number of rows affected by
the primary update query. The "primary update query" is the first
non-empty query that is executed. Row counts for any subsequent,
related queries are not available.
"""
row_count = super().execute_sql(result_type)
is_empty = row_count is None
row_count = row_count or 0
for query in self.query.get_related_updates():
# If the result_type is NO_RESULTS then the aux_row_count is None.
aux_row_count = query.get_compiler(self.using).execute_sql(result_type)
if is_empty and aux_row_count:
# Returns the row count for any related updates as the number
# of rows updated.
row_count = aux_row_count
is_empty = False
return row_count
def execute_returning_sql(self, returning_fields):
"""
Execute the specified update and return rows of the returned columns
associated with the specified returning_field if the backend supports
it.
"""
if self.query.get_related_updates():
raise NotImplementedError(
"Update returning is not implemented for queries with related updates."
)
if (
not returning_fields
or not self.connection.features.can_return_rows_from_update
):
row_count = self.execute_sql(ROW_COUNT)
return [()] * row_count
self.returning_fields = returning_fields
with self.connection.cursor() as cursor:
sql, params = self.as_sql()
cursor.execute(sql, params)
rows = self.connection.ops.fetch_returned_rows(
cursor, self.returning_params
)
opts = self.query.get_meta()
cols = [field.get_col(opts.db_table) for field in self.returning_fields]
converters = self.get_converters(cols)
if converters:
rows = self.apply_converters(rows, converters)
return list(rows)
def pre_sql_setup(self):
"""
If the update depends on results from other tables, munge the "where"
conditions to match the format required for (portable) SQL updates.
If multiple updates are required, pull out the id values to update at
this point so that they don't change as a result of the progressive
updates.
"""
refcounts_before = self.query.alias_refcount.copy()
# Ensure base table is in the query
self.query.get_initial_alias()
count = self.query.count_active_tables()
if not self.query.related_updates and count == 1:
return
query = self.query.chain(klass=Query)
query.select_related = False
query.clear_ordering(force=True)
query.extra = {}
query.select = []
meta = query.get_meta()
fields = [meta.pk.name]
related_ids_index = []
for related in self.query.related_updates:
if all(
path.join_field.primary_key for path in meta.get_path_to_parent(related)
):
# If a primary key chain exists to the targeted related update,
# then the meta.pk value can be used for it.
related_ids_index.append((related, 0))
else:
# This branch will only be reached when updating a field of an
# ancestor that is not part of the primary key chain of a MTI
# tree.
related_ids_index.append((related, len(fields)))
fields.append(related._meta.pk.name)
query.add_fields(fields)
super().pre_sql_setup()
is_composite_pk = meta.is_composite_pk
must_pre_select = (
count > 1 and not self.connection.features.update_can_self_select
)
# Now we adjust the current query: reset the where clause and get rid
# of all the tables we don't need (since they're in the sub-select).
self.query.clear_where()
if self.query.related_updates or must_pre_select:
# Either we're using the idents in multiple update queries (so
# don't want them to change), or the db backend doesn't support
# selecting from the updating table (e.g. MySQL).
idents = []
related_ids = collections.defaultdict(list)
for rows in query.get_compiler(self.using).execute_sql(MULTI):
pks = [row if is_composite_pk else row[0] for row in rows]
idents.extend(pks)
for parent, index in related_ids_index:
related_ids[parent].extend(r[index] for r in rows)
self.query.add_filter("pk__in", idents)
self.query.related_ids = related_ids
else:
# The fast path. Filters and updates in one query.
self.query.add_filter("pk__in", query)
self.query.reset_refcounts(refcounts_before)
class SQLAggregateCompiler(SQLCompiler):
def as_sql(self):
"""
Create the SQL for this query. Return the SQL string and list of
parameters.
"""
sql, params = [], []
for annotation in self.query.annotation_select.values():
ann_sql, ann_params = self.compile(annotation)
ann_sql, ann_params = annotation.select_format(self, ann_sql, ann_params)
sql.append(ann_sql)
params.extend(ann_params)
self.col_count = len(self.query.annotation_select)
sql = ", ".join(sql)
params = tuple(params)
inner_query_sql, inner_query_params = self.query.inner_query.get_compiler(
self.using,
elide_empty=self.elide_empty,
).as_sql(with_col_aliases=True)
sql = "SELECT %s FROM (%s) subquery" % (sql, inner_query_sql)
params += inner_query_params
return sql, params
def cursor_iter(cursor, sentinel, col_count, itersize):
"""
Yield blocks of rows from a cursor and ensure the cursor is closed when
done.
"""
try:
for rows in iter((lambda: cursor.fetchmany(itersize)), sentinel):
yield rows if col_count is None else [r[:col_count] for r in rows]
finally:
cursor.close() | python | github | https://github.com/django/django | django/db/models/sql/compiler.py |
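`cursor_iter` relies on the two-argument `iter(callable, sentinel)` form, which calls the callable repeatedly until it returns the sentinel. A minimal sketch with a fake `fetchmany` (all names here are hypothetical):

```python
# Fake cursor data: fetchmany returns chunks, then an empty list (the sentinel).
data = [(1, "a", "x"), (2, "b", "y"), (3, "c", "z")]
pos = 0

def fetchmany(size):
    global pos
    chunk = data[pos:pos + size]
    pos += size
    return chunk

# iter(callable, sentinel) stops as soon as the callable returns the sentinel.
blocks = [
    [r[:2] for r in rows]                       # emulate col_count trimming
    for rows in iter(lambda: fetchmany(2), [])  # [] marks an exhausted cursor
]
assert blocks == [[(1, "a"), (2, "b")], [(3, "c")]]
```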
# This file is part of Buildbot. Buildbot is free software: you can
# redistribute it and/or modify it under the terms of the GNU General Public
# License as published by the Free Software Foundation, version 2.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
# details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc., 51
# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Copyright Buildbot Team Members
import os
import shutil
import tempfile
from twisted.trial import unittest
from buildbot.test.util.decorators import skipUnlessPlatformIs
from buildbot.util.private_tempdir import PrivateTemporaryDirectory
class TestTemporaryDirectory(unittest.TestCase):
# In this test we want to also check potential platform differences, so
# we don't mock the filesystem access
def setUp(self):
self.tempdir = tempfile.mkdtemp()
def tearDown(self):
shutil.rmtree(self.tempdir)
def test_simple(self):
with PrivateTemporaryDirectory(dir=self.tempdir) as dir:
self.assertTrue(os.path.isdir(dir))
self.assertFalse(os.path.isdir(dir))
@skipUnlessPlatformIs('posix')
def test_mode(self):
with PrivateTemporaryDirectory(dir=self.tempdir, mode=0o700) as dir:
self.assertEqual(0o40700, os.stat(dir).st_mode)
def test_cleanup(self):
ctx = PrivateTemporaryDirectory(dir=self.tempdir)
self.assertTrue(os.path.isdir(ctx.name))
ctx.cleanup()
self.assertFalse(os.path.isdir(ctx.name))
ctx.cleanup() # also check whether multiple calls don't throw
ctx.cleanup() | unknown | codeparrot/codeparrot-clean | ||
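The behaviour these tests exercise can be approximated with the standard library alone; a minimal sketch (assuming a POSIX platform, where `chmod` applies the exact mode) of a private temporary directory that is removed on exit:

```python
import os
import stat
import tempfile

# Create a temp directory, restrict it to the owner, and verify cleanup.
with tempfile.TemporaryDirectory() as d:
    os.chmod(d, 0o700)  # private: owner-only access (POSIX assumption)
    existed_inside = os.path.isdir(d)
    mode = stat.S_IMODE(os.stat(d).st_mode)
assert existed_inside
assert mode == 0o700
assert not os.path.isdir(d)  # removed when the context exits
```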
"""
Votes for reviews. Either positive (vote is True) or negative (vote is False).
"""
from critiquebrainz.data import db
from sqlalchemy.dialects.postgresql import UUID
from critiquebrainz.data.model.mixins import DeleteMixin
from datetime import datetime
class Vote(db.Model, DeleteMixin):
__tablename__ = 'vote'
user_id = db.Column(UUID, db.ForeignKey('user.id', ondelete='CASCADE'), primary_key=True)
revision_id = db.Column(db.Integer, db.ForeignKey('revision.id', ondelete='CASCADE'), primary_key=True)
vote = db.Column(db.Boolean, nullable=False)
rated_at = db.Column(db.DateTime, default=datetime.utcnow, nullable=False)
@classmethod
def create(cls, user, review, vote):
"""Create new vote for the latest revision of a specified review."""
# Deleting the vote from the last revision if it exists
cls.query.filter_by(user=user, revision=review.last_revision).delete()
# Creating a new vote for the last revision
vote_obj = cls(user=user, revision=review.last_revision, vote=vote)
db.session.add(vote_obj)
db.session.commit()
return vote_obj
def to_dict(self):
response = dict(vote=self.vote, voted_at=self.rated_at)
return response | unknown | codeparrot/codeparrot-clean | ||
#include <tuple>
#include <vector>
#include <ATen/ATen.h>
#include <torch/library.h>
#include <ATen/native/quantized/cpu/fbgemm_utils.h>
#include <ATen/native/quantized/cpu/QnnpackUtils.h>
#include <ATen/native/quantized/cpu/OnednnUtils.h>
#include <ATen/native/quantized/cpu/QuantUtils.h>
#include <ATen/native/quantized/PackedParams.h>
#ifdef USE_FBGEMM
template <int kSpatialDim>
std::tuple<at::Tensor, std::optional<at::Tensor>> PackedConvWeight<
kSpatialDim>::unpack() {
auto* packed_weights_p = w.get();
// output channels
const int output_channels = packed_weights_p->outputChannels();
const int input_channels = packed_weights_p->inputChannels();
const int groups = packed_weights_p->groups();
const int kernel_d = kSpatialDim == 2 ? 1 : kernel[0];
// R (kernel height)
const int kernel_h = kernel[kSpatialDim - 2];
// S (kernel width)
const int kernel_w = kernel[kSpatialDim - 1];
const int C_per_G = input_channels / groups;
// Tensor for unpacked weights
// Unpacked format would be physical KRS(C/G) but logical KCRS (channels
// first) because that's how
// FBGEMM stores the weights.
// ChannelsLast3d is not available now.
// TODO: Unify 2d and 3d when ChannelsLast3d is ready.
at::Tensor unpacked_weights;
if (q_scheme == c10::kPerTensorAffine) {
unpacked_weights = kSpatialDim == 2
? at::_empty_affine_quantized(
{output_channels, C_per_G, kernel_h, kernel_w},
at::device(c10::kCPU)
.dtype(c10::kQInt8)
.memory_format(c10::MemoryFormat::ChannelsLast),
w_scale[0],
w_zp[0],
std::nullopt)
: at::native::fbgemm_utils::
MakeEmptyAffineQuantizedChannelsLast3dTensor(
output_channels,
C_per_G,
kernel_d,
kernel_h,
kernel_w,
at::device(c10::kCPU).dtype(c10::kQInt8),
w_scale[0],
w_zp[0]);
} else if (q_scheme == c10::kPerChannelAffine) {
TORCH_CHECK(
!transpose(),
"Per Channel Quantization is currently disabled for transposed conv");
auto scales = at::from_blob(
w_scale.data(), w_scale.size(), at::device(c10::kCPU).dtype(c10::kFloat));
auto zero_points = at::from_blob(
w_zp.data(), w_zp.size(), at::device(c10::kCPU).dtype(c10::kInt));
unpacked_weights = kSpatialDim == 2
? at::_empty_per_channel_affine_quantized(
{output_channels, C_per_G, kernel_h, kernel_w},
scales.toType(c10::kDouble),
zero_points.toType(c10::kLong),
0, /* The output channel axis is 0 */
at::device(c10::kCPU).dtype(c10::kQInt8),
c10::MemoryFormat::ChannelsLast)
: at::native::fbgemm_utils::
MakeEmptyPerChannelAffineQuantizedChannelsLast3dTensor(
output_channels,
C_per_G,
kernel_d,
kernel_h,
kernel_w,
at::device(c10::kCPU).dtype(c10::kQInt8),
scales.toType(c10::kDouble),
zero_points.toType(c10::kLong));
} else {
TORCH_CHECK(false, "Unsupported qscheme: ", toString(q_scheme));
}
int8_t* unpacked_weights_p =
reinterpret_cast<int8_t*>(unpacked_weights.data_ptr<c10::qint8>());
packed_weights_p->unpack(unpacked_weights_p);
if(transpose()){
unpacked_weights =
at::native::fbgemm_utils::TransposeConvTensorUnpackConversion<
kSpatialDim>(unpacked_weights, groups);
}
return std::tuple<at::Tensor, std::optional<at::Tensor>>(
unpacked_weights, bias);
}
template std::tuple<at::Tensor, std::optional<at::Tensor>> PackedConvWeight<
2>::unpack();
template std::tuple<at::Tensor, std::optional<at::Tensor>> PackedConvWeight<
3>::unpack();
#endif // USE_FBGEMM
#ifdef USE_PYTORCH_QNNPACK
template <int kSpatialDim>
std::tuple<at::Tensor, std::optional<at::Tensor>> PackedConvWeightsQnnp<
kSpatialDim>::unpack() {
TORCH_CHECK(
kSpatialDim == 2,
"QNNPACK only supports conv2d_unpack right "
"now.");
TORCH_CHECK(
orig_weight.defined(),
"Cannot unpack weights. "
"Call at::globalContext()::setReleaseOriginalWeights(false) before packing or loading to enable unpacking.");
return std::tuple<at::Tensor, std::optional<at::Tensor>>(orig_weight, bias);
}
template std::tuple<at::Tensor, std::optional<at::Tensor>> PackedConvWeightsQnnp<
2>::unpack();
template std::tuple<at::Tensor, std::optional<at::Tensor>> PackedConvWeightsQnnp<
3>::unpack();
#endif // USE_PYTORCH_QNNPACK
#if AT_MKLDNN_ENABLED()
template <int kSpatialDim>
std::tuple<at::Tensor, std::optional<at::Tensor>> PackedConvWeightsOnednn<
kSpatialDim>::unpack() {
return std::tuple<at::Tensor, std::optional<at::Tensor>>(
orig_weight_.clone(), orig_bias_);
}
template std::tuple<at::Tensor, std::optional<at::Tensor>> PackedConvWeightsOnednn<
2>::unpack();
template std::tuple<at::Tensor, std::optional<at::Tensor>> PackedConvWeightsOnednn<
3>::unpack();
#endif // #if AT_MKLDNN_ENABLED() | cpp | github | https://github.com/pytorch/pytorch | aten/src/ATen/native/quantized/cpu/qconv_unpack_impl.cpp |