[Tests](https://github.com/troycomi/reportseff/actions?workflow=Tests)
[Codecov](https://codecov.io/gh/troycomi/reportseff)
[PyPI](https://pypi.org/project/reportseff/)
# `reportseff`
> A python script for tabular display of slurm efficiency information

## About
### Motivation
Whether a sys admin or cluster user, knowing how well you are estimating job
resources can help streamline job scheduling and maximize your priority. If you
have ever tried to use `sacct` you probably had some trouble interpreting the
output. While `seff` or `jobstats` can provide detailed summaries, they don't
scale easily to array jobs or offer a way to see all the jobs from a single
user. `reportseff` aims to fill this role. Read more about the [motivation
for reportseff](https://github.com/troycomi/reportseff/blob/main/ABOUT.md).
### Audience
If you are running more than one slurm job at a time, you should try
`reportseff`. Users of HPC systems can get an idea how well they estimate
resource usage. By tuning these values, you can get scheduled earlier and not
be penalized for unused allocations. Since `reportseff` can parse job ids from
slurm output files, it simplifies the task of identifying which jobs have
failed and why. Sys admins can pipe `reportseff` output to identify users with
poor utilization or produce summaries at the end of a billing cycle.
### Implementation
`reportseff` is a wrapper around `sacct` that provides more complex option
parsing, simpler options, and cleaner, colored outputs. All querying is
performed in a single call to `sacct` and should have similar performance.
Multi-node and GPU utilization is acquired from information contained in the
`AdminComment` field, as generated by `jobstats`.
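As a sketch of what single-call batching can look like, the snippet below builds one `sacct` command line for many jobs at once. The flag set and field list here are illustrative assumptions, not reportseff's actual command:

```python
# Hypothetical sketch of batching every requested job into one sacct
# invocation, as described above. The exact flags and field list are
# illustrative, not reportseff's actual command line.
def sacct_command(job_ids, fields=("JobID", "State", "Elapsed", "TotalCPU", "MaxRSS")):
    return [
        "sacct", "-P", "-n",            # parsable output, no header
        "--format=" + ",".join(fields),
        "--jobs=" + ",".join(job_ids),  # one call for all jobs
    ]

print(" ".join(sacct_command(["24371789", "24220929"])))
# sacct -P -n --format=JobID,State,Elapsed,TotalCPU,MaxRSS --jobs=24371789,24220929
```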
## Usage
### Installation
`reportseff` runs on python >= 3.6.
The only external dependency is click (>= 6.7).
Calling
```sh
pip install --user reportseff
# OR
pipx install reportseff
```
will create command line bindings and install click.
### Sample Usage
Try `reportseff -u $USER` or just `reportseff` in a directory with some slurm
outputs. You may be surprised by your results!
#### Single job
Calling `reportseff` with a single jobid will provide information equivalent to
`seff` for that job. `reportseff 24371789` and `reportseff map_fastq_24371789`
produce the following output:
```txt
JobID State Elapsed CPUEff MemEff
24371789 COMPLETED 03:08:03 71.2% 45.7%
```
#### Single array job
Providing either the raw job id or the array job id will get efficiency
information for a single element of the array job. `reportseff 24220929_421`
and `reportseff 24221219` generate:
```txt
JobID State Elapsed CPUEff MemEff
24220929_421 COMPLETED 00:09:34 99.0% 34.6%
```
#### Array job group
If the base job id of an array is provided, all elements of the array will
be added to the output. `reportseff 24220929`
```txt
JobID State Elapsed CPUEff MemEff
24220929_1 COMPLETED 00:10:43 99.2% 33.4%
24220929_11 COMPLETED 00:10:10 99.2% 37.5%
24220929_21 COMPLETED 00:09:25 98.8% 36.1%
24220929_31 COMPLETED 00:09:19 98.9% 33.3%
24220929_41 COMPLETED 00:09:23 98.9% 33.3%
24220929_51 COMPLETED 00:08:02 98.5% 36.3%
...
24220929_951 COMPLETED 00:25:12 99.5% 33.5%
24220929_961 COMPLETED 00:39:26 99.7% 34.1%
24220929_971 COMPLETED 00:24:11 99.5% 34.2%
24220929_981 COMPLETED 00:24:50 99.5% 44.3%
24220929_991 COMPLETED 00:13:05 98.7% 33.7%
```
#### Glob expansion of slurm outputs
Because slurm output files can act as job id inputs, the following can
get all seff information for a given job name:
```txt
slurm_out ❯❯❯ reportseff split_ubam_24*
JobID State Elapsed CPUEff MemEff
split_ubam_24342816 COMPLETED 23:30:32 99.9% 4.5%
split_ubam_24342914 COMPLETED 22:40:51 99.9% 4.6%
split_ubam_24393599 COMPLETED 23:43:36 99.4% 4.4%
split_ubam_24393655 COMPLETED 21:36:58 99.3% 4.5%
split_ubam_24418960 RUNNING 02:53:11 --- ---
split_ubam_24419972 RUNNING 01:26:26 --- ---
```
#### No arguments
Without arguments, reportseff will try to find slurm output files in the
current directory. Combine with `watch` to monitor job progress:
`watch -cn 300 reportseff --modified-sort`
```txt
JobID State Elapsed CPUEff MemEff
split_ubam_24418960 RUNNING 02:56:14 --- ---
fastq_to_ubam_24419971 RUNNING 01:29:29 --- ---
split_ubam_24419972 RUNNING 01:29:29 --- ---
fastq_to_ubam_24393600 COMPLETED 1-02:00:47 58.3% 41.1%
map_fastq_24419330 RUNNING 02:14:53 --- ---
map_fastq_24419323 RUNNING 02:15:24 --- ---
map_fastq_24419324 RUNNING 02:15:24 --- ---
map_fastq_24419322 RUNNING 02:15:24 --- ---
mark_adapters_24418437 COMPLETED 01:29:23 99.8% 48.2%
mark_adapters_24418436 COMPLETED 01:29:03 99.9% 47.4%
```
#### Filtering slurm output files
One useful application of `reportseff` is filtering a directory of slurm output
files based on the state or time since running. Additionally, if only the
`jobid` is specified as a format output, the filenames will be returned in a
pipe-friendly manner:
```txt
old_runs ❯❯❯ reportseff --since d=4 --state Timeout
JobID State Elapsed CPUEff MemEff
call_variants_31550458 TIMEOUT 20:05:17 99.5% 0.0%
call_variants_31550474 TIMEOUT 20:05:17 99.6% 0.0%
call_variants_31550500 TIMEOUT 20:05:08 99.7% 0.0%
old_runs ❯❯❯ reportseff --since d=4 --state Timeout --format jobid
call_variants_31550458
call_variants_31550474
call_variants_31550500
```
To find all lines with `output:` in jobs which have timed out or failed
in the last 4 days:
```sh
reportseff --since 'd=4' --state TO,F --format jobid | xargs grep output:
```
### Arguments
Jobs can be passed as arguments in the following ways:
- Job ID such as 1234567. If the id is part of an array job, only the element
for that ID will be displayed. If the id is the base part of an array job,
all elements in the array will be displayed.
- Array Job ID such as 1234567\_89. Will display only the element specified.
- Slurm output file. Format must be BASE\_%A\_%a. BASE is optional as is a
'.out' suffix. Unix glob expansions can also be used to filter which jobs
are displayed.
- From current directory. If no argument is supplied, `reportseff` will attempt
to find slurm output files in the current directory as described above.
If a user is provided, `reportseff` will instead show recent jobs for that user.
If only `since` is set, recent jobs for all users will be shown (if allowed).
- Supplying a directory as a single argument will override the current
directory to check for slurm outputs.
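The filename convention above (`BASE_%A_%a` with an optional `.out` suffix) can be illustrated with a small parsing sketch. This is a hypothetical illustration, not reportseff's actual parser:

```python
import re

# Pull the job id (and optional array index) out of names following the
# BASE_%A_%a convention described above; BASE and the ".out" suffix are
# optional. Illustrative only -- not reportseff's actual implementation.
JOB_RE = re.compile(r"(?:.*?_)?(\d+)(?:_(\d+))?(?:\.out)?$")

def job_id(name: str) -> str:
    base, index = JOB_RE.match(name).groups()
    return f"{base}_{index}" if index else base

print(job_id("map_fastq_24371789.out"))   # 24371789
print(job_id("split_ubam_24220929_421"))  # 24220929_421
```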
### Options
- `--color/--no-color`: Force color output or not. By default, will force color
output. With the no-color flag, click will strip color codes for everything
besides stdout.
- `--modified-sort`: Instead of sorting by filename/jobid, sort by last
modification time of the slurm output file.
- `--debug`: Write sacct result to stderr.
- `--user/-u`: Ignore job arguments and instead query sacct with provided user.
Returns all jobs from the last week.
- `--state/-s`: Output only jobs with states matching one of the provided options.
Accepts comma separated values of job codes (e.g. 'R') or full names
(e.g. RUNNING). Case insensitive.
- `--not-state/-S`: Output only jobs with states not matching any of the provided options.
Accepts comma separated values of job codes (e.g. 'R') or full names
(e.g. RUNNING). Case insensitive.
- `--format`: Provide a comma separated list of columns to produce. Prefixing the
argument with `+` adds the specified values to the defaults. Values can
be any valid column name to sacct and the custom efficiency values: TimeEff,
CPUEff, MemEff. Can also optionally set alignment (<, ^, >) and maximum width.
Default is center-aligned with a width of the maximum column entry. For
example, `--format 'jobid%>,state%10,memeff%<5'` produces 3 columns with:
- JobId aligned right, width set automatically
- State with width 10 (center aligned by default)
- MemEff aligned left, width 5
- `--since`: Limit results to those occurring after the specified time. Accepts
sacct formats and a comma separated list of key/value pairs. To get jobs in
the last hour and a half, can pass `h=1,m=30`.
- `--node/-n`: Display information for multi-node jobs; requires additional
sacct fields from jobstats.
- `--node-and-gpu/-g`: Display information for multi-node jobs and GPU information;
requires additional sacct fields from jobstats.
- `--parsable/-p`: Ignore formatting and output as a `|` delimited table. Useful
for piping into more complex analyses.
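The `--since` shorthand (`d=4,h=1,m=30`) reads as key/value time offsets. The following is a minimal sketch of that interpretation, assuming only w/d/h/m keys; it is not reportseff's actual parsing code:

```python
from datetime import timedelta

# Map the shorthand keys described above to timedelta arguments. The key set
# (w/d/h/m) and the parsing itself are an illustration, not reportseff's code.
UNITS = {"w": "weeks", "d": "days", "h": "hours", "m": "minutes"}

def parse_since(spec: str) -> timedelta:
    kwargs = {}
    for pair in spec.split(","):
        key, _, value = pair.partition("=")
        kwargs[UNITS[key.strip()]] = int(value)
    return timedelta(**kwargs)

print(parse_since("h=1,m=30"))  # 1:30:00
```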
## Status, Contributions, and Support
`reportseff` is actively maintained but currently feature complete. If there
is a function missing, please open an issue to discuss its merit!
Bug reports, pull requests, and any feedback are welcome! Prior to submitting
a pull request, be sure any new features have been tested and all unit tests
are passing. In the cloned repo with
[poetry](https://github.com/python-poetry/poetry#installation) installed:
```sh
poetry install
poetry run pytest
poetry run pre-commit install
nox
```
## Troubleshooting
### I can't install, what is pip?
[pip](https://pip.pypa.io/en/stable/) is the package installer for python. If
you get an error that pip isn't found, look for a python/anaconda/conda module.
[pipx](https://pypa.github.io/pipx/) ensures that each application is installed
in an isolated environment. This resolves issues of dependency versions and
allows applications to be run from any environment.
### I get an error about broken pipes when chaining to other commands
Python will report that the consumer of process output has closed the stream
(i.e. the pipe) while still attempting to write. Newer versions of click
should suppress the warning output, but it seems to not always work. Besides
some extra printing on stderr, the output is not affected.
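For illustration, the standalone sketch below shows one common way a Python CLI can exit cleanly when its consumer closes the pipe. This is generic Python, not reportseff's or click's actual code:

```python
import os
import sys

def emit(line: str) -> None:
    # Print a line; if the downstream consumer (e.g. `head -1`) has already
    # closed the pipe, silence stdout and exit instead of warning on stderr.
    try:
        print(line, flush=True)
    except BrokenPipeError:
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, sys.stdout.fileno())
        sys.exit(0)

emit("JobID State Elapsed CPUEff MemEff")
```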
### My jobs don't have any information about multiple nodes or GPU efficiency
Because `sacct` doesn't currently record this information, `reportseff`
retrieves it from a custom field from `jobstats`, developed at Princeton
University. If you are outside a Research Computing cluster, that information
will likely be absent. Node-level reporting is only shown for jobs which use
multiple nodes or GPUs. If you need a list of where jobs were run, you can add
`--format +NodeList`.
### My output is garbled with ESC[0m all over, where's the color?
Those are ANSI color codes. Click will usually strip these if it detects
the consuming process can't display color codes, but `reportseff` defaults
to always display them. If you don't care for color, use the `--no-color`
option. For less, you can set
```
export LESS="-R"
```
in your `.bashrc`, or just type `-R` in an active less process. Some versions
of `watch` require the `-c` option to display color, others can't display
colors properly. If you search for `ansi color <tool>` you should get some
solutions.
## Acknowledgments
The code for calling sacct and parsing the returned information was taken
from [Slurmee](https://github.com/PrincetonUniversity/slurmee).
Style and tooling from [hypermodern python](https://cjolowicz.github.io/posts/hypermodern-python-01-setup/).
Code review provided by a [repo-review](https://researchcomputing.princeton.edu/services/repo-review-consultations),
which vastly improved this readme.
# [reposcraping](https://pypi.org/project/reposcraping/)
### Scraping GitHub repository
This library allows you to access the names of files and folders of any GitHub repository. The Cloner class allows you to clone the file types you want to the path you want.<br>
## downloads
[Downloads](https://pepy.tech/project/reposcraping)
## setup
```bash
pip install reposcraping
```
## usage
```python
from reposcraping import RepoScraping
from reposcraping.cloner import Cloner
scraping = RepoScraping(
"https://github.com/emresvd/random-video",
p=True,
)
print(scraping.tree_urls)
print(scraping.file_urls)
cloner = Cloner(scraping)
cloner.clone(
paths={
".py": "files/python_files",
".txt": "files/text_files",
".md": "files/markdown_files",
".html": "files/html_files",
"": "files/other_files",
},
only_file_name=True,
p=True,
)
```
## output
```python
['https://github.com/emresvd/random-video', 'https://github.com/emresvd/random-video/tree/master/random_video', 'https://github.com/emresvd/random-video/tree/master/special_search', 'https://github.com/emresvd/random-video/tree/master/static', 'https://github.com/emresvd/random-video/tree/master/templates', 'https://github.com/emresvd/random-video/tree/master/video', 'https://github.com/emresvd/random-video/tree/master/random_video/__pycache__', 'https://github.com/emresvd/random-video/tree/master/video/__pycache__', 'https://github.com/emresvd/random-video/tree/master/video/migrations', 'https://github.com/emresvd/random-video/tree/master/video/migrations/__pycache__']
['https://github.com/emresvd/random-video/blob/master/LICENSE.md', 'https://github.com/emresvd/random-video/blob/master/.gitignore', 'https://github.com/emresvd/random-video/blob/master/README.md', 'https://github.com/emresvd/random-video/blob/master/db.sqlite3', 'https://github.com/emresvd/random-video/blob/master/manage.py', 'https://github.com/emresvd/random-video/blob/master/requirements.txt', 'https://github.com/emresvd/random-video/blob/master/words.txt', 'https://github.com/emresvd/random-video/blob/master/static/ic_launcher-playstore.png', 'https://github.com/emresvd/random-video/blob/master/random_video/__init__.py', 'https://github.com/emresvd/random-video/blob/master/random_video/asgi.py', 'https://github.com/emresvd/random-video/blob/master/random_video/settings.py', 'https://github.com/emresvd/random-video/blob/master/random_video/urls.py', 'https://github.com/emresvd/random-video/blob/master/random_video/wsgi.py', 'https://github.com/emresvd/random-video/blob/master/special_search/car.txt', 'https://github.com/emresvd/random-video/blob/master/special_search/food.txt', 'https://github.com/emresvd/random-video/blob/master/special_search/rocket.txt', 'https://github.com/emresvd/random-video/blob/master/special_search/space.txt', 'https://github.com/emresvd/random-video/blob/master/special_search/travel.txt', 'https://github.com/emresvd/random-video/blob/master/static/favicon.ico', 'https://github.com/emresvd/random-video/blob/master/templates/download.html', 'https://github.com/emresvd/random-video/blob/master/templates/index.html', 'https://github.com/emresvd/random-video/blob/master/video/__init__.py', 'https://github.com/emresvd/random-video/blob/master/video/admin.py', 'https://github.com/emresvd/random-video/blob/master/video/apps.py', 'https://github.com/emresvd/random-video/blob/master/video/models.py', 'https://github.com/emresvd/random-video/blob/master/video/random_video.py', 'https://github.com/emresvd/random-video/blob/master/video/tests.py', 
'https://github.com/emresvd/random-video/blob/master/video/views.py', 'https://github.com/emresvd/random-video/blob/master/random_video/__pycache__/__init__.cpython-37.pyc', 'https://github.com/emresvd/random-video/blob/master/random_video/__pycache__/settings.cpython-37.pyc', 'https://github.com/emresvd/random-video/blob/master/random_video/__pycache__/urls.cpython-37.pyc', 'https://github.com/emresvd/random-video/blob/master/random_video/__pycache__/wsgi.cpython-37.pyc', 'https://github.com/emresvd/random-video/blob/master/video/__pycache__/__init__.cpython-37.pyc', 'https://github.com/emresvd/random-video/blob/master/video/__pycache__/admin.cpython-37.pyc', 'https://github.com/emresvd/random-video/blob/master/video/__pycache__/models.cpython-37.pyc', 'https://github.com/emresvd/random-video/blob/master/video/__pycache__/random_video.cpython-37.pyc', 'https://github.com/emresvd/random-video/blob/master/video/__pycache__/views.cpython-37.pyc', 'https://github.com/emresvd/random-video/blob/master/video/migrations/__init__.py', 'https://github.com/emresvd/random-video/blob/master/video/migrations/__pycache__/__init__.cpython-37.pyc']
```
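The `paths` mapping above routes each cloned file by extension, with `""` as the fallback. Below is a minimal sketch of that routing idea; it is an illustration only, not the library's implementation:

```python
from pathlib import PurePosixPath

def target_dir(url: str, paths: dict) -> str:
    # Pick a destination directory for a file URL based on its extension,
    # falling back to the "" entry for unmapped extensions.
    suffix = PurePosixPath(url).suffix
    if suffix in paths:
        return paths[suffix]
    return paths.get("", "files/other_files")

paths = {".py": "files/python_files", ".txt": "files/text_files", "": "files/other_files"}
print(target_dir("https://github.com/emresvd/random-video/blob/master/manage.py", paths))
# files/python_files
```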
import argparse
import copy
import io
import json
import os
from datetime import datetime
from getpass import getpass
from repocollector.github import GithubRepositoriesCollector
from repocollector.report import create_report
VERSION = "0.0.5"
def date(x: str) -> datetime:
"""
Check the passed date is well-formatted
:param x: a datetime
:return: datetime(x); raise an ArgumentTypeError otherwise
"""
try:
# String to datetime
x = datetime.strptime(x, '%Y-%m-%d')
except Exception:
raise argparse.ArgumentTypeError('Date format must be: YYYY-MM-DD')
return x
def unsigned_int(x: str) -> int:
"""
Check the number is greater than or equal to zero
:param x: a number
:return: int(x); raise an ArgumentTypeError otherwise
"""
x = int(x)
if x < 0:
raise argparse.ArgumentTypeError('Minimum bound is 0')
return x
def valid_path(x: str) -> str:
"""
Check the path exists
:param x: a path
:return: the path if exists; raise an ArgumentTypeError otherwise
"""
if not os.path.isdir(x):
raise argparse.ArgumentTypeError('Insert a valid path')
return x
def get_parser():
description = 'A Python library to collect repositories metadata from GitHub.'
parser = argparse.ArgumentParser(prog='repositories-repocollector', description=description)
parser.add_argument('-v', '--version', action='version', version='%(prog)s ' + VERSION)
parser.add_argument(action='store',
dest='since',
type=date,
default=datetime.strptime('2014-01-01', '%Y-%m-%d'),
help='collect repositories created since this date (default: %(default)s)')
parser.add_argument(action='store',
dest='until',
type=date,
default=datetime.strptime('2014-01-01', '%Y-%m-%d'),
help='collect repositories created up to this date (default: %(default)s)')
parser.add_argument(action='store',
dest='dest',
type=valid_path,
help='destination folder for report')
parser.add_argument('--pushed-after',
action='store',
dest='date_push',
type=date,
default=datetime.strptime('2014-01-01', '%Y-%m-%d'),
help='collect only repositories pushed after this date (default: %(default)s)')
parser.add_argument('--min-issues',
action='store',
dest='min_issues',
type=unsigned_int,
default=0,
help='collect repositories with at least <min-issues> issues (default: %(default)s)')
parser.add_argument('--min-releases',
action='store',
dest='min_releases',
type=unsigned_int,
default=0,
help='collect repositories with at least <min-releases> releases (default: %(default)s)')
parser.add_argument('--min-stars',
action='store',
dest='min_stars',
type=unsigned_int,
default=0,
help='collect repositories with at least <min-stars> stars (default: %(default)s)')
parser.add_argument('--min-watchers',
action='store',
dest='min_watchers',
type=unsigned_int,
default=0,
help='collect repositories with at least <min-watchers> watchers (default: %(default)s)')
parser.add_argument('--primary-language',
action='store',
dest='primary_language',
type=str,
default=None,
help='collect repositories written in this language')
parser.add_argument('--verbose',
action='store_true',
dest='verbose',
default=False,
help='show log (default: %(default)s)')
return parser
def main():
args = get_parser().parse_args()
token = os.getenv('GITHUB_ACCESS_TOKEN')
if not token:
token = getpass('Github access token:')
github = GithubRepositoriesCollector(access_token=token)
repositories = list()
for repository in github.collect_repositories(
since=args.since,
until=args.until,
pushed_after=args.date_push,
min_stars=args.min_stars,
min_releases=args.min_releases,
min_watchers=args.min_watchers,
min_issues=args.min_issues,
primary_language=args.primary_language):
# Save repository to collection
repositories.append(copy.deepcopy(repository))
if args.verbose:
print(f'Collected {repository["url"]}')
# Generate html report
html = create_report(repositories)
html_filename = os.path.join(args.dest, 'repositories.html')
json_filename = os.path.join(args.dest, 'repositories.json')
with io.open(html_filename, "w", encoding="utf-8") as f:
f.write(html)
with io.open(json_filename, "w") as f:
json.dump(repositories, f)
if args.verbose:
print(f'Report created at {html_filename}')
    exit(0)
from collections import OrderedDict
class Repository(dict):
    def __init__(self, items: dict = None):
        """Create a new repository."""
        # All of the repository items
        self._items = OrderedDict(items or {})
def has(self, key):
"""Determine if the given configuration value exists."""
return key in self._items
def get(self, key, default = None):
"""Get the specified value from the repository, or a default value."""
if type(key) is list:
raise NotImplementedError
return self._items.get(key, default)
    def set(self, key, value=None):
        """Set a given key (or merge a dict of keys) in the repository."""
        if isinstance(key, dict):
            self._items.update(key)
        else:
            self._items[key] = value
def prepend(self, key, value):
"""Prepend a value onto an list repository value."""
items = self.get(key)
items.insert(0, value)
self.set(key, items)
def append(self, key, value):
"""Append a value onto an list repository value."""
items = self.get(key)
items.append(value)
self.set(key, items)
def remove(self, key):
"""Remove the given key from the repository."""
del self._items[key]
def all(self):
return self._items
def keys(self):
return self._items.keys()
def items(self):
return self._items.items()
def values(self):
return self._items.values()
def __len__(self):
return len(self._items)
def __getitem__(self, key):
return self.get(key)
def __setitem__(self, key, value):
return self.set(key, value)
def __delitem__(self, key):
del self._items[key]
def __missing__(self, key):
raise KeyError(key)
def __iter__(self):
return iter(self._items)
def __reversed__(self):
return OrderedDict(reversed(list(self._items.items())))
def __contains__(self, key):
        return key in self._items
from collections.abc import Callable
from typing import Any, TypeVar
from click import BOOL, STRING, Choice, File, option
from .click_param_types import JSON
T = TypeVar("T")
def optional_brackets(func: Callable) -> Callable:
"""With this decorator it is possible to write decorators without ()."""
def wrapper(*args: dict, **kwargs: dict) -> Any: # noqa: ANN401
if len(args) >= 1 and callable(args[0]):
return func()(args[0])
return func(*args, **kwargs)
return wrapper
@optional_brackets
def option_quiet() -> Callable[[T], T]:
"""Get parameter option for quiet."""
return option(
"--quiet",
is_flag=True,
default=False,
type=BOOL,
)
@optional_brackets
def option_jq_filter() -> Callable[[T], T]:
"""Get parameter option for jq filter."""
return option(
"--jq-filter",
default=".",
type=STRING,
required=False,
help="filter for jq",
)
@optional_brackets
def option_data_model() -> Callable[[T], T]:
"""Get parameter option for data model."""
return option(
"--data-model",
type=Choice(["rdm", "marc21", "lom"]),
default="rdm",
)
@optional_brackets
def option_record_type() -> Callable[[T], T]:
"""Get parameter option for record type."""
return option(
"--record-type",
type=Choice(["record", "draft"], case_sensitive=True),
default="record",
)
@optional_brackets
def option_identifier(
required: bool = True, # noqa: FBT001, FBT002
) -> Callable[[T], T]:
"""Get parameter options for metadata identifier.
Sample use: --identifier '{ "identifier": "10.48436/fcze8-4vx33", "scheme": "doi"}'
"""
return option(
"--identifier",
"-i",
required=required,
type=JSON(validate=["identifier", "scheme"]),
help="metadata identifier as JSON",
)
# TODO: rename to option_pid
@optional_brackets
def option_pid_identifier(
required: bool = True, # noqa: FBT001, FBT002
) -> Callable[[T], T]:
"""Get parameter options for metadata identifier.
Sample use: --pid-identifier '{"doi": {"identifier": "10.48436/fcze8-4vx33",
"provider": "unmanaged"}}'
"""
return option(
"--pid-identifier",
required=required,
type=JSON(),
help="pid identifier as JSON",
)
# TODO: rename to option_id, refactor to the true concept of the used id
@optional_brackets
def option_pid(
required: bool = True, # noqa: FBT001, FBT002
) -> Callable[[T], T]:
"""Get parameter options for record PID.
Sample use: --pid "fcze8-4vx33"
"""
return option(
"--pid",
"-p",
metavar="PID_VALUE",
required=required,
help="persistent identifier of the object to operate on",
)
@optional_brackets
def option_input_file(
required: bool = True, # noqa: FBT001, FBT002
type_: File = None,
name: str = "input_file",
help_: str = "name of file to read from",
) -> Callable[[T], T]:
"""Get parameter options for input file.
Sample use: --input-file "input.json"
"""
if not type_:
type_ = File("r")
return option(
"--input-file",
name,
metavar="string",
required=required,
help=help_,
type=type_,
)
@optional_brackets
def option_output_file(
required: bool = True, # noqa: FBT001, FBT002
) -> Callable[[T], T]:
"""Get parameter options for output file.
Sample use: --output-file "output.json"
"""
return option(
"--output-file",
metavar="string",
required=required,
help="name of file to write to",
type=File("w"),
    )
"""Commonly used utility functions."""
from __future__ import annotations
from contextlib import suppress
from typing import TYPE_CHECKING
from flask_principal import Identity, RoleNeed
from invenio_access.permissions import any_user, system_process
from invenio_accounts import current_accounts
from invenio_rdm_records.proxies import current_rdm_records
from invenio_rdm_records.records.models import RDMDraftMetadata, RDMRecordMetadata
from invenio_records_lom import current_records_lom
from invenio_records_lom.records.models import LOMDraftMetadata, LOMRecordMetadata
from invenio_records_marc21 import Marc21Metadata, current_records_marc21
from invenio_records_marc21.records import DraftMetadata as Marc21DraftMetadata
from invenio_records_marc21.records import RecordMetadata as Marc21RecordMetadata
from sqlalchemy.orm.exc import NoResultFound
if TYPE_CHECKING:
from invenio_db import db
from invenio_drafts_resources.records.api import Draft, Record
from invenio_records_resources.services import RecordService
from invenio_records_resources.services.records.results import RecordItem
BELOW_CONTROLFIELD = 10
class IdentityNotFoundError(Exception):
"""Identity not found exception."""
def __init__(self, role: str) -> None:
"""Construct IdentityNotFound."""
msg = f"Role {role} does not exist"
super().__init__(msg)
def get_identity(
permission_name: str = "any_user",
role_name: str | None = None,
) -> Identity:
"""Get an identity to perform tasks.
Default permission is "any_user"
"""
identity = Identity(0)
permission = any_user
if permission_name == "system_process":
permission = system_process
if role_name:
role = current_accounts.datastore.find_role(role_name)
if role:
identity.provides.add(RoleNeed(role_name))
else:
raise IdentityNotFoundError(role=role_name)
identity.provides.add(permission)
return identity
def get_draft(service: RecordService, pid: str, identity: Identity) -> Draft | None:
"""Get current draft of record.
None will be returned if there is no draft.
"""
# check if record exists
service.read(id_=pid, identity=identity)
with suppress(Exception):
return service.read_draft(id_=pid, identity=identity)
def get_record_item(service: RecordService, pid: str, identity: Identity) -> RecordItem:
"""Get record item."""
try:
record_item = service.read(id_=pid, identity=identity)
except NoResultFound:
try:
record_item = service.read_draft(id_=pid, identity=identity)
except NoResultFound as exc:
msg = f"Record ({pid}) does not exists"
raise RuntimeError(msg) from exc
return record_item
def get_data(service: RecordService, pid: str, identity: Identity) -> dict:
"""Get data."""
return get_record_item(service, pid, identity).data
def get_record_or_draft(
service: RecordService,
pid: str,
identity: Identity,
) -> Draft | Record:
"""Get record or draft."""
return get_record_item(service, pid, identity)._record
def get_records_service(data_model: str = "rdm") -> RecordService:
"""Get records service."""
available_services = {
"rdm": current_rdm_records.records_service,
"marc21": current_records_marc21.records_service,
"lom": current_records_lom.records_service,
}
return available_services.get(data_model, current_rdm_records.records_service)
def get_metadata_model(
data_model: str = "rdm",
record_type: str = "record",
) -> db.Model:
"""Get the record model."""
available_models = {
"record": {
"rdm": RDMRecordMetadata,
"marc21": Marc21RecordMetadata,
"lom": LOMRecordMetadata,
},
"draft": {
"rdm": RDMDraftMetadata,
"marc21": Marc21DraftMetadata,
"lom": LOMDraftMetadata,
},
}
    try:
        _type = available_models[record_type]
    except KeyError as exc:
        msg = "the used record_type should be of the list [record, draft]"
        raise RuntimeError(msg) from exc
    try:
        return _type[data_model]
    except KeyError as exc:
        msg = "the used data_model should be of the list [rdm, marc21]"
        raise RuntimeError(msg) from exc
def update_record(
service: RecordService,
pid: str,
identity: Identity,
new_data: dict,
old_data: dict,
) -> None:
"""Update record with new data.
If there is an error during publishing, the record will be set back
WARNING: If there is an unpublished draft, the data of it will be lost.
"""
do_publish = False
if not exists_draft(service, pid, identity):
service.edit(id_=pid, identity=identity)
do_publish = True
try:
service.update_draft(id_=pid, identity=identity, data=new_data)
if do_publish:
service.publish(id_=pid, identity=identity)
except Exception as error:
service.update_draft(id_=pid, identity=identity, data=old_data)
raise error # noqa: TRY201
def add_metadata_to_marc21_record(metadata: dict, addition: dict) -> dict:
"""Add fields to marc21 record."""
marc21 = Marc21Metadata(json=metadata["metadata"])
for field_number, fields in addition["metadata"]["fields"].items():
if int(field_number) < BELOW_CONTROLFIELD:
marc21.emplace_controlfield(field_number, fields)
else:
for field in fields:
selector = f"{field_number}.{field['ind1']}.{field['ind2']}."
marc21.emplace_datafield(selector, subfs=field["subfields"])
metadata.update(marc21.json)
return metadata
def exists_record(service: RecordService, pid: str, identity: Identity) -> bool:
"""Check if record exists and is not deleted."""
try:
service.read(id_=pid, identity=identity)
except Exception:
return False
return True
def exists_draft(service: RecordService, pid: str, identity: Identity) -> bool:
"""Check if draft exists."""
try:
service.read_draft(id_=pid, identity=identity)
except Exception:
return False
    return True
import os
import pandas as pd
import re
from typing import List
from pydriller.git import Git
from pydriller.repository import Repository
from pydriller.metrics.process.change_set import ChangeSet
from pydriller.metrics.process.code_churn import CodeChurn
from pydriller.metrics.process.commits_count import CommitsCount
from pydriller.metrics.process.contributors_count import ContributorsCount
from pydriller.metrics.process.contributors_experience import ContributorsExperience
from pydriller.metrics.process.hunks_count import HunksCount
from pydriller.metrics.process.lines_count import LinesCount
from repominer.files import FailureProneFile
from typing import Any, Dict, Set, Union
def get_content(path: str) -> Union[str, None]:
""" Get the content of a file as plain text.
Parameters
----------
path : str
The path to the file.
Return
------
str
The file's content, if exists; None otherwise.
"""
if not os.path.isfile(path):
return None
try:
with open(path, 'r') as f:
return f.read()
except UnicodeDecodeError:
return None
def is_remote(path_to_repo: str) -> bool:
""" Check if the path links to a remote or local repository.
Parameters
----------
path_to_repo : str
The path to the repository.
    Returns
    -------
    bool
        True if a remote path; False otherwise.
"""
return path_to_repo.startswith("git@") or path_to_repo.startswith("https://")
class BaseMetricsExtractor:
""" This is the base class to extract metrics from IaC scripts.
It is extended by concrete classes to extract metrics for specific languages (e.g., Ansible and Tosca).
"""
def __init__(self, path_to_repo: str, clone_repo_to: str = None, at: str = 'release'):
""" The class constructor.
Parameters
----------
path_to_repo : str
The path to a local or remote repository.
clone_repo_to : str
Path to clone the repository to.
If path_to_repo links to a local repository, this parameter is not used. Otherwise it is mandatory.
at : str
When to extract metrics: at each release or each commit.
Attributes
----------
dataset: pandas.DataFrame
The metrics dataset, populated after ``extract()``.
Raises
------
ValueError
If `at` is not release or commit, or if the path to the remote repository does not link to a github or
gitlab repository.
NotImplementedError
The commit option is not implemented yet.
"""
if at not in ('release', 'commit'):
raise ValueError(f'{at} is not valid! Use \'release\' or \'commit\'.')
self.path_to_repo = path_to_repo
if is_remote(path_to_repo):
if not clone_repo_to:
raise ValueError('clone_repo_to is mandatory when linking to a remote repository.')
full_name_pattern = re.compile(r'git(hub|lab)\.com/([\w\W]+)$')
match = full_name_pattern.search(path_to_repo.replace('.git', ''))
if not match:
raise ValueError('The remote repository must be hosted on github or gitlab.')
repo_name = match.groups()[1].split('/')[1]
self.path_to_repo = os.path.join(clone_repo_to, repo_name)
if os.path.isdir(self.path_to_repo):
clone_repo_to = None
repo_miner = Repository(path_to_repo=path_to_repo,
clone_repo_to=clone_repo_to,
only_releases=True if at == 'release' else False,
order='date-order', num_workers=1)
self.commits_at = [commit.hash for commit in repo_miner.traverse_commits()]
self.dataset = pd.DataFrame()
def get_files(self) -> Set[str]:
""" Return all the files in the repository
Return
------
Set[str]
The set of filepath relative to the root of repository
"""
files = set()
for root, _, filenames in os.walk(self.path_to_repo):
if '.git' in root:
continue
for filename in filenames:
path = os.path.join(root, filename)
path = path.replace(self.path_to_repo, '')
if path.startswith('/'):
path = path[1:]
files.add(path)
return files
def get_product_metrics(self, script: str) -> Dict[str, Any]:
""" Extract source code metrics from a script.
Parameters
----------
script : str
The content of the script to extract metrics from.
Returns
-------
Dict[str, Any]
A dictionary of <metric, value>.
"""
return {}
def get_process_metrics(self, from_commit: str, to_commit: str) -> dict:
""" Extract process metrics for an evolution period.
Parameters
----------
from_commit : str
            Hash of the commit at the start of the release.
        to_commit : str
            Hash of the commit at the end of the release.
        Returns
        -------
        dict
            A dictionary of <metric_name, per-file values>.
        """
change_set = ChangeSet(self.path_to_repo, from_commit=from_commit, to_commit=to_commit)
code_churn = CodeChurn(self.path_to_repo, from_commit=from_commit, to_commit=to_commit, ignore_added_files=True)
commits_count = CommitsCount(self.path_to_repo, from_commit=from_commit, to_commit=to_commit)
contributors_count = ContributorsCount(self.path_to_repo, from_commit=from_commit, to_commit=to_commit)
highest_contributors_experience = ContributorsExperience(self.path_to_repo, from_commit=from_commit,
to_commit=to_commit)
median_hunks_count = HunksCount(self.path_to_repo, from_commit=from_commit, to_commit=to_commit)
lines_count = LinesCount(self.path_to_repo, from_commit=from_commit, to_commit=to_commit)
return {
'dict_change_set_max': change_set.max(),
'dict_change_set_avg': change_set.avg(),
'dict_code_churn_count': code_churn.count(),
'dict_code_churn_max': code_churn.max(),
'dict_code_churn_avg': code_churn.avg(),
'dict_commits_count': commits_count.count(),
'dict_contributors_count': contributors_count.count(),
'dict_minor_contributors_count': contributors_count.count_minor(),
'dict_highest_contributor_experience': highest_contributors_experience.count(),
'dict_hunks_median': median_hunks_count.count(),
'dict_additions': lines_count.count_added(),
'dict_additions_max': lines_count.max_added(),
'dict_additions_avg': lines_count.avg_added(),
'dict_deletions': lines_count.count_removed(),
'dict_deletions_max': lines_count.max_removed(),
'dict_deletions_avg': lines_count.avg_removed()}
def extract(self,
labeled_files: List[FailureProneFile],
product: bool = True,
process: bool = True,
delta: bool = False):
""" Extract metrics from labeled files.
Parameters
----------
labeled_files : List[FailureProneFile]
The list of FailureProneFile objects that are used to label a script as failure-prone (1) or clean (0).
product: bool
Whether to extract product metrics.
process: bool
Whether to extract process metrics.
delta: bool
Whether to extract delta metrics between two successive releases or commits.
"""
self.dataset = pd.DataFrame()
git_repo = Git(self.path_to_repo)
metrics_previous_release = dict() # Values for iac metrics in the last release
for commit in Repository(self.path_to_repo, order='date-order', num_workers=1).traverse_commits():
# To handle renaming in metrics_previous_release
for modified_file in commit.modified_files:
old_path = modified_file.old_path
new_path = modified_file.new_path
if old_path != new_path and old_path in metrics_previous_release:
                    # Rename key old_path to new_path
metrics_previous_release[new_path] = metrics_previous_release.pop(old_path)
if commit.hash not in self.commits_at:
continue
# Else
git_repo.checkout(commit.hash)
process_metrics = {}
if process:
# Extract process metrics
i = self.commits_at.index(commit.hash)
from_previous_commit = commit.hash if i == 0 else self.commits_at[i - 1]
to_current_commit = commit.hash # = self.commits_at[i]
process_metrics = self.get_process_metrics(from_previous_commit, to_current_commit)
for filepath in self.get_files():
file_content = get_content(os.path.join(self.path_to_repo, filepath))
if not file_content or self.ignore_file(filepath, file_content):
continue
tmp = FailureProneFile(filepath=filepath, commit=commit.hash, fixing_commit='')
if tmp not in labeled_files:
label = 0 # clean
else:
label = 1 # failure-prone
metrics = dict(
filepath=filepath,
commit=commit.hash,
committed_at=str(commit.committer_date),
failure_prone=label
)
if process_metrics:
metrics['change_set_max'] = process_metrics['dict_change_set_max']
metrics['change_set_avg'] = process_metrics['dict_change_set_avg']
metrics['code_churn_count'] = process_metrics['dict_code_churn_count'].get(filepath, 0)
metrics['code_churn_max'] = process_metrics['dict_code_churn_max'].get(filepath, 0)
metrics['code_churn_avg'] = process_metrics['dict_code_churn_avg'].get(filepath, 0)
metrics['commits_count'] = process_metrics['dict_commits_count'].get(filepath, 0)
metrics['contributors_count'] = process_metrics['dict_contributors_count'].get(filepath, 0)
metrics['minor_contributors_count'] = process_metrics['dict_minor_contributors_count'].get(filepath, 0)
metrics['highest_contributor_experience'] = process_metrics[
'dict_highest_contributor_experience'].get(filepath, 0)
metrics['hunks_median'] = process_metrics['dict_hunks_median'].get(filepath, 0)
metrics['additions'] = process_metrics['dict_additions'].get(filepath, 0)
metrics['additions_max'] = process_metrics['dict_additions_max'].get(filepath, 0)
metrics['additions_avg'] = process_metrics['dict_additions_avg'].get(filepath, 0)
metrics['deletions'] = process_metrics['dict_deletions'].get(filepath, 0)
metrics['deletions_max'] = process_metrics['dict_deletions_max'].get(filepath, 0)
metrics['deletions_avg'] = process_metrics['dict_deletions_avg'].get(filepath, 0)
if product:
metrics.update(self.get_product_metrics(file_content))
if delta:
delta_metrics = dict()
previous = metrics_previous_release.get(filepath, dict())
for metric, value in previous.items():
if metric in ('filepath', 'commit', 'committed_at', 'failure_prone'):
continue
difference = metrics.get(metric, 0) - value
delta_metrics[f'delta_{metric}'] = round(difference, 3)
metrics_previous_release[filepath] = metrics.copy()
metrics.update(delta_metrics)
                # DataFrame.append was removed in pandas 2.0; concat is the supported equivalent
                self.dataset = pd.concat([self.dataset, pd.DataFrame([metrics])], ignore_index=True)
git_repo.reset()
def ignore_file(self, path_to_file: str, content: str = None):
return False
def to_csv(self, filepath):
""" Save the metrics as csv
The file is saved asa
Parameters
----------
filepath : str
The path to the csv.
"""
with open(filepath, 'w') as out:
            self.dataset.to_csv(out, mode='w', index=False)
| /repository_miner-1.0.4-py3-none-any.whl/repominer/metrics/base.py | 0.849722 | 0.286862 | base.py | pypi |
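`extract()` builds delta metrics by subtracting, per file, the previous release's metric values from the current ones, rounding to three decimals. A stripped-down sketch of that step on plain dictionaries, with no git involved — the file path and metric names are illustrative only:

```python
def delta_metrics(current: dict, previous: dict) -> dict:
    """Compute delta_<metric> = current - previous, skipping identifier fields."""
    skip = ('filepath', 'commit', 'committed_at', 'failure_prone')
    deltas = {}
    for metric, value in previous.items():
        if metric in skip:
            continue
        deltas[f'delta_{metric}'] = round(current.get(metric, 0) - value, 3)
    return deltas

prev = {'filepath': 'roles/db/tasks/main.yml', 'lines_code': 120, 'num_tasks': 8}
curr = {'filepath': 'roles/db/tasks/main.yml', 'lines_code': 135, 'num_tasks': 8}
print(delta_metrics(curr, prev))  # {'delta_lines_code': 15, 'delta_num_tasks': 0}
```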
import yaml
from typing import List
from pydriller.repository import Repository
from pydriller.domain.commit import ModificationType
from repominer import filters, utils
from repominer.mining.ansible_modules import DATABASE_MODULES, FILE_MODULES, IDENTITY_MODULES, NETWORK_MODULES, \
STORAGE_MODULES
from repominer.mining.base import BaseMiner, FixingCommitClassifier
CONFIG_DATA_MODULES = DATABASE_MODULES + FILE_MODULES + IDENTITY_MODULES + NETWORK_MODULES + STORAGE_MODULES
class AnsibleMiner(BaseMiner):
""" This class extends BaseMiner to mine Ansible-based repositories
"""
def __init__(self, url_to_repo: str, clone_repo_to: str, branch: str = None):
        super().__init__(url_to_repo, clone_repo_to, branch)
self.FixingCommitClassifier = AnsibleFixingCommitClassifier
def ignore_file(self, path_to_file: str, content: str = None):
"""
Ignore non-Ansible files.
Parameters
----------
path_to_file: str
The filepath (e.g., repominer/mining/base.py).
content: str
The file content.
Returns
-------
bool
True if the file is not an Ansible file, and must be ignored. False, otherwise.
"""
return not filters.is_ansible_file(path_to_file)
class AnsibleFixingCommitClassifier(FixingCommitClassifier):
""" This class extends a FixingCommitClassifier to classify bug-fixing commits of Ansible files.
"""
def is_data_changed(self) -> bool:
for modified_file in self.commit.modified_files:
if modified_file.change_type != ModificationType.MODIFY or not filters.is_ansible_file(
modified_file.new_path):
continue
try:
source_code_before = yaml.safe_load(modified_file.source_code_before)
source_code_current = yaml.safe_load(modified_file.source_code)
data_before = [value for key, value in utils.key_value_list(source_code_before) if
key in CONFIG_DATA_MODULES]
data_current = [value for key, value in utils.key_value_list(source_code_current) if
key in CONFIG_DATA_MODULES]
return data_before != data_current
except yaml.YAMLError:
pass
return False
def is_include_changed(self) -> bool:
for modified_file in self.commit.modified_files:
if modified_file.change_type != ModificationType.MODIFY or not filters.is_ansible_file(
modified_file.new_path):
continue
try:
source_code_before = yaml.safe_load(modified_file.source_code_before)
source_code_current = yaml.safe_load(modified_file.source_code)
includes_before = [value for key, value in utils.key_value_list(source_code_before) if key in (
'include', 'include_role', 'include_tasks', 'include_vars', 'import_playbook', 'import_tasks',
'import_role')]
includes_current = [value for key, value in utils.key_value_list(source_code_current) if key in (
'include', 'include_role', 'include_tasks', 'include_vars', 'import_playbook', 'import_tasks',
'import_role')]
return includes_before != includes_current
except yaml.YAMLError:
pass
return False
def is_service_changed(self) -> bool:
for modified_file in self.commit.modified_files:
if modified_file.change_type != ModificationType.MODIFY or not filters.is_ansible_file(
modified_file.new_path):
continue
try:
source_code_before = yaml.safe_load(modified_file.source_code_before)
source_code_current = yaml.safe_load(modified_file.source_code)
services_before = [value for key, value in utils.key_value_list(source_code_before) if key == 'service']
services_current = [value for key, value in utils.key_value_list(source_code_current) if
key == 'service']
return services_before != services_current
except yaml.YAMLError:
pass
        return False
| /repository_miner-1.0.4-py3-none-any.whl/repominer/mining/ansible.py | 0.660172 | 0.167321 | ansible.py | pypi |
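All three `is_*_changed` checks in `AnsibleFixingCommitClassifier` parse a file before and after the commit and compare the values found under specific keys. The sketch below reproduces that comparison on already-parsed structures; `key_values` mirrors what `repominer.utils.key_value_list` is assumed to do:

```python
def key_values(node):
    """Recursively yield (key, value) pairs from nested dicts and lists."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield key, value
            yield from key_values(value)
    elif isinstance(node, list):
        for item in node:
            yield from key_values(item)

def values_changed(before, after, keys):
    """True if the values under any of `keys` differ between the two trees."""
    def pick(tree):
        return [value for key, value in key_values(tree) if key in keys]
    return pick(before) != pick(after)

before = [{'name': 'start db', 'service': {'name': 'mysql', 'state': 'started'}}]
after = [{'name': 'start db', 'service': {'name': 'mysql', 'state': 'restarted'}}]
print(values_changed(before, after, keys={'service'}))  # True
```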
def has_defect_pattern(text: str) -> bool:
string_pattern = ['error', 'bug', 'fix', 'issu', 'mistake', 'incorrect', 'fault', 'defect', 'flaw']
return any(word in text.lower() for word in string_pattern)
def has_conditional_pattern(text: str) -> bool:
string_pattern = ['logic', 'condit', 'boolean']
return any(word in text.lower() for word in string_pattern)
def has_storage_configuration_pattern(text: str) -> bool:
string_pattern = ['sql', 'db', 'databas']
return any(word in text.lower() for word in string_pattern)
def has_file_configuration_pattern(text: str) -> bool:
string_pattern = ['file', 'permiss']
return any(word in text.lower() for word in string_pattern)
def has_network_configuration_pattern(text: str) -> bool:
string_pattern = ['network', 'ip', 'address', 'port', 'tcp', 'dhcp']
return any(word in text.lower() for word in string_pattern)
def has_user_configuration_pattern(text: str) -> bool:
string_pattern = ['user', 'usernam', 'password']
return any(word in text.lower() for word in string_pattern)
def has_cache_configuration_pattern(text: str) -> bool:
return 'cach' in text.lower()
def has_dependency_pattern(text: str) -> bool:
string_pattern = ['requir', 'depend', 'relat', 'order', 'sync', 'compat', 'ensur', 'inherit']
return any(word in text.lower() for word in string_pattern)
def has_documentation_pattern(text: str) -> bool:
string_pattern = ['doc', 'comment', 'spec', 'licens', 'copyright', 'notic', 'header', 'readm']
return any(word in text.lower() for word in string_pattern)
def has_idempotency_pattern(text: str) -> bool:
return 'idempot' in text.lower()
def has_security_pattern(text: str) -> bool:
string_pattern = ['vul', 'ssl', 'secr', 'authent', 'password', 'secur', 'cve']
return any(word in text.lower() for word in string_pattern)
def has_service_pattern(text: str) -> bool:
string_pattern = ['servic', 'server']
return any(word in text.lower() for word in string_pattern)
def has_syntax_pattern(text: str) -> bool:
string_pattern = ['compil', 'lint', 'warn', 'typo', 'spell', 'indent', 'regex', 'variabl', 'whitespac']
    return any(word in text.lower() for word in string_pattern)
| /repository_miner-1.0.4-py3-none-any.whl/repominer/mining/rules.py | 0.592431 | 0.408513 | rules.py | pypi |
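Each rule above lowercases the commit message once and tests substring membership, so stemmed fragments such as 'fix' and 'issu' also match 'fixing' or 'issues'. A standalone copy of the defect rule, runnable on its own:

```python
def has_defect_pattern(text: str) -> bool:
    """True if the message contains any defect-related stem."""
    stems = ['error', 'bug', 'fix', 'issu', 'mistake',
             'incorrect', 'fault', 'defect', 'flaw']
    return any(word in text.lower() for word in stems)

print(has_defect_pattern("Fixed a race condition in the installer"))  # True
print(has_defect_pattern("Bump version to 1.2.0"))                    # False
```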
import abc
import logging
from contextlib import suppress
from typing import Dict, List, Type, TypeVar
from repository_orm.exceptions import TooManyEntitiesError
from ...exceptions import AutoIncrementError, EntityNotFoundError
from ...model import Entity, EntityID, EntityOrEntitiesT, EntityT
from .cache import Cache
log = logging.getLogger(__name__)
class Repository(abc.ABC):
"""Gather common methods and define the interface of the repositories.
Attributes:
database_url: URL specifying the connection to the database.
"""
@abc.abstractmethod
def __init__(
self,
database_url: str = "",
) -> None:
"""Initialize the repository attributes.
Args:
database_url: URL specifying the connection to the database.
"""
self.database_url = database_url
self.cache = Cache()
def add(
self, entities: EntityOrEntitiesT, merge: bool = False
) -> EntityOrEntitiesT:
"""Append an entity or list of entities to the repository.
If the id is not set, it will automatically increment the last available one.
        If `merge` is True, each added entity is merged with the stored version of
        the entity, if one exists.
Args:
entities: Entity or entities to add to the repository.
Returns:
entity or entities
"""
if isinstance(entities, Entity):
entity = entities
if isinstance(entity.id_, int) and entity.id_ < 0:
entity.id_ = self.next_id(entity)
if merge:
with suppress(EntityNotFoundError):
stored_entity = self.get(entity.id_, type(entity))
entity = stored_entity.merge(entity)
if self.cache.entity_has_not_changed(entity):
log.debug(
f"Skipping the addition of entity {entity} as it hasn't changed"
)
return entity
entity = self._add(entity)
self.cache.add(entity)
return entity
if isinstance(entities, list):
updated_entities: List[Entity] = []
for entity in entities:
updated_entities.append(self.add(entity, merge))
return updated_entities
raise ValueError("Please add an entity or a list of entities")
@abc.abstractmethod
def _add(self, entity: EntityT) -> EntityT:
"""Append an entity to the repository.
This method is specific to each database adapter.
Args:
entity: Entity to add to the repository.
Returns:
entity
"""
raise NotImplementedError
@abc.abstractmethod
def delete(self, entity: EntityT) -> None:
"""Delete an entity from the repository.
Args:
entity: Entity to remove from the repository.
"""
raise NotImplementedError
def get(
self,
id_: EntityID,
model: Type[EntityT],
attribute: str = "id_",
) -> EntityT:
"""Obtain an entity from the repository by it's ID.
Also save the entity in the cache
Args:
model: Entity class to obtain.
id_: ID of the entity to obtain.
Returns:
entity: Entity object that matches the id_
Raises:
EntityNotFoundError: If the entity is not found.
TooManyEntitiesError: If more than one entity was found.
"""
entities = self._get(value=id_, model=model, attribute=attribute)
if len(entities) > 1:
raise TooManyEntitiesError(
f"More than one entity was found with the {attribute} {id_}"
)
if len(entities) == 0:
raise EntityNotFoundError(
f"There are no entities of type {model.__name__} in the repository "
f"with {attribute} {id_}."
)
entity = entities[0]
entity.clear_defined_values()
self.cache.add(entity)
return entity
@abc.abstractmethod
def _get(
self,
value: EntityID,
model: Type[EntityT],
attribute: str = "id_",
) -> List[EntityT]:
"""Obtain all entities from the repository that match an id_.
If the attribute argument is passed, check that attribute instead.
Args:
value: Value of the entity attribute to obtain.
model: Entity class to obtain.
attribute: Entity attribute to check.
Returns:
entities: All entities that match the criteria.
"""
raise NotImplementedError
def all(
self,
model: Type[EntityT],
) -> List[EntityT]:
"""Get all the entities from the repository that match a model.
Also store the entities in the cache.
Args:
model: Entity class or classes to obtain.
"""
entities = sorted(self._all(model))
for entity in entities:
entity.clear_defined_values()
self.cache.add(entity)
return entities
@abc.abstractmethod
def _all(self, model: Type[EntityT]) -> List[EntityT]:
"""Get all the entities from the repository that match a model.
Particular implementation of the database adapter.
Args:
model: Entity class to obtain.
"""
raise NotImplementedError
@abc.abstractmethod
def commit(self) -> None:
"""Persist the changes into the repository."""
raise NotImplementedError
def search(
self,
fields: Dict[str, EntityID],
model: Type[EntityT],
) -> List[EntityT]:
"""Get the entities whose attributes match one or several conditions.
Also add the found entities to the cache.
Args:
model: Entity class to obtain.
fields: Dictionary with the {key}:{value} to search.
Returns:
entities: List of Entity object that matches the search criteria.
"""
found_entities = sorted(self._search(fields, model))
for entity in found_entities:
entity.clear_defined_values()
self.cache.add(entity)
return found_entities
@abc.abstractmethod
def _search(
self,
fields: Dict[str, EntityID],
model: Type[EntityT],
) -> List[EntityT]:
"""Get the entities whose attributes match one or several conditions.
Particular implementation of the database adapter.
Args:
model: Entity class to obtain.
fields: Dictionary with the {key}:{value} to search.
Returns:
entities: List of Entity object that matches the search criteria.
"""
raise NotImplementedError
@abc.abstractmethod
def apply_migrations(self, migrations_directory: str) -> None:
"""Run the migrations of the repository schema.
Args:
migrations_directory: path to the directory containing the migration
scripts.
"""
raise NotImplementedError
def last(
self,
model: Type[EntityT],
) -> EntityT:
"""Get the biggest entity from the repository.
Args:
model: Entity class to obtain.
Returns:
entity: Biggest Entity object that matches a model.
Raises:
EntityNotFoundError: If there are no entities.
"""
try:
return max(self.all(model))
except ValueError as error:
raise EntityNotFoundError(
f"There are no entities of type {model.__name__} in the repository."
) from error
def first(
self,
model: Type[EntityT],
) -> EntityT:
"""Get the smallest entity from the repository.
Args:
model: Type of entity object to obtain.
Returns:
entity: Smallest Entity object that matches a model.
Raises:
EntityNotFoundError: If there are no entities.
"""
try:
return min(self.all(model))
except ValueError as error:
raise EntityNotFoundError(
f"There are no entities of type {model.__name__} in the repository."
) from error
def next_id(self, entity: EntityT) -> int:
"""Return one id unit more than the last entity id in the repository index.
Args:
entity: Entity whose model we want to get the next entity id.
"""
try:
last_id = self.last(type(entity)).id_
except EntityNotFoundError:
return 0
if isinstance(last_id, int):
return last_id + 1
raise AutoIncrementError(
"Auto increment is not yet supported for Entities with string id_s. "
"Please set the id_ yourself before adding the entities to the "
"repository."
)
@abc.abstractmethod
def close(self) -> None:
"""Close the connection to the database."""
raise NotImplementedError
@property
@abc.abstractmethod
def is_closed(self) -> bool:
"""Inform if the connection is closed."""
raise NotImplementedError
@abc.abstractmethod
def empty(self) -> None:
"""Remove all entities from the repository."""
raise NotImplementedError
RepositoryT = TypeVar("RepositoryT", bound=Repository) | /repository-orm-1.3.3.tar.gz/repository-orm-1.3.3/src/repository_orm/adapters/data/abstract.py | 0.911303 | 0.323113 | abstract.py | pypi |
import logging
import os
import re
import sqlite3
from sqlite3 import ProgrammingError
from typing import Dict, List, Type, Union
from pypika import Query, Table, functions
from yoyo import get_backend, read_migrations
from ...exceptions import EntityNotFoundError
from ...model import EntityID, EntityT
from .abstract import Repository
log = logging.getLogger(__name__)
def _regexp(expression: str, item: str) -> bool:
"""Implement the REGEXP filter for SQLite.
Args:
expression: regular expression to check.
item: element to check.
Returns:
        True if the item matches the regular expression; False otherwise.
"""
reg = re.compile(expression)
return reg.search(item) is not None
class PypikaRepository(Repository):
"""Implement the repository pattern using the Pypika query builder."""
def __init__(
self,
database_url: str = "",
) -> None:
"""Initialize the repository attributes.
Args:
database_url: URL specifying the connection to the database.
"""
super().__init__(database_url)
database_file = database_url.replace("sqlite:///", "")
if not os.path.isfile(database_file):
try:
with open(database_file, "a", encoding="utf-8") as file_cursor:
file_cursor.close()
except FileNotFoundError as error:
raise ConnectionError(
f"Could not create the database file: {database_file}"
) from error
self.connection = sqlite3.connect(database_file)
self.connection.create_function("REGEXP", 2, _regexp)
self.cursor = self.connection.cursor()
def _execute(self, query: Union[Query, str]) -> sqlite3.Cursor:
"""Execute an SQL statement from a Pypika query object.
Args:
query: Pypika query
"""
return self.cursor.execute(str(query))
@staticmethod
def _table(entity: EntityT) -> Table:
"""Return the table of the selected entity object."""
return Table(entity.model_name.lower())
@staticmethod
def _table_model(model: Type[EntityT]) -> Table:
"""Return the table of the selected entity class."""
return Table(model.__name__.lower())
def _add(self, entity: EntityT) -> EntityT:
"""Append an entity to the repository.
If the id is not set, autoincrement the last.
Args:
entity: Entity to add to the repository.
Returns:
entity
"""
table = self._table(entity)
columns = list(entity.dict().keys())
columns[columns.index("id_")] = "id"
        values = list(entity.dict().values())
insert_query = Query.into(table).columns(tuple(columns)).insert(tuple(values))
        # Until https://github.com/kayak/pypika/issues/535 is solved we need to
        # write the upsert statement ourselves.
        # nosec: B608:hardcoded_sql_expressions. String-based query construction
        # is acceptable here because the only variable parts are the column
        # names, which are defined by the developer rather than by user input.
        # Once issue #535 is solved, this workaround can be removed.
upsert_query = (
str(insert_query)
+ " ON CONFLICT(id) DO UPDATE SET " # nosec
+ ", ".join([f"{key}=excluded.{key}" for key in columns])
)
self._execute(upsert_query)
return entity
def delete(self, entity: EntityT) -> None:
"""Delete an entity from the repository.
Args:
entity: Entity to remove from the repository.
Raises:
EntityNotFoundError: If the entity is not found.
"""
table = self._table(entity)
try:
self.get(entity.id_, type(entity))
except EntityNotFoundError as error:
raise EntityNotFoundError(
f"Unable to delete entity {entity} because it's not in the repository"
) from error
query = Query.from_(table).delete().where(table.id == entity.id_)
self._execute(query)
def _get(
self,
value: EntityID,
model: Type[EntityT],
attribute: str = "id_",
) -> List[EntityT]:
"""Obtain all entities from the repository that match an id_.
If the attribute argument is passed, check that attribute instead.
Args:
value: Value of the entity attribute to obtain.
model: Entity class to obtain.
attribute: Entity attribute to check.
Returns:
entities: All entities that match the criteria.
"""
table = self._table_model(model)
query = Query.from_(table).select("*")
if attribute == "id_":
query = query.where(table.id == value)
else:
query = query.where(getattr(table, attribute) == value)
return self._build_entities(model, query)
def _all(self, model: Type[EntityT]) -> List[EntityT]:
"""Get all the entities from the repository that match a model.
Particular implementation of the database adapter.
Args:
model: Entity class to obtain.
"""
table = self._table_model(model)
query = Query.from_(table).select("*")
return self._build_entities(model, query)
def _build_entities(self, model: Type[EntityT], query: Query) -> List[EntityT]:
"""Build Entity objects from the data extracted from the database.
Args:
model: Entity class model to build.
query: pypika query of the entities you want to build
"""
cursor = self._execute(query)
entities_data = cursor.fetchall()
attributes = [description[0] for description in cursor.description]
entities = []
for entity_data in entities_data:
entity_dict = {
attributes[index]: entity_data[index]
for index in range(0, len(entity_data))
}
entity_dict["id_"] = entity_dict.pop("id")
entities.append(model(**entity_dict))
return entities
def commit(self) -> None:
"""Persist the changes into the repository."""
self.connection.commit()
def _search(
self,
fields: Dict[str, EntityID],
model: Type[EntityT],
) -> List[EntityT]:
"""Get the entities whose attributes match one or several conditions.
Particular implementation of the database adapter.
Args:
model: Entity class to obtain.
fields: Dictionary with the {key}:{value} to search.
Returns:
entities: List of Entity object that matches the search criteria.
"""
table = self._table_model(model)
query = Query.from_(table).select("*")
for key, value in fields.items():
if key == "id_":
key = "id"
if isinstance(value, str):
query = query.where(
functions.Lower(getattr(table, key)).regexp(value.lower())
)
else:
query = query.where(getattr(table, key) == value)
return self._build_entities(model, query)
def apply_migrations(self, migrations_directory: str) -> None:
"""Run the migrations of the repository schema.
Args:
migrations_directory: path to the directory containing the migration
scripts.
"""
backend = get_backend(self.database_url)
migrations = read_migrations(migrations_directory)
with backend.lock():
log.info("Running database migrations")
try:
backend.apply_migrations(backend.to_apply(migrations))
except Exception as error: # noqa: W0703
# We need to add tests for this function and use a less generic
# exception
log.error("Error running database migrations")
log.error(error)
log.debug("Rolling back the database migrations")
try:
backend.rollback_migrations(backend.to_rollback(migrations))
except Exception as rollback_error: # noqa: W0703
# We need to add tests for this function and use a less generic
# exception
log.error("Error rolling back database migrations")
log.error(rollback_error)
raise rollback_error from error
log.debug("Complete running database migrations")
def close(self) -> None:
"""Close the connection to the database."""
self.connection.close()
def empty(self) -> None:
"""Remove all entities from the repository."""
for table in self.tables:
self._execute(Query.from_(table).delete())
@property
def tables(self) -> List[str]:
"""Return the entity tables of the database."""
if re.match("sqlite://", self.database_url):
query = "SELECT name FROM sqlite_master WHERE type='table'"
tables = [
table[0]
for table in self._execute(query).fetchall()
if not re.match(r"^_", table[0]) and not re.match("yoyo", table[0])
]
return tables
@property
def is_closed(self) -> bool:
"""Inform if the connection is closed."""
try:
self.connection.cursor()
return False
except ProgrammingError:
            return True
| /repository-orm-1.3.3.tar.gz/repository-orm-1.3.3/src/repository_orm/adapters/data/pypika.py | 0.852905 | 0.332812 | pypika.py | pypi |
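SQLite ships no built-in REGEXP function, so `PypikaRepository` registers one with `sqlite3.Connection.create_function`; `x REGEXP y` is then evaluated by calling `regexp(y, x)`. The same mechanism, standalone against an in-memory database with an illustrative table:

```python
import re
import sqlite3

def _regexp(expression: str, item: str) -> bool:
    """REGEXP implementation: SQLite calls this as regexp(pattern, value)."""
    return re.search(expression, item) is not None

conn = sqlite3.connect(":memory:")
conn.create_function("REGEXP", 2, _regexp)
conn.execute("CREATE TABLE task (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO task VALUES (?, ?)",
                 [(0, "deploy app"), (1, "fix bug")])
rows = conn.execute("SELECT name FROM task WHERE name REGEXP ?",
                    ("^fix",)).fetchall()
print(rows)  # [('fix bug',)]
```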
import copy
import re
from contextlib import suppress
from typing import Dict, List, Type
from deepdiff import extract, grep
from repository_orm import Repository
from ...exceptions import EntityNotFoundError
from ...model import EntityID, EntityT
FakeRepositoryDB = Dict[Type[EntityT], Dict[EntityID, EntityT]]
class FakeRepository(Repository):
"""Implement the repository pattern using a memory dictionary."""
def __init__(
self,
database_url: str = "",
) -> None:
"""Initialize the repository attributes."""
        super().__init__(database_url)
if database_url == "/inexistent_dir/database.db":
raise ConnectionError(f"Could not create database file: {database_url}")
# ignore: Type variable "repository_orm.adapters.data.fake.Entity" is unbound
# I don't know how to fix this
self.entities: FakeRepositoryDB[EntityT] = {} # type: ignore
self.new_entities: FakeRepositoryDB[EntityT] = {} # type: ignore
self.is_connection_closed = False
def _add(self, entity: EntityT) -> EntityT:
"""Append an entity to the repository.
Args:
entity: Entity to add to the repository.
Returns:
entity
"""
if self.new_entities == {}:
            self.new_entities = copy.deepcopy(self.entities)
try:
self.new_entities[type(entity)]
except KeyError:
self.new_entities[type(entity)] = {}
self.new_entities[type(entity)][entity.id_] = entity
return entity
def delete(self, entity: EntityT) -> None:
"""Delete an entity from the repository.
Args:
entity: Entity to remove from the repository.
Raises:
EntityNotFoundError: If the entity is not found.
"""
if self.new_entities == {}:
            self.new_entities = copy.deepcopy(self.entities)
try:
self.new_entities[type(entity)].pop(entity.id_, None)
except KeyError as error:
raise EntityNotFoundError(
f"Unable to delete entity {entity} because it's not in the repository"
) from error
def _get(
self,
value: EntityID,
model: Type[EntityT],
attribute: str = "id_",
) -> List[EntityT]:
"""Obtain all entities from the repository that match an id_.
If the attribute argument is passed, check that attribute instead.
Args:
value: Value of the entity attribute to obtain.
model: Entity class to obtain.
attribute: Entity attribute to check.
Returns:
entities: All entities that match the criteria.
"""
matching_entities = []
if attribute == "id_":
with suppress(KeyError):
matching_entities.append(self.entities[model][value])
else:
matching_entities = self._search({attribute: value}, model)
return copy.deepcopy(matching_entities)
def _all(self, model: Type[EntityT]) -> List[EntityT]:
"""Get all the entities from the repository that match a model.
Particular implementation of the database adapter.
Args:
model: Entity class to obtain.
"""
entities = []
with suppress(KeyError):
entities += sorted(
entity for entity_id, entity in self.entities[model].items()
)
return entities
def commit(self) -> None:
"""Persist the changes into the repository."""
for model, entities in self.new_entities.items():
self.entities[model] = entities
self.new_entities = {}
def _search(
self,
fields: Dict[str, EntityID],
model: Type[EntityT],
) -> List[EntityT]:
"""Get the entities whose attributes match one or several conditions.
Particular implementation of the database adapter.
Args:
model: Entity class to obtain.
fields: Dictionary with the {key}:{value} to search.
Returns:
entities: List of Entity object that matches the search criteria.
"""
all_entities = self.all(model)
entities_dict = {entity.id_: entity for entity in all_entities}
entity_attributes = {entity.id_: entity.dict() for entity in all_entities}
for key, value in fields.items():
# Get entities that have the value `value`
entities_with_value = entity_attributes | grep(
value, use_regexp=True, strict_checking=False
)
matching_entity_attributes = {}
try:
entities_with_value["matched_values"]
except KeyError:
return []
for path in entities_with_value["matched_values"]:
entity_id = re.sub(r"root\['?(.*?)'?\]\[.*", r"\1", path)
# Convert int ids from str to int
try:
# ignore: waiting for ADR-006 to be resolved
entity_id = int(entity_id) # type: ignore
except ValueError:
entity_id = re.sub(r"'(.*)'", r"\1", entity_id)
# Add the entity to the matching ones only if the value is of the
# attribute `key`.
if re.match(rf"root\['?{entity_id}'?\]\['{key}'\]", path):
matching_entity_attributes[entity_id] = extract(
entity_attributes, f"root[{entity_id}]"
)
# ignore: waiting for ADR-006 to be resolved
entity_attributes = matching_entity_attributes # type: ignore
entities = [entities_dict[key] for key in entity_attributes.keys()]
return entities
def apply_migrations(self, migrations_directory: str) -> None:
"""Run the migrations of the repository schema.
Args:
migrations_directory: path to the directory containing the migration
scripts.
"""
# The fake repository doesn't have any schema
def last(
self,
model: Type[EntityT],
) -> EntityT:
"""Get the biggest entity from the repository.
Args:
model: Entity class to obtain.
Returns:
entity: Biggest Entity object that matches a model.
Raises:
EntityNotFoundError: If there are no entities.
"""
try:
last_index_entity = super().last(model)
except EntityNotFoundError as empty_repo:
try:
# Empty repo but entities staged to be committed.
return max(self._staged_entities(model))
except (KeyError, ValueError) as no_staged_entities:
# Empty repo and no entities staged (max() of an empty list raises ValueError).
raise empty_repo from no_staged_entities
try:
last_staged_entity = max(self._staged_entities(model))
except (KeyError, ValueError):
# Full repo and no staged entities.
return last_index_entity
# Full repo and staged entities.
return max([last_index_entity, last_staged_entity])
def _staged_entities(self, model: Type[EntityT]) -> List[EntityT]:
"""Return a list of staged entities of a model type.
Args:
model: Return only instances of this model.
"""
return [entity for _, entity in self.new_entities[model].items()]
def close(self) -> None:
"""Close the connection to the database."""
self.is_connection_closed = True
@property
def is_closed(self) -> bool:
"""Inform if the connection is closed."""
return self.is_connection_closed
def empty(self) -> None:
"""Remove all entities from the repository."""
self.entities = {}
self.new_entities = {}
# (end of file: repository-orm-1.3.3/src/repository_orm/adapters/data/fake.py)
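The staging pattern used by `FakeRepository` (the first write snapshots `entities` into `new_entities`, and `commit` swaps the staged state in) can be sketched in isolation. The `StagingDict` class and the task entities below are illustrative names, not part of the library:

```python
import copy


class StagingDict:
    """Minimal sketch of FakeRepository's staging behaviour."""

    def __init__(self):
        self.entities = {}  # committed state
        self.new_entities = {}  # staged state ({} means "nothing pending")

    def add(self, model, id_, entity):
        if self.new_entities == {}:
            # First staged edit: snapshot the committed state.
            self.new_entities = copy.deepcopy(self.entities)
        self.new_entities.setdefault(model, {})[id_] = entity

    def commit(self):
        # Persist the staged state and clear the staging area.
        for model, entities in self.new_entities.items():
            self.entities[model] = entities
        self.new_entities = {}


repo = StagingDict()
repo.add("task", 1, {"name": "write docs"})
assert repo.entities == {}  # staged edits are invisible before commit
repo.commit()
assert repo.entities == {"task": {1: {"name": "write docs"}}}
assert repo.new_entities == {}
```

The deep copy is what keeps reads of `entities` stable while edits accumulate in `new_entities`.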
import logging
import os
import re
from contextlib import suppress
from typing import Any, Dict, Iterable, List, Optional, Type
from pydantic import ValidationError
from tinydb import Query, TinyDB
from tinydb.queries import QueryInstance
from tinydb.storages import JSONStorage
from tinydb_serialization import SerializationMiddleware
from tinydb_serialization.serializers import DateTimeSerializer
from ...exceptions import EntityNotFoundError
from ...model import EntityID, EntityT
from .abstract import Repository
log = logging.getLogger(__name__)
class TinyDBRepository(Repository):
"""Implement the repository pattern using the TinyDB."""
def __init__(
self,
database_url: str = "",
) -> None:
"""Initialize the repository attributes.
Args:
database_url: URL specifying the connection to the database.
"""
super().__init__(database_url)
self.database_file = os.path.expanduser(database_url.replace("tinydb://", ""))
if not os.path.isfile(self.database_file):
try:
with open(self.database_file, "a", encoding="utf-8") as file_cursor:
file_cursor.close()
except FileNotFoundError as error:
raise ConnectionError(
f"Could not create the database file: {self.database_file}"
) from error
serialization = SerializationMiddleware(JSONStorage)
serialization.register_serializer(DateTimeSerializer(), "TinyDate")
self.db_ = TinyDB(
self.database_file, storage=serialization, sort_keys=True, indent=4
)
self.staged: Dict[str, List[Any]] = {"add": [], "remove": []}
def _add(self, entity: EntityT) -> EntityT:
"""Append an entity to the repository.
If the id is not set, autoincrement the last.
Args:
entity: Entity to add to the repository.
Returns:
entity
"""
self.staged["add"].append(entity)
return entity
def delete(self, entity: EntityT) -> None:
"""Delete an entity from the repository.
Args:
entity: Entity to remove from the repository.
"""
try:
self.get(entity.id_, type(entity))
except EntityNotFoundError as error:
raise EntityNotFoundError(
f"Unable to delete entity {entity} because it's not in the repository"
) from error
self.staged["remove"].append(entity)
def _get(
self,
value: EntityID,
model: Type[EntityT],
attribute: str = "id_",
) -> List[EntityT]:
"""Obtain all entities from the repository that match an id_.
If the attribute argument is passed, check that attribute instead.
Args:
value: Value of the entity attribute to obtain.
model: Entity class to obtain.
attribute: Entity attribute to check.
Returns:
entities: All entities that match the criteria.
"""
model_query = Query().model_type_ == model.__name__.lower()
matching_entities_data = self.db_.search(
(Query()[attribute] == value) & (model_query)
)
return [
self._build_entity(entity_data, model)
for entity_data in matching_entities_data
]
@staticmethod
def _build_entity(
entity_data: Dict[Any, Any],
model: Type[EntityT],
) -> EntityT:
"""Create an entity from the data stored in a row of the database.
Args:
entity_data: Dictionary with the attributes of the entity.
model: Type of entity object to obtain.
Returns:
entity: Built Entity.
"""
# If we don't copy the data, the all() method stops being idempotent.
entity_data = entity_data.copy()
try:
return model.parse_obj(entity_data)
except ValidationError as error:
log.error(
f"Error loading the model {model.__name__} "
f"for the register {str(entity_data)}"
)
raise error
def _all(self, model: Type[EntityT]) -> List[EntityT]:
"""Get all the entities from the repository that match a model.
Particular implementation of the database adapter.
Args:
model: Entity class to obtain.
"""
entities = []
query = Query().model_type_ == model.__name__.lower()
entities_data = self.db_.search(query)
for entity_data in entities_data:
entities.append(self._build_entity(entity_data, model))
return entities
@staticmethod
def _export_entity(entity: EntityT) -> Dict[Any, Any]:
"""Export the attributes of the entity appending the required by TinyDB.
Args:
entity: Entity to export.
Returns:
entity_data: Dictionary with the attributes of the entity.
"""
entity_data = entity.dict()
entity_data["model_type_"] = entity.model_name.lower()
return entity_data
def commit(self) -> None:
"""Persist the changes into the repository."""
for entity in self.staged["add"]:
self.db_.upsert(
self._export_entity(entity),
(Query().model_type_ == entity.model_name.lower())
& (Query().id_ == entity.id_),
)
self.staged["add"].clear()
for entity in self.staged["remove"]:
self.db_.remove(
(Query().model_type_ == entity.model_name.lower())
& (Query().id_ == entity.id_)
)
self.staged["remove"].clear()
def _search(
self,
fields: Dict[str, EntityID],
model: Type[EntityT],
) -> List[EntityT]:
"""Get the entities whose attributes match one or several conditions.
Particular implementation of the database adapter.
Args:
model: Entity class to obtain.
fields: Dictionary with the {key}:{value} to search.
Returns:
entities: List of Entity object that matches the search criteria.
"""
entities = []
try:
query = self._build_search_query(fields, model)
except EntityNotFoundError:
return []
# Build entities
entities_data = self.db_.search(query)
for entity_data in entities_data:
entities.append(self._build_entity(entity_data, model))
return entities
def _build_search_query(
self,
fields: Dict[str, EntityID],
model: Type[EntityT],
) -> QueryInstance:
"""Build the Query parts for a repository search.
If the field type is a list, change the query accordingly.
Args:
model: Type of entity object to obtain.
fields: Dictionary with the {key}:{value} to search.
Returns:
Query based on the type of model and fields.
"""
query_parts = []
schema = model.schema()["properties"]
for field, value in fields.items():
if field not in schema.keys():
continue
with suppress(KeyError):
if schema[field]["type"] == "array":
query_parts.append(
(Query().model_type_ == model.__name__.lower())
& (Query()[field].test(_regexp_in_list, value))
)
continue
if isinstance(value, str):
query_parts.append(
(Query().model_type_ == model.__name__.lower())
& (Query()[field].search(value, flags=re.IGNORECASE))
)
else:
query_parts.append(
(Query().model_type_ == model.__name__.lower())
& (Query()[field] == value)
)
if len(query_parts) != 0:
return self._merge_query(query_parts, mode="and")
raise EntityNotFoundError(
f"There are no entities of type {model.__name__} in the repository "
f" that match the search filter {fields}"
)
@staticmethod
def _merge_query(
query_parts: List[QueryInstance], mode: str = "and"
) -> QueryInstance:
"""Join all the query parts into a query.
Args:
query_parts: List of queries to concatenate.
mode: "and" or "or" for the join method.
Returns:
A query object that joins all parts.
"""
query = query_parts[0]
for query_part in query_parts[1:]:
if mode == "and":
query = query & query_part
else:
query = query | query_part
return query
def apply_migrations(self, migrations_directory: str) -> None:
"""Run the migrations of the repository schema.
Args:
migrations_directory: path to the directory containing the migration
scripts.
"""
raise NotImplementedError
def last(
self,
model: Type[EntityT],
) -> EntityT:
"""Get the biggest entity from the repository.
Args:
model: Entity class to obtain.
Returns:
entity: Biggest Entity object that matches a model.
Raises:
EntityNotFoundError: If there are no entities.
"""
try:
last_index_entity = super().last(model)
except EntityNotFoundError as empty_repo:
try:
# Empty repo but entities staged to be committed.
return max(
entity for entity in self.staged["add"] if entity.__class__ == model
)
except ValueError as no_staged_entities:
# Empty repo and no entities staged.
raise empty_repo from no_staged_entities
try:
last_staged_entity = max(
entity for entity in self.staged["add"] if entity.__class__ == model
)
except ValueError:
# Full repo and no staged entities.
return last_index_entity
# Full repo and staged entities.
return max([last_index_entity, last_staged_entity])
def close(self) -> None:
"""Close the connection to the database."""
self.db_.close()
@property
def is_closed(self) -> bool:
"""Inform if the connection is closed."""
try:
self.db_.tables()
return False
except ValueError:
return True
def empty(self) -> None:
"""Remove all entities from the repository."""
self.db_.truncate()
def _regexp_in_list(list_: Optional[Iterable[Any]], regular_expression: str) -> bool:
"""Test if regexp matches any element of the list."""
if list_ is None:
return False
regexp = re.compile(regular_expression)
return any(regexp.search(element) for element in list_)
# (end of file: repository-orm-1.3.3/src/repository_orm/adapters/data/tinydb.py)
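The list-field matching used by `_build_search_query` relies on `_regexp_in_list`; a standalone copy of that predicate (reproduced here for illustration, with the leading underscore dropped) behaves like this:

```python
import re
from typing import Any, Iterable, Optional


def regexp_in_list(
    list_: Optional[Iterable[Any]], regular_expression: str
) -> bool:
    """True if the regexp matches any element of a (possibly missing) list."""
    if list_ is None:
        return False
    regexp = re.compile(regular_expression)
    return any(regexp.search(element) for element in list_)


assert regexp_in_list(["alpha", "beta"], "be") is True  # matches "beta"
assert regexp_in_list(["alpha"], "^z") is False
assert regexp_in_list(None, ".*") is False  # absent field never matches
```

The `None` guard matters because TinyDB's `Query().test(...)` hands the predicate the raw stored value, which is missing for documents without that field.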
from typing import Union
from reposcorer.attributes.community import core_contributors
from reposcorer.attributes.continuous_integration import has_continuous_integration
from reposcorer.attributes.history import commit_frequency
from reposcorer.attributes.iac import iac_ratio
from reposcorer.attributes.issues import github_issue_event_frequency, gitlab_issue_event_frequency
from reposcorer.attributes.licensing import has_license
from reposcorer.attributes.loc_info import loc_info
def score_repository(
path_to_repo: str,
host: str,
full_name: Union[str, int],
calculate_comments_ratio: bool = False,
calculate_commit_frequency: bool = False,
calculate_core_contributors: bool = False,
calculate_has_ci: bool = False,
calculate_has_license: bool = False,
calculate_iac_ratio: bool = False,
calculate_issue_frequency: bool = False,
calculate_repository_size: bool = False):
"""
Score a repository to identify well-engineered projects.
:param path_to_repo: path to the local repository
:param host: the SCM hosting platform; either github or gitlab
:param full_name: the full name of a repository (e.g., radon-h2020/radon-repository-scorer)
:param calculate_comments_ratio: whether to calculate this attribute
:param calculate_commit_frequency: whether to calculate this attribute
:param calculate_core_contributors: whether to calculate this attribute
:param calculate_has_ci: whether to calculate this attribute
:param calculate_has_license: whether to calculate this attribute
:param calculate_iac_ratio: whether to calculate this attribute
:param calculate_issue_frequency: whether to calculate this attribute
:param calculate_repository_size: whether to calculate this attribute
:return: a dictionary with a score for every indicator
"""
scores = {}
if calculate_issue_frequency:
if host == 'github':
issues = github_issue_event_frequency(full_name)
elif host == 'gitlab':
issues = gitlab_issue_event_frequency(full_name)
else:
raise ValueError(f'{host} not supported. Please select github or gitlab')
scores.update({'issue_frequency': round(issues, 2)})
if calculate_commit_frequency:
scores.update({'commit_frequency': round(commit_frequency(path_to_repo), 2)})
if calculate_core_contributors:
scores.update({'core_contributors': core_contributors(path_to_repo)})
if calculate_has_ci:
scores.update({'has_ci': has_continuous_integration(path_to_repo)})
if calculate_has_license:
scores.update({'has_license': has_license(path_to_repo)})
if calculate_iac_ratio:
scores.update({'iac_ratio': round(iac_ratio(path_to_repo), 4)})
if calculate_comments_ratio or calculate_repository_size:
cloc, sloc = loc_info(path_to_repo)
ratio_comments = cloc / (cloc + sloc) if (cloc + sloc) != 0 else 0
if calculate_comments_ratio:
scores.update({'comments_ratio': round(ratio_comments, 4)})
if calculate_repository_size:
scores.update({'repository_size': sloc})
return scores
# (end of file: repository-scorer-0.5.1/reposcorer/scorer.py)
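The comments-ratio arithmetic in `score_repository` guards against an empty repository (zero total lines). Pulled out as a hypothetical helper, the computation looks like this:

```python
def comments_ratio(cloc: int, sloc: int) -> float:
    """Share of comment lines over all lines; 0 for an empty repository."""
    return cloc / (cloc + sloc) if (cloc + sloc) != 0 else 0


assert round(comments_ratio(25, 75), 4) == 0.25  # 25 comment lines out of 100
assert comments_ratio(0, 0) == 0  # empty repository: no division by zero
```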
import os
import github, gitlab
from datetime import datetime
from typing import Union
def github_issue_event_frequency(full_name_or_id: Union[str, int],
since: datetime = None,
until: datetime = None) -> float:
"""
Return the average number of issue events per month
:param full_name_or_id: the full name of a repository or its id (e.g., radon-h2020/radon-repository-scorer)
:param since: look for events since this date
:param until: look for events until this date
:return: the monthly average number of issue events
"""
gh = github.Github(os.getenv('GITHUB_ACCESS_TOKEN'))
repo = gh.get_repo(full_name_or_id)
if not since:
since = repo.created_at
if not until:
until = repo.updated_at
months = round((until - since).days / 30)
events = 0
for issue in repo.get_issues(sort='created'):
if not issue:
continue
issue_events = issue.get_events()
if not issue_events:
continue
for event in issue_events:
if since <= event.created_at <= until:
events += 1
if months == 0:
return 0
return events / months
def gitlab_issue_event_frequency(full_name_or_id: Union[str, int],
since: datetime = None,
until: datetime = None) -> float:
"""
Return the average number of issue events per month
:param full_name_or_id: the full name of a repository or its id (e.g., radon-h2020/radon-repository-scorer)
:param since: look for events since this date
:param until: look for events until this date
:return: the monthly average number of issue events
"""
gl = gitlab.Gitlab('https://gitlab.com', os.getenv('GITLAB_ACCESS_TOKEN'))
project = gl.projects.get(full_name_or_id)
if not since:
since = datetime.strptime(project.created_at, '%Y-%m-%dT%H:%M:%S.%fZ')
if not until:
until = datetime.strptime(project.last_activity_at, '%Y-%m-%dT%H:%M:%S.%fZ')
months = round((until - since).days / 30)
events = 0
for issue in project.issues.list(all=True):
for note in issue.notes.list(all=True, as_list=False, sort='asc'):
note_date = datetime.strptime(note.created_at, '%Y-%m-%dT%H:%M:%S.%fZ')
if note_date > until:
# Notes are sorted ascending, so no later note can match.
break
if note_date >= since:
events += 1
if months == 0:
return 0
return events / months
# (end of file: repository-scorer-0.5.1/reposcorer/attributes/issues.py)
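Both frequency functions bucket the observation window into roughly 30-day months with `round((until - since).days / 30)` and guard against a zero-month window. A quick check of that arithmetic (the dates below are arbitrary):

```python
from datetime import datetime

since = datetime(2022, 1, 1)
until = datetime(2022, 7, 1)

# 181 days / 30 rounds to 6 months.
months = round((until - since).days / 30)
assert months == 6

events = 18
# Same zero-window guard as in the frequency functions.
frequency = 0 if months == 0 else events / months
assert frequency == 3.0
```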
====================================
Contributor Covenant Code of Conduct
====================================
Our Pledge
==========
We as members, contributors, and leaders pledge to make participation in this
project and our community a harassment-free experience for everyone, regardless
of age, body size, visible or invisible disability, ethnicity, sex
characteristics, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance, race,
religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
Our Standards
=============
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
Enforcement Responsibilities
============================
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
Scope
=====
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
Enforcement
===========
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at oss-coc@vmware.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
Enforcement Guidelines
======================
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
1. Correction
-------------
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
2. Warning
----------
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
3. Temporary Ban
----------------
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
4. Permanent Ban
----------------
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
Attribution
===========
This Code of Conduct is adapted from the `Contributor Covenant`_,
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by `Mozilla's code of conduct
enforcement ladder <https://github.com/mozilla/diversity>`_.
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
.. _Contributor Covenant: https://www.contributor-covenant.org

.. (end of file: repository_service_tuf-0.5.0b1/CODE_OF_CONDUCT.rst)
from datetime import datetime, timedelta
from typing import Any, Dict, List
from rich import box, markdown, prompt, table
from securesystemslib.exceptions import StorageError # type: ignore
from tuf.api.metadata import Metadata, Root
from tuf.api.serialization import DeserializationError
from repository_service_tuf.cli import click, console
from repository_service_tuf.cli.admin import admin
from repository_service_tuf.constants import KeyType
from repository_service_tuf.helpers.api_client import (
URL,
get_md_file,
send_payload,
task_status,
)
from repository_service_tuf.helpers.tuf import (
RootInfo,
RSTUFKey,
load_key,
load_payload,
save_payload,
)
INTRODUCTION = """
# Metadata Update
The metadata update ceremony allows you to:
- extend Root expiration
- change Root signature threshold
- change any signing key
"""
CURRENT_ROOT_INFO = """
# Current Root Content
Before deciding what you want to update it's recommended that you
get familiar with the current state of the root metadata file.
"""
AUTHORIZATION = """
# STEP 1: Authorization
Before continuing, you must authorize using the current root key(s).
To complete the authorization, you must provide information about one or more
keys used to sign the current root metadata.
The number of required keys is based on the current "threshold".
You will need local access to the keys as well as their corresponding
passwords.
"""
EXPIRY_CHANGES_MSG = """
# STEP 2: Extend Root Expiration
Now, you will be given the opportunity to extend root's expiration.
Note: the root expiration can be extended ONLY during the metadata update
ceremony.
"""
ROOT_KEYS_CHANGES_MSG = """
# STEP 3: Root Keys Changes
You are starting the Root keys changes procedure.
Note: when asked about specific attributes the default values that are
suggested will be the ones used in the current root metadata.
"""
ROOT_KEYS_REMOVAL_MSG = """
## Root Keys Removal
You are starting the root keys modification procedure.
First, you will be asked if you want to remove any of the keys.
Then you will be given the opportunity to add as many keys as you want.
In the end, the number of keys that are left must be equal to or above the
threshold you have given.
"""
ROOT_KEY_ADDITIONS_MSG = """
## Root Keys Addition
Now, you will be able to add root keys.
"""
ONLINE_KEY_CHANGE = """
# STEP 4: Online Key Change
Now you will be given the opportunity to change the online key.
The online key is used to sign all roles except root.
Note: there can be only one online key at a time.
"""
@admin.group()
@click.pass_context
def metadata(context):
"""
Metadata management.
"""
def _create_keys_table(
keys: List[Dict[str, Any]], offline_keys: bool, is_minimal: bool
) -> table.Table:
"""Gets a new keys table."""
keys_table: table.Table
if is_minimal:
keys_table = table.Table(box=box.MINIMAL)
else:
keys_table = table.Table()
keys_table.add_column("Id", justify="center")
keys_table.add_column("Name/Tag", justify="center")
keys_table.add_column("Key Type", justify="center")
keys_table.add_column("Storage", justify="center")
keys_table.add_column("Public Value", justify="center")
keys_location: str
if offline_keys:
keys_location = "[bright_blue]Offline[/]"
else:
keys_location = "[green]Online[/]"
for key in keys:
keys_table.add_row(
f"[yellow]{key['keyid']}",
f'[yellow]{key["name"]}',
key["keytype"],
keys_location,
f'[yellow]{key["keyval"]["public"]}',
)
return keys_table
def _print_root_info(root_info: RootInfo):
root_table = table.Table()
root_table.add_column("Root", justify="left", vertical="middle")
root_table.add_column("KEYS", justify="center", vertical="middle")
number_of_keys = len(root_info.keys)
root_keys_table = _create_keys_table(root_info.keys, True, True)
root_table.add_row(
(
f"\nNumber of Keys: [yellow]{number_of_keys}[/]"
f"\nThreshold: [yellow]{root_info.threshold}[/]"
f"\nRoot Expiration: [yellow]{root_info.expiration_str}[/]"
),
root_keys_table,
)
console.print("\n", root_table)
console.print("\n")
def _get_key(role: str) -> RSTUFKey:
key_type = prompt.Prompt.ask(
f"\nChoose [cyan]{role}[/] key type",
choices=KeyType.get_all_members(),
default=KeyType.KEY_TYPE_ED25519.value,
)
filepath = prompt.Prompt.ask(
f"Enter the [cyan]{role}[/]`s private key [green]path[/]"
)
colored_role = click.style(role, fg="cyan")
colored_pass = click.style("password", fg="green")
password = click.prompt(
f"Enter the {colored_role}`s private key {colored_pass}",
hide_input=True,
)
return load_key(filepath, key_type, password, "")
def _is_valid_current_key(
keyid: str, root_info: RootInfo, already_loaded_keyids: List[str]
) -> bool:
"""Verify that key with `keyid` have been used to sign the current root"""
if keyid in already_loaded_keyids:
console.print(
":cross_mark: [red]Failed[/]: You already loaded this key",
width=100,
)
return False
if not root_info.is_keyid_used(keyid):
console.print(
":cross_mark: [red]Failed[/]: This key has not been used "
"to sign the current root metadata",
width=100,
)
return False
return True
def _current_root_keys_validation(root_info: RootInfo):
"""
Authorize user by loading current root threshold number of root keys
used for signing the current root metadata.
"""
console.print(markdown.Markdown(AUTHORIZATION), width=100)
threshold = root_info.threshold
console.print(f"You will need to load {threshold} key(s).")
loaded: List[str] = []
key_count = 0
while key_count < root_info.threshold:
console.print(
f"You will enter information for key {key_count} of {threshold}"
)
root_key: RSTUFKey = _get_key(Root.type)
if root_key.error:
console.print(f"Failed loading key {key_count} of {threshold}")
console.print(root_key.error)
continue
keyid = root_key.key["keyid"]
if not _is_valid_current_key(keyid, root_info, loaded):
continue
key_count += 1
loaded.append(keyid)
root_info.save_current_root_key(root_key)
console.print(
":white_check_mark: Key "
f"{key_count}/{threshold} [green]Verified[/]"
)
console.print("\n[green]Authorization is successful [/]\n", width=100)
def _keys_removal(root_info: RootInfo):
"""Asking the user if they want to remove any of the root keys"""
while True:
if len(root_info.keys) < 1:
console.print("No keys are left for removal.")
break
keys_table = _create_keys_table(root_info.keys, True, False)
console.print("Here are the current root keys:")
console.print(keys_table)
console.print("\n")
key_removal = prompt.Confirm.ask("Do you want to remove a key")
if not key_removal:
break
name = prompt.Prompt.ask(
"[green]Name/Tag/ID prefix[/] of the key to remove"
)
if not root_info.remove_key(name):
console.print(
"\n", f":cross_mark: [red]Failed[/]: key {name} is not in root"
)
continue
console.print(f"Key with name/tag [yellow]{name}[/] removed\n")
def _keys_additions(root_info: RootInfo):
while True:
# Get all signing keys that are still inside the new root.
keys: List[Dict[str, Any]] = []
all_keys = root_info.keys
for signing_keyid, signing_key in root_info.signing_keys.items():
if any(signing_keyid == key["keyid"] for key in all_keys):
keys.append(signing_key.to_dict())
keys_table = _create_keys_table(keys, True, False)
console.print("\nHere are the keys that will be used for signing:")
console.print(keys_table)
signing_keys_needed = root_info.new_signing_keys_required()
if signing_keys_needed < 1:
agree = prompt.Confirm.ask("\nDo you want to add a new key?")
if not agree:
return
else:
console.print(f"You must add {signing_keys_needed} more key(s)")
root_key: RSTUFKey = _get_key(Root.type)
if root_key.error:
console.print(root_key.error)
continue
if root_key.key["keyid"] == root_info.online_key["keyid"]:
console.print(
":cross_mark: [red]Failed[/]: This is the current online key. "
"Cannot be added"
)
continue
if root_info.is_keyid_used(root_key.key["keyid"]):
console.print(":cross_mark: [red]Failed[/]: Key is already used")
continue
root_key.name = prompt.Prompt.ask(
"[Optional] Give a [green]name/tag[/] to the key"
)
root_info.add_key(root_key)
def _get_positive_int_input(msg: str, input_name: str, default: Any) -> int:
input: int = 0
while True:
input = prompt.IntPrompt.ask(msg, default=default, show_default=True)
if input >= 1:
return input
console.print(f"{input_name} must be at least 1")
def _modify_expiration(root_info: RootInfo):
console.print(markdown.Markdown(EXPIRY_CHANGES_MSG), width=100)
console.print("\n")
change: bool
while True:
console.print(
f"Current root expiration: [cyan]{root_info.expiration_str}[/]",
highlight=False, # disable built-in rich highlight
)
if root_info.expiration < (datetime.now() + timedelta(days=1)):
console.print("Root has expired - expiration must be extended")
change = True
else:
change = prompt.Confirm.ask(
"Do you want to extend the [cyan]root's expiration[/]?"
)
if not change:
console.print("Skipping root expiration changes")
return
else:
m = "Days to extend [cyan]root's expiration[/] starting from today"
bump = _get_positive_int_input(m, "Expiration extension", 365)
new_expiry = datetime.now() + timedelta(days=bump)
new_exp_str = new_expiry.strftime("%Y-%b-%d")
agree = prompt.Confirm.ask(
f"New root expiration: [cyan]{new_exp_str}[/]. Do you agree?"
)
if agree:
root_info.expiration = new_expiry
return
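The expiration bump above is just "today plus N days", rendered with `%Y-%b-%d` for the confirmation prompt. A self-contained sketch of that arithmetic (the function name is invented for illustration):

```python
from datetime import datetime, timedelta


def bump_expiration(now: datetime, days: int) -> tuple:
    """Return the new expiry datetime and its display string, following
    the ceremony's convention of extending from "today" and showing
    dates as e.g. 2024-Jan-31 (%Y-%b-%d)."""
    new_expiry = now + timedelta(days=days)
    return new_expiry, new_expiry.strftime("%Y-%b-%d")
```

Passing `now` in explicitly (instead of calling `datetime.now()` inside) keeps the computation deterministic and testable.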
def _modify_root_keys(root_info: RootInfo):
"""Modify root keys"""
console.print(markdown.Markdown(ROOT_KEYS_CHANGES_MSG), width=100)
console.print("\n")
while True:
change = prompt.Confirm.ask(
"Do you want to modify [cyan]root[/] keys?"
)
if not change:
console.print("Skipping further root keys changes")
break
msg = "\nWhat should be the [cyan]root[/] role [green]threshold?[/]"
root_info.threshold = _get_positive_int_input(
msg, "Threshold", root_info.threshold
)
console.print(markdown.Markdown(ROOT_KEYS_REMOVAL_MSG), width=100)
_keys_removal(root_info)
console.print(markdown.Markdown(ROOT_KEY_ADDITIONS_MSG), width=100)
_keys_additions(root_info)
console.print("\nHere is the current content of root:")
_print_root_info(root_info)
def _modify_online_key(root_info: RootInfo):
console.print(markdown.Markdown(ONLINE_KEY_CHANGE), width=100)
while True:
online_key_table = _create_keys_table(
[root_info.online_key], False, False
)
console.print("\nHere is the information for the current online key:")
console.print("\n")
console.print(online_key_table)
console.print("\n")
change = prompt.Confirm.ask(
"Do you want to change the [cyan]online key[/]?"
)
if not change:
console.print("Skipping further online key changes")
break
online_key: RSTUFKey = _get_key("online")
if online_key.error:
console.print(online_key.error)
continue
if online_key.key["keyid"] == root_info.online_key["keyid"]:
console.print(
":cross_mark: [red]Failed[/]: New online key and current match"
)
continue
if root_info.is_keyid_used(online_key.key["keyid"]):
console.print(
":cross_mark: [red]Failed[/]: Key matches one of the root keys"
)
continue
online_key.name = prompt.Prompt.ask(
"[Optional] Give a [green]name/tag[/] to the key"
)
root_info.change_online_key(online_key)
@metadata.command() # type: ignore
@click.option(
"--current-root-uri",
help="URL or local path to the current root.json file.",
required=False,
)
@click.option(
"-f",
"--file",
"file",
default="metadata-update-payload.json",
help="Generate specific JSON payload file",
show_default=True,
required=False,
)
@click.option(
"-u",
"--upload",
help=(
"Upload existing payload 'file'. "
"Optional '-f/--file' to use a non-default file name."
),
required=False,
is_flag=True,
)
@click.option(
"--run-ceremony",
help=(
"When '--upload' is set this flag can be used to run the ceremony "
"and the result will be uploaded."
),
default=False,
show_default=True,
required=False,
is_flag=True,
)
@click.option(
"-s",
"--save",
help=(
"Save a copy of the metadata locally. This option saves the JSON "
"metadata update payload file in the current directory."
),
default=False,
show_default=True,
is_flag=True,
)
@click.option(
"--upload-server",
help="[when using '--auth'] Upload to RSTUF API Server address.",
required=False,
hidden=True,
)
@click.pass_context
def update(
context,
current_root_uri: str,
file: str,
upload: bool,
run_ceremony: bool,
save: bool,
upload_server: str,
) -> None:
"""
Start a new metadata update ceremony.
"""
settings = context.obj["settings"]
if upload and not run_ceremony:
# Server authentication or setup
if settings.AUTH and not upload_server:
raise click.ClickException(
"Requires '--upload-server' when using '--auth'. "
"Example: --upload-server https://rstuf-api.example.com"
)
if upload_server:
settings.SERVER = upload_server
console.print(
f"Uploading existing metadata update payload {file} to "
f"{settings.SERVER}"
)
payload = load_payload(file)
task_id = send_payload(
settings=settings,
url=URL.metadata.value,
payload=payload,
expected_msg="Metadata update accepted.",
command_name="Metadata Update",
)
task_status(task_id, settings, "Metadata Update status: ")
console.print(f"Existing payload {file} sent")
return
console.print(markdown.Markdown(INTRODUCTION), width=100)
if save or not upload:
console.print(f"\nThis ceremony will generate a new {file} file.")
console.print("\n")
NOTICE = (
"**NOTICE: This is an alpha feature and will get updated over time!**"
)
console.print(markdown.Markdown(NOTICE), width=100)
console.print("\n")
if current_root_uri is None:
current_root_uri = prompt.Prompt.ask(
"[cyan]File name or URL[/] to the current root metadata"
)
console.print("\n")
try:
root_md: Metadata = get_md_file(current_root_uri)
root_info: RootInfo = RootInfo(root_md)
except StorageError:
raise click.ClickException(
f"Cannot fetch/load current root {current_root_uri}"
)
except DeserializationError:
raise click.ClickException("Metadata is an invalid JSON file")
console.print(markdown.Markdown(CURRENT_ROOT_INFO), width=100)
_print_root_info(root_info)
_current_root_keys_validation(root_info)
_modify_expiration(root_info)
_modify_root_keys(root_info)
_modify_online_key(root_info)
console.print(markdown.Markdown("## Payload Generation"))
if root_info.has_changed():
# There are one or more changes to the root metadata file.
payload = root_info.generate_payload()
# Save if the users asks for it or if the payload won't be uploaded.
if save or not upload:
save_payload(file, payload)
console.print(f"File {file} successfully generated")
if upload:
task_id = send_payload(
settings=settings,
url=URL.metadata.value,
payload=payload,
expected_msg="Metadata update accepted.",
command_name="Metadata Update",
)
task_status(task_id, settings, "Metadata Update status: ")
console.print("Ceremony done. 🔐 🎉. Root metadata update completed.")
else:
# There are no changes made to the root metadata file.
console.print("\nNo file will be generated as no changes were made\n") | /repository_service_tuf-0.5.0b1.tar.gz/repository_service_tuf-0.5.0b1/repository_service_tuf/cli/admin/metadata.py | 0.711631 | 0.277901 | metadata.py | pypi |
import pygit2 as git
import pandas as pd
from typing import List, Generator
from numpy import isnan
from .gitdata import TagsData
class GitTag:
def __init__(self, tag_df: pd.DataFrame):
self.df_ref = tag_df
def __repr__(self):
return self.name
@property
def name(self) -> str:
tag_name = self.df_ref.tag_name.unique()
assert len(tag_name) == 1
tag_name = tag_name[0]
return tag_name if tag_name is not None else 'unreleased'
@property
def contributors(self) -> pd.DataFrame:
return self.df_ref[['commit_author', 'is_merge']].groupby('commit_author')\
.count().rename(columns={'is_merge': 'commits_count'}).sort_values(by='commits_count', ascending=False)
@property
def created(self):
"""
This is the tagger time, i.e. when the tag was created
:return: timestamp
"""
ts = self.df_ref['tagger_time'].unique()
# all tagger_time's for particular tag should be the same
assert len(ts) == 1
ts = ts[0]
return pd.to_datetime(ts, unit='s', utc=True) if ts != -1 else None
@property
def initiated(self):
"""
This is the author's time of the very first commit that "belongs" to this tag
:return: timestamp
"""
return pd.to_datetime(self.df_ref['commit_time'].min(), unit='s', utc=True)
@property
def commits_count(self) -> int:
"""
How many commits "belong" to this tag
:return: commits count
"""
return self.df_ref.shape[0]
@property
def tagger(self):
tagger_name = self.df_ref['tagger_name'].unique()
assert len(tagger_name) == 1
tagger_name = tagger_name[0]
return tagger_name
class GitTags:
def __init__(self, repo: git.Repository):
self.tags_data = TagsData(repo).as_dataframe()
def filter(self, regexp: str) -> List[GitTag]:
pass
def all(self) -> 'Generator[GitTag, None, None]':
return (self.get(name) for name in self.names)
def get(self, tag_name: str) -> GitTag:
predicate = self.tags_data.tag_name.isna() if tag_name is None else self.tags_data.tag_name == tag_name
return GitTag(self.tags_data[predicate])
@property
def names(self):
return self.tags_data.tag_name.unique()
@property
def count(self):
return len(self.names) | /repostat_app-2.2.0-py3-none-any.whl/analysis/gittags.py | 0.809653 | 0.392803 | gittags.py | pypi |
import os
import datetime
import calendar
import collections
import json
from jinja2 import Environment, FileSystemLoader
import pandas as pd
from analysis.gitrepository import GitRepository
from tools.configuration import Configuration
from tools import packages_info
from . import colormaps
from .html_page import HtmlPage, JsPlot
HERE = os.path.dirname(os.path.abspath(__file__))
class HTMLReportCreator:
recent_activity_period_weeks = 32
assets_subdir = "assets"
templates_subdir = "templates"
def __init__(self, config: Configuration, repository: GitRepository):
self.path = None
self.configuration = config
self.assets_path = os.path.join(HERE, self.assets_subdir)
self.git_repository_statistics = repository
self.has_tags_page = config.do_process_tags()
self._time_sampling_interval = "W"
self._do_generate_index_page = False
self._is_blame_data_allowed = False
self._max_orphaned_extensions_count = 0
templates_dir = os.path.join(HERE, self.templates_subdir)
self.j2_env = Environment(loader=FileSystemLoader(templates_dir), trim_blocks=True)
self.j2_env.filters['to_month_name_abr'] = lambda im: calendar.month_abbr[im]
self.j2_env.filters['to_weekday_name'] = lambda i: calendar.day_name[i]
self.j2_env.filters['to_ratio'] = lambda val, max_val: (float(val) / max_val) if max_val != 0 else 0
self.j2_env.filters['to_percentage'] = lambda val, max_val: (100 * float(val) / max_val) if max_val != 0 else 0
colors = colormaps.colormaps[self.configuration['colormap']]
self.j2_env.filters['to_heatmap'] = lambda val, max_val: "%d, %d, %d" % colors[int(float(val) / max_val * (len(colors) - 1))]
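The `to_heatmap` filter scales a value into a colormap index before looking up its RGB triple. The index arithmetic in isolation, as a plain function (the colormap contents themselves are assumed):

```python
def heat_index(val, max_val, n_colors):
    """Scale val/max_val into an integer colormap index in
    [0, n_colors - 1], as the to_heatmap Jinja filter does."""
    return int(float(val) / max_val * (n_colors - 1))
```

Because `int()` truncates, only an exact maximum maps to the last colour, so mid-range values lean toward the cooler end of the map.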
def set_time_sampling(self, offset: str):
"""
:param offset: any valid string composed of Pandas' offset aliases
https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases
"""
self._time_sampling_interval = offset
return self
def allow_blame_data(self):
self._is_blame_data_allowed = True
def generate_index_page(self, do_generate: bool = True):
self._do_generate_index_page = do_generate
return self
def set_max_orphaned_extensions_count(self, count):
self._max_orphaned_extensions_count = count
return self
def _clamp_orphaned_extensions(self, extensions_df: pd.DataFrame, group_name: str = "~others~"):
# Group together all extensions used only once (probably not really extensions)
is_orphan = extensions_df["files_count"] <= self._max_orphaned_extensions_count
excluded = extensions_df[is_orphan]
if excluded.shape[0] > 0:
excluded_summary = excluded.agg({"size_bytes": ["sum"], "lines_count": ["sum", "count"]})
orphans_summary_df = pd.DataFrame(data=[{
"files_count": excluded_summary['lines_count']['count'],
"lines_count": excluded_summary['lines_count']['sum'],
"size_bytes": excluded_summary['size_bytes']['sum'].astype('int32')
}], index=[(False, group_name)]) # index is a tuple (is_binary, extension)
extensions_df = extensions_df[~is_orphan].sort_values(by="files_count", ascending=False)
# and we do not sort after we appended "orphans", as we want them to appear at the end
extensions_df = extensions_df.append(orphans_summary_df, sort=False)
return extensions_df
def _get_recent_activity_data(self):
recent_weekly_commits = self.git_repository_statistics.\
get_recent_weekly_activity(self.recent_activity_period_weeks)
values = [
{
'x': int(self.recent_activity_period_weeks - i - 1),
'y': int(commits)
} for i, commits in enumerate(recent_weekly_commits)
]
graph_data = {
"config": {
"noData": "No recent activity.",
"padData": True,
"showXAxis": True,
"xDomain": [self.recent_activity_period_weeks - 1, 0]
},
"data": [
{'key': "Commits", 'color': "#9400D3", 'values': values}
]
}
return graph_data
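The recent-activity series is turned into NVD3-style `{x, y}` points where `x` counts weeks back from now (0 = the most recent week). That mapping, pulled out as a plain function over an oldest-first sequence of weekly commit counts:

```python
def weekly_activity_points(commits_per_week):
    """Map an oldest-first sequence of weekly commit counts to chart
    points whose x is the number of weeks before now (0 = this week)."""
    n = len(commits_per_week)
    return [
        {"x": n - i - 1, "y": int(commits)}
        for i, commits in enumerate(commits_per_week)
    ]
```

With a reversed x-domain (`xDomain: [n - 1, 0]`, as in the config above), the chart then reads left-to-right from oldest to newest week.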
def _bundle_assets(self):
from distutils.dir_util import copy_tree
# copy assets to report output folder
assets_local_abs_path = os.path.join(self.path, self.assets_subdir)
copy_tree(src=self.assets_path, dst=assets_local_abs_path)
# relative path to assets to embed into html pages
self.assets_path = os.path.relpath(assets_local_abs_path, self.path)
def _squash_authors_history(self, authors_history, max_authors_count):
authors = self.git_repository_statistics.authors.sort(by='commits_count').names()
most_productive_authors = authors[:max_authors_count]
rest_authors = authors[max_authors_count:]
most_productive_authors_history = authors_history[most_productive_authors] \
.asfreq(freq=self._time_sampling_interval, fill_value=0) \
.cumsum()
rest_authors_history = authors_history[rest_authors].sum(axis=1)\
.asfreq(freq=self._time_sampling_interval, fill_value=0)\
.cumsum()
most_productive_authors_history.columns = most_productive_authors_history.columns.add_categories(['Others'])
most_productive_authors_history['Others'] = rest_authors_history.values
return most_productive_authors_history
def create(self, path):
self.path = path
if self.configuration.is_report_relocatable():
self._bundle_assets()
HtmlPage.set_assets_path(self.assets_path)
pages = [
self.make_general_page(),
self.make_activity_page(),
self.make_authors_page(),
self.make_files_page(),
]
if self.has_tags_page:
pages.append(self.make_tags_page())
pages.append(self.make_about_page())
# render and save all pages
for page in pages:
rendered_page = page.render(self.j2_env, linked_pages=pages)
page.save(self.path, rendered_page)
if self._do_generate_index_page:
# make the landing page for a web server
from shutil import copyfile
copyfile(os.path.join(path, "general.html"), os.path.join(path, "index.html"))
def make_general_page(self):
first_commit_datetime = datetime.datetime.fromtimestamp(self.git_repository_statistics.first_commit_timestamp)
last_commit_datetime = datetime.datetime.fromtimestamp(self.git_repository_statistics.last_commit_timestamp)
project_data = {
"name": self.git_repository_statistics.name,
"branch": self.git_repository_statistics.branch,
"age": (last_commit_datetime - first_commit_datetime).days,
"active_days_count": self.git_repository_statistics.active_days_count,
"commits_count": self.git_repository_statistics.total_commits_count,
"merge_commits_count": self.git_repository_statistics.merge_commits_count,
"authors_count": self.git_repository_statistics.authors.count(),
"files_count": self.git_repository_statistics.head.files_count,
"total_lines_count": self.git_repository_statistics.total_lines_count,
"added_lines_count": self.git_repository_statistics.total_lines_added,
"removed_lines_count": self.git_repository_statistics.total_lines_removed,
"first_commit_date": first_commit_datetime,
"last_commit_date": last_commit_datetime,
}
generation_data = {
"datetime": datetime.datetime.today().strftime('%Y-%m-%d %H:%M')
}
page = HtmlPage(name="General",
project=project_data,
generation=generation_data)
return page
def make_activity_page(self):
# TODO: this conversion from old 'data' to new 'project data' should perhaps be removed in future
project_data = {
'timezones_activity': collections.OrderedDict(
sorted(self.git_repository_statistics.timezones_distribution.items(), key=lambda n: int(n[0]))),
'month_in_year_activity': self.git_repository_statistics.month_of_year_distribution.to_dict()
}
wd_h_distribution = self.git_repository_statistics.weekday_hour_distribution.astype('int32')
project_data['weekday_hourly_activity'] = wd_h_distribution
project_data['weekday_hour_max_commits_count'] = wd_h_distribution.max().max()
project_data['weekday_activity'] = wd_h_distribution.sum(axis=1)
project_data['hourly_activity'] = wd_h_distribution.sum(axis=0)
page = HtmlPage(name='Activity', project=project_data)
page.add_plot(self.make_activity_plot())
return page
def make_activity_plot(self) -> JsPlot:
recent_activity = self._get_recent_activity_data()
# Commits by current year's months
current_year = datetime.date.today().year
current_year_monthly_activity = self.git_repository_statistics.history('m')
current_year_monthly_activity = current_year_monthly_activity \
.loc[current_year_monthly_activity.index.year == current_year]
current_year_monthly_activity = pd.Series(current_year_monthly_activity.values,
index=current_year_monthly_activity.index.month).to_dict()
values = [
{
'x': imonth,
'y': current_year_monthly_activity.get((imonth + 1), 0)
} for imonth in range(0, 12)
]
by_month = {
"yAxis": {"axisLabel": "Commits in %d" % current_year},
"xAxis": {"rotateLabels": -90, "ticks": len(values)},
"data": [
{"key": "Commits", "color": "#9400D3", "values": values}
]
}
# Commits by year
yearly_activity = self.git_repository_statistics.history('Y')
values = [{'x': int(x.year), 'y': int(y)} for x, y in zip(yearly_activity.index, yearly_activity.values)]
by_year = {
"xAxis": {"rotateLabels": -90, "ticks": len(values)},
"yAxis": {"axisLabel": "Commits"},
"data": [
{"key": "Commits", "color": "#9400D3", "values": values}
]
}
review_duration = [
{
"label": label,
"value": count
}
for label, count in self.git_repository_statistics.review_duration_distribution.items()
]
activity_plot = JsPlot('activity.js',
commits_by_month=json.dumps(by_month),
commits_by_year=json.dumps(by_year),
recent_activity=json.dumps(recent_activity),
review_duration=json.dumps(review_duration),
)
return activity_plot
def make_authors_page(self):
authors_summary = self.git_repository_statistics.authors.summary \
.sort_values(by="commits_count", ascending=False)
top_authors_statistics = authors_summary[:self.configuration['max_authors']]
non_top_authors_names = authors_summary[self.configuration['max_authors']:]['author_name'].values
project_data = {
'top_authors_statistics': top_authors_statistics,
'non_top_authors': non_top_authors_names,
'authors_top': self.configuration['authors_top'],
'total_commits_count': self.git_repository_statistics.total_commits_count,
'total_lines_count': self.git_repository_statistics.total_lines_count,
'is_blame_data_available': self._is_blame_data_allowed,
}
if self._is_blame_data_allowed:
project_data.update({
'top_knowledge_carriers': self.git_repository_statistics.head.get_top_knowledge_carriers()
.head(self.configuration['authors_top'])
})
raw_authors_data = self.git_repository_statistics.get_authors_ranking_by_month()
ordered_months = raw_authors_data.index.get_level_values(0).unique().sort_values(ascending=False)
project_data['months'] = []
for yymm in ordered_months[0:self.configuration['max_authors_of_months']]:
authors_in_month = raw_authors_data.loc[yymm]
project_data['months'].append({
'date': yymm,
'top_author': {'name': authors_in_month.index[0], 'commits_count': authors_in_month.iloc[0]},
'next_top_authors': ', '.join(list(authors_in_month.index[1:5])),
'all_commits_count': authors_in_month.sum(),
'total_authors_count': authors_in_month.size
})
project_data['years'] = []
raw_authors_data = self.git_repository_statistics.get_authors_ranking_by_year()
max_top_authors_index = self.configuration['authors_top'] + 1
ordered_years = raw_authors_data.index.get_level_values(0).unique().sort_values(ascending=False)
for y in ordered_years:
authors_in_year = raw_authors_data.loc[y]
project_data['years'].append({
'date': y,
'top_author': {'name': authors_in_year.index[0], 'commits_count': authors_in_year.iloc[0]},
'next_top_authors': ', '.join(list(authors_in_year.index[1:max_top_authors_index])),
'all_commits_count': authors_in_year.sum(),
'total_authors_count': authors_in_year.size,
})
page = HtmlPage('Authors', project=project_data)
page.add_plot(self.make_authors_plot())
return page
def make_authors_plot(self) -> JsPlot:
max_authors_per_plot_count = self.configuration['max_plot_authors_count']
authors_activity_history = self.git_repository_statistics.authors.history(self._time_sampling_interval)
authors_commits_history = self._squash_authors_history(authors_activity_history.commits_count,
max_authors_per_plot_count)
authors_added_lines_history = self._squash_authors_history(authors_activity_history.insertions,
max_authors_per_plot_count)
# "Added lines" graph
data = []
for author in authors_added_lines_history:
authorstats = {'key': author}
series = authors_added_lines_history[author]
authorstats['values'] = [{'x': int(x.timestamp()) * 1000, 'y': int(y)} for x, y in zip(series.index, series.values)]
data.append(authorstats)
lines_by_authors = {
"data": data
}
# "Commit count" and streamgraph
# TODO move the "added lines" into the same JSON to save space and download time
data = []
for author in authors_commits_history:
authorstats = {'key': author}
series = authors_commits_history[author]
stream = series.diff().fillna(0)
authorstats['values'] = [[int(x.timestamp() * 1000), int(y), int(z)] for x, y, z in zip(series.index, series.values, stream.values)]
data.append(authorstats)
commits_by_authors = {
"data": data
}
email_domains_distribution = self.git_repository_statistics.domains_distribution\
.sort_values(ascending=False)
if self.configuration['max_domains'] < email_domains_distribution.shape[0]:
top_domains = email_domains_distribution[:self.configuration['max_domains']]
other_domains = email_domains_distribution[self.configuration['max_domains']:].sum()
email_domains_distribution = top_domains.append(pd.Series(other_domains, index=["Others"]))
from collections import OrderedDict
email_domains_distribution = email_domains_distribution.to_dict(OrderedDict)
# Domains
domains = {
"data": [{"key": domain, "y": commits_count} for domain, commits_count in email_domains_distribution.items()]
}
if self._is_blame_data_allowed:
# sort by contribution
sorted_contribution = self.git_repository_statistics.head.authors_contribution.sort_values(ascending=False)
# limit to only top authors
if sorted_contribution.shape[0] > max_authors_per_plot_count + 1:
rest_contributions = sorted_contribution[max_authors_per_plot_count:].sum()
sorted_contribution = sorted_contribution[:max_authors_per_plot_count]
# at this point index is a CategoricalIndex and without next line cannot accept new category: "others"
sorted_contribution.index = sorted_contribution.index.to_list()
sorted_contribution = sorted_contribution.append(pd.Series(rest_contributions, index=["others"]))
sorted_contribution = sorted_contribution.to_dict(OrderedDict)
# Contribution plot data
contribution = {
"data": [{"key": name, "y": lines_count} for name, lines_count in sorted_contribution.items()]
}
else:
contribution = {}
authors_plot = JsPlot('authors.js',
lines_by_authors=json.dumps(lines_by_authors),
commits_by_authors=json.dumps(commits_by_authors),
domains=json.dumps(domains),
contribution=json.dumps(contribution)
)
return authors_plot
def make_files_page(self):
file_ext_summary = self.git_repository_statistics.head.files_extensions_summary
if self._max_orphaned_extensions_count > 0:
file_ext_summary = self._clamp_orphaned_extensions(file_ext_summary)
else:
file_ext_summary = file_ext_summary.sort_values(by="files_count", ascending=False)
project_data = {
'total_files_count': self.git_repository_statistics.head.files_count,
'total_lines_count': self.git_repository_statistics.total_lines_count,
'size': self.git_repository_statistics.head.size,
'file_summary': file_ext_summary,
'is_blame_data_available': self._is_blame_data_allowed
}
if self._is_blame_data_allowed:
project_data.update({
'top_files_by_contributors_count': self.git_repository_statistics.head.get_top_files_by_contributors_count(),
'monoauthor_files_count': self.git_repository_statistics.head.monoauthor_files.count(),
'lost_knowledge_ratio': self.git_repository_statistics.head.get_lost_knowledge_percentage()
})
page = HtmlPage('Files', project=project_data)
page.add_plot(self.make_files_plot())
return page
def make_files_plot(self) -> JsPlot:
hst = self.git_repository_statistics.linear_history(self._time_sampling_interval).copy()
hst["epoch"] = (hst.index - pd.Timestamp("1970-01-01 00:00:00+00:00")) // pd.Timedelta('1s') * 1000
files_count_ts = hst[["epoch", 'files_count']].rename(columns={"epoch": "x", 'files_count': "y"})\
.to_dict('records')
lines_count_ts = hst[["epoch", 'lines_count']].rename(columns={"epoch": "x", 'lines_count': "y"})\
.to_dict('records')
graph_data = {
"data": [
{"key": "Files", "color": "#9400d3", "type": "line", "yAxis": 1, "values": files_count_ts},
{"key": "Lines", "color": "#d30094", "type": "line", "yAxis": 2, "values": lines_count_ts},
]
}
files_plot = JsPlot('files.js', json_data=json.dumps(graph_data))
return files_plot
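The `epoch` column above converts pandas timestamps to JavaScript-style epoch milliseconds for the time axis. The same conversion for a single stdlib datetime — the handling of naive values as UTC is an assumption, matching the UTC-anchored epoch used in `make_files_plot`:

```python
from datetime import datetime, timezone


def to_epoch_ms(dt: datetime) -> int:
    """Convert a datetime to JavaScript epoch milliseconds; naive
    datetimes are treated as UTC."""
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)
```

Milliseconds (not seconds) are what JavaScript `Date` and the NVD3 time axes expect, hence the `* 1000`.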
def make_tags_page(self):
if 'max_recent_tags' not in self.configuration:
tags = list(self.git_repository_statistics.tags.all())
else:
recent_tags = self.git_repository_statistics.tags.all()
tags = [next(recent_tags) for _ in range(self.configuration['max_recent_tags'])]
project_data = {
'tags': tags,
# this is total tags count, generally len(tags) != total_tags_count
'tags_count': self.git_repository_statistics.tags.count
}
page = HtmlPage(name='Tags', project=project_data)
return page
def make_about_page(self):
repostat_version = self.configuration.get_release_data_info()['develop_version']
repostat_version_date = self.configuration.get_release_data_info()['user_version']
page_data = {
"version": f"{repostat_version} ({repostat_version_date})",
"tools": [packages_info.get_pygit2_info(),
packages_info.get_jinja_info()],
"contributors": self.configuration.get_release_data_info()['contributors']
}
page = HtmlPage('About', repostat=page_data)
return page | /repostat_app-2.2.0-py3-none-any.whl/report/htmlreportcreator.py | 0.613121 | 0.167661 | htmlreportcreator.py | pypi |
import os
class JsPlot:
def __init__(self, template_filename, **kwargs):
self.template_filename = template_filename
self.kwargs = kwargs
def bootstrap(self, j2_env):
bootstrapped = j2_env.get_template(self.template_filename).render(
**self.kwargs,
)
return bootstrapped
@property
def filename(self):
# property used for readability, and in case the jinja template name
# ever diverges from the rendered filename
return self.template_filename
def save(self, path, bootstrapped_js):
with open(os.path.join(path, self.filename), 'w') as fg:
fg.write(bootstrapped_js)
class HtmlPage:
assets_path = None
def __init__(self, name: str, **kwargs):
self.name = name
self.is_active = False
self.kwargs = kwargs
self._plots = []
self._bootstrapped_plots = []
@classmethod
def set_assets_path(cls, assets_path):
cls.assets_path = assets_path
@property
def filename(self):
return self.name.lower() + '.html'
@property
def template_name(self):
return self.name.lower() + '.html'
def add_plot(self, plot: JsPlot):
self._plots.append(plot)
def render(self, j2_env, linked_pages):
print(f"Rendering '{self.name}'-page")
# linked_pages contain reference to this page as well
# the following sets currently rendered page as active to apply appropriate css style in navigation bar
self.is_active = True
# load and render template
template_rendered = j2_env.get_template(self.template_name).render(
**self.kwargs,
page_title=self.name,
pages=linked_pages,
assets_path=self.assets_path,
)
self.is_active = False
# also bootstrap all page's plots
for p in self._plots:
self._bootstrapped_plots.append(p.bootstrap(j2_env))
return template_rendered.encode('utf-8')
def save(self, path, rendered):
# save all plots
for p, bp in zip(self._plots, self._bootstrapped_plots):
p.save(path, bp)
# and the htm itself
with open(os.path.join(path, self.filename), 'w', encoding='utf-8') as f:
f.write(rendered.decode('utf-8')) | /repostat_app-2.2.0-py3-none-any.whl/report/html_page.py | 0.744006 | 0.15759 | html_page.py | pypi |
from zope.interface import providedBy
from zope.deprecation import deprecated
from repoze.bfg.interfaces import IAuthenticationPolicy
from repoze.bfg.interfaces import IAuthorizationPolicy
from repoze.bfg.interfaces import ISecuredView
from repoze.bfg.interfaces import IViewClassifier
from repoze.bfg.exceptions import Forbidden as Unauthorized # b/c import
from repoze.bfg.threadlocal import get_current_registry
Unauthorized # prevent PyFlakes from complaining
deprecated('Unauthorized',
"('from repoze.bfg.security import Unauthorized' was "
"deprecated as of repoze.bfg 1.1; instead use 'from "
"repoze.bfg.exceptions import Forbidden')",
)
Everyone = 'system.Everyone'
Authenticated = 'system.Authenticated'
Allow = 'Allow'
Deny = 'Deny'
class AllPermissionsList(object):
""" Stand in 'permission list' to represent all permissions """
def __iter__(self):
return ()
def __contains__(self, other):
return True
def __eq__(self, other):
return isinstance(other, self.__class__)
ALL_PERMISSIONS = AllPermissionsList()
DENY_ALL = (Deny, Everyone, ALL_PERMISSIONS)
def has_permission(permission, context, request):
""" Provided a permission (a string or unicode object), a context
(a :term:`model` instance) and a request object, return an
instance of :data:`repoze.bfg.security.Allowed` if the permission
is granted in this context to the user implied by the
request. Return an instance of :mod:`repoze.bfg.security.Denied`
if this permission is not granted in this context to this user.
This function delegates to the current authentication and
authorization policies. Return
:data:`repoze.bfg.security.Allowed` unconditionally if no
authentication policy has been configured in this application."""
try:
reg = request.registry
except AttributeError:
reg = get_current_registry() # b/c
authn_policy = reg.queryUtility(IAuthenticationPolicy)
if authn_policy is None:
return Allowed('No authentication policy in use.')
authz_policy = reg.queryUtility(IAuthorizationPolicy)
if authz_policy is None:
raise ValueError('Authentication policy registered without '
'authorization policy') # should never happen
principals = authn_policy.effective_principals(request)
return authz_policy.permits(context, principals, permission)
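`has_permission` delegates the actual decision to the registered authorization policy. For intuition, a heavily simplified first-match-wins walk over an ACL of `(action, principal, permissions)` ACEs using the `Allow`/`Deny` convention above — repoze.bfg's real `ACLAuthorizationPolicy` also walks the model lineage and returns `ACLAllowed`/`ACLDenied` objects, which this sketch omits:

```python
Allow, Deny = 'Allow', 'Deny'
Everyone = 'system.Everyone'


def simple_permits(acl, principals, permission):
    """First-match-wins ACL walk: the first ACE whose principal is held
    and whose permission list covers `permission` decides; no match
    means implicit deny."""
    for action, principal, permissions in acl:
        if principal in principals and permission in permissions:
            return action == Allow
    return False  # no matching ACE -> implicit deny
```

This is why an explicit trailing `DENY_ALL` ACE, as defined above, is a common way to stop permission lookups from falling through to a parent's ACL.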
def authenticated_userid(request):
""" Return the userid of the currently authenticated user or
``None`` if there is no :term:`authentication policy` in effect or
there is no currently authenticated user."""
try:
reg = request.registry
except AttributeError:
reg = get_current_registry() # b/c
policy = reg.queryUtility(IAuthenticationPolicy)
if policy is None:
return None
return policy.authenticated_userid(request)
def effective_principals(request):
""" Return the list of 'effective' :term:`principal` identifiers
for the ``request``. This will include the userid of the
currently authenticated user if a user is currently
authenticated. If no :term:`authentication policy` is in effect,
this will return an empty sequence."""
try:
reg = request.registry
except AttributeError:
reg = get_current_registry() # b/c
policy = reg.queryUtility(IAuthenticationPolicy)
if policy is None:
return []
return policy.effective_principals(request)
def principals_allowed_by_permission(context, permission):
""" Provided a ``context`` (a model object), and a ``permission``
(a string or unicode object), if a :term:`authorization policy` is
in effect, return a sequence of :term:`principal` ids that possess
the permission in the ``context``. If no authorization policy is
in effect, this will return a sequence with the single value
:mod:`repoze.bfg.security.Everyone` (the special principal
identifier representing all principals).
.. note:: even if an :term:`authorization policy` is in effect,
some (exotic) authorization policies may not implement the
required machinery for this function; those will cause a
:exc:`NotImplementedError` exception to be raised when this
function is invoked.
"""
reg = get_current_registry()
policy = reg.queryUtility(IAuthorizationPolicy)
if policy is None:
return [Everyone]
return policy.principals_allowed_by_permission(context, permission)
def view_execution_permitted(context, request, name=''):
""" If the view specified by ``context`` and ``name`` is protected
by a :term:`permission`, check the permission associated with the
view using the effective authentication/authorization policies and
the ``request``. Return a boolean result. If no
:term:`authorization policy` is in effect, or if the view is not
protected by a permission, return ``True``."""
try:
reg = request.registry
except AttributeError:
reg = get_current_registry() # b/c
provides = [IViewClassifier] + map(providedBy, (request, context))
view = reg.adapters.lookup(provides, ISecuredView, name=name)
if view is None:
return Allowed(
'Allowed: view name %r in context %r (no permission defined)' %
(name, context))
return view.__permitted__(context, request)
def remember(request, principal, **kw):
""" Return a sequence of header tuples (e.g. ``[('Set-Cookie',
'foo=abc')]``) suitable for 'remembering' a set of credentials
implied by the data passed as ``principal`` and ``**kw`` using the
current :term:`authentication policy`. Common usage might look
like so within the body of a view function (``response`` is
assumed to be a :term:`WebOb`-style :term:`response` object
computed previously by the view code)::
from repoze.bfg.security import remember
headers = remember(request, 'chrism', password='123', max_age='86400')
response.headerlist.extend(headers)
return response
If no :term:`authentication policy` is in use, this function will
always return an empty sequence. If used, the composition and
meaning of ``**kw`` must be agreed upon by the calling code and
the effective authentication policy."""
try:
reg = request.registry
except AttributeError:
reg = get_current_registry() # b/c
policy = reg.queryUtility(IAuthenticationPolicy)
if policy is None:
return []
else:
return policy.remember(request, principal, **kw)
def forget(request):
""" Return a sequence of header tuples (e.g. ``[('Set-Cookie',
'foo=abc')]``) suitable for 'forgetting' the set of credentials
possessed by the currently authenticated user. A common usage
might look like so within the body of a view function
(``response`` is assumed to be a :term:`WebOb`-style
:term:`response` object computed previously by the view code)::
from repoze.bfg.security import forget
headers = forget(request)
response.headerlist.extend(headers)
return response
If no :term:`authentication policy` is in use, this function will
always return an empty sequence."""
try:
reg = request.registry
except AttributeError:
reg = get_current_registry() # b/c
policy = reg.queryUtility(IAuthenticationPolicy)
if policy is None:
return []
else:
return policy.forget(request)
class PermitsResult(int):
def __new__(cls, s, *args):
inst = int.__new__(cls, cls.boolval)
inst.s = s
inst.args = args
return inst
@property
def msg(self):
return self.s % self.args
def __str__(self):
return self.msg
def __repr__(self):
return '<%s instance at %s with msg %r>' % (self.__class__.__name__,
id(self),
self.msg)
class Denied(PermitsResult):
""" An instance of ``Denied`` is returned when a security-related
API or other :mod:`repoze.bfg` code denies an action unrelated to
an ACL check. It evaluates equal to all boolean false types. It
has an attribute named ``msg`` describing the circumstances for
the deny."""
boolval = 0
class Allowed(PermitsResult):
""" An instance of ``Allowed`` is returned when a security-related
API or other :mod:`repoze.bfg` code allows an action unrelated to
an ACL check. It evaluates equal to all boolean true types. It
has an attribute named ``msg`` describing the circumstances for
the allow."""
boolval = 1
class ACLPermitsResult(int):
def __new__(cls, ace, acl, permission, principals, context):
inst = int.__new__(cls, cls.boolval)
inst.permission = permission
inst.ace = ace
inst.acl = acl
inst.principals = principals
inst.context = context
return inst
@property
def msg(self):
s = ('%s permission %r via ACE %r in ACL %r on context %r for '
'principals %r')
return s % (self.__class__.__name__,
self.permission,
self.ace,
self.acl,
self.context,
self.principals)
def __str__(self):
return self.msg
def __repr__(self):
return '<%s instance at %s with msg %r>' % (self.__class__.__name__,
id(self),
self.msg)
class ACLDenied(ACLPermitsResult):
""" An instance of ``ACLDenied`` represents that a security check
made explicitly against ACL was denied. It evaluates equal to all
boolean false types. It also has attributes which indicate which
acl, ace, permission, principals, and context were involved in the
request. Its __str__ method prints a summary of these attributes
for debugging purposes. The same summary is available as the
``msg`` attribute."""
boolval = 0
class ACLAllowed(ACLPermitsResult):
""" An instance of ``ACLAllowed`` represents that a security check
made explicitly against ACL was allowed. It evaluates equal to
all boolean true types. It also has attributes which indicate
which acl, ace, permission, principals, and context were involved
in the request. Its __str__ method prints a summary of these
attributes for debugging purposes. The same summary is available
as the ``msg`` attribute."""
boolval = 1
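The `PermitsResult` pattern above — subclassing `int` so a security check result behaves as a boolean while still carrying a human-readable message — can be sketched standalone. This is an independent illustration (it does not import repoze.bfg); the names mirror the classes above:

```python
# Standalone sketch of the PermitsResult pattern: the int subclass makes
# the result truthy/falsy according to cls.boolval, while .msg lazily
# interpolates the stored format string.

class PermitsResult(int):
    def __new__(cls, s, *args):
        inst = int.__new__(cls, cls.boolval)
        inst.s = s
        inst.args = args
        return inst

    @property
    def msg(self):
        # interpolate the stored format string on demand
        return self.s % self.args

class Denied(PermitsResult):
    boolval = 0

class Allowed(PermitsResult):
    boolval = 1

result = Denied('No %s credential supplied', 'basic-auth')
print(bool(result))   # False: Denied compares equal to falsy values
print(result.msg)     # No basic-auth credential supplied
```

Because the result *is* an `int`, callers can use it directly in `if` statements and still inspect `.msg` when debugging a denial.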
from zope.interface import implements
from repoze.bfg.interfaces import IAuthorizationPolicy
from repoze.bfg.location import lineage
from repoze.bfg.security import ACLAllowed
from repoze.bfg.security import ACLDenied
from repoze.bfg.security import Allow
from repoze.bfg.security import Deny
from repoze.bfg.security import Everyone
class ACLAuthorizationPolicy(object):
""" An :term:`authorization policy` which consults an :term:`ACL`
object attached to a :term:`context` to determine authorization
information about a :term:`principal` or multiple principals.
If the context is part of a :term:`lineage`, the context's parents
are consulted for ACL information too. The following is true
about this security policy.
- When checking whether the 'current' user is permitted (via the
``permits`` method), the security policy consults the
``context`` for an ACL first. If no ACL exists on the context,
or one does exist but the ACL does not explicitly allow or deny
access for any of the effective principals, consult the
context's parent ACL, and so on, until the lineage is exhausted
or we determine that the policy permits or denies.
During this processing, if any :data:`repoze.bfg.security.Deny`
ACE is found matching any principal in ``principals``, stop
processing by returning an
:class:`repoze.bfg.security.ACLDenied` instance (equals
``False``) immediately. If any
:data:`repoze.bfg.security.Allow` ACE is found matching any
principal, stop processing by returning an
:class:`repoze.bfg.security.ACLAllowed` instance (equals
``True``) immediately. If we exhaust the context's
:term:`lineage`, and no ACE has explicitly permitted or denied
access, return an instance of
:class:`repoze.bfg.security.ACLDenied` (equals ``False``).
- When computing principals allowed by a permission via the
:func:`repoze.bfg.security.principals_allowed_by_permission`
method, we compute the set of principals that are explicitly
granted the ``permission`` in the provided ``context``. We do
this by walking 'up' the object graph *from the root* to the
context. During this walking process, if we find an explicit
:data:`repoze.bfg.security.Allow` ACE for a principal that
matches the ``permission``, the principal is included in the
allow list. However, if later in the walking process that
principal is mentioned in any :data:`repoze.bfg.security.Deny`
ACE for the permission, the principal is removed from the allow
list. If a :data:`repoze.bfg.security.Deny` to the principal
:data:`repoze.bfg.security.Everyone` is encountered during the
walking process that matches the ``permission``, the allow list
is cleared for all principals encountered in previous ACLs. The
walking process ends after we've processed any ACL directly
attached to ``context``; a set of principals is returned.
"""
implements(IAuthorizationPolicy)
def permits(self, context, principals, permission):
""" Return an instance of
:class:`repoze.bfg.security.ACLAllowed` instance if the policy
permits access, return an instance of
:class:`repoze.bfg.security.ACLDenied` if not."""
acl = '<No ACL found on any object in model lineage>'
for location in lineage(context):
try:
acl = location.__acl__
except AttributeError:
continue
for ace in acl:
ace_action, ace_principal, ace_permissions = ace
if ace_principal in principals:
if not hasattr(ace_permissions, '__iter__'):
ace_permissions = [ace_permissions]
if permission in ace_permissions:
if ace_action == Allow:
return ACLAllowed(ace, acl, permission,
principals, location)
else:
return ACLDenied(ace, acl, permission,
principals, location)
# default deny (if no ACL in lineage at all, or if none of the
# principals were mentioned in any ACE we found)
return ACLDenied(
'<default deny>',
acl,
permission,
principals,
context)
def principals_allowed_by_permission(self, context, permission):
""" Return the set of principals explicitly granted the
permission named ``permission`` according to the ACL directly
attached to the ``context`` as well as inherited ACLs based on
the :term:`lineage`."""
allowed = set()
for location in reversed(list(lineage(context))):
# NB: we're walking *up* the object graph from the root
try:
acl = location.__acl__
except AttributeError:
continue
allowed_here = set()
denied_here = set()
for ace_action, ace_principal, ace_permissions in acl:
if not hasattr(ace_permissions, '__iter__'):
ace_permissions = [ace_permissions]
if ace_action == Allow and permission in ace_permissions:
if ace_principal not in denied_here:
allowed_here.add(ace_principal)
if ace_action == Deny and permission in ace_permissions:
denied_here.add(ace_principal)
if ace_principal == Everyone:
# clear the entire allowed set, as we've hit a
# deny of Everyone ala (Deny, Everyone, ALL)
allowed = set()
break
elif ace_principal in allowed:
allowed.remove(ace_principal)
allowed.update(allowed_here)
return allowed
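The first-match ACE walk performed by `permits` above can be sketched independently. In this standalone illustration, `lineage` is a simplified stand-in that follows `__parent__` pointers, and `Allow`/`Deny`/`Everyone` are plain marker values rather than the real repoze.bfg constants:

```python
# Minimal sketch of ACLAuthorizationPolicy.permits: walk the lineage from
# the context to the root, and let the first ACE that matches both a
# principal and the permission decide the outcome.

Allow, Deny, Everyone = 'Allow', 'Deny', 'system.Everyone'

def lineage(context):
    while context is not None:
        yield context
        context = getattr(context, '__parent__', None)

def permits(context, principals, permission):
    for location in lineage(context):
        acl = getattr(location, '__acl__', None)
        if acl is None:
            continue
        for action, principal, permissions in acl:
            if isinstance(permissions, str):
                permissions = [permissions]
            if principal in principals and permission in permissions:
                # first matching ACE wins, whether Allow or Deny
                return action == Allow
    return False  # default deny: lineage exhausted with no match

class Node:
    def __init__(self, parent=None, acl=None):
        self.__parent__ = parent
        if acl is not None:
            self.__acl__ = acl

root = Node(acl=[(Allow, 'group:editors', 'edit'),
                 (Deny, Everyone, 'edit')])
child = Node(parent=root)  # no ACL of its own; inherits via lineage

print(permits(child, ['bob', 'group:editors'], 'edit'))  # True
print(permits(child, ['alice', Everyone], 'edit'))       # False
```

Note how ACE *order* matters: because the `Allow` for `group:editors` precedes the `(Deny, Everyone, 'edit')` entry, editors are allowed while everyone else falls through to the deny.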
from repoze.bfg.compat import wraps
from repoze.bfg.traversal import quote_path_segment
def wsgiapp(wrapped):
""" Decorator to turn a WSGI application into a :mod:`repoze.bfg`
:term:`view callable`. This decorator differs from the
:func:`repoze.bfg.wsgi.wsgiapp2` decorator inasmuch as fixups of
``PATH_INFO`` and ``SCRIPT_NAME`` within the WSGI environment *are
not* performed before the application is invoked.
E.g., the following in a ``views.py`` module::
@wsgiapp
def hello_world(environ, start_response):
body = 'Hello world'
start_response('200 OK', [ ('Content-Type', 'text/plain'),
('Content-Length', len(body)) ] )
return [body]
Allows the following ZCML view declaration to be made::
<view
view=".views.hello_world"
name="hello_world.txt"
/>
Or the following call to
:meth:`repoze.bfg.configuration.Configurator.add_view`::
from views import hello_world
config.add_view(hello_world, name='hello_world.txt')
The ``wsgiapp`` decorator will convert the result of the WSGI
application to a :term:`Response` and return it to
:mod:`repoze.bfg` as if the WSGI app were a :mod:`repoze.bfg`
view.
"""
def decorator(context, request):
return request.get_response(wrapped)
return wraps(wrapped)(decorator) # grokkability
def wsgiapp2(wrapped):
""" Decorator to turn a WSGI application into a :mod:`repoze.bfg`
view callable. This decorator differs from the
:func:`repoze.bfg.wsgi.wsgiapp` decorator inasmuch as fixups of
``PATH_INFO`` and ``SCRIPT_NAME`` within the WSGI environment
*are* performed before the application is invoked.
E.g. the following in a ``views.py`` module::
@wsgiapp2
def hello_world(environ, start_response):
body = 'Hello world'
start_response('200 OK', [ ('Content-Type', 'text/plain'),
('Content-Length', len(body)) ] )
return [body]
Allows the following ZCML view declaration to be made::
<view
view=".views.hello_world"
name="hello_world.txt"
/>
Or the following call to
:meth:`repoze.bfg.configuration.Configurator.add_view`::
from views import hello_world
config.add_view(hello_world, name='hello_world.txt')
The ``wsgiapp2`` decorator will convert the result of the WSGI
application to a Response and return it to :mod:`repoze.bfg` as if
the WSGI app were a :mod:`repoze.bfg` view. The ``SCRIPT_NAME``
and ``PATH_INFO`` values present in the WSGI environment are fixed
up before the application is invoked. """
def decorator(context, request):
traversed = request.traversed
vroot_path = request.virtual_root_path or ()
view_name = request.view_name
subpath = request.subpath or ()
script_tuple = traversed[len(vroot_path):]
script_list = [ quote_path_segment(name) for name in script_tuple ]
if view_name:
script_list.append(quote_path_segment(view_name))
script_name = '/' + '/'.join(script_list)
path_list = [ quote_path_segment(name) for name in subpath ]
path_info = '/' + '/'.join(path_list)
request.environ['PATH_INFO'] = path_info
script_name = request.environ['SCRIPT_NAME'] + script_name
if script_name.endswith('/'):
script_name = script_name[:-1]
request.environ['SCRIPT_NAME'] = script_name
return request.get_response(wrapped)
return wraps(wrapped)(decorator) # grokkability
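The `SCRIPT_NAME`/`PATH_INFO` fixup inside `wsgiapp2` can be shown as a standalone computation: everything traversed (minus the virtual root) plus the view name moves into `SCRIPT_NAME`, while the subpath becomes the new `PATH_INFO`. This sketch uses `urllib.parse.quote` as a stand-in for `quote_path_segment`:

```python
# Standalone sketch of the wsgiapp2 environ fixup. Given the traversal
# results, compute the SCRIPT_NAME / PATH_INFO pair that the wrapped WSGI
# app would see.
from urllib.parse import quote

def fixup(environ, traversed, vroot_path, view_name, subpath):
    script_tuple = traversed[len(vroot_path):]
    script_list = [quote(name, safe='') for name in script_tuple]
    if view_name:
        script_list.append(quote(view_name, safe=''))
    script_name = '/' + '/'.join(script_list)
    path_info = '/' + '/'.join(quote(name, safe='') for name in subpath)
    script_name = environ.get('SCRIPT_NAME', '') + script_name
    if script_name.endswith('/'):
        script_name = script_name[:-1]
    return script_name, path_info

sn, pi = fixup({'SCRIPT_NAME': ''}, ('a', 'b'), (), 'app', ('x', 'y'))
print(sn)  # /a/b/app
print(pi)  # /x/y
```

In other words, the wrapped application is made to believe it is mounted at the view's URL, with only the subpath left over as its own path to route.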
import os
import pkg_resources
from webob import Response
from zope.deprecation import deprecated
from repoze.bfg.interfaces import IRendererGlobalsFactory
from repoze.bfg.interfaces import IRendererFactory
from repoze.bfg.interfaces import IResponseFactory
from repoze.bfg.interfaces import ITemplateRenderer
from repoze.bfg.compat import json
from repoze.bfg.path import caller_package
from repoze.bfg.settings import get_settings
from repoze.bfg.threadlocal import get_current_registry
from repoze.bfg.resource import resolve_resource_spec
from repoze.bfg.decorator import reify
# API
def render(renderer_name, value, request=None, package=None):
""" Using the renderer specified as ``renderer_name`` (a template
or a static renderer) render the value (or set of values) present
in ``value``. Return the result of the renderer's ``__call__``
method (usually a string or Unicode).
If the renderer name refers to a file on disk (such as when the
renderer is a template), it's usually best to supply the name as a
:term:`resource specification`
(e.g. ``packagename:path/to/template.pt``).
You may supply a relative resource spec as ``renderer_name``. If
the ``package`` argument is supplied, a relative renderer path
will be converted to an absolute resource specification by
combining the package supplied as ``package`` with the relative
resource specification supplied as ``renderer_name``. If you do
not supply a ``package`` (or ``package`` is ``None``) the package
name of the *caller* of this function will be used as the package.
The ``value`` provided will be supplied as the input to the
renderer. Usually, for template renderings, this should be a
dictionary. For other renderers, this will need to be whatever
sort of value the renderer expects.
The 'system' values supplied to the renderer will include a basic
set of top-level system names, such as ``request``, ``context``,
and ``renderer_name``. If :term:`renderer globals` have been
specified, these will also be used to augment the value.
Supply a ``request`` parameter in order to provide the renderer
with the most correct 'system' values (``request`` and ``context``
in particular).
.. note:: This API is new in :mod:`repoze.bfg` 1.3.
"""
try:
registry = request.registry
except AttributeError:
registry = None
if package is None:
package = caller_package()
renderer = RendererHelper(renderer_name, package=package, registry=registry)
return renderer.render(value, None, request=request)
def render_to_response(renderer_name, value, request=None, package=None):
""" Using the renderer specified as ``renderer_name`` (a template
or a static renderer) render the value (or set of values) using
the result of the renderer's ``__call__`` method (usually a string
or Unicode) as the response body.
If the renderer name refers to a file on disk (such as when the
renderer is a template), it's usually best to supply the name as a
:term:`resource specification`.
You may supply a relative resource spec as ``renderer_name``. If
the ``package`` argument is supplied, a relative renderer name
will be converted to an absolute resource specification by
combining the package supplied as ``package`` with the relative
resource specification supplied as ``renderer_name``. If you do
not supply a ``package`` (or ``package`` is ``None``) the package
name of the *caller* of this function will be used as the package.
The ``value`` provided will be supplied as the input to the
renderer. Usually, for template renderings, this should be a
dictionary. For other renderers, this will need to be whatever
sort of value the renderer expects.
The 'system' values supplied to the renderer will include a basic
set of top-level system names, such as ``request``, ``context``,
and ``renderer_name``. If :term:`renderer globals` have been
specified, these will also be used to augment the value.
Supply a ``request`` parameter in order to provide the renderer
with the most correct 'system' values (``request`` and ``context``
in particular).
.. note:: This API is new in :mod:`repoze.bfg` 1.3.
"""
try:
registry = request.registry
except AttributeError:
registry = None
if package is None:
package = caller_package()
renderer = RendererHelper(renderer_name, package=package, registry=registry)
return renderer.render_to_response(value, None, request=request)
def get_renderer(renderer_name, package=None):
""" Return the renderer object for the renderer named as
``renderer_name``.
You may supply a relative resource spec as ``renderer_name``. If
the ``package`` argument is supplied, a relative renderer name
will be converted to an absolute resource specification by
combining the package supplied as ``package`` with the relative
resource specification supplied as ``renderer_name``. If you do
not supply a ``package`` (or ``package`` is ``None``) the package
name of the *caller* of this function will be used as the package.
"""
if package is None:
package = caller_package()
renderer = RendererHelper(renderer_name, package=package)
return renderer.get_renderer()
# concrete renderer factory implementations (also API)
def json_renderer_factory(name):
def _render(value, system):
request = system.get('request')
if request is not None:
if not hasattr(request, 'response_content_type'):
request.response_content_type = 'application/json'
return json.dumps(value)
return _render
def string_renderer_factory(name):
def _render(value, system):
if not isinstance(value, basestring):
value = str(value)
request = system.get('request')
if request is not None:
if not hasattr(request, 'response_content_type'):
request.response_content_type = 'text/plain'
return value
return _render
# utility functions, not API
def template_renderer_factory(spec, impl):
reg = get_current_registry()
if os.path.isabs(spec):
# 'spec' is an absolute filename
if not os.path.exists(spec):
raise ValueError('Missing template file: %s' % spec)
renderer = reg.queryUtility(ITemplateRenderer, name=spec)
if renderer is None:
renderer = impl(spec)
reg.registerUtility(renderer, ITemplateRenderer, name=spec)
else:
# spec is a package:relpath resource spec
renderer = reg.queryUtility(ITemplateRenderer, name=spec)
if renderer is None:
try:
package_name, filename = spec.split(':', 1)
except ValueError: # pragma: no cover
# somehow we were passed a relative pathname; this
# should die
package_name = caller_package(4).__name__
filename = spec
abspath = pkg_resources.resource_filename(package_name, filename)
if not pkg_resources.resource_exists(package_name, filename):
raise ValueError(
'Missing template resource: %s (%s)' % (spec, abspath))
renderer = impl(abspath)
if not _reload_resources():
# cache the template
reg.registerUtility(renderer, ITemplateRenderer, name=spec)
return renderer
def _reload_resources():
settings = get_settings()
return settings and settings.get('reload_resources')
def renderer_from_name(path, package=None):
return RendererHelper(path, package=package).get_renderer()
def rendered_response(renderer, result, view, context, request, renderer_name):
# XXX: deprecated, left here only to not break old code; use
# render_to_response instead
if ( hasattr(result, 'app_iter') and hasattr(result, 'headerlist')
and hasattr(result, 'status') ):
return result
system = {'view':view, 'renderer_name':renderer_name,
'context':context, 'request':request}
helper = RendererHelper(renderer_name)
helper.renderer = renderer
return helper.render_to_response(result, system, request=request)
deprecated('rendered_response',
"('repoze.bfg.renderers.rendered_response' is not a public API; it is "
"officially deprecated as of repoze.bfg 1.3; "
"use repoze.bfg.renderers.render_to_response instead')",
)
class RendererHelper(object):
def __init__(self, renderer_name, registry=None, package=None):
if registry is None:
registry = get_current_registry()
self.registry = registry
self.package = package
if renderer_name is None:
factory = registry.queryUtility(IRendererFactory)
renderer_type = None
else:
if '.' in renderer_name:
renderer_type = os.path.splitext(renderer_name)[1]
renderer_name = self.resolve_spec(renderer_name)
else:
renderer_type = renderer_name
factory = registry.queryUtility(IRendererFactory,
name=renderer_type)
self.renderer_name = renderer_name
self.renderer_type = renderer_type
self.factory = factory
@reify
def renderer(self):
if self.factory is None:
raise ValueError('No such renderer factory %s' % self.renderer_type)
return self.factory(self.renderer_name)
def resolve_spec(self, path_or_spec):
if path_or_spec is None:
return path_or_spec
package, filename = resolve_resource_spec(path_or_spec,
self.package)
if package is None:
return filename # absolute filename
return '%s:%s' % (package, filename)
def get_renderer(self):
return self.renderer
def render(self, value, system_values, request=None):
renderer = self.renderer
if system_values is None:
system_values = {
'view':None,
'renderer_name':self.renderer_name,
'context':getattr(request, 'context', None),
'request':request,
}
registry = self.registry
globals_factory = registry.queryUtility(IRendererGlobalsFactory)
if globals_factory is not None:
renderer_globals = globals_factory(system_values)
if renderer_globals:
system_values.update(renderer_globals)
result = renderer(value, system_values)
return result
def render_to_response(self, value, system_values, request=None):
result = self.render(value, system_values, request=request)
return self._make_response(result, request)
def _make_response(self, result, request):
registry = self.registry
response_factory = registry.queryUtility(IResponseFactory,
default=Response)
response = response_factory(result)
if request is not None:
attrs = request.__dict__
content_type = attrs.get('response_content_type', None)
if content_type is not None:
response.content_type = content_type
headerlist = attrs.get('response_headerlist', None)
if headerlist is not None:
for k, v in headerlist:
response.headers.add(k, v)
status = attrs.get('response_status', None)
if status is not None:
response.status = status
charset = attrs.get('response_charset', None)
if charset is not None:
response.charset = charset
cache_for = attrs.get('response_cache_for', None)
if cache_for is not None:
response.cache_expires = cache_for
return response
import re
always_safe = ('ABCDEFGHIJKLMNOPQRSTUVWXYZ'
'abcdefghijklmnopqrstuvwxyz'
'0123456789' '_.-')
_safemaps = {}
_must_quote = {}
def url_quote(s, safe=''):
"""quote('abc def') -> 'abc%20def'
Faster version of Python stdlib urllib.quote which also quotes
the '/' character.
Each part of a URL, e.g. the path info, the query, etc., has a
different set of reserved characters that must be quoted.
RFC 2396 Uniform Resource Identifiers (URI): Generic Syntax lists
the following reserved characters.
reserved = ";" | "/" | "?" | ":" | "@" | "&" | "=" | "+" |
"$" | ","
Each of these characters is reserved in some component of a URL,
but not necessarily in all of them.
Unlike the default version of this function in the Python stdlib,
by default, the quote function is intended for quoting individual
path segments instead of an already composed path that might have
'/' characters in it. Thus, it *will* encode any '/' character it
finds in a string.
"""
cachekey = (safe, always_safe)
try:
safe_map = _safemaps[cachekey]
if not _must_quote[cachekey].search(s):
return s
except KeyError:
safe += always_safe
_must_quote[cachekey] = re.compile(r'[^%s]' % safe)
safe_map = {}
for i in range(256):
c = chr(i)
safe_map[c] = (c in safe) and c or ('%%%02X' % i)
_safemaps[cachekey] = safe_map
res = map(safe_map.__getitem__, s)
return ''.join(res)
def quote_plus(s, safe=''):
""" Version of stdlib quote_plus which uses faster url_quote """
if ' ' in s:
s = url_quote(s, safe + ' ')
return s.replace(' ', '+')
return url_quote(s, safe)
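The behaviour described above matches the modern stdlib equivalents: `urllib.parse.quote` with `safe=''` also percent-encodes `'/'`, and `quote_plus` maps spaces to `'+'`. This is only an illustration of the contract, not the caching implementation in the module:

```python
# Stdlib illustration of the url_quote / quote_plus contract above.
from urllib.parse import quote, quote_plus

print(quote('abc def', safe=''))  # abc%20def
print(quote('a/b', safe=''))      # a%2Fb  ('/' is NOT treated as safe)
print(quote_plus('abc def'))      # abc+def
```

The module's versions exist purely for speed: they precompile a regex and a per-`safe` character map, so repeated calls with already-safe strings return immediately.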
def urlencode(query, doseq=True):
"""
An alternate implementation of Python's stdlib `urllib.urlencode
function <http://docs.python.org/library/urllib.html>`_ which
accepts unicode keys and values within the ``query``
dict/sequence; all Unicode keys and values are first converted to
UTF-8 before being used to compose the query string.
The value of ``query`` must be a sequence of two-tuples
representing key/value pairs *or* an object (often a dictionary)
with an ``.items()`` method that returns a sequence of two-tuples
representing key/value pairs.
For minimal calling convention backwards compatibility, this
version of urlencode accepts *but ignores* a second argument
conventionally named ``doseq``. The Python stdlib version behaves
differently when ``doseq`` is False and when a sequence is
presented as one of the values. This version always behaves in
the ``doseq=True`` mode, no matter what the value of the second
argument.
See the Python stdlib documentation for ``urllib.urlencode`` for
more information.
"""
try:
# presumed to be a dictionary
query = query.items()
except AttributeError:
pass
result = ''
prefix = ''
for (k, v) in query:
if k.__class__ is unicode:
k = k.encode('utf-8')
k = quote_plus(str(k))
if hasattr(v, '__iter__'):
for x in v:
if x.__class__ is unicode:
x = x.encode('utf-8')
x = quote_plus(str(x))
result += '%s%s=%s' % (prefix, k, x)
prefix = '&'
else:
if v.__class__ is unicode:
v = v.encode('utf-8')
v = quote_plus(str(v))
result += '%s%s=%s' % (prefix, k, v)
prefix = '&'
return result
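The always-`doseq` behaviour described above can be demonstrated with the stdlib: `urllib.parse.urlencode(..., doseq=True)` expands sequence values into repeated `key=value` pairs, which is what this module's `urlencode` always does regardless of its second argument:

```python
# Stdlib illustration of the doseq=True expansion this module hardwires.
from urllib.parse import urlencode

pairs = [('a', '1'), ('b', ['2', '3'])]
print(urlencode(pairs, doseq=True))  # a=1&b=2&b=3
```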
from zope.configuration.exceptions import ConfigurationError as ZCE
from zope.interface import implements
from repoze.bfg.decorator import reify
from repoze.bfg.interfaces import IExceptionResponse
import cgi
class ExceptionResponse(Exception):
""" Abstract class to support behaving as a WSGI response object """
implements(IExceptionResponse)
status = None
def __init__(self, message=''):
Exception.__init__(self, message) # B / C
self.message = message
@reify # defer execution until asked explicitly
def app_iter(self):
return [
"""
<html>
<title>%s</title>
<body>
<h1>%s</h1>
<code>%s</code>
</body>
</html>
""" % (self.status, self.status, cgi.escape(self.message))
]
@reify # defer execution until asked explicitly
def headerlist(self):
return [
('Content-Length', str(len(self.app_iter[0]))),
('Content-Type', 'text/html')
]
class Forbidden(ExceptionResponse):
"""
Raise this exception within :term:`view` code to immediately
return the :term:`forbidden view` to the invoking user. Usually
this is a basic ``401`` page, but the forbidden view can be
customized as necessary. See :ref:`changing_the_forbidden_view`.
This exception's constructor accepts a single positional argument,
which should be a string. The value of this string will be placed
into the WSGI environment by the router under the
``repoze.bfg.message`` key, for availability to the
:term:`Forbidden View`.
"""
status = '401 Unauthorized'
class NotFound(ExceptionResponse):
"""
Raise this exception within :term:`view` code to immediately
return the :term:`Not Found view` to the invoking user. Usually
this is a basic ``404`` page, but the Not Found view can be
customized as necessary. See :ref:`changing_the_notfound_view`.
This exception's constructor accepts a single positional argument,
which should be a string. The value of this string will be placed
into the WSGI environment by the router under the
``repoze.bfg.message`` key, for availability to the :term:`Not Found
View`.
"""
status = '404 Not Found'
class PredicateMismatch(NotFound):
"""
Internal exception (not an API) raised by multiviews when no
view matches. This exception subclasses the ``NotFound``
exception for only one reason: if it reaches the main exception
handler, it should be treated like a ``NotFound`` by any exception
view registrations.
"""
class URLDecodeError(UnicodeDecodeError):
"""
This exception is raised when :mod:`repoze.bfg` cannot
successfully decode a URL or a URL path segment. This exception
behaves just like the Python builtin
:exc:`UnicodeDecodeError`. It is a subclass of the builtin
:exc:`UnicodeDecodeError` exception only for identity purposes,
mostly so an exception view can be registered when a URL cannot be
decoded.
"""
class ConfigurationError(ZCE):
""" Raised when inappropriate input values are supplied to an API
method of a :term:`Configurator`"""
import gettext
import os
from translationstring import Translator
from translationstring import Pluralizer
from translationstring import TranslationString # API
from translationstring import TranslationStringFactory # API
TranslationString = TranslationString # PyFlakes
TranslationStringFactory = TranslationStringFactory # PyFlakes
from repoze.bfg.interfaces import ILocalizer
from repoze.bfg.interfaces import ITranslationDirectories
from repoze.bfg.interfaces import ILocaleNegotiator
from repoze.bfg.settings import get_settings
from repoze.bfg.threadlocal import get_current_registry
class Localizer(object):
"""
An object providing translation and pluralizations related to
the current request's locale name. A
:class:`repoze.bfg.i18n.Localizer` object is created using the
:func:`repoze.bfg.i18n.get_localizer` function.
"""
def __init__(self, locale_name, translations):
self.locale_name = locale_name
self.translations = translations
self.pluralizer = None
self.translator = None
def translate(self, tstring, domain=None, mapping=None):
"""
Translate a :term:`translation string` to the current language
and interpolate any *replacement markers* in the result. The
``translate`` method accepts three arguments: ``tstring``
(required), ``domain`` (optional) and ``mapping`` (optional).
When called, it will translate the ``tstring`` translation
string to a ``unicode`` object using the current locale. If
the current locale could not be determined, the result of
interpolation of the default value is returned. The optional
``domain`` argument can be used to specify or override the
domain of the ``tstring`` (useful when ``tstring`` is a normal
string rather than a translation string). The optional
``mapping`` argument can specify or override the ``tstring``
interpolation mapping, useful when the ``tstring`` argument is
a simple string instead of a translation string.
Example::
from repoze.bfg.i18n import TranslationString
ts = TranslationString('Add ${item}', domain='mypackage',
mapping={'item':'Item'})
translated = localizer.translate(ts)
Example::
translated = localizer.translate('Add ${item}', domain='mypackage',
mapping={'item':'Item'})
"""
if self.translator is None:
self.translator = Translator(self.translations)
return self.translator(tstring, domain=domain, mapping=mapping)
def pluralize(self, singular, plural, n, domain=None, mapping=None):
"""
Return a Unicode string translation by using two
:term:`message identifier` objects as a singular/plural pair
and an ``n`` value representing the number that appears in the
message using gettext plural forms support. The ``singular``
and ``plural`` objects passed may be translation strings or
unicode strings. ``n`` represents the number of elements.
``domain`` is the translation domain to use to do the
pluralization, and ``mapping`` is the interpolation mapping
that should be used on the result. Note that if the objects
passed are translation strings, their domains and mappings are
ignored. The domain and mapping arguments must be used
instead. If the ``domain`` is not supplied, a default domain
is used (usually ``messages``).
Example::
num = 1
translated = localizer.pluralize('Add ${num} item',
'Add ${num} items',
num,
mapping={'num':num})
"""
if self.pluralizer is None:
self.pluralizer = Pluralizer(self.translations)
return self.pluralizer(singular, plural, n, domain=domain,
mapping=mapping)
def default_locale_negotiator(request):
""" The default :term:`locale negotiator`. Returns a locale name
or ``None``.
- First, the negotiator looks for the ``_LOCALE_`` attribute of
the request object (possibly set by a view or a listener for an
:term:`event`).
- Then it looks for the ``request.params['_LOCALE_']`` value.
- Then it looks for the ``request.cookies['_LOCALE_']`` value.
- Finally, the negotiator returns ``None`` if the locale could not
be determined via any of the previous checks (when a locale
negotiator returns ``None``, it signifies that the
:term:`default locale name` should be used.)
"""
name = '_LOCALE_'
locale_name = getattr(request, name, None)
if locale_name is None:
locale_name = request.params.get(name)
if locale_name is None:
locale_name = request.cookies.get(name)
return locale_name
def negotiate_locale_name(request):
""" Negotiate and return the :term:`locale name` associated with
the current request (never cached)."""
try:
registry = request.registry
except AttributeError:
registry = get_current_registry()
negotiator = registry.queryUtility(ILocaleNegotiator,
default=default_locale_negotiator)
locale_name = negotiator(request)
if locale_name is None:
settings = get_settings() or {}
locale_name = settings.get('default_locale_name', 'en')
return locale_name
def get_locale_name(request):
""" Return the :term:`locale name` associated with the current
request (possibly cached)."""
locale_name = getattr(request, 'bfg_locale_name', None)
if locale_name is None:
locale_name = negotiate_locale_name(request)
request.bfg_locale_name = locale_name
return locale_name
def get_localizer(request):
""" Retrieve a :class:`repoze.bfg.i18n.Localizer` object
corresponding to the current request's locale name. """
localizer = getattr(request, 'bfg_localizer', None)
if localizer is None:
# no locale object cached on request
try:
registry = request.registry
except AttributeError:
registry = get_current_registry()
current_locale_name = get_locale_name(request)
localizer = registry.queryUtility(ILocalizer, name=current_locale_name)
if localizer is None:
# no localizer utility registered yet
translations = Translations()
translations._catalog = {}
tdirs = registry.queryUtility(ITranslationDirectories, default=[])
for tdir in tdirs:
locale_dirs = [ (lname, os.path.join(tdir, lname)) for lname in
os.listdir(tdir) ]
for locale_name, locale_dir in locale_dirs:
if locale_name != current_locale_name:
continue
messages_dir = os.path.join(locale_dir, 'LC_MESSAGES')
if not os.path.isdir(os.path.realpath(messages_dir)):
continue
for mofile in os.listdir(messages_dir):
mopath = os.path.realpath(os.path.join(messages_dir,
mofile))
if mofile.endswith('.mo') and os.path.isfile(mopath):
mofp = open(mopath, 'rb')
domain = mofile[:-3]
dtrans = Translations(mofp, domain)
translations.add(dtrans)
localizer = Localizer(locale_name=current_locale_name,
translations=translations)
registry.registerUtility(localizer, ILocalizer,
name=current_locale_name)
request.bfg_localizer = localizer
return localizer
class Translations(gettext.GNUTranslations, object):
"""An extended translation catalog class (ripped off from Babel) """
DEFAULT_DOMAIN = 'messages'
def __init__(self, fileobj=None, domain=DEFAULT_DOMAIN):
"""Initialize the translations catalog.
:param fileobj: the file-like object the translation should be read
from
"""
gettext.GNUTranslations.__init__(self, fp=fileobj)
self.files = filter(None, [getattr(fileobj, 'name', None)])
self.domain = domain
self._domains = {}
@classmethod
def load(cls, dirname=None, locales=None, domain=DEFAULT_DOMAIN):
"""Load translations from the given directory.
:param dirname: the directory containing the ``MO`` files
:param locales: the list of locales in order of preference (items in
this list can be either `Locale` objects or locale
strings)
:param domain: the message domain
:return: the loaded catalog, or a ``NullTranslations`` instance if no
matching translations were found
:rtype: `Translations`
"""
if locales is not None:
if not isinstance(locales, (list, tuple)):
locales = [locales]
locales = [str(l) for l in locales]
if not domain:
domain = cls.DEFAULT_DOMAIN
filename = gettext.find(domain, dirname, locales)
if not filename:
return gettext.NullTranslations()
return cls(fileobj=open(filename, 'rb'), domain=domain)
def __repr__(self):
return '<%s: "%s">' % (type(self).__name__,
self._info.get('project-id-version'))
def add(self, translations, merge=True):
"""Add the given translations to the catalog.
If the domain of the translations is different than that of the
current catalog, they are added as a catalog that is only accessible
by the various ``d*gettext`` functions.
:param translations: the `Translations` instance with the messages to
add
:param merge: whether translations for message domains that have
already been added should be merged with the existing
translations
:return: the `Translations` instance (``self``) so that `merge` calls
can be easily chained
:rtype: `Translations`
"""
domain = getattr(translations, 'domain', self.DEFAULT_DOMAIN)
if merge and domain == self.domain:
return self.merge(translations)
existing = self._domains.get(domain)
if merge and existing is not None:
existing.merge(translations)
else:
translations.add_fallback(self)
self._domains[domain] = translations
return self
def merge(self, translations):
"""Merge the given translations into the catalog.
Message translations in the specified catalog override any messages
with the same identifier in the existing catalog.
:param translations: the `Translations` instance with the messages to
merge
:return: the `Translations` instance (``self``) so that `merge` calls
can be easily chained
:rtype: `Translations`
"""
if isinstance(translations, gettext.GNUTranslations):
self._catalog.update(translations._catalog)
if isinstance(translations, Translations):
self.files.extend(translations.files)
return self
def dgettext(self, domain, message):
"""Like ``gettext()``, but look the message up in the specified
domain.
"""
return self._domains.get(domain, self).gettext(message)
def ldgettext(self, domain, message):
"""Like ``lgettext()``, but look the message up in the specified
domain.
"""
return self._domains.get(domain, self).lgettext(message)
def dugettext(self, domain, message):
"""Like ``ugettext()``, but look the message up in the specified
domain.
"""
return self._domains.get(domain, self).ugettext(message)
def dngettext(self, domain, singular, plural, num):
"""Like ``ngettext()``, but look the message up in the specified
domain.
"""
return self._domains.get(domain, self).ngettext(singular, plural, num)
def ldngettext(self, domain, singular, plural, num):
"""Like ``lngettext()``, but look the message up in the specified
domain.
"""
return self._domains.get(domain, self).lngettext(singular, plural, num)
def dungettext(self, domain, singular, plural, num):
"""Like ``ungettext()`` but look the message up in the specified
domain.
"""
return self._domains.get(domain, self).ungettext(singular, plural, num) | /repoze.bfg-1.3a10.tar.gz/repoze.bfg-1.3a10/repoze/bfg/i18n.py | 0.777384 | 0.24805 | i18n.py | pypi |
=====================
Author Introduction
=====================
Welcome to "The :mod:`repoze.bfg` Web Application Framework". In this
introduction, I'll describe the audience for this book, I'll describe
the book content, I'll provide some context regarding the genesis of
:mod:`repoze.bfg`, and I'll thank some important people.
I hope you enjoy both this book and the software it documents. I've
had a blast writing both.
.. index::
single: book audience
Audience
========
This book is aimed primarily at a reader that has the following
attributes:
- At least a moderate amount of :term:`Python` experience.
- A familiarity with web protocols such as HTTP and CGI.
If you fit into both of these categories, you're in the direct target
audience for this book. But don't worry, even if you have no
experience with Python or the web, both are easy to pick up "on the
fly".
Python is an *excellent* language in which to write applications;
becoming productive in Python is almost mind-blowingly easy. If you
already have experience in another language such as Java, Visual
Basic, Perl, Ruby, or even C/C++, learning Python will be a snap; it
should take you no longer than a couple of days to become modestly
productive. If you don't have previous programming experience, it
will be slightly harder, and it will take a little longer, but you'd
be hard-pressed to find a better "first language."
Web technology familiarity is assumed in various places within the
book. For example, the book doesn't try to define common web-related
concepts like "URL" or "query string." Likewise, the book describes
various interactions in terms of the HTTP protocol, but it does not
describe how the HTTP protocol works in detail. Like any good web
framework, though, :mod:`repoze.bfg` shields you from needing to know
most of the gory details of web protocols and low-level data
structures. As a result, you can usually avoid becoming "blocked"
while you read this book even if you don't yet deeply understand web
technologies.
.. index::
single: book content overview
Book Content
============
This book is divided into four major parts:
:ref:`narrative_documentation`
This is documentation which describes :mod:`repoze.bfg` concepts in
narrative form, written in a largely conversational tone. Each
narrative documentation chapter describes an isolated
:mod:`repoze.bfg` concept. You should be able to get useful
information out of the narrative chapters if you read them
out-of-order, or when you need only a reminder about a particular
topic while you're developing an application.
:ref:`tutorials`
Each tutorial builds a sample application or implements a set of
concepts with a sample; it then describes the application or
concepts in terms of the sample. You should read the tutorials if
you want a guided tour of :mod:`repoze.bfg`.
:ref:`api_reference`
Comprehensive reference material for every public API exposed by
:mod:`repoze.bfg`. The API documentation is organized
alphabetically by module name.
:ref:`zcml_reference`
Comprehensive reference material for every :term:`ZCML directive`
provided by :mod:`repoze.bfg`. The ZCML directive documentation is
organized alphabetically by directive name.
.. index::
single: repoze.zope2
single: Zope 3
single: Zope 2
single: repoze.bfg genesis
The Genesis of :mod:`repoze.bfg`
================================
I wrote :mod:`repoze.bfg` after many years of writing applications
using :term:`Zope`. Zope provided me with a lot of mileage: it wasn't
until almost a decade of successfully creating applications using it
that I decided to write a different web framework. Although
:mod:`repoze.bfg` takes inspiration from a variety of web frameworks,
it owes more of its core design to Zope than any other.
The Repoze "brand" existed before :mod:`repoze.bfg` was created. One
of the first packages developed as part of the Repoze brand was a
package named :mod:`repoze.zope2`. This was a package that allowed
Zope 2 applications to run under a :term:`WSGI` server without
modification. Zope 2 did not have reasonable WSGI support at the
time.
During the development of the :mod:`repoze.zope2` package, I found
that replicating the Zope 2 "publisher" -- the machinery that maps
URLs to code -- was time-consuming and fiddly. Zope 2 had evolved
over many years, and emulating all of its edge cases was extremely
difficult. I finished the :mod:`repoze.zope2` package, and it
emulates the normal Zope 2 publisher pretty well. But during its
development, it became clear that Zope 2 had simply begun to exceed my
tolerance for complexity, and I began to look around for simpler
options.
I considered using the Zope 3 application server machinery, but it
turned out that it had become more indirect than the Zope 2 machinery
it aimed to replace, which didn't fulfill the goal of simplification.
I also considered using Django and Pylons, but neither of those
frameworks offers much along the axes of traversal, contextual
declarative security, or application extensibility; these were
features I had become accustomed to as a Zope developer.
I decided that in the long term, creating a simpler framework that
retained features I had become accustomed to when developing Zope
applications was a more reasonable idea than continuing to use any
Zope publisher or living with the limitations and unfamiliarities of a
different framework. The result is what is now :mod:`repoze.bfg`.
It is immodest to say so, but I believe :mod:`repoze.bfg` has turned
out to be the very best Python web framework available today, bar
none. It combines all the "good parts" from other web frameworks into
a cohesive whole that is reliable, down-to-earth, flexible, speedy,
and well-documented.
.. index::
single: Bicking, Ian
single: Everitt, Paul
single: Seaver, Tres
single: Sawyers, Andrew
single: Borch, Malthe
single: de la Guardia, Carlos
single: Brandl, Georg
single: Oram, Simon
single: Hardwick, Nat
single: Fulton, Jim
single: Moroz, Tom
single: Koym, Todd
single: van Rossum, Guido
single: Peters, Tim
single: Rossi, Chris
single: Holth, Daniel
single: Hathaway, Shane
single: Akkerman, Wichert
Thanks
======
This book is dedicated to my grandmother, who gave me my first
typewriter (a Royal), and my mother, who bought me my first computer
(a VIC-20).
Thanks to the following people for providing expertise, resources, and
software. Without the help of these folks, neither this book nor the
software which it details would exist: Paul Everitt, Tres Seaver,
Andrew Sawyers, Malthe Borch, Carlos de la Guardia, Chris Rossi, Shane
Hathaway, Daniel Holth, Wichert Akkerman, Georg Brandl, Simon Oram and
Nat Hardwick of Electrosoup, Ian Bicking of the Open Planning Project,
Jim Fulton of Zope Corporation, Tom Moroz of the Open Society
Institute, and Todd Koym of Environmental Health Sciences.
Thanks to Guido van Rossum and Tim Peters for Python.
Special thanks to Tricia for putting up with me.
What's New In :mod:`repoze.bfg` 1.1
===================================
This article explains the new features in :mod:`repoze.bfg` version
1.1 as compared to the previous 1.0 release. It also documents
backwards incompatibilities between the two versions and deprecations
added to 1.1, as well as software dependency changes and notable
documentation additions.
Major Feature Additions
-----------------------
The major feature additions of 1.1 are:
- Allow the use of additional :term:`view predicate` parameters to
more finely control view matching.
- Allow the use of :term:`route predicate` parameters to more finely
control route matching.
- Make it possible to write views that use a :term:`renderer`.
- Make it possible to write views that use a "wrapper view".
- Added ``<static>`` ZCML directive which registers a view which
serves up files in a package directory.
- A new API function: ``repoze.bfg.url.static_url`` is available to
compute the path of static resources.
- View configurations can now name an ``attr`` representing the method
or attribute of the view callable that should be called to return a
response.
- ``@bfg_view`` decorators may now be stacked, allowing for the same
view callable to be associated with multiple different view
configurations without resorting to ZCML for view configuration.
- ``@bfg_view`` decorators may now be placed on a class method.
- ``paster bfgshell`` now supports IPython if it is found in the
Python environment invoking Paster.
- Commonly executed codepaths have been accelerated.
.. _view_predicates_in_1dot1:
View Predicates
~~~~~~~~~~~~~~~
In :mod:`repoze.bfg` 1.0, :term:`view configuration` allowed relatively
coarse matching of a :term:`request` to a :term:`view callable`.
Individual view configurations could match the same URL depending on
the :term:`context` and the URL path, as well as a limited number of
other request values.
For example, two view configurations mentioning the same :term:`view
name` could be spelled via a ``@bfg_view`` decorator with a different
``for_`` parameter.  The view ultimately chosen would be based on the
:term:`context` type, as indicated by the ``for_`` attribute, like so:
.. ignore-next-block
.. code-block:: python
   from webob import Response
   from repoze.bfg.view import bfg_view
   from myapp.models import Document
   from myapp.models import Folder

   @bfg_view(name='index.html', for_=Document)
   def document_view(context, request):
       return Response('document view')

   @bfg_view(name='index.html', for_=Folder)
   def folder_view(context, request):
       return Response('folder view')
In the above configuration, the ``document_view`` :term:`view
callable` will be chosen when the :term:`context` is of the class
``myapp.models.Document``, while the ``folder_view`` view callable
will be chosen when the context is of class ``myapp.models.Folder``.
There were a number of other attributes that could influence the
choosing of view callables, such as ``request_type``, and others.
However, the matching algorithm was rather limited.
In :mod:`repoze.bfg` 1.1, this facility has been enhanced via the
availability of additional :term:`view predicate` attributes. For
example, one view predicate new to 1.1 is ``containment``, which
implies that the view will be called when the class or interface
mentioned as ``containment`` is present with respect to any instance
in the :term:`lineage` of the context:
.. ignore-next-block
.. code-block:: python
   from webob import Response
   from repoze.bfg.view import bfg_view
   from myapp.models import Document
   from myapp.models import Folder
   from myapp.models import Blog
   from myapp.models import Calendar

   @bfg_view(name='index.html', for_=Document, containment=Blog)
   def blog_document_view(context, request):
       return Response('blog document view')

   @bfg_view(name='index.html', for_=Folder, containment=Blog)
   def blog_folder_view(context, request):
       return Response('blog folder view')

   @bfg_view(name='index.html', for_=Document, containment=Calendar)
   def calendar_document_view(context, request):
       return Response('calendar document view')

   @bfg_view(name='index.html', for_=Folder, containment=Calendar)
   def calendar_folder_view(context, request):
       return Response('calendar folder view')
As might be evident in the above example, you can use the
``containment`` predicate to arrange for different view callables to
be called based on the lineage of the context. In the above example,
the ``blog_document_view`` will be called when the context is of the
class ``myapp.models.Document`` and the containment has an instance of
the class ``myapp.models.Blog`` in it. But when all else is equal,
except the containment has an instance of the class
``myapp.models.Calendar`` in it instead of ``myapp.models.Blog``, the
``calendar_document_view`` will be called instead.
All view predicates configurable via the ``@bfg_view`` decorator are
available via :term:`ZCML` :term:`view configuration` as well.
Additional new 1.1 view predicates besides ``containment`` are:
``request_method``
True if the specified value (e.g. GET/POST/HEAD/PUT/DELETE) is the
request.method value.
``request_param``
True if the specified value is present in the request.GET or
request.POST multidicts.
``xhr``
True if the request.is_xhr attribute is ``True``, meaning that the
request has an ``X-Requested-With`` header with the value
``XMLHttpRequest``
``accept``
True if the value of this attribute matches one or more
mimetypes in the ``Accept`` HTTP request header.
``header``
True if the value of this attribute represents an HTTP header name
or a header name/value pair present in the request.
``path_info``
True if the value of this attribute (a regular expression pattern)
matches the ``PATH_INFO`` WSGI environment variable.
All other existing view configuration parameters from 1.0 still exist.
Any number of view predicates can be specified in a view
configuration. All view predicates in a view configuration must be
True for a view callable to be invoked. If one does not evaluate to
True, the view will not be invoked, and view matching will continue,
until all potential matches are exhausted (and the Not Found view is
invoked).
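As an illustrative sketch (the view callable name here is
hypothetical), a single ZCML view declaration can combine several of
these predicates; the attribute names mirror the predicate names:

.. code-block:: xml
   :linenos:

   <view
     view=".views.ajax_post_view"
     request_method="POST"
     xhr="true"
     />

This view callable would be invoked only for ``POST`` requests that
also carry an ``X-Requested-With: XMLHttpRequest`` header.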
.. _route_predicates_in_1dot1:
Route Predicates
~~~~~~~~~~~~~~~~
In :mod:`repoze.bfg` 1.0, a :term:`route` would match or not match
based on only one value: the ``PATH_INFO`` value of the WSGI
environment, as specified by the ``path`` parameter of the ``<route>``
ZCML directive.
In 1.1, matching can be more finely controlled via the use of one or
more :term:`route predicate` attributes.
The additional route predicates in 1.1 are:
``xhr``
True if the request.is_xhr attribute is ``True``, meaning that the
request has an ``X-Requested-With`` header with the value
``XMLHttpRequest``.
``request_method``
True if the specified value (e.g. GET/POST/HEAD/PUT/DELETE) is the
request.method value.
``path_info``
True if the value of this attribute (a regular expression pattern)
matches the ``PATH_INFO`` WSGI environment variable.
``request_param``
True if the specified value is present in either of the
``request.GET`` or ``request.POST`` multidicts.
``header``
True if the value of this attribute represents an HTTP header name
or a header name/value pair present in the request.
``accept``
True if the value of this attribute matches one or more
mimetypes in the ``Accept`` HTTP request header.
All other existing route configuration parameters from 1.0 still exist.
Any number of route predicates can be specified in a route
configuration. All route predicates in a route configuration must be
True for a route to match a request. If one does not evaluate to
True, the route will not be invoked, and route matching will continue,
until all potential routes are exhausted (at which point, traversal is
attempted).
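As an illustrative sketch (the route name, path, and view here are
hypothetical), route predicates are supplied as extra attributes of
the ``<route>`` ZCML directive, with attribute names mirroring the
predicate names:

.. code-block:: xml
   :linenos:

   <route
     name="api_create"
     path="/api/items"
     view=".views.create_item"
     request_method="POST"
     />

This route matches only when ``PATH_INFO`` is ``/api/items`` *and* the
request method is ``POST``; a ``GET`` request for the same path would
fall through to subsequent routes (or to traversal).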
View Renderers
~~~~~~~~~~~~~~
In :mod:`repoze.bfg` 1.0 and prior, views were required to return a
:term:`response` object unconditionally.
In :mod:`repoze.bfg` 1.1, a :term:`view configuration` can name a
:term:`renderer`. A renderer can either be a template or a token that
is associated with a serialization technique (e.g. ``json``). When a
view configuration names a renderer, the view can return a data
structure understood by the renderer (such as a dictionary), and the
renderer will convert the data structure to a response on the behalf
of the developer.
View configuration can vary the renderer associated with a view via
the ``renderer`` attribute to the configuration. For example, this
ZCML associates the ``json`` renderer with a view:
.. code-block:: xml
   :linenos:

   <view
     view=".views.my_view"
     renderer="json"
     />
The ``@bfg_view`` decorator can also associate a view callable with a
renderer:
.. code-block:: python
   :linenos:

   from repoze.bfg.view import bfg_view

   @bfg_view(renderer='json')
   def my_view(context, request):
       return {'abc':123}
The ``json`` renderer renders view return values to a :term:`JSON`
serialization.
Another built-in renderer uses the :term:`Chameleon` templating
language to render a dictionary to a response. For example:
.. code-block:: python
   :linenos:

   from repoze.bfg.view import bfg_view

   @bfg_view(renderer='templates/my_template.pt')
   def my_view(context, request):
       return {'abc':123}
See :ref:`built_in_renderers` for the available built-in renderers.
If the ``view`` callable associated with a ``view`` directive returns
a Response object (an object with the attributes ``status``,
``headerlist`` and ``app_iter``), any renderer associated with the
``view`` declaration is ignored, and the response is passed back to
BFG unmolested. For example, if your view callable returns an
``HTTPFound`` response, no renderer will be employed.
.. code-block:: python
   :linenos:

   from webob.exc import HTTPFound
   from repoze.bfg.view import bfg_view

   @bfg_view(renderer='templates/my_template.pt')
   def my_view(context, request):
       return HTTPFound(location='http://example.com') # renderer avoided
Additional renderers can be added to the system as necessary via a
ZCML directive (see :ref:`adding_and_overriding_renderers`).
If you do not define a ``renderer`` attribute in view configuration
for a view, no renderer is associated with the view. In such a
configuration, an error is raised when a view does not return an
object which implements :term:`Response` interface, as was the case
under BFG 1.0.
Views Which Use Wrappers
~~~~~~~~~~~~~~~~~~~~~~~~
In :mod:`repoze.bfg` 1.1, view configuration may specify a ``wrapper``
attribute. For example:
.. code-block:: xml
   :linenos:

   <view
     name="one"
     view=".views.wrapper_view"
     />

   <view
     name="two"
     view=".views.my_view"
     wrapper="one"
     />
The ``wrapper`` attribute of a view configuration is a :term:`view
name` (*not* an object dotted name). It specifies *another* view
callable declared elsewhere in :term:`view configuration`. In the
above example, the wrapper of the ``two`` view is the ``one`` view.
The wrapper view will be called after the wrapped view is
invoked; it will receive the response body of the wrapped view as the
``wrapped_body`` attribute of its own request, and the response
returned by this view as the ``wrapped_response`` attribute of its own
request.
Using a wrapper makes it possible to "chain" views together to form a
composite response. The response of the outermost wrapper view will
be returned to the user.
The wrapper view will be found as any view is found: see
:ref:`view_lookup`. The "best" wrapper view will be found based on
the lookup ordering: "under the hood" this wrapper view is looked up
via ``repoze.bfg.view.render_view_to_response(context, request,
'wrapper_viewname')``. The context and request of a wrapper view is
the same context and request of the inner view.
If the ``wrapper`` attribute is unspecified in a view configuration,
no view wrapping is done.
The ``@bfg_view`` decorator accepts a ``wrapper`` parameter, mirroring
its ZCML view configuration counterpart.
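For example, the decorator-based equivalent of the ``two`` view
declared in the ZCML above might be sketched as follows (the view
names are hypothetical):

.. ignore-next-block
.. code-block:: python
   :linenos:

   from repoze.bfg.view import bfg_view

   @bfg_view(name='two', wrapper='one')
   def my_view(context, request):
       # the response body returned here becomes request.wrapped_body
       # inside the 'one' wrapper view
       ...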
``<static>`` ZCML Directive
~~~~~~~~~~~~~~~~~~~~~~~~~~~
A new ZCML directive named ``static`` has been added. Inserting a
``static`` declaration in a ZCML file will cause static resources to
be served at a configurable URL.
Here's an example of a ``static`` directive that will serve files up
from the ``templates/static`` directory of the :mod:`repoze.bfg`
application containing the following configuration at the URL
``/static``.
.. code-block:: xml
   :linenos:

   <static
     name="static"
     path="templates/static"
     />
Using the ``static`` ZCML directive is now the preferred way to serve
static resources (such as JavaScript and CSS files) within a
:mod:`repoze.bfg` application. Previous strategies for serving static
resources will still work, however.
New ``static_url`` API
~~~~~~~~~~~~~~~~~~~~~~
The new ``repoze.bfg.url.static_url`` API generates a fully qualified
URL to a static resource available via a path exposed via the
``<static>`` ZCML directive (see :ref:`static_resources_section`).
For example, if a ``<static>`` directive is in ZCML configuration like
so:
.. code-block:: xml
   :linenos:

   <static
     name="static"
     path="templates/static"
     />
You can generate a URL to a resource which lives within the
``templates/static`` subdirectory using the ``static_url`` API like
so:
.. ignore-next-block
.. code-block:: python
   :linenos:

   from repoze.bfg.url import static_url

   url = static_url('templates/static/example.css', request)
Use of the ``static_url`` API prevents the developer from needing to
hardcode path values in template URLs.
``attr`` View Configuration Value
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The view machinery defaults to using the ``__call__`` method of the
view callable (or the function itself, if the view callable is a
function) to obtain a response.
In :mod:`repoze.bfg` 1.1, the ``attr`` view configuration value allows
you to vary the attribute of a view callable used to obtain the
response.
For example, if your view is a class, and the class has a method named
``index`` and you want to use this method instead of the class'
``__call__`` method to return the response, you'd say ``attr="index"``
in the view configuration for the view.
Specifying ``attr`` is most useful when the view definition is a
class. For example:
.. code-block:: xml
   :linenos:

   <view
     view=".views.MyViewClass"
     attr="index"
     />
The referenced ``MyViewClass`` might look like so:
.. code-block:: python
   :linenos:

   from webob import Response

   class MyViewClass(object):
       def __init__(self, context, request):
           self.context = context
           self.request = request

       def index(self):
           return Response('OK')
The ``index`` method of the class will be used to obtain a response.
``@bfg_view`` Decorators May Now Be Stacked
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
More than one ``@bfg_view`` decorator may now be stacked on top of any
number of others. Each invocation of the decorator registers a single
view configuration. For instance, the following combination of
decorators and a function will register two view configurations for
the same view callable:
.. code-block:: python
   :linenos:

   from repoze.bfg.view import bfg_view

   @bfg_view(name='edit')
   @bfg_view(name='change')
   def edit(context, request):
       pass
This makes it possible to associate more than one view configuration
with a single callable without requiring any ZCML.
Stacking ``@bfg_view`` decorators was not possible in
:mod:`repoze.bfg` 1.0.
``@bfg_view`` Decorators May Now Be Applied to A Class Method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In :mod:`repoze.bfg` 1.0, the ``@bfg_view`` decorator could not be
used on class methods. In 1.1, the ``@bfg_view`` decorator can be
used against a class method:
.. code-block:: python
   :linenos:

   from webob import Response
   from repoze.bfg.view import bfg_view

   class MyView(object):
       def __init__(self, context, request):
           self.context = context
           self.request = request

       @bfg_view(name='hello')
       def amethod(self):
           return Response('hello from %s!' % self.context)
When the bfg_view decorator is used against a class method, a view is
registered for the *class* (it's a "class view" where the "attr"
happens to be the name of the method it is attached to), so the class
it's defined within must have a suitable constructor: one that accepts
``context, request`` or just ``request``.
IPython Support
~~~~~~~~~~~~~~~
If it is installed in the environment used to run :mod:`repoze.bfg`,
an `IPython <http://ipython.scipy.org/moin/>`_ shell will be opened
when the ``paster bfgshell`` command is invoked.
Common Codepaths Have Been Accelerated
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:mod:`repoze.bfg` 1.1 is roughly 10% - 20% faster in commonly executed
codepaths than :mod:`repoze.bfg` 1.0 was on average. Accelerated APIs
include ``repoze.bfg.location.lineage``, ``repoze.bfg.url.model_url``,
and ``repoze.bfg.url.route_url``. Other internal (non-API) functions
were similarly accelerated.
Minor Miscellaneous Feature Additions
-------------------------------------
- For behavior like Django's ``APPEND_SLASH=True``, use the
``repoze.bfg.view.append_slash_notfound_view`` view as the Not Found
view in your application. When this view is the Not Found view
(indicating that no view was found), and any routes have been
defined in the configuration of your application, if the value of
``PATH_INFO`` does not already end in a slash, and if the value of
``PATH_INFO`` *plus* a slash matches any route's path, do an HTTP
redirect to the slash-appended PATH_INFO. Note that this will
*lose* ``POST`` data information (turning it into a GET), so you
shouldn't rely on this to redirect POST requests.
- Add ``repoze.bfg.testing.registerSettings`` API, which is documented
in the "repoze.bfg.testing" API chapter. This allows for
registration of "settings" values obtained via
``repoze.bfg.settings.get_settings()`` for use in unit tests.
- Added ``max_age`` parameter to ``authtktauthenticationpolicy`` ZCML
directive. If this value is set, it must be an integer representing
the number of seconds which the auth tkt cookie will survive.
Mainly, its existence allows the auth_tkt cookie to survive across
browser sessions.
- The ``reissue_time`` argument to the ``authtktauthenticationpolicy``
ZCML directive now actually works. When it is set to an integer
value, an auth ticket set-cookie header is appended to the response
whenever a request requires authentication and 'now' minus the
auth ticket's timestamp is greater than ``reissue_time`` seconds.
- Expose and document ``repoze.bfg.testing.zcml_configure`` API. This
function populates a component registry from a ZCML file for testing
purposes. It is documented in the "Unit and Integration Testing"
chapter.
- Virtual hosting narrative docs chapter updated with info about
``mod_wsgi``.
- Added "Creating Integration Tests" section to unit testing narrative
documentation chapter. As a result, the name of the unit testing
chapter is now "Unit and Integration Testing".
- Add a new ``repoze.bfg.testing`` API: ``registerRoute``, for
registering routes to satisfy calls to
e.g. ``repoze.bfg.url.route_url`` in unit tests.
- Added a tutorial which explains how to use ``repoze.session``
(ZODB-based sessions) in a ZODB-based repoze.bfg app.
- Added a tutorial which explains how to add ZEO to a ZODB-based
``repoze.bfg`` application.
- Added a tutorial which explains how to run a ``repoze.bfg``
application under `mod_wsgi <http://code.google.com/p/modwsgi/>`_.
See "Running a repoze.bfg Application under mod_wsgi" in the
tutorials section of the documentation.
- Allow ``repoze.bfg.traversal.find_interface`` API to use a class
object as the argument to compare against the ``model`` passed in.
This means you can now do ``find_interface(model, SomeClass)`` and
the first object which is found in the lineage which has
``SomeClass`` as its class (or the first object found which has
``SomeClass`` as any of its superclasses) will be returned.
- The ordering of route declarations vs. the ordering of view
declarations that use a "route_name" in ZCML no longer matters.
Previously it had been impossible to use a route_name from a route
that had not yet been defined in ZCML (order-wise) within a "view"
declaration.
- The repoze.bfg router now catches both
``repoze.bfg.exceptions.Unauthorized`` and
``repoze.bfg.exceptions.NotFound`` exceptions while rendering a view.
When the router catches an ``Unauthorized``, it returns the
registered forbidden view. When the router catches a ``NotFound``,
it returns the registered notfound view.
- Add a new event type: ``repoze.bfg.events.AfterTraversal``. Events
of this type will be sent after traversal is completed, but before
any view code is invoked. Like ``repoze.bfg.events.NewRequest``,
this event will have a single attribute: ``request``, representing
the current request. Unlike the request attribute of
``repoze.bfg.events.NewRequest`` however, during an AfterTraversal
event, the request object will possess attributes set by the
traverser, most notably ``context``, which will be the context used
when a view is found and invoked. The interface
``repoze.bfg.events.IAfterTraversal`` can be used to subscribe to
the event. For example::
<subscriber for="repoze.bfg.interfaces.IAfterTraversal"
handler="my.app.handle_after_traverse"/>
Like any framework event, a subscriber function should expect one
parameter: ``event``.
- A ``repoze.bfg.testing.registerRoutesMapper`` testing facility has
been added. This testing function registers a routes "mapper"
object in the registry, for tests which require its presence. This
function is documented in the ``repoze.bfg.testing`` API
documentation.
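The subscriber contract for ``AfterTraversal`` described above can be sketched in a few lines; the handler receives one ``event`` argument whose ``request`` attribute already carries the traversal results. The dummy classes below are illustrative stand-ins for the framework's request and event objects, not bfg internals:

```python
# Minimal sketch of an IAfterTraversal subscriber: called with a single
# ``event`` whose ``request`` already has attributes set by the traverser,
# most notably ``context``.
def handle_after_traverse(event):
    request = event.request
    # traversal is finished here, but no view code has run yet
    return request.context

# Illustrative stand-ins for the framework's request and event objects
class DummyRequest(object):
    def __init__(self, context):
        self.context = context

class DummyEvent(object):
    def __init__(self, request):
        self.request = request

event = DummyEvent(DummyRequest(context='the-root'))
handle_after_traverse(event)  # returns 'the-root'
```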
Backwards Incompatibilities
---------------------------
- The ``authtkt`` authentication policy ``remember`` method now no
longer honors ``token`` or ``userdata`` keyword arguments.
- Importing ``getSiteManager`` and ``get_registry`` from
``repoze.bfg.registry`` is no longer supported. These imports were
deprecated in repoze.bfg 1.0. Import of ``getSiteManager`` should
be done as ``from zope.component import getSiteManager``. Import of
``get_registry`` should be done as ``from repoze.bfg.threadlocal
import get_current_registry``. This was done to prevent a circular
import dependency.
- Code bases which alternately invoke both
``zope.testing.cleanup.cleanUp`` and ``repoze.bfg.testing.cleanUp``
(treating them equivalently, using them interchangeably) in the
setUp/tearDown of unit tests will begin to experience test failures
due to lack of test isolation. The "right" mechanism is
``repoze.bfg.testing.cleanUp`` (or the combination of
``repoze.bfg.testing.setUp`` and
``repoze.bfg.testing.tearDown``), but a good number of legacy
codebases will use ``zope.testing.cleanup.cleanUp`` instead. We
support ``zope.testing.cleanup.cleanUp`` but not in combination with
``repoze.bfg.testing.cleanUp`` in the same codebase. You should use
one or the other test cleanup function in a single codebase, but not
both.
- In 0.8a7, the return value expected from an object implementing
``ITraverserFactory`` was changed from a sequence of values to a
dictionary containing the keys ``context``, ``view_name``,
``subpath``, ``traversed``, ``virtual_root``, ``virtual_root_path``,
and ``root``. Until now, old-style traversers which returned a
sequence have continued to work but have generated a deprecation
warning. In this release, traversers which return a sequence
instead of a dictionary will no longer work.
- The interfaces ``IPOSTRequest``, ``IGETRequest``, ``IPUTRequest``,
``IDELETERequest``, and ``IHEADRequest`` have been removed from the
``repoze.bfg.interfaces`` module. These were not documented as APIs
post-1.0. Instead of using one of these, use a ``request_method``
ZCML attribute or ``request_method`` bfg_view decorator parameter
containing an HTTP method name (one of ``GET``, ``POST``, ``HEAD``,
``PUT``, ``DELETE``) instead of one of these interfaces if you were
using one explicitly. Passing a string in the set (``GET``,
``HEAD``, ``PUT``, ``POST``, ``DELETE``) as a ``request_type``
argument will work too. Rationale: instead of relying on interfaces
attached to the request object, BFG now uses a "view predicate" to
determine the request type.
- Views registered without the help of the ZCML ``view`` directive are
now responsible for performing their own authorization checking.
- The ``registry_manager`` backwards compatibility alias importable
from "repoze.bfg.registry", deprecated since repoze.bfg 0.9 has been
removed. If you are trying to use the registry manager within a
debug script of your own, use a combination of the
"repoze.bfg.paster.get_app" and "repoze.bfg.scripting.get_root" APIs
instead.
- The ``INotFoundAppFactory`` interface has been removed; it has
been deprecated since repoze.bfg 0.9. If you have something like
the following in your ``configure.zcml``::
<utility provides="repoze.bfg.interfaces.INotFoundAppFactory"
component="helloworld.factories.notfound_app_factory"/>
Replace it with something like::
<notfound
view="helloworld.views.notfound_view"/>
See "Changing the Not Found View" in the "Hooks" chapter of the
documentation for more information.
- The ``IUnauthorizedAppFactory`` interface has been removed; it has
been deprecated since repoze.bfg 0.9. If you have something like
the following in your ``configure.zcml``::
<utility provides="repoze.bfg.interfaces.IUnauthorizedAppFactory"
component="helloworld.factories.unauthorized_app_factory"/>
Replace it with something like::
<forbidden
view="helloworld.views.forbidden_view"/>
See "Changing the Forbidden View" in the "Hooks" chapter of the
documentation for more information.
- ``ISecurityPolicy``-based security policies, deprecated since
repoze.bfg 0.9, have been removed. If you have something like this
in your ``configure.zcml``, it will no longer work::
<utility
provides="repoze.bfg.interfaces.ISecurityPolicy"
factory="repoze.bfg.security.RemoteUserInheritingACLSecurityPolicy"
/>
If ZCML like the above exists in your application, you will receive
an error at startup time. Instead of the above, you'll need
something like::
<remoteuserauthenticationpolicy/>
<aclauthorizationpolicy/>
This is just an example. See the "Security" chapter of the
repoze.bfg documentation for more information about configuring
security policies.
- The ``repoze.bfg.scripting.get_root`` function now expects a
``request`` object as its second argument rather than an
``environ``.
Deprecations and Behavior Differences
-------------------------------------
- In previous versions of BFG, the "root factory" (the ``get_root``
callable passed to ``make_app`` or a function pointed to by the
``factory`` attribute of a route) was called with a "bare" WSGI
environment. In this version, and going forward, it will be called
with a ``request`` object. The request object passed to the factory
implements dictionary-like methods in such a way that existing root
factory code which expects to be passed an environ will continue to
work.
- The ``__call__`` of a plugin "traverser" implementation (registered
as an adapter for ``ITraverser`` or ``ITraverserFactory``) will now
receive a *request* as the single argument to its ``__call__``
method. In previous versions it was passed a WSGI ``environ``
object. The request object passed to the factory implements
dictionary-like methods in such a way that existing traverser code
which expects to be passed an environ will continue to work.
- The request implements dictionary-like methods that mutate and query
the WSGI environ. This is only for the purpose of backwards
compatibility with root factories which expect an ``environ`` rather
than a request.
- The order in which the router calls the request factory and the root
factory has been reversed. The request factory is now called first;
the resulting request is passed to the root factory.
- Add ``setUp`` and ``tearDown`` functions to the
``repoze.bfg.testing`` module. Using ``setUp`` in a test setup and
``tearDown`` in a test teardown is now the recommended way to do
component registry setup and teardown. Previously, it was
recommended that a single function named
``repoze.bfg.testing.cleanUp`` be called in both the test setup and
tear down. ``repoze.bfg.testing.cleanUp`` still exists (and will
exist "forever" due to its widespread use); it is now just an alias
for ``repoze.bfg.testing.setUp`` and is nominally deprecated.
- The import of ``repoze.bfg.security.Unauthorized`` is deprecated in
favor of ``repoze.bfg.exceptions.Forbidden``. The old location
still functions but emits a deprecation warning. The rename from
``Unauthorized`` to ``Forbidden`` brings parity to the name of the
exception and the system view it invokes when raised.
- Custom ZCML directives which register an authentication or
authorization policy (ala "authtktauthenticationpolicy" or
"aclauthorizationpolicy") should register the policy "eagerly" in
the ZCML directive instead of from within a ZCML action. If an
authentication or authorization policy is not found in the component
registry by the view machinery during deferred ZCML processing, view
security will not work as expected.
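The dictionary-like backwards compatibility described above, for root factories and traversers that still expect an ``environ``, can be pictured with a short sketch. The class name is illustrative, not a bfg internal:

```python
# Sketch: a request whose dict-like methods read and mutate the
# underlying WSGI environ, so environ-expecting code keeps working.
class CompatRequest(object):
    def __init__(self, environ):
        self.environ = environ
    def __getitem__(self, key):
        return self.environ[key]
    def __setitem__(self, key, value):
        self.environ[key] = value
    def __contains__(self, key):
        return key in self.environ
    def get(self, key, default=None):
        return self.environ.get(key, default)

environ = {'PATH_INFO': '/a/b/c'}
request = CompatRequest(environ)
request['myapp.flag'] = True      # mutates the shared environ in place
```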
Dependency Changes
------------------
- When used under Python < 2.6, BFG now has an installation time
dependency on the ``simplejson`` package.
.. _hybrid_chapter:
Combining Traversal and URL Dispatch
====================================
When you write most :mod:`repoze.bfg` applications, you'll be using
one or the other of two available :term:`context finding` subsystems:
traversal or URL dispatch. However, to solve a limited set of
problems, it's useful to use *both* traversal and URL dispatch
together within the same application. :mod:`repoze.bfg` makes this
possible via *hybrid* applications.
.. warning::
Reasoning about the behavior of a "hybrid" URL dispatch + traversal
application can be challenging. To successfully reason about using
URL dispatch and traversal together, you need to understand URL
pattern matching, root factories, and the :term:`traversal`
algorithm, and the potential interactions between them. Therefore,
we don't recommend creating an application that relies on hybrid
behavior unless you must.
A Review of Non-Hybrid Applications
-----------------------------------
When used according to the tutorials in its documentation,
:mod:`repoze.bfg` is a "dual-mode" framework: the tutorials explain
how to create an application in terms of using either :term:`url
dispatch` *or* :term:`traversal`. This chapter details how you might
combine these two dispatch mechanisms, but we'll review how they work
in isolation before trying to combine them.
URL Dispatch Only
~~~~~~~~~~~~~~~~~
An application that uses :term:`url dispatch` exclusively to map URLs
to code will often have declarations like this within :term:`ZCML`:
.. code-block:: xml
:linenos:
<route
path=":foo/:bar"
name="foobar"
view=".views.foobar"
/>
<route
path=":baz/:buz"
name="bazbuz"
view=".views.bazbuz"
/>
Each :term:`route` typically corresponds to a single view callable,
and when that route is matched during a request, the view callable
named by the ``view`` attribute is invoked.
Typically, an application that uses only URL dispatch won't perform
any configuration in ZCML that includes a ``<view>`` declaration and
won't have any calls to
:meth:`repoze.bfg.configuration.Configurator.add_view` in its startup
code.
Traversal Only
~~~~~~~~~~~~~~
An application that uses only traversal will have view configuration
declarations that look like this:
.. code-block:: xml
:linenos:
<view
name="foobar"
view=".views.foobar"
/>
<view
name="bazbuz"
view=".views.bazbuz"
/>
When the above configuration is applied to an application, the
``.views.foobar`` view callable above will be called when the URL
``/foobar`` is visited. Likewise, the view ``.views.bazbuz`` will be
called when the URL ``/bazbuz`` is visited.
An application that uses :term:`traversal` exclusively to map URLs to
code usually won't have any ZCML ``<route>`` declarations nor will it
make any calls to the
:meth:`repoze.bfg.configuration.Configurator.add_route` method.
Hybrid Applications
-------------------
Either traversal or url dispatch alone can be used to create a
:mod:`repoze.bfg` application. However, it is also possible to
combine the concepts of traversal and url dispatch when building an
application: the result is a hybrid application. In a hybrid
application, traversal is performed *after* a particular route has
matched.
A hybrid application is a lot more like a "pure" traversal-based
application than it is like a "pure" URL-dispatch based application.
But unlike in a "pure" traversal-based application, in a hybrid
application, :term:`traversal` is performed during a request after a
route has already matched. This means that the URL pattern that
represents the ``path`` argument of a route must match the
``PATH_INFO`` of a request, and after the route path has matched, most
of the "normal" rules of traversal with respect to :term:`context
finding` and :term:`view lookup` apply.
There are only four real differences between a purely traversal-based
application and a hybrid application:
- In a purely traversal based application, no routes are defined; in a
hybrid application, at least one route will be defined.
- In a purely traversal based application, the root object used is the
global one, implied by the :term:`root factory` provided at startup
time; in a hybrid application, the :term:`root` object at which
traversal begins may be varied on a per-route basis.
- In a purely traversal-based application, the ``PATH_INFO`` of the
underlying :term:`WSGI` environment is used wholesale as a traversal
path; in a hybrid application, the traversal path is not the entire
``PATH_INFO`` string, but a portion of the URL determined by a
matching pattern in the matched route configuration's path.
- In a purely traversal based application, view configurations which
do not mention a ``route_name`` argument are considered during
:term:`view lookup`; in a hybrid application, when a route is
matched, only view configurations which mention that route's name as
a ``route_name`` are considered during :term:`view lookup`.
More generally, a hybrid application *is* a traversal-based
application except:
- the traversal *root* is chosen based on the route configuration of
the route that matched instead of from the ``root_factory`` supplied
during application startup configuration.
- the traversal *path* is chosen based on the route configuration of
the route that matched rather than from the ``PATH_INFO`` of a
request.
- the set of views that may be chosen during :term:`view lookup` when
a route matches are limited to those which specifically name a
``route_name`` in their configuration that is the same as the
matched route's ``name``.
To create a hybrid mode application, use a :term:`route configuration`
that implies a particular :term:`root factory` and which also includes
a ``path`` argument that contains a special dynamic part: either
``*traverse`` or ``*subpath``.
The Root Object for a Route Match
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A hybrid application implies that traversal is performed during a
request after a route has matched. Traversal, by definition, must
always begin at a root object. Therefore it's important to know
*which* root object will be traversed after a route has matched.
Figuring out which :term:`root` object results from a particular route
match is straightforward. When a route is matched:
- If the route's configuration has a ``factory`` argument which
points to a :term:`root factory` callable, that callable will be
called to generate a :term:`root` object.
- If the route's configuration does not have a ``factory``
argument, the *global* :term:`root factory` will be called to
generate a :term:`root` object. The global root factory is the
callable implied by the ``root_factory`` argument passed to
:class:`repoze.bfg.configuration.Configurator` at application
startup time.
- If a ``root_factory`` argument is not provided to the
:class:`repoze.bfg.configuration.Configurator` at startup time, a
*default* root factory is used. The default root factory is used to
generate a root object.
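The selection rules above condense into a small sketch; the function and the stand-in factories here are illustrative, not bfg internals:

```python
# Pick the root factory exactly as described: the route-level factory
# wins; otherwise the global root_factory passed to the Configurator;
# otherwise a default factory.
def choose_root_factory(route_factory, global_factory, default_factory):
    if route_factory is not None:
        return route_factory
    if global_factory is not None:
        return global_factory
    return default_factory

default_factory = lambda request: {}        # stand-in for bfg's default root
factory = choose_root_factory(None, None, default_factory)
root = factory(None)                        # the default stand-in returns {}
```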
.. note::
Root factories related to a route were explained previously within
:ref:`route_factories`. Both the global root factory and default
root factory were explained previously within
:ref:`the_object_graph`.
.. _using_traverse_in_a_route_path:
Using ``*traverse`` In a Route Path
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A hybrid application most often implies the inclusion of a route
configuration that contains the special token ``*traverse`` at the end
of a route's path:
.. code-block:: xml
:linenos:
<route
path=":foo/:bar/*traverse"
name="home"
/>
A ``*traverse`` token at the end of the path in a route's
configuration implies a "remainder" *capture* value. When it is used,
it will match the remainder of the path segments of the URL. This
remainder becomes the path used to perform traversal.
.. note::
The ``*remainder`` route path pattern syntax is explained in more
detail within :ref:`route_path_pattern_syntax`.
Note that unlike the examples provided within
:ref:`urldispatch_chapter`, the ``<route>`` configuration named
previously does not name a ``view`` attribute. This is because a
hybrid mode application relies on :term:`traversal` to do
:term:`context finding` and :term:`view lookup` instead of invariably
invoking a specific view callable named directly within the matched
route's configuration.
Because the path of the above route ends with ``*traverse``, when this
route configuration is matched during a request, :mod:`repoze.bfg`
will attempt to use :term:`traversal` against the :term:`root` object
implied by the :term:`root factory` implied by the route's
configuration. Once :term:`traversal` has found a :term:`context`,
:term:`view lookup` will be invoked in almost exactly the same way it
would have been invoked in a "pure" traversal-based application.
The *default* :term:`root factory` cannot be traversed: it has no
useful ``__getitem__`` method. So we'll need to associate this route
configuration with a non-default root factory in order to create a
useful hybrid application. To that end, let's imagine that we've
created a root factory that looks like so in a module named
``routes.py``:
.. code-block:: python
:linenos:
class Traversable(object):
def __init__(self, subobjects):
self.subobjects = subobjects
def __getitem__(self, name):
return self.subobjects[name]
root = Traversable(
{'a':Traversable({'b':Traversable({'c':Traversable({})})})}
)
def root_factory(request):
return root
Above, we've defined a (bogus) graph that can be traversed, and a
``root_factory`` function that can be used as part of a particular
route configuration statement:
.. code-block:: xml
:linenos:
<route
path=":foo/:bar/*traverse"
name="home"
factory=".routes.root_factory"
/>
The ``factory`` above points at the function we've defined. It
will return an instance of the ``Traversable`` class as a root object
whenever this route is matched. Because the ``Traversable`` object
we've defined has a ``__getitem__`` method that does something
nominally useful, and because traversal uses ``__getitem__`` to walk
the nodes that make up an object graph, using traversal against the
root object implied by our route statement becomes a reasonable thing
to do.
.. note::
We could have also used our ``root_factory`` callable as the
``root_factory`` argument of the
:class:`repoze.bfg.configuration.Configurator` constructor instead
of associating it with a particular route inside the route's
configuration. Every hybrid route configuration that is matched but
which does *not* name a ``factory`` attribute will use the global
``root_factory`` function to generate a root object.
When the route configuration named ``home`` above is matched during a
request, the matchdict generated will be based on its path:
``:foo/:bar/*traverse``. The "capture value" implied by the
``*traverse`` element in the path pattern will be used to traverse the
graph in order to find a context, starting from the root object
returned from the root factory. In the above example, the
:term:`root` object found will be the instance named ``root`` in
``routes.py``.
If the URL that matched a route with the path ``:foo/:bar/*traverse``
is ``http://example.com/one/two/a/b/c``, the traversal path used
against the root object will be ``a/b/c``. As a result,
:mod:`repoze.bfg` will attempt to traverse through the edges ``a``,
``b``, and ``c``, beginning at the root object.
In our above example, this particular set of traversal steps will mean
that the :term:`context` of the view would be the ``Traversable``
object we've named ``c`` in our bogus graph and the :term:`view name`
resulting from traversal will be the empty string; if you need a
refresher about why this outcome is presumed, see
:ref:`traversal_algorithm`.
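The outcome just described can be reproduced with a small illustrative traverser. This is a sketch of the algorithm's core behavior only, not bfg's actual implementation:

```python
# Resolve a ``*traverse`` remainder against a root via __getitem__;
# the first unresolvable segment becomes the view name.
class Traversable(object):
    def __init__(self, subobjects):
        self.subobjects = subobjects
    def __getitem__(self, name):
        return self.subobjects[name]

root = Traversable(
    {'a': Traversable({'b': Traversable({'c': Traversable({})})})}
)

def traverse(root, path):
    context = root
    for name in [seg for seg in path.split('/') if seg]:
        try:
            context = context[name]
        except KeyError:
            return context, name   # segment not found: it is the view name
    return context, ''             # remainder fully consumed: empty view name

context, view_name = traverse(root, 'a/b/c')   # view_name == ''
```

With the remainder ``a/b/c`` every segment resolves, so the context is the innermost ``Traversable`` and the view name is the empty string; with ``a/b/edit`` the unresolvable ``edit`` segment becomes the view name.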
At this point, a suitable view callable will be found and invoked
using :term:`view lookup` as described in :ref:`view_configuration`,
but with a caveat: in order for view lookup to work, we need to define
a view configuration that will match when :term:`view lookup` is
invoked after a route matches:
.. code-block:: xml
:linenos:
<route
path=":foo/:bar/*traverse"
name="home"
factory=".routes.root_factory"
/>
<view
route_name="home"
view=".views.myview"
/>
Note that the above ``view`` declaration includes a ``route_name``
argument. Views that include a ``route_name`` argument are meant to
associate a particular view declaration with a route, using the
route's name, in order to indicate that the view should *only be
invoked when the route matches*.
View configurations may have a ``route_name`` attribute which refers
to the value of the ``<route>`` declaration's ``name`` attribute. In
the above example, the route name is ``home``, referring to the name
of the route defined above it.
The above ``.views.myview`` view will be invoked when:
- the route named "home" is matched
- the :term:`view name` resulting from traversal is the empty string.
- the :term:`context` is any object.
It is also possible to declare alternate views that may be invoked
when a hybrid route is matched:
.. code-block:: xml
:linenos:
<route
path=":foo/:bar/*traverse"
name="home"
factory=".routes.root_factory"
/>
<view
route_name="home"
view=".views.myview"
/>
<view
route_name="home"
name="another"
view=".views.another_view"
/>
The ``view`` declaration for ``.views.another_view`` above names a
different view and, more importantly, a different :term:`view name`.
The above ``.views.another_view`` view will be invoked when:
- the route named "home" is matched
- the :term:`view name` resulting from traversal is ``another``.
- the :term:`context` is any object.
For instance, if the URL ``http://example.com/one/two/a/another`` is
provided to an application that uses the previously mentioned object
graph, the ``.views.another_view`` view callable will be called instead of
the ``.views.myview`` view callable because the :term:`view name` will
be ``another`` instead of the empty string.
More complicated matching can be composed. All arguments to *route*
configuration statements and *view* configuration statements are
supported in hybrid applications (such as :term:`predicate`
arguments).
Using the ``traverse`` Argument In a Route Definition
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Rather than using the ``*traverse`` remainder marker in a path
pattern, you can use the ``traverse`` argument to the
:meth:`repoze.bfg.configuration.Configurator.add_route` method or the
``traverse`` attribute of the :ref:`route_directive` ZCML directive
(the two are equivalent).
When you use the ``*traverse`` remainder marker, the traversal path is
limited to being the remainder segments of a request URL when a route
matches. However, when you use the ``traverse`` argument or
attribute, you have more control over how to compose a traversal path.
Here's a use of the ``traverse`` pattern in a ZCML ``route``
declaration:
.. code-block:: xml
:linenos:
<route
name="abc"
path="/articles/:article/edit"
traverse="/:article"
/>
The syntax of the ``traverse`` argument is the same as it is for
``path``.
If, as above, the ``path`` provided is ``/articles/:article/edit``, and
the ``traverse`` argument provided is ``/:article``, when a request
comes in that causes the route to match in such a way that the
``article`` match value is ``1`` (when the request URI is
``/articles/1/edit``), the traversal path will be generated as ``/1``.
This means that the root object's ``__getitem__`` will be called with
the name ``1`` during the traversal phase. If the ``1`` object
exists, it will become the :term:`context` of the request.
:ref:`traversal_chapter` has more information about traversal.
If the traversal path contains segment marker names which are not
present in the path argument, a runtime error will occur. The
``traverse`` pattern should not contain segment markers that do not
exist in the ``path``.
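A sketch of how such a traversal path might be composed from a ``traverse`` pattern and the match values (illustrative only; bfg's real composition logic differs in detail):

```python
# Replace each ``:marker`` in the traverse pattern with its match value;
# a marker absent from the matchdict raises KeyError, mirroring the
# runtime error described above for markers missing from the path.
def compose_traverse(pattern, matchdict):
    parts = []
    for seg in pattern.strip('/').split('/'):
        if seg.startswith(':'):
            parts.append(str(matchdict[seg[1:]]))  # KeyError if unmatched
        else:
            parts.append(seg)
    return '/' + '/'.join(parts)

compose_traverse('/:article', {'article': '1'})   # -> '/1'
```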
Note that the ``traverse`` argument is ignored when attached to a
route that has a ``*traverse`` remainder marker in its path.
Traversal will begin at the root object implied by this route (either
the global root, or the object returned by the ``factory`` associated
with this route).
Making Global Views Match
+++++++++++++++++++++++++
By default, view configurations that don't mention a ``route_name``
will not be found by view lookup when a route that mentions a
``*traverse`` in its path matches. You can make these match forcibly
by adding the ``use_global_views`` flag to the route definition. For
example, the ``views.bazbuz`` view below will be found if the route
named ``abc`` below is matched and the ``PATH_INFO`` is
``/abc/bazbuz``, even though the view configuration statement does not
have the ``route_name="abc"`` attribute.
.. code-block:: xml
:linenos:
<route
path="/abc/*traverse"
name="abc"
use_global_views="True"
/>
<view
name="bazbuz"
view=".views.bazbuz"
/>
.. index::
single: route subpath
single: subpath (route)
.. _star_subpath:
Using ``*subpath`` in a Route Path
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are certain extremely rare cases when you'd like to influence
the traversal :term:`subpath` when a route matches without actually
performing traversal. For instance, the
:func:`repoze.bfg.wsgi.wsgiapp2` decorator and the
:class:`repoze.bfg.view.static` helper attempt to compute
``PATH_INFO`` from the request's subpath, so it's useful to be able to
influence this value.
When ``*subpath`` exists in a path pattern, no path is actually
traversed, but the traversal algorithm will return a :term:`subpath`
list implied by the capture value of ``*subpath``. You'll see this
pattern most commonly in route declarations that look like this:
.. code-block:: xml
:linenos:
<route
path="/static/*subpath"
name="static"
view=".views.static_view"
/>
Where ``.views.static_view`` is an instance of
:class:`repoze.bfg.view.static`. This effectively tells the static
helper to traverse everything in the subpath as a filename.
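The capture value can be pictured like this (a hedged sketch; the real static helper does more, including sanity checks on the resulting filename):

```python
# With path "/static/*subpath", a request for /static/css/site.css
# captures the remaining segments, which become a relative filename.
def capture_subpath(path_info, route_prefix='/static/'):
    assert path_info.startswith(route_prefix)
    return tuple(path_info[len(route_prefix):].split('/'))

capture_subpath('/static/css/site.css')   # -> ('css', 'site.css')
```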
Corner Cases
------------
A number of corner case "gotchas" exist when using a hybrid
application. We'll detail them here.
Registering a Default View for a Route That Has a ``view`` Attribute
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is an error to provide *both* a ``view`` argument to a :term:`route
configuration` *and* a :term:`view configuration` which names that
route's ``route_name`` but supplies no ``name`` value (or an empty one).
For example, this pair of route/view ZCML declarations will generate a
"conflict" error at startup time.
.. code-block:: xml
:linenos:
<route
path=":foo/:bar/*traverse"
name="home"
view=".views.home"
/>
<view
route_name="home"
view=".views.another"
/>
This is because the ``view`` attribute of the ``<route>`` statement
above is an *implicit* default view when that route matches.
``<route>`` declarations don't *need* to supply a view attribute. For
example, this ``<route>`` statement:
.. code-block:: xml
:linenos:
<route
path=":foo/:bar/*traverse"
name="home"
view=".views.home"
/>
Can also be spelled like so:
.. code-block:: xml
:linenos:
<route
path=":foo/:bar/*traverse"
name="home"
/>
<view
route_name="home"
view=".views.home"
/>
The two spellings are logically equivalent. In fact, the former is
just a syntactical shortcut for the latter.
Binding Extra Views Against a Route Configuration that Doesn't Have a ``*traverse`` Element In Its Path
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Here's another corner case that just makes no sense.
.. code-block:: xml
:linenos:
<route
path="/abc"
name="abc"
view=".views.abc"
/>
<view
name="bazbuz"
view=".views.bazbuz"
route_name="abc"
/>
The above ``<view>`` declaration is useless, because it will never be
matched when the route it references has matched. Only the view
associated with the route itself (``.views.abc``) will ever be invoked
when the route matches, because the default view is always invoked
when a route matches and when no post-match traversal is performed.
To make the above ``<view>`` declaration useful, the special
``*traverse`` token must end the route's path. For example:
.. code-block:: xml
:linenos:
<route
path="/abc/*traverse"
name="abc"
view=".views.abc"
/>
<view
name="bazbuz"
view=".views.bazbuz"
route_name="abc"
/>
.. _traversal_chapter:
Traversal
=========
:term:`Traversal` is a :term:`context finding` mechanism. It is the
act of finding a :term:`context` and a :term:`view name` by walking
over an *object graph*, starting from a :term:`root` object, using a
:term:`request` object as a source of path information.
In this chapter, we'll provide a high-level overview of traversal,
we'll explain the concept of an *object graph*, and we'll show how
traversal might be used within an application.
.. index::
single: traversal analogy
A Traversal Analogy
-------------------
We use an analogy to provide an introduction to :term:`traversal`.
Imagine an inexperienced UNIX computer user, wishing only to use the
command line to find a file and to invoke the ``cat`` command against
that file. Because he is inexperienced, the only commands he knows
how to use are ``cd``, which changes the current directory and
``cat``, which prints the contents of a file. And because he is
inexperienced, he doesn't understand that ``cat`` can take an absolute
path specification as an argument, so he doesn't know that you can
issue the single command ``cat /an/absolute/path`` to get the
desired result.  Instead, this user believes he must issue the ``cd``
command, starting from the root, for each intermediate path segment,
*even the path segment that represents the file itself*. Once he gets
an error (because you cannot successfully ``cd`` into a file), he
knows he has reached the file he wants, and he will be able to execute
``cat`` against the resulting path segment.
This inexperienced user's attempt to execute ``cat`` against the file
named ``/fiz/buz/myfile`` might be to issue the following set of UNIX
commands:
.. code-block:: text
cd /
cd fiz
cd buz
cd myfile
The user now knows he has found a *file*, because the ``cd`` command
issues an error when he executed ``cd myfile``. Now he knows that he
can run the ``cat`` command:
.. code-block:: text
cat myfile
The contents of ``myfile`` are now printed on the user's behalf.
:mod:`repoze.bfg` is very much like this inexperienced UNIX user as it
uses :term:`traversal` against an object graph. In this analogy, we
can map the ``cat`` program to the :mod:`repoze.bfg` concept of a
:term:`view callable`: it is a program that can be run against some
:term:`context` as the result of :term:`view lookup`. The file being
operated on in this analogy is the :term:`context` object; the context
is the "last node found" in a traversal. The directory structure is
the object graph being traversed. The act of progressively changing
directories to find the file as well as the handling of a ``cd`` error
as a stop condition is analogous to :term:`traversal`.
The analogy we've used is not *exactly* correct, because, while the
naive user already knows which command he wants to invoke before he
starts "traversing" (``cat``), :mod:`repoze.bfg` needs to obtain that
information from the path being traversed itself. In
:term:`traversal`, the "command" meant to be invoked is a :term:`view
callable`. A view callable is derived via :term:`view lookup` from
the combination of the :term:`view name` and the :term:`context`.
Traversal is the act of obtaining these two items.
.. index::
single: traversal overview
A High-Level Overview of Traversal
----------------------------------
:term:`Traversal` is dependent on information in a :term:`request`
object. Every :term:`request` object contains URL path information in
the ``PATH_INFO`` portion of the :term:`WSGI` environment. The
``PATH_INFO`` portion of the WSGI environment is the portion of a
request's URL following the hostname and port number, but before any
query string elements or fragment element. For example the
``PATH_INFO`` portion of the URL
``http://example.com:8080/a/b/c?foo=1`` is ``/a/b/c``.
Traversal treats the ``PATH_INFO`` segment of a URL as a sequence of
path segments. For example, the ``PATH_INFO`` string ``/a/b/c`` is
converted to the sequence ``['a', 'b', 'c']``.
After the path info is converted, a lookup is performed against the
object graph for each path segment. Each lookup uses the
``__getitem__`` method of an object in the graph.
For example, if the path info sequence is ``['a', 'b', 'c']``:
- :term:`Traversal` pops the first element (``a``) from the path
segment sequence and attempts to call the root object's
``__getitem__`` method using that value (``a``) as an argument;
we'll presume it succeeds.
- When the root object's ``__getitem__`` succeeds it will return an
object, which we'll call "A". The :term:`context` temporarily
becomes the "A" object.
- The next segment (``b``) is popped from the path sequence, and the
"A" object's ``__getitem__`` is called with that value (``b``) as an
argument; we'll presume it succeeds.
- When the "A" object's ``__getitem__`` succeeds it will return an
object, which we'll call "B". The :term:`context` temporarily
becomes the "B" object.
This process continues until the path segment sequence is exhausted or
a lookup for a path element fails. In either case, a :term:`context`
is found.
Traversal "stops" when it either reaches a leaf level model instance
in your object graph or when the path segments implied by the URL "run
out". The object that traversal "stops on" becomes the
:term:`context`. If at any point during traversal any node in the
graph doesn't have a ``__getitem__`` method, or if the ``__getitem__``
method of a node raises a :exc:`KeyError`, traversal ends immediately,
and that node becomes the :term:`context`.
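The lookup loop just described can be sketched in plain Python.  This
is an illustrative sketch that uses nested dicts as stand-in graph
nodes; it is not the actual :mod:`repoze.bfg` traverser:

```python
def find_context(root, segments):
    """Walk the graph one segment at a time; stop at the first leaf
    node or missing name, and return the last node found."""
    context = root
    for segment in segments:
        getitem = getattr(context, '__getitem__', None)
        if getitem is None:
            break          # node has no __getitem__: traversal ends
        try:
            context = getitem(segment)
        except KeyError:
            break          # no child by that name: traversal ends
    return context

# a toy graph of nested dicts standing in for model instances
root = {'a': {'b': {}}}
context = find_context(root, ['a', 'b', 'c'])
assert context is root['a']['b']   # lookup of 'c' raised KeyError
```

Note that whichever stop condition fires, the function still returns a
node: a context is found in every case.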
The results of a :term:`traversal` also include a :term:`view name`.
The :term:`view name` is the *first* URL path segment in the set of
``PATH_INFO`` segments "left over" in the path segment list popped by
the traversal process *after* traversal finds a context object.
The combination of the :term:`context` object and the :term:`view
name` found via traversal is used later in the same request by a
separate :mod:`repoze.bfg` subsystem -- the :term:`view lookup`
subsystem -- to find a :term:`view callable` later within the same
request. How :mod:`repoze.bfg` performs view lookup is explained
within the :ref:`views_chapter` chapter.
.. index::
single: object graph
single: traversal graph
single: model graph
.. _the_object_graph:
The Object Graph
----------------
When your application uses :term:`traversal` to resolve URLs to code,
your application must supply an *object graph* to :mod:`repoze.bfg`.
This graph is represented by a :term:`root` object.
In order to supply a root object for an application, at system startup
time, the :mod:`repoze.bfg` :term:`Router` is configured with a
callback known as a :term:`root factory`. The root factory is
supplied by the application developer as the ``root_factory`` argument
to the application's :term:`Configurator`.
Here's an example of a simple root factory:
.. code-block:: python
:linenos:
class Root(dict):
def __init__(self, request):
pass
Here's an example of using this root factory within startup
configuration, by passing it to an instance of a :term:`Configurator`
named ``config``:
.. code-block:: python
:linenos:
config = Configurator(root_factory=Root)
Using the ``root_factory`` argument to a
:class:`repoze.bfg.configuration.Configurator` constructor tells your
:mod:`repoze.bfg` application to call this root factory to generate a
root object whenever a request enters the application. This root
factory is also known as the global root factory. A root factory can
alternately be passed to the ``Configurator`` as a :term:`dotted
Python name` which refers to a root factory object defined in a
different module.
A root factory is passed a :term:`request` object and it is expected
to return an object which represents the root of the object graph.
All :term:`traversal` will begin at this root object. Usually a root
factory for a traversal-based application will be more complicated
than the above ``Root`` object; in particular it may be associated
with a database connection or another persistence mechanism. A root
object is often an instance of a class which has a ``__getitem__``
method.
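For instance, a slightly more realistic root factory might look like
the following sketch, in which a plain dict stands in for a real
database connection (the class and attribute names here are
illustrative):

```python
class Document(object):
    def __init__(self, text):
        self.text = text

class Root(object):
    """Root of the object graph: a container with a __getitem__."""
    def __init__(self, request):
        # a real application might open a database connection here
        self._children = {'about': Document('About us')}

    def __getitem__(self, name):
        # raises KeyError for unknown names, which ends traversal
        return self._children[name]

# at startup: config = Configurator(root_factory=Root)
```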
.. warning:: In :mod:`repoze.bfg` 1.0 and prior versions, the root
factory was passed a WSGI *environment* object (a dictionary) while
in :mod:`repoze.bfg` 1.1+ it is passed a :term:`request` object.
For backwards compatibility purposes, the request object passed to
the root factory has a dictionary-like interface that emulates the
WSGI environment, so code expecting the argument to be a dictionary
will continue to work.
If no :term:`root factory` is passed to the :mod:`repoze.bfg`
:term:`Configurator` constructor, or the ``root_factory`` is specified
as the value ``None``, a *default* root factory is used. The default
root factory always returns an object that has no child nodes.
.. sidebar:: Emulating the Default Root Factory
For purposes of understanding the default root factory better,
we'll note that you can emulate the default root factory by using
this code as an explicit root factory in your application setup:
.. code-block:: python
:linenos:
class Root(object):
def __init__(self, request):
pass
config = Configurator(root_factory=Root)
The default root factory is just a really stupid object that has no
behavior or state. Using :term:`traversal` against an application
that uses the object graph supplied by the default root object is
not very interesting, because the default root object has no
children. Its availability is more useful when you're developing
an application using :term:`URL dispatch`.
Items contained within the object graph are sometimes analogous to the
concept of :term:`model` objects used by many other frameworks (and
:mod:`repoze.bfg` APIs often refers to them as "models", as well).
They are typically instances of Python classes.
The object graph consists of *container* nodes and *leaf* nodes.
There is only one difference between a *container* node and a *leaf*
node: *container* nodes possess a ``__getitem__`` method while *leaf*
nodes do not. The ``__getitem__`` method was chosen as the signifying
difference between the two types of nodes because the presence of this
method is how Python itself typically determines whether an object is
"containerish" or not.
Each container node is presumed to be willing to return a child node
or raise a ``KeyError`` based on a name passed to its ``__getitem__``.
Leaf-level instances must not have a ``__getitem__``. If
instances that you'd like to be leaves already happen to have a
``__getitem__`` through some historical inequity, you should subclass
these node types and cause their ``__getitem__`` methods to simply
raise a ``KeyError``. Or just disuse them and think up another
strategy.
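For example, given a hypothetical inherited class that supplies an
unwanted ``__getitem__``, a subclass can force leaf behavior:

```python
class LegacyNode(object):
    """Hypothetical historical class that defines __getitem__
    even though its instances should act as leaves."""
    def __getitem__(self, name):
        return 'accidental child'

class LeafNode(LegacyNode):
    """Subclass whose __getitem__ always raises KeyError, so
    traversal treats its instances as leaves."""
    def __getitem__(self, name):
        raise KeyError(name)
```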
Usually, the traversal root is a *container* node, and as such it
contains other nodes. However, it doesn't *need* to be a container.
Your object graph can be as shallow or as deep as you require.
In general, the object graph is traversed beginning at its root object
using a sequence of path elements described by the ``PATH_INFO`` of
the current request; if there are path segments, the root object's
``__getitem__`` is called with the next path segment, and it is
expected to return another graph object. The resulting object's
``__getitem__`` is called with the very next path segment, and it is
expected to return another graph object. This happens *ad infinitum*
until all path segments are exhausted.
.. index::
single: traversal algorithm
single: view lookup
.. _traversal_algorithm:
The Traversal Algorithm
-----------------------
This section will attempt to explain the :mod:`repoze.bfg` traversal
algorithm. We'll provide a description of the algorithm, a diagram of
how the algorithm works, and some example traversal scenarios that
might help you understand how the algorithm operates against a
specific object graph.
We'll also talk a bit about :term:`view lookup`. The
:ref:`views_chapter` chapter discusses :term:`view lookup` in detail,
and it is the canonical source for information about views.
Technically, :term:`view lookup` is a :mod:`repoze.bfg` subsystem that
is separated from traversal entirely. However, we'll describe the
fundamental behavior of view lookup in the examples in the next few
sections to give you an idea of how traversal and view lookup
cooperate, because they are almost always used together.
.. index::
single: view name
single: context
single: subpath
single: root factory
single: default view
A Description of The Traversal Algorithm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a user requests a page from your :term:`traversal` -powered
application, the system uses this algorithm to find a :term:`context`
and a :term:`view name`.
#. The request for the page is presented to the :mod:`repoze.bfg`
:term:`router` in terms of a standard :term:`WSGI` request, which
is represented by a WSGI environment and a WSGI ``start_response``
callable.
#. The router creates a :term:`request` object based on the WSGI
environment.
#. The :term:`root factory` is called with the :term:`request`. It
returns a :term:`root` object.
#. The router uses the WSGI environment's ``PATH_INFO`` information
to determine the path segments to traverse. The leading slash is
stripped off ``PATH_INFO``, and the remaining path segments are
split on the slash character to form a traversal sequence.
The traversal algorithm by default attempts to first URL-unquote
and then Unicode-decode each path segment derived from
``PATH_INFO`` from its natural byte string (``str`` type)
representation. URL unquoting is performed using the Python
standard library ``urllib.unquote`` function. Conversion from a
URL-decoded string into Unicode is attempted using the UTF-8
encoding. If any URL-unquoted path segment in ``PATH_INFO`` is
not decodeable using the UTF-8 decoding, a :exc:`TypeError` is
raised.  A segment will be fully URL-unquoted and UTF-8-decoded
before it is passed to the ``__getitem__`` of any model object
during traversal.
Thus, a request with a ``PATH_INFO`` variable of ``/a/b/c`` maps
to the traversal sequence ``[u'a', u'b', u'c']``.
#. :term:`Traversal` begins at the root object returned by the root
factory. For the traversal sequence ``[u'a', u'b', u'c']``, the
root object's ``__getitem__`` is called with the name ``a``.
Traversal continues through the sequence. In our example, if the
root object's ``__getitem__`` called with the name ``a`` returns
an object (aka "object ``a``"), that object's ``__getitem__`` is
called with the name ``b``.  If "object ``a``" returns an object when
asked for ``b``, "object ``b``"'s ``__getitem__`` is then asked
for the name ``c``, and may return "object ``c``".
#. Traversal ends when a) the entire path is exhausted or b) when any
graph element raises a :exc:`KeyError` from its ``__getitem__`` or
c) when the node reached by any non-final path element does not have
a ``__getitem__`` method (looking the method up raises an
:exc:`AttributeError`) or d)
when any path element is prefixed with the set of characters
``@@`` (indicating that the characters following the ``@@`` token
should be treated as a :term:`view name`).
#. When traversal ends for any of the reasons in the previous step,
the last object found during traversal is deemed to be the
:term:`context`. If the path has been exhausted when traversal
ends, the :term:`view name` is deemed to be the empty string
(``''``). However, if the path was *not* exhausted before
traversal terminated, the first remaining path segment is treated
as the view name.
#. Any subsequent path elements after the :term:`view name` is found
are deemed the :term:`subpath`. The subpath is always a sequence
of path segments that come from ``PATH_INFO`` that are "left over"
after traversal has completed.
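The unquote-then-decode step described above can be sketched as
follows.  The description is given in Python 2 terms
(``urllib.unquote``, byte-string ``str``); this sketch uses the
modern-Python equivalent ``urllib.parse.unquote_to_bytes`` to show the
same two-stage conversion:

```python
from urllib.parse import unquote_to_bytes

def traversal_path(path_info):
    """Split PATH_INFO into segments, URL-unquote each segment,
    then decode it from UTF-8 (a sketch of the behavior described
    above, not the real repoze.bfg implementation)."""
    segments = [s for s in path_info.split('/') if s]
    result = []
    for segment in segments:
        raw = unquote_to_bytes(segment)      # URL-unquote first
        result.append(raw.decode('utf-8'))   # then UTF-8-decode
    return result

assert traversal_path('/a/b/c') == ['a', 'b', 'c']
assert traversal_path('/caf%C3%A9') == ['caf\u00e9']
```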
Once :term:`context` and :term:`view name` and associated attributes
such as the :term:`subpath` are located, the job of :term:`traversal`
is finished. It passes back the information it obtained to its
caller, the :mod:`repoze.bfg` :term:`Router`, which subsequently
invokes :term:`view lookup` with the context and view name
information.
The traversal algorithm exposes two special cases:
- You will often end up with a :term:`view name` that is the empty
string as the result of a particular traversal. This indicates that
the view lookup machinery should look up the :term:`default view`.
The default view is a view that is registered with no name or a view
which is registered with a name that equals the empty string.
- If any path segment element begins with the special characters
``@@`` (think of them as goggles), the value of that segment minus
the goggle characters is considered the :term:`view name`
immediately and traversal stops there. This allows you to address
views that may have the same names as model instance names in the
graph unambiguously.
Finally, traversal is responsible for locating a :term:`virtual root`.
A virtual root is used during "virtual hosting"; see the
:ref:`vhosting_chapter` chapter for information. We won't speak more
about it in this chapter.
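Putting the stop conditions, view name, and subpath together, the
whole algorithm can be sketched in a few lines.  This is an
illustrative simplification; the real traverser lives in
:mod:`repoze.bfg.traversal`:

```python
def traverse(root, segments):
    """Return (context, view_name, subpath) for a segment sequence,
    following the stop conditions described above."""
    context = root
    for i, segment in enumerate(segments):
        if segment.startswith('@@'):
            # explicit view name marker: stop immediately
            return context, segment[2:], list(segments[i + 1:])
        getitem = getattr(context, '__getitem__', None)
        if getitem is None:
            # leaf node reached: stop
            return context, segment, list(segments[i + 1:])
        try:
            context = getitem(segment)
        except KeyError:
            # no child by that name: stop
            return context, segment, list(segments[i + 1:])
    # path exhausted: the empty view name selects the default view
    return context, '', []

# a root containing "foo", which contains "bar"
root = {'foo': {'bar': {}}}
ctx, view_name, subpath = traverse(
    root, ['foo', 'bar', 'baz', 'biz', 'buz.txt'])
assert ctx is root['foo']['bar']
assert view_name == 'baz'
assert subpath == ['biz', 'buz.txt']
```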
.. image:: modelgraphtraverser.png
.. index::
single: traversal examples
Traversal Algorithm Examples
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No one can be expected to understand the traversal algorithm by
analogy and description alone, so let's examine some traversal
scenarios that use concrete URLs and object graph compositions.
Let's pretend the user asks for
``http://example.com/foo/bar/baz/biz/buz.txt``. The request's
``PATH_INFO`` in that case is ``/foo/bar/baz/biz/buz.txt``. Let's
further pretend that when this request comes in that we're traversing
the following object graph::
/--
|
|-- foo
|
----bar
Here's what happens:
- :term:`traversal` traverses the root, and attempts to find "foo",
  which it finds.
- :term:`traversal` traverses "foo", and attempts to find "bar", which
  it finds.
- :term:`traversal` traverses "bar", and attempts to find "baz", which
  it does not find ("bar" raises a :exc:`KeyError` when asked for
  "baz").
The fact that it does not find "baz" at this point does not signify an
error condition. It signifies that:
- the :term:`context` is "bar" (the context is the last item found
during traversal).
- the :term:`view name` is ``baz``
- the :term:`subpath` is ``('biz', 'buz.txt')``
At this point, traversal has ended, and :term:`view lookup` begins.
Because it's the "context", the view lookup machinery examines "bar"
to find out what "type" it is. Let's say it finds that the context is
a ``Bar`` type (because "bar" happens to be an instance of the class
``Bar``). Using the :term:`view name` (``baz``) and the type, view
lookup asks the :term:`application registry` this question:
- Please find me a :term:`view callable` registered using a
:term:`view configuration` with the name "baz" that can be used for
the class ``Bar``.
Let's say that view lookup finds no matching view type. In this
circumstance, the :mod:`repoze.bfg` :term:`router` returns the result
of the :term:`not found view` and the request ends.
However, for this graph::
/--
|
|-- foo
|
----bar
|
----baz
|
biz
The user again asks for ``http://example.com/foo/bar/baz/biz/buz.txt``:

- :term:`traversal` traverses the root, and attempts to find "foo",
  which it finds.
- :term:`traversal` traverses "foo", and attempts to find "bar", which
  it finds.
- :term:`traversal` traverses "bar", and attempts to find "baz", which
  it finds.
- :term:`traversal` traverses "baz", and attempts to find "biz", which
  it finds.
- :term:`traversal` traverses "biz", and attempts to find "buz.txt",
  which it does not find.
The fact that it does not find "buz.txt" at this point does not
signify an error condition. It signifies that:
- the :term:`context` is "biz" (the context is the last item found
during traversal).
- the :term:`view name` is "buz.txt"
- the :term:`subpath` is an empty sequence ( ``()`` ).
At this point, traversal has ended, and :term:`view lookup` begins.
Because it's the "context", the view lookup machinery examines "biz"
to find out what "type" it is. Let's say it finds that the context is
a ``Biz`` type (because "biz" is an instance of the Python class
``Biz``). Using the :term:`view name` (``buz.txt``) and the type,
view lookup asks the :term:`application registry` this question:
- Please find me a :term:`view callable` registered with a :term:`view
configuration` with the name ``buz.txt`` that can be used for class
``Biz``.
Let's say that question is answered by the application registry; in
such a situation, the application registry returns a :term:`view
callable`. The view callable is then called with the current
:term:`WebOb` :term:`request` as the sole argument: ``request``; it is
expected to return a response.
.. sidebar:: The Example View Callables Accept Only a Request; How Do I Access the Context?
Most of the examples in this book assume that a view callable is
typically passed only a :term:`request` object. Sometimes your
view callables need access to the :term:`context`, especially when
you use :term:`traversal`. You might use a supported alternate
view callable argument list in your view callables such as the
``(context, request)`` calling convention described in
:ref:`request_and_context_view_definitions`. But you don't need to
if you don't want to. In view callables that accept only a
request, the :term:`context` found by traversal is available as the
``context`` attribute of the request object,
e.g. ``request.context``. The :term:`view name` is available as
the ``view_name`` attribute of the request object,
e.g. ``request.view_name``. Other :mod:`repoze.bfg` -specific
request attributes are also available as described in
:ref:`special_request_attributes`.
References
----------
A tutorial showing how :term:`traversal` can be used within a
:mod:`repoze.bfg` application exists in :ref:`bfg_wiki_tutorial`.
See the :ref:`views_chapter` chapter for detailed information about
:term:`view lookup`.
The :mod:`repoze.bfg.traversal` module contains API functions that
deal with traversal, such as traversal invocation from within
application code.
The :func:`repoze.bfg.url.model_url` function generates a URL when
given an object retrieved from an object graph.
Models
======
A :term:`model` class is typically a simple Python class defined in a
module. References to these classes and instances of such classes are
omnipresent in :mod:`repoze.bfg`:
- Model instances make up the graph that :mod:`repoze.bfg` is
willing to walk over when :term:`traversal` is used.
- The ``context`` and ``containment`` arguments to
:meth:`repoze.bfg.configuration.Configurator.add_view` often
reference a model class.
- A :term:`root factory` returns a model instance.
- A model instance is generated as a result of :term:`url dispatch`
(see the ``factory`` argument to
:meth:`repoze.bfg.configuration.Configurator.add_route`).
- A model instance is exposed to :term:`view` code as the
:term:`context` of a view.
Model objects typically store data and offer methods related to
mutating that data.
.. note::
A terminology overlap confuses people who write applications that
always use ORM packages such as SQLAlchemy, which has a very
different notion of the definition of a "model". When using the API
of common ORM packages, its conception of "model" is almost
certainly not the same conception of "model" used by
:mod:`repoze.bfg`. In particular, it can be unnatural to think of
:mod:`repoze.bfg` model objects as "models" if you develop your
application using :term:`traversal` and a relational database. When
you develop such applications, the object graph *might* be composed
completely of "model" objects (as defined by the ORM) but it also
might not be. The things that :mod:`repoze.bfg` refers to as
"models" in such an application may instead just be stand-ins that
perform a query and generate some wrapper *for* an ORM "model" or
set of ORM models. This naming overlap is slightly unfortunate.
However, many :mod:`repoze.bfg` applications (especially ones which
use :term:`ZODB`) do indeed traverse a graph full of literal model
nodes. Each node in the graph is a separate persistent object that
is stored within a database. This was the use case considered when
coming up with the "model" terminology. However, if we had it to do
all over again, we'd probably call these objects something
different to avoid confusion.
.. index::
single: model constructor
Defining a Model Constructor
----------------------------
An example of a model constructor, ``BlogEntry`` is presented below.
It is implemented as a class which, when instantiated, becomes a model
instance.
.. code-block:: python
:linenos:
import datetime
class BlogEntry(object):
def __init__(self, title, body, author):
self.title = title
self.body = body
self.author = author
self.created = datetime.datetime.now()
A model constructor may be essentially any Python object which is
callable, and which returns a model instance. In the above example,
the ``BlogEntry`` class can be "called", returning a model instance.
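Because any callable returning a model instance qualifies, a plain
function works too.  In the sketch below, ``blog_entry_from_form`` is
a hypothetical helper and ``form`` an illustrative dict:

```python
import datetime

class BlogEntry(object):
    def __init__(self, title, body, author):
        self.title = title
        self.body = body
        self.author = author
        self.created = datetime.datetime.now()

def blog_entry_from_form(form):
    """A function acting as a model constructor: it is callable,
    and it returns a model instance."""
    return BlogEntry(form['title'], form['body'], form['author'])

entry = blog_entry_from_form(
    {'title': 'Hi', 'body': 'First post', 'author': 'alice'})
assert entry.title == 'Hi'
```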
.. index::
single: model interfaces
.. _models_which_implement_interfaces:
Model Instances Which Implement Interfaces
------------------------------------------
Model instances can optionally be made to implement an
:term:`interface`. An interface is used to tag a model object with a
"type" that can later be referred to within :term:`view
configuration`.
Specifying an interface instead of a class as the ``context`` or
``containment`` arguments within :term:`view configuration` statements
effectively makes it possible to use a single view callable for more
than one class of object. If your application is simple enough that
you see no reason to want to do this, you can skip reading this
section of the chapter.
For example, here's some code which describes a blog entry which also
declares that the blog entry implements an :term:`interface`.
.. code-block:: python
:linenos:
import datetime
from zope.interface import implements
from zope.interface import Interface
class IBlogEntry(Interface):
pass
class BlogEntry(object):
implements(IBlogEntry)
def __init__(self, title, body, author):
self.title = title
self.body = body
self.author = author
self.created = datetime.datetime.now()
This model consists of two things: the class which defines the model
constructor (above as the class ``BlogEntry``), and an
:term:`interface` attached to the class (via an ``implements``
statement at class scope using the ``IBlogEntry`` interface as its
sole argument).
The interface object used must be an instance of a class that inherits
from :class:`zope.interface.Interface`.
A model class may *implement* zero or more interfaces. You specify
that a model implements an interface by using the
:func:`zope.interface.implements` function at class scope. The above
``BlogEntry`` model implements the ``IBlogEntry`` interface.
You can also specify that a *particular* model instance provides an
interface (as opposed to its class). To do so, use the
:func:`zope.interface.directlyProvides` function:
.. code-block:: python
:linenos:
import datetime
from zope.interface import directlyProvides
from zope.interface import Interface
class IBlogEntry(Interface):
pass
class BlogEntry(object):
def __init__(self, title, body, author):
self.title = title
self.body = body
self.author = author
self.created = datetime.datetime.now()
entry = BlogEntry('title', 'body', 'author')
directlyProvides(entry, IBlogEntry)
:func:`zope.interface.directlyProvides` will replace any existing
interface that was previously provided by an instance. If a model
object already has instance-level interface declarations that you
don't want to replace, use the :func:`zope.interface.alsoProvides`
function:
.. code-block:: python
:linenos:
import datetime
from zope.interface import alsoProvides
from zope.interface import directlyProvides
from zope.interface import Interface
class IBlogEntry1(Interface):
pass
class IBlogEntry2(Interface):
pass
class BlogEntry(object):
def __init__(self, title, body, author):
self.title = title
self.body = body
self.author = author
self.created = datetime.datetime.now()
entry = BlogEntry('title', 'body', 'author')
directlyProvides(entry, IBlogEntry1)
alsoProvides(entry, IBlogEntry2)
:func:`zope.interface.alsoProvides` will augment the set of interfaces
directly provided by an instance instead of overwriting them like
:func:`zope.interface.directlyProvides` does.
For more information about how model interfaces can be used by view
configuration, see :ref:`using_model_interfaces`.
.. index::
single: model graph
single: traversal graph
single: object graph
single: container nodes
single: leaf nodes
Defining a Graph of Model Instances for Traversal
-------------------------------------------------
When :term:`traversal` is used (as opposed to a purely :term:`url
dispatch` based application), :mod:`repoze.bfg` expects to be able to
traverse a graph composed of model instances. Traversal begins at a
root model, and descends into the graph recursively via each found
model's ``__getitem__`` method. :mod:`repoze.bfg` imposes the
following policy on model instance nodes in the graph:
- Nodes which contain other nodes (aka "container" nodes) must supply
a ``__getitem__`` method which is willing to resolve a unicode name
to a subobject. If a subobject by that name does not exist in the
container, ``__getitem__`` must raise a :exc:`KeyError`. If a
subobject by that name *does* exist, the container should return the
subobject (another model instance).
- Nodes which do not contain other nodes (aka "leaf" nodes) must not
implement a ``__getitem__``, or if they do, their ``__getitem__``
method must raise a :exc:`KeyError`.
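A minimal pair of classes obeying this policy might look like the
following sketch (``Folder`` and ``Document`` are illustrative names):

```python
class Document(object):
    """Leaf node: stores data and supplies no __getitem__."""
    def __init__(self, body):
        self.body = body

class Folder(object):
    """Container node: resolves names to child nodes, raising
    KeyError for names it does not contain."""
    def __init__(self, **children):
        self._children = children

    def __getitem__(self, name):
        return self._children[name]   # KeyError if absent

root = Folder(docs=Folder(readme=Document('hello')))
assert root['docs']['readme'].body == 'hello'
assert not hasattr(Document('x'), '__getitem__')
```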
See :ref:`traversal_chapter` for more information about how traversal
works against model instances.
.. index::
pair: location-aware; model
.. _location_aware:
Location-Aware Model Instances
------------------------------
.. sidebar:: Using :mod:`repoze.bfg.traversalwrapper`
If you'd rather not manage the ``__name__`` and ``__parent__``
attributes of your models "by hand", an add on package named
:mod:`repoze.bfg.traversalwrapper` can help.
In order to use this helper feature, you must first install the
:mod:`repoze.bfg.traversalwrapper` package (available via `SVN
<http://svn.repoze.org/repoze.bfg.traversalwrapper>`_), then
register its ``ModelGraphTraverser`` as the traversal policy, rather
than the default :mod:`repoze.bfg` traverser. The package contains
instructions.
Once :mod:`repoze.bfg` is configured with this feature, you will no
longer need to manage the ``__parent__`` and ``__name__`` attributes
on graph objects "by hand". Instead, as necessary, during traversal
:mod:`repoze.bfg` will wrap each object (even the root object) in a
``LocationProxy`` which will dynamically assign a ``__name__`` and a
``__parent__`` to the traversed object (based on the last traversed
object and the name supplied to ``__getitem__``). The root object
will have a ``__name__`` attribute of ``None`` and a ``__parent__``
attribute of ``None``.
Applications which use :term:`traversal` to locate the :term:`context`
of a view must ensure that the model instances that make up the model
graph are "location aware".
In order for :mod:`repoze.bfg` location, security, URL-generation, and
traversal functions (such as the functions exposed in
:ref:`location_module`, :ref:`traversal_module`, and :ref:`url_module`
as well as certain functions in :ref:`security_module` ) to work
properly against the instances in an object graph, all nodes in the
graph must be :term:`location` -aware. This means they must have two
attributes: ``__parent__`` and ``__name__``.
The ``__parent__`` attribute should be a reference to the node's
parent model instance in the graph. The ``__name__`` attribute should
be the name by which the node's parent refers to the node via
``__getitem__``.
The ``__parent__`` of the root object should be ``None`` and its
``__name__`` should be the empty string. For instance:
.. code-block:: python
class MyRootObject(object):
__name__ = ''
__parent__ = None
A node returned from the root item's ``__getitem__`` method should
have a ``__parent__`` attribute that is a reference to the root
object, and its ``__name__`` attribute should match the name by which
it is reachable via the root object's ``__getitem__``. *That*
object's ``__getitem__`` should return objects that have a
``__parent__`` attribute that points at that object, and
``__getitem__``-returned objects should have a ``__name__`` attribute
that matches the name by which they are retrieved via ``__getitem__``,
and so on.
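Putting these rules together, a minimal location-aware graph might be wired up as in the following sketch (``Root`` and ``Folder`` are illustrative names, not part of any :mod:`repoze.bfg` API):

```python
class Folder(object):
    """A child node; illustrative only."""
    pass

class Root(object):
    """Root of the model graph; an illustrative sketch."""
    __name__ = ''
    __parent__ = None

    def __getitem__(self, name):
        if name != 'folder':
            raise KeyError(name)  # unknown keys must raise KeyError
        child = Folder()
        child.__name__ = name    # the key used to reach the child
        child.__parent__ = self  # reference back to the container
        return child

root = Root()
folder = root['folder']
# folder.__parent__ is root and folder.__name__ == 'folder'
```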
.. warning:: If your root model object has a ``__name__`` attribute
that is not ``None`` or the empty string, URLs returned by the
:func:`repoze.bfg.url.model_url` function and paths generated by
the :func:`repoze.bfg.traversal.model_path` and
:func:`repoze.bfg.traversal.model_path_tuple` APIs will be
generated improperly. The value of ``__name__`` will be prepended
to every path and URL generated (as opposed to a single leading
slash or empty tuple element).
.. index::
single: model API functions
single: url generation (traversal)
:mod:`repoze.bfg` API Functions That Act Against Models
-------------------------------------------------------
A model instance is used as the :term:`context` argument provided to a
view. See :ref:`traversal_chapter` and :ref:`urldispatch_chapter` for
more information about how a model instance becomes the context.
The APIs provided by :ref:`traversal_module` are used against model
instances. These functions can be used to find the "path" of a model,
the root model in an object graph, or generate a URL to a model.
The APIs provided by :ref:`location_module` are used against model
instances. These can be used to walk down an object graph, or
conveniently locate one object "inside" another.
Some APIs in :ref:`security_module` accept a model object as a
parameter. For example, the
:func:`repoze.bfg.security.has_permission` API accepts a "context" (a
model object) as one of its arguments; the ACL is obtained from this
model or one of its ancestors. Other APIs in the
:mod:`repoze.bfg.security` module also accept :term:`context` as an
argument, and a context is always a model.
.. index::
single: context finding
.. _contextfinding_chapter:
Context Finding and View Lookup
-------------------------------
In order for a web application to perform any useful action, the web
framework must provide a mechanism to find and invoke code written by
the application developer based on parameters present in the
:term:`request`.
:mod:`repoze.bfg` uses two separate but cooperating subsystems to find
and invoke code written by the application developer: :term:`context
finding` and :term:`view lookup`.
- A :mod:`repoze.bfg` :term:`context finding` subsystem is given a
:term:`request`; it is responsible for finding a :term:`context`
object and a :term:`view name` based on information present in the
request.
- Using the context and view name provided by :term:`context finding`,
the :mod:`repoze.bfg` :term:`view lookup` subsystem is provided with
a :term:`request`, a :term:`context` and a :term:`view name`. It is
then responsible for finding and invoking a :term:`view callable`.
A view callable is a specific bit of code written and registered by
the application developer which receives the :term:`request` and
which returns a :term:`response`.
These two subsystems are used by :mod:`repoze.bfg` serially:
first, a :term:`context finding` subsystem does its job. Then the
result of context finding is passed to the :term:`view lookup`
subsystem. The view lookup system finds a :term:`view callable`
written by an application developer, and invokes it. A view callable
returns a :term:`response`. The response is returned to the
requesting user.
.. sidebar:: What Good is A Context Finding Subsystem?
The :term:`URL dispatch` mode of :mod:`repoze.bfg` as well as many
other web frameworks such as :term:`Pylons` or :term:`Django`
actually collapse the two steps of context finding and view lookup
into a single step. In these systems, a URL can map *directly* to
a view callable. This makes them simpler to understand than
systems which use distinct subsystems to locate a context and find
a view. However, explicitly finding a context provides extra
flexibility. For example, it makes it possible to protect your
application with declarative context-sensitive instance-level
:term:`authorization`, which is not well-supported in frameworks
that do not provide a notion of a context.
There are two separate :term:`context finding` subsystems in
:mod:`repoze.bfg`: :term:`traversal` and :term:`URL dispatch`. The
subsystems are documented within this chapter. They can be used
separately or they can be combined. Three chapters which follow
describe :term:`context finding`: :ref:`traversal_chapter`,
:ref:`urldispatch_chapter` and :ref:`hybrid_chapter`.
There is only one :term:`view lookup` subsystem present in
:mod:`repoze.bfg`. Where appropriate, within this chapter, we
describe how view lookup interacts with context finding. One chapter
which follows describes :term:`view lookup`: :ref:`views_chapter`.
Should I Use Traversal or URL Dispatch for Context Finding?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:term:`URL dispatch` is very straightforward. When you limit your
application to using URL dispatch, you know every URL that your
application might generate or respond to, all the URL matching
elements are listed in a single place, and you needn't think about
:term:`context finding` or :term:`view lookup` at all.
URL dispatch can easily handle URLs such as
``http://example.com/members/Chris``, where it's assumed that each
item "below" ``members`` in the URL represents a single member in some
system. You just match everything "below" ``members`` to a particular
:term:`view callable`, e.g. ``/members/:memberid``.
However, URL dispatch is not very convenient if you'd like your URLs
to represent an arbitrary hierarchy. For example, consider the
difference between the following pair of URLs, where ``document`` in
the first URL represents a PDF document and ``/stuff/page`` in the
second represents an OpenOffice document in a "stuff" folder:
.. code-block:: text
http://example.com/members/Chris/document
http://example.com/members/Chris/stuff/page
It takes more pattern matching assertions to be able to make
hierarchies work in URL-dispatch based systems, and some assertions
just aren't possible. Essentially, URL-dispatch based systems just
don't deal very well with URLs that represent arbitrary-depth
hierarchies.
But :term:`traversal` *does* work well for URLs that represent
arbitrary-depth hierarchies. Since the path segments that compose a
URL are addressed separately, it becomes very easy to form URLs that
represent arbitrary depth hierarchies in a system that uses traversal.
When you're willing to treat your application models as a graph that
can be traversed, it also becomes easy to provide "instance-level
security": you just attach a security declaration to each instance in
the graph. This is not nearly as easy to do when using URL dispatch.
In essence, the choice to use traversal vs. URL dispatch is largely
religious. Traversal probably just doesn't make any sense
when you possess completely "square" data stored in a relational
database because it requires the construction and maintenance of a
graph and requires that the developer think about mapping URLs to code
in terms of traversing that graph. However, when you have a
hierarchical data store, using traversal can provide significant
advantages over using URL-based dispatch.
Since :mod:`repoze.bfg` provides support for both approaches, you can
use either exclusively or combine them as you see fit.
.. _views_chapter:
Views
=====
The primary job of any :mod:`repoze.bfg` application is to find and
invoke a :term:`view callable` when a :term:`request` reaches the
application. View callables are bits of code written by you -- the
application developer -- which do something interesting in response to
a request made to your application.
.. note::
A :mod:`repoze.bfg` :term:`view callable` is often referred to in
conversational shorthand as a :term:`view`. In this documentation,
however, we need to use less ambiguous terminology because there
are significant differences between view *configuration*, the code
that implements a view *callable*, and the process of view
*lookup*.
The chapter named :ref:`contextfinding_chapter` describes how, using
information from the :term:`request`, a :term:`context` and a
:term:`view name` are computed. But neither the context nor the view
name found are very useful unless those elements can eventually be
mapped to a :term:`view callable`.
Actually locating and invoking the "best" :term:`view callable` is
the job of the :term:`view lookup` subsystem. The view
lookup subsystem compares information supplied by :term:`context
finding` against :term:`view configuration` statements made by the
developer stored in the :term:`application registry` to choose the
most appropriate view callable for a specific request.
This chapter provides documentation detailing the process of creating
view callables, documentation about performing view configuration, and
a detailed explanation of view lookup.
View Callables
--------------
No matter how a view callable is eventually found, all view callables
used by :mod:`repoze.bfg` must be constructed in the same way, and
must return the same kind of return value.
Most view callables accept a single argument named ``request``. This
argument represents a :term:`WebOb` :term:`Request` object as
represented to :mod:`repoze.bfg` by the upstream :term:`WSGI` server.
A view callable may always return a :term:`WebOb` :term:`Response`
object directly. It may optionally return another arbitrary
non-Response value: if a view callable returns a non-Response result,
the result must be converted into a response by the :term:`renderer`
associated with the :term:`view configuration` for the view.
View callables can be functions, instances, or classes. View
callables can optionally be defined with an alternate calling
convention.
.. index::
single: view calling convention
single: view function
.. _function_as_view:
Defining a View Callable as a Function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The easiest way to define a view callable is to create a function that
accepts a single argument named ``request`` and which returns a
:term:`Response` object. For example, this is a "hello world" view
callable implemented as a function:
.. code-block:: python
:linenos:
from webob import Response
def hello_world(request):
return Response('Hello world!')
.. index::
single: view calling convention
single: view class
.. _class_as_view:
Defining a View Callable as a Class
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note:: This feature is new as of :mod:`repoze.bfg` 0.8.1.
A view callable may also be a class instead of a function. When a
view callable is a class, the calling semantics are slightly different
than when it is a function or another non-class callable. When a view
callable is a class, the class' ``__init__`` is called with a
``request`` parameter. As a result, an instance of the class is
created. Subsequently, that instance's ``__call__`` method is invoked
with no parameters. Views defined as classes must have the following
traits:
- an ``__init__`` method that accepts a ``request`` as its sole
positional argument or an ``__init__`` method that accepts two
arguments: ``request`` and ``context`` as per
:ref:`request_and_context_view_definitions`.
- a ``__call__`` method that accepts no parameters and which returns a
response.
For example:
.. code-block:: python
:linenos:
from webob import Response
class MyView(object):
def __init__(self, request):
self.request = request
def __call__(self):
return Response('hello')
The request object passed to ``__init__`` is the same type of request
object described in :ref:`function_as_view`.
If you'd like to use a different attribute than ``__call__`` to
represent the method expected to return a response, you can use an
``attr`` value as part of view configuration. See
:ref:`view_configuration_parameters`.
.. index::
single: view calling convention
.. _request_and_context_view_definitions:
Context-And-Request View Callable Definitions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Usually, view callables are defined to accept only a single argument:
``request``. However, view callables may alternately be defined as
classes or functions (or any callable) that accept *two* positional
arguments: a :term:`context` as the first argument and a
:term:`request` as the second argument.
The :term:`context` and :term:`request` arguments passed to a view
function defined in this style can be defined as follows:
context
An instance of a :term:`context` found via graph :term:`traversal`
or :term:`URL dispatch`. If the context is found via traversal, it
will be a :term:`model` object.
request
A :term:`WebOb` Request object representing the current WSGI
request.
The following types work as view callables in this style:
#. Functions that accept two arguments: ``context``, and ``request``,
e.g.:
.. code-block:: python
:linenos:
from webob import Response
def view(context, request):
return Response('OK')
#. Classes that have an ``__init__`` method that accepts ``context,
request`` and a ``__call__`` which accepts no arguments, e.g.:
.. code-block:: python
:linenos:
from webob import Response
class view(object):
def __init__(self, context, request):
self.context = context
self.request = request
def __call__(self):
return Response('OK')
#. Arbitrary callables that have a ``__call__`` method that accepts
``context, request``, e.g.:
.. code-block:: python
:linenos:
from webob import Response
class View(object):
def __call__(self, context, request):
return Response('OK')
view = View() # this is the view callable
This style of calling convention is most useful for :term:`traversal`
based applications, where the context object is frequently used within
the view callable code itself.
No matter which view calling convention is used, the view code always
has access to the context via ``request.context``.
.. index::
single: view response
single: response
.. _the_response:
View Callable Responses
~~~~~~~~~~~~~~~~~~~~~~~
A view callable may always return an object that implements the
:term:`WebOb` :term:`Response` interface. The easiest way to return
something that implements this interface is to return a
:class:`webob.Response` object instance directly. But any object that
has the following attributes will work:
status
The HTTP status code (including the name) for the response.
E.g. ``200 OK`` or ``401 Unauthorized``.
headerlist
A sequence of tuples representing the list of headers that should be
set in the response. E.g. ``[('Content-Type', 'text/html'),
('Content-Length', '412')]``
app_iter
An iterable representing the body of the response. This can be a
list, e.g. ``['<html><head></head><body>Hello
world!</body></html>']`` or it can be a file-like object, or any
other sort of iterable.
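Any object that supplies these three attributes will work; the following sketch (not part of the :mod:`repoze.bfg` API) shows a hand-rolled response object:

```python
class MinimalResponse(object):
    """A hand-rolled object satisfying the response interface (sketch)."""
    status = '200 OK'
    headerlist = [('Content-Type', 'text/plain'),
                  ('Content-Length', '12')]
    app_iter = ['Hello world!']

def myview(request):
    # returning this object means no renderer is consulted
    return MinimalResponse()
```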
If a view happens to return something to the :mod:`repoze.bfg`
:term:`router` which does not implement this interface,
:mod:`repoze.bfg` will attempt to use a :term:`renderer` to
construct a response. The renderer associated with a view callable
can be varied by changing the ``renderer`` attribute in the view's
configuration. See :ref:`views_which_use_a_renderer`.
.. index::
single: view http redirect
single: http redirect (from a view)
Using a View Callable to Do an HTTP Redirect
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can issue an HTTP redirect from within a view by returning a
particular kind of response.
.. code-block:: python
:linenos:
from webob.exc import HTTPFound
def myview(request):
return HTTPFound(location='http://example.com')
All exception types from the :mod:`webob.exc` module implement the
Webob :term:`Response` interface; any can be returned as the response
from a view. See :term:`WebOb` for the documentation for this module;
it includes other response types that imply other HTTP response codes,
such as ``401 Unauthorized``.
.. index::
single: renderer
single: view renderer
.. _views_which_use_a_renderer:
Writing View Callables Which Use a Renderer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note:: This feature is new as of :mod:`repoze.bfg` 1.1
View callables needn't always return a WebOb Response object.
Instead, they may return an arbitrary Python object, with the
expectation that a :term:`renderer` will convert that object into a
response instance on behalf of the developer. Some renderers use a
templating system; other renderers use object serialization
techniques.
If you do not define a ``renderer`` attribute in :term:`view
configuration` for an associated :term:`view callable`, no renderer is
associated with the view. In such a configuration, an error is raised
when a view callable does not return an object which implements the
WebOb :term:`Response` interface, documented within
:ref:`the_response`.
View configuration can vary the renderer associated with a view
callable via the ``renderer`` attribute. For example, this ZCML
associates the ``json`` renderer with a view callable:
.. code-block:: xml
:linenos:
<view
view=".views.my_view"
renderer="json"
/>
When this configuration is added to an application, the
``.views.my_view`` view callable will now use a ``json`` renderer,
which renders view return values to a :term:`JSON` serialization.
Other built-in renderers include renderers which use the
:term:`Chameleon` templating language to render a dictionary to a
response.
If the :term:`view callable` associated with a :term:`view
configuration` returns a Response object directly (an object with the
attributes ``status``, ``headerlist`` and ``app_iter``), any renderer
associated with the view configuration is ignored, and the response is
passed back to :mod:`repoze.bfg` unmolested. For example, if your
view callable returns an instance of the :class:`webob.exc.HTTPFound`
class as a response, no renderer will be employed.
.. code-block:: python
:linenos:
from webob.exc import HTTPFound
def view(request):
return HTTPFound(location='http://example.com') # renderer avoided
Views which use a renderer can vary non-body response attributes (such
as headers and the HTTP status code) by attaching properties to the
request. See :ref:`response_request_attrs`.
Additional renderers can be added to the system as necessary via a
ZCML directive (see :ref:`adding_and_overriding_renderers`).
.. index::
single: renderers (built-in)
single: built-in renderers
.. _built_in_renderers:
Built-In Renderers
~~~~~~~~~~~~~~~~~~
Several built-in "renderers" exist in :mod:`repoze.bfg`. These
renderers can be used in the ``renderer`` attribute of view
configurations.
.. index::
pair: renderer; string
``string``: String Renderer
+++++++++++++++++++++++++++
The ``string`` renderer is a renderer which renders a view callable
result to a string. If a view callable returns a non-Response object,
and the ``string`` renderer is associated in that view's
configuration, the result will be to run the object through the Python
``str`` function to generate a string. Note that if a Unicode object
is returned by the view callable, it is not ``str()`` -ified.
Here's an example of a view that returns a dictionary. If the
``string`` renderer is specified in the configuration for this view,
the view will render the returned dictionary to the ``str()``
representation of the dictionary:
.. code-block:: python
:linenos:
from webob import Response
from repoze.bfg.view import bfg_view
@bfg_view(renderer='string')
def hello_world(request):
return {'content':'Hello!'}
The body of the response returned by such a view will be a string
representing the ``str()`` serialization of the return value:
.. code-block:: python
:linenos:
{'content': 'Hello!'}
Views which use the string renderer can vary non-body response
attributes by attaching properties to the request. See
:ref:`response_request_attrs`.
.. index::
pair: renderer; JSON
``json``: JSON Renderer
+++++++++++++++++++++++
The ``json`` renderer is a renderer which renders view callable
results to :term:`JSON`. If a view callable returns a non-Response
object, the renderer passes the return value through the
``json.dumps`` standard library function and wraps the result in a
response object. It also sets the response content-type to
``application/json``.
Here's an example of a view that returns a dictionary. If the
``json`` renderer is specified in the configuration for this view, the
view will render the returned dictionary to a JSON serialization:
.. code-block:: python
:linenos:
from webob import Response
from repoze.bfg.view import bfg_view
@bfg_view(renderer='json')
def hello_world(request):
return {'content':'Hello!'}
The body of the response returned by such a view will be a string
representing the JSON serialization of the return value:
.. code-block:: python
:linenos:
'{"content": "Hello!"}'
The return value needn't be a dictionary, but it must be
serializable by :func:`json.dumps`.
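The serialization step is essentially a call to the standard library; for example:

```python
import json

# what the json renderer conceptually does with a view's return value
result = {'content': 'Hello!'}
body = json.dumps(result)
# body == '{"content": "Hello!"}'
```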
You can configure a view to use the JSON renderer in ZCML by naming
``json`` as the ``renderer`` attribute of a view configuration, e.g.:
.. code-block:: xml
:linenos:
<view
context=".models.Hello"
view=".views.hello_world"
name="hello"
renderer="json"
/>
Views which use the JSON renderer can vary non-body response
attributes by attaching properties to the request. See
:ref:`response_request_attrs`.
.. index::
pair: renderer; chameleon
.. _chameleon_template_renderers:
``*.pt`` or ``*.txt``: Chameleon Template Renderers
+++++++++++++++++++++++++++++++++++++++++++++++++++
Two built-in renderers exist for :term:`Chameleon` templates.
If the ``renderer`` attribute of a view configuration is an absolute
path, a relative path or :term:`resource specification` which has a
final path element with a filename extension of ``.pt``, the Chameleon
ZPT renderer is used. See :ref:`chameleon_zpt_templates` for more
information about ZPT templates.
If the ``renderer`` attribute of a view configuration is an absolute
path, a source-file relative path, or a :term:`resource specification`
which has a final path element with a filename extension of ``.txt``,
the :term:`Chameleon` text renderer is used. See
:ref:`chameleon_zpt_templates` for more information about Chameleon
text templates.
The behavior of these renderers is the same, except for the engine
used to render the template.
When a ``renderer`` attribute that names a Chameleon template path
(e.g. ``templates/foo.pt`` or ``templates/foo.txt``) is used, the view
must return a Response object or a Python *dictionary*. If the view
callable with an associated template returns a Python dictionary, the
named template will be passed the dictionary as its keyword arguments,
and the template renderer implementation will return the resulting
rendered template in a response to the user. If the view callable
returns anything but a Response object or a dictionary, an error will
be raised.
Before passing keywords to the template, the keywords derived from the
dictionary returned by the view are augmented. The callable object
-- whatever object was used to define the ``view`` -- will be
automatically inserted into the set of keyword arguments passed to the
template as the ``view`` keyword. If the view callable was a class,
the ``view`` keyword will be an instance of that class. Also inserted
into the keywords passed to the template are ``renderer_name`` (the
name of the renderer, which may be a full path or a package-relative
name, typically the full string used in the ``renderer`` attribute of
the directive), ``context`` (the context of the view used to render
the template), and ``request`` (the request passed to the view used to
render the template).
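The augmentation can be sketched as follows; ``augment`` is a hypothetical helper shown only to illustrate which keywords reach the template, not an actual :mod:`repoze.bfg` function (the assumption here is that keys returned by the view win on collision):

```python
def augment(view_result, view, renderer_name, context, request):
    """Merge system values with the dictionary returned by the view."""
    keywords = {'view': view,
                'renderer_name': renderer_name,
                'context': context,
                'request': request}
    # assumption: the view's own keys take precedence on collision
    keywords.update(view_result)
    return keywords
```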
Here's an example view configuration which uses a Chameleon ZPT
renderer:
.. code-block:: xml
:linenos:
<view
context=".models.Hello"
view=".views.hello_world"
name="hello"
renderer="templates/foo.pt"
/>
Here's an example view configuration which uses a Chameleon text
renderer:
.. code-block:: xml
:linenos:
<view
context=".models.Hello"
view=".views.hello_world"
name="hello"
renderer="templates/foo.txt"
/>
Views which use a Chameleon renderer can vary response attributes by
attaching properties to the request. See
:ref:`response_request_attrs`.
.. index::
single: response headers (from a renderer)
single: renderer response headers
.. _response_request_attrs:
Varying Attributes of Rendered Responses
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before a response that is constructed as the result of the use of a
:term:`renderer` is returned to :mod:`repoze.bfg`, several attributes
of the request are examined which have the potential to influence
response behavior.
View callables that don't directly return a response should set these
values on the ``request`` object via ``setattr`` within the view
callable to influence associated response attributes.
``response_content_type``
Defines the content-type of the resulting response,
e.g. ``text/xml``.
``response_headerlist``
A sequence of tuples describing cookie values that should be set in
the response, e.g. ``[('Set-Cookie', 'abc=123'), ('X-My-Header',
'foo')]``.
``response_status``
A WSGI-style status code (e.g. ``200 OK``) describing the status of
the response.
``response_charset``
The character set (e.g. ``UTF-8``) of the response.
``response_cache_for``
A value in seconds which will influence ``Cache-Control`` and
``Expires`` headers in the returned response. The same can also be
achieved by returning appropriate values in the
``response_headerlist``; this attribute is purely a convenience.
For example, if you need to change the response status from within a
view callable that uses a renderer, assign the ``response_status``
attribute to the request before returning a result:
.. code-block:: python
:linenos:
from repoze.bfg.view import bfg_view
@bfg_view(name='gone', renderer='templates/gone.pt')
def myview(request):
request.response_status = '404 Not Found'
return {'URL':request.URL}
.. index::
single: renderer (adding)
.. _adding_and_overriding_renderers:
Adding and Overriding Renderers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
New templating systems and serializers can be associated with
:mod:`repoze.bfg` renderer names. To this end, configuration
declarations can be made which override an existing :term:`renderer
factory` and which add a new renderer factory.
Adding or overriding a renderer is accomplished via :term:`ZCML` or
via imperative configuration. Renderers can be registered
imperatively using the
:meth:`repoze.bfg.configuration.Configurator.add_renderer` API or via
the :ref:`renderer_directive` ZCML directive.
For example, to add a renderer which renders views which have a
``renderer`` attribute that is a path that ends in ``.jinja2``:
.. topic:: Via ZCML
.. code-block:: xml
:linenos:
<renderer
name=".jinja2"
factory="my.package.MyJinja2Renderer"/>
The ``factory`` attribute is a :term:`dotted Python name` that must
point to an implementation of a :term:`renderer factory`.
The ``name`` attribute is the renderer name.
.. topic:: Via Imperative Configuration
.. code-block:: python
:linenos:
from my.package import MyJinja2Renderer
config.add_renderer('.jinja2', MyJinja2Renderer)
The first argument is the renderer name.
The second argument is a reference to an implementation of a
:term:`renderer factory` or a :term:`dotted Python name` referring
to such an object.
Adding a New Renderer
+++++++++++++++++++++
You may add a new renderer by creating and registering a :term:`renderer
factory`.
A renderer factory implementation is usually a class which has the
following interface:
.. code-block:: python
:linenos:
class RendererFactory:
def __init__(self, name):
""" Constructor: ``name`` may be an absolute path or a
resource specification """
def __call__(self, value, system):
""" Call a the renderer implementation with the value and
the system value passed in as arguments and return the
result (a string or unicode object). The value is the
return value of a view. The system value is a dictionary
containing available system values (e.g. ``view``,
``context``, and ``request``). """
There are essentially two different kinds of renderer factories:
- A renderer factory which expects to accept a :term:`resource
specification` or an absolute path as the ``name`` value in its
constructor. These renderer factories are registered with a
``name`` value that begins with a dot (``.``). These types of
renderer factories usually relate to a file on the filesystem, such
as a template.
- A renderer factory which expects to accept a token that does not
represent a filesystem path or a resource specification in its
constructor. These renderer factories are registered with a
``name`` value that does not begin with a dot. These renderer
factories are typically object serializers.
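A minimal factory of the second (serializer) kind might look like the following sketch, with JSON standing in for the serialization; ``JSONRendererFactory`` is an illustrative name, not a shipped class:

```python
import json

class JSONRendererFactory(object):
    """A serializer-style renderer factory (illustrative sketch)."""
    def __init__(self, name):
        # 'name' is a plain token such as 'json', not a filesystem path
        self.name = name

    def __call__(self, value, system):
        # serialize the view's return value; 'system' is unused here
        return json.dumps(value)
```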
.. sidebar:: Resource Specifications
A resource specification is a colon-delimited identifier for a
:term:`resource`. The colon separates a Python :term:`package`
name from a package subpath. For example, the resource
specification ``my.package:static/baz.css`` identifies the file
named ``baz.css`` in the ``static`` subdirectory of the
``my.package`` Python :term:`package`.
Here's an example of the registration of a simple renderer factory via
ZCML:
.. code-block:: xml
:linenos:
<renderer
name="amf"
factory="my.package.MyAMFRenderer"/>
Adding the above ZCML to your application will allow you to use the
``my.package.MyAMFRenderer`` renderer factory implementation in view
configurations by referring to it as ``amf`` in the ``renderer``
attribute of a :term:`view configuration`:
.. code-block:: python
:linenos:
from repoze.bfg.view import bfg_view
@bfg_view(renderer='amf')
def myview(request):
return {'Hello':'world'}
At startup time, when a :term:`view configuration` is encountered
whose ``renderer`` attribute does not contain a dot, such as ``amf``
above, the full value of the attribute is used to construct a renderer
from the associated renderer factory. In this case, the view
configuration will create an instance of ``MyAMFRenderer`` for each
view configuration which includes ``amf`` as its renderer value. The
``name`` passed to the ``MyAMFRenderer`` constructor will always be
``amf``.
Here's an example of the registration of a more complicated renderer
factory, which expects to be passed a filesystem path:
.. code-block:: xml
:linenos:
<renderer
name=".jinja2"
factory="my.package.MyJinja2Renderer"/>
Adding the above ZCML to your application will allow you to use the
``my.package.MyJinja2Renderer`` renderer factory implementation in
view configurations by referring to any ``renderer`` which *ends in*
``.jinja2`` in the ``renderer`` attribute of a :term:`view
configuration`:
.. code-block:: python
:linenos:
from repoze.bfg.view import bfg_view
@bfg_view(renderer='templates/mytemplate.jinja2')
def myview(request):
return {'Hello':'world'}
When a :term:`view configuration` whose ``renderer`` attribute
contains a dot, such as ``templates/mytemplate.jinja2`` above, is
encountered at startup time, the value of the attribute is split
on its final dot. The second element of the split is typically the
filename extension. This extension is used to look up a renderer
factory for the configured view. Then the value of ``renderer`` is
passed to the factory to create a renderer for the view. In this
case, the view configuration will create an instance of a
``Jinja2Renderer`` for each view configuration which includes anything
ending with ``.jinja2`` as its ``renderer`` value. The ``name``
passed to the ``Jinja2Renderer`` constructor will usually be a
:term:`resource specification`, but may also be an absolute path; the
renderer factory implementation should be able to deal with either.
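The dot-splitting lookup rule described above can be sketched in plain
Python. This is a hypothetical illustration, not the actual
:mod:`repoze.bfg` implementation; the registry and factory names are
invented:

```python
# Hypothetical sketch of the renderer-factory lookup rule described
# above.  Factories registered under a bare name (no dot) are matched
# exactly; factories registered under an extension (a name starting
# with a dot) are matched by the final extension of a path-style
# renderer value.

RENDERER_FACTORIES = {
    'json': lambda name: ('json-renderer', name),
    '.jinja2': lambda name: ('jinja2-renderer', name),
}

def lookup_renderer(renderer_value):
    if '.' not in renderer_value:
        # no dot: the whole value names a factory directly
        factory = RENDERER_FACTORIES[renderer_value]
    else:
        # a dot: split on the final dot and look up by extension
        ext = '.' + renderer_value.rsplit('.', 1)[1]
        factory = RENDERER_FACTORIES[ext]
    # the factory receives the full renderer value as its ``name``
    return factory(renderer_value)
```

A bare name such as ``json`` resolves directly, while a path such as
``templates/mytemplate.jinja2`` resolves via its final extension but
still passes the full path to the factory.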
See also :ref:`renderer_directive` and
:meth:`repoze.bfg.configuration.Configurator.add_renderer`.
Overriding an Existing Renderer
+++++++++++++++++++++++++++++++
You can associate more than one filename extension with the same
existing renderer implementation as necessary if you need to use a
different file extension for the same kinds of templates. For
example, to associate the ``.zpt`` extension with the Chameleon ZPT
renderer factory, use:
.. code-block:: xml
:linenos:
<renderer
name=".zpt"
factory="repoze.bfg.chameleon_zpt.renderer_factory"/>
After you do this, :mod:`repoze.bfg` will treat templates ending in
both the ``.pt`` and ``.zpt`` filename extensions as Chameleon ZPT
templates.
To override the default mapping in which files with a ``.pt``
extension are rendered via a Chameleon ZPT page template renderer, use
a variation on the following in your application's ZCML:
.. code-block:: xml
:linenos:
<renderer
name=".pt"
factory="my.package.pt_renderer"/>
After you do this, the :term:`renderer factory` in
``my.package.pt_renderer`` will be used to render templates which end
in ``.pt``, replacing the default Chameleon ZPT renderer.
To override the default mapping in which files with a ``.txt``
extension are rendered via a Chameleon text template renderer, use a
variation on the following in your application's ZCML:
.. code-block:: xml
:linenos:
<renderer
name=".txt"
factory="my.package.text_renderer"/>
After you do this, the :term:`renderer factory` in
``my.package.text_renderer`` will be used to render templates which
end in ``.txt``, replacing the default Chameleon text renderer.
To associate a *default* renderer with *all* view configurations (even
ones which do not possess a ``renderer`` attribute), use a variation
on the following (i.e. omit the ``name`` attribute from the renderer
tag):
.. code-block:: xml
:linenos:
<renderer
factory="repoze.bfg.renderers.json_renderer_factory"/>
See also :ref:`renderer_directive` and
:meth:`repoze.bfg.configuration.Configurator.add_renderer`.
.. index::
single: view exceptions
.. _special_exceptions_in_callables:
Using Special Exceptions In View Callables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Usually when a Python exception is raised within a view callable,
:mod:`repoze.bfg` allows the exception to propagate all the way out to
the :term:`WSGI` server which invoked the application.
However, for convenience, two special exceptions exist which are
always handled by :mod:`repoze.bfg` itself. These are
:exc:`repoze.bfg.exceptions.NotFound` and
:exc:`repoze.bfg.exceptions.Forbidden`. Both are exception classes
which accept a single positional constructor argument: a ``message``.
If :exc:`repoze.bfg.exceptions.NotFound` is raised within view code,
the result of the :term:`Not Found View` will be returned to the user
agent which performed the request.
If :exc:`repoze.bfg.exceptions.Forbidden` is raised within view code,
the result of the :term:`Forbidden View` will be returned to the user
agent which performed the request.
In all cases, the message provided to the exception constructor is
made available to the view which :mod:`repoze.bfg` invokes as
``request.exception.args[0]``.
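The constructor contract can be demonstrated with a plain-Python
stand-in for these classes (the real ones live in
``repoze.bfg.exceptions``):

```python
# Stand-in demonstrating the single-message constructor contract:
# the message passed when raising is later available as args[0],
# which repoze.bfg exposes to views as request.exception.args[0].

class NotFound(Exception):
    """Accepts a single positional ``message`` argument."""

try:
    raise NotFound('There is no such resource as /foo')
except NotFound as exc:
    message = exc.args[0]

print(message)
```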
.. index::
single: exception views
.. _exception_views:
Exception Views
~~~~~~~~~~~~~~~~
The machinery which allows the special
:exc:`repoze.bfg.exceptions.NotFound` and
:exc:`repoze.bfg.exceptions.Forbidden` exceptions to be caught by
specialized views as described in
:ref:`special_exceptions_in_callables` can also be used by application
developers to convert arbitrary exceptions to responses.
To register a view that should be called whenever a particular
exception is raised from within :mod:`repoze.bfg` view code, use the
exception class or one of its superclasses as the ``context`` of a
view configuration which points at the view callable you'd like to
have generate a response.
For example, given the following exception class in a module named
``helloworld.exceptions``:
.. code-block:: python
:linenos:
class ValidationFailure(Exception):
def __init__(self, msg):
self.msg = msg
You can wire a view callable to be called whenever any of your *other*
code raises a ``helloworld.exceptions.ValidationFailure`` exception:
.. code-block:: python
:linenos:
from helloworld.exceptions import ValidationFailure
@bfg_view(context=ValidationFailure)
def failed_validation(exc, request):
response = Response('Failed validation: %s' % exc.msg)
response.status_int = 500
return response
Assuming that a :term:`scan` was run to pick up this view
registration, this view callable will be invoked whenever a
``helloworld.exceptions.ValidationFailure`` is raised by your
application's view code. The same exception raised by a custom root
factory or a custom traverser is also caught and hooked.
Other normal view predicates can also be used in combination with an
exception view registration:
.. code-block:: python
:linenos:
from repoze.bfg.view import bfg_view
from repoze.bfg.exceptions import NotFound
from webob.exc import HTTPNotFound
@bfg_view(context=NotFound, route_name='home')
def notfound_view(request):
return HTTPNotFound()
The above exception view names the ``route_name`` of ``home``, meaning
that it will only be called when the route matched has a name of
``home``. You can therefore have more than one exception view for any
given exception in the system: the "most specific" one will be called
when the set of request circumstances matches its view registration
most closely.
The only view predicate that cannot be used successfully when
creating an exception view configuration is ``name``. The name used
to look up an exception view is always the empty string. Views
registered as exception views which have a name will be ignored.
.. note::
Normal (non-exception) views registered against a context which
inherits from :exc:`Exception` will work normally. When an
exception view configuration is processed, *two* views are
registered: one as a "normal" view, the other as an "exception"
view. This means that you can use an exception as ``context`` for a
normal view.
The feature can be used with any view registration mechanism
(``@bfg_view`` decorator, ZCML, or imperative ``add_view`` styles).
.. index::
single: unicode, views, and forms
single: forms, views, and unicode
single: views, forms, and unicode
Handling Form Submissions in View Callables (Unicode and Character Set Issues)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Most web applications need to accept form submissions from web
browsers and various other clients. In :mod:`repoze.bfg`, form
submission handling logic is always part of a :term:`view`. For a
general overview of how to handle form submission data using the
:term:`WebOb` API, see :ref:`webob_chapter` and `"Query and POST
variables" within the WebOb documentation
<http://pythonpaste.org/webob/reference.html#query-post-variables>`_.
:mod:`repoze.bfg` defers to WebOb for its request and response
implementations, and handling form submission data is a property of
the request implementation. Understanding WebOb's request API is the
key to understanding how to process form submission data.
There are some defaults that you need to be aware of when trying to
handle form submission data in a :mod:`repoze.bfg` view. Because
having high-order (non-ASCII) characters in data contained within form
submissions is exceedingly common, and because the UTF-8 encoding is
the most common encoding used on the web for non-ASCII character data,
and because working and storing Unicode values is much saner than
working with and storing bytestrings, :mod:`repoze.bfg` configures the
:term:`WebOb` request machinery to attempt to decode form submission
values into Unicode from the UTF-8 character set implicitly. This
implicit decoding happens when view code obtains form field values via
the :term:`WebOb` ``request.params``, ``request.GET``, or
``request.POST`` APIs.
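The effect of this implicit decoding can be illustrated in plain
Python. This is only a sketch of the idea, not WebOb's actual code
path; it shows percent-encoded UTF-8 form bytes, as a browser would
send them, becoming a Unicode value:

```python
from urllib.parse import unquote_to_bytes

# A browser posting the name "José" from a UTF-8 page sends bytes
# like these in the request body:
raw_body = 'firstname=Jos%C3%A9'

key, _, encoded = raw_body.partition('=')
# WebOb-style implicit decoding: the raw bytes are decoded from UTF-8
# to text, so the view receives a Unicode string, not a bytestring.
value = unquote_to_bytes(encoded).decode('utf-8')

print(key, value)
```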
For example, let's assume that the following form page is served up to
a browser client, and its ``action`` points at some :mod:`repoze.bfg`
view code:
.. code-block:: xml
:linenos:
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
</head>
<form method="POST" action="myview">
<div>
<input type="text" name="firstname"/>
</div>
<div>
<input type="text" name="lastname"/>
</div>
<input type="submit" value="Submit"/>
</form>
</html>
The ``myview`` view code in the :mod:`repoze.bfg` application *must*
expect that the values returned by ``request.params`` will be of type
``unicode``, as opposed to type ``str``. The following will work to
accept a form post from the above form:
.. code-block:: python
:linenos:
def myview(request):
firstname = request.params['firstname']
lastname = request.params['lastname']
But the following ``myview`` view code *may not* work, as it tries to
decode already-decoded (``unicode``) values obtained from
``request.params``:
.. code-block:: python
:linenos:
def myview(request):
# the .decode('utf-8') will break below if there are any high-order
# characters in the firstname or lastname
firstname = request.params['firstname'].decode('utf-8')
lastname = request.params['lastname'].decode('utf-8')
For implicit decoding to work reliably, you must ensure that every
form you render that posts to a :mod:`repoze.bfg` view is rendered via
a response that has a ``;charset=UTF-8`` in its ``Content-Type``
header; or, as in the form above, with a ``meta http-equiv`` tag that
implies that the charset is UTF-8 within the HTML ``head`` of the page
containing the form. This must be done explicitly because all known
browser clients assume that they should encode form data in the
character set implied by the ``Content-Type`` value of the response
containing the form when subsequently submitting that form; there is
no other generally accepted way to tell browser clients which charset
to use to encode form data. If you do not specify an encoding
explicitly, the browser client will choose to encode form data in its
default character set before submitting it. The browser client may
have a non-UTF-8 default encoding. If such a request is handled by
your view code, when the form submission data is encoded in a non-UTF-8
charset, the WebOb request code accessed within your view will
eventually throw an error when it can't decode some high-order
character encoded in another character set within the form data,
e.g. when ``request.params['somename']`` is accessed.
If you are using the :class:`webob.Response` class to generate a
response, or if you use the ``render_template_*`` templating APIs, the
UTF-8 charset is set automatically as the default via the
``Content-Type`` header. If you return a ``Content-Type`` header
without an explicit charset, a WebOb response will add a
``;charset=utf-8`` trailer to the ``Content-Type`` header value for
you for response content types that are textual (e.g. ``text/html``,
``application/xml``, etc) as it is rendered. If you are using your
own response object, you will need to ensure you do this yourself.
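The trailer-adding behavior can be sketched as a small helper. This
is an illustration of the rule described above under an assumed list
of textual content types, not WebOb's implementation:

```python
# Sketch: add a charset trailer to textual content types that lack an
# explicit charset, mirroring the behavior described above.  The set
# of "textual" prefixes here is an assumption for illustration.

TEXTUAL_PREFIXES = ('text/', 'application/xml')

def ensure_charset(content_type):
    if 'charset=' in content_type:
        return content_type  # an explicit charset is left alone
    if content_type.startswith(TEXTUAL_PREFIXES):
        return content_type + '; charset=utf-8'
    return content_type  # non-textual types get no trailer
```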
To avoid implicit form submission value decoding, so that the values
returned from ``request.params``, ``request.GET`` and ``request.POST``
are returned as bytestrings rather than Unicode, add the following to
your application's ``configure.zcml``::
<subscriber for="repoze.bfg.interfaces.INewRequest"
handler="repoze.bfg.request.make_request_ascii"/>
You can then control form post data decoding "by hand" as necessary.
For example, when this subscriber is active, the second example above
will work unconditionally as long as you ensure that your forms are
rendered in a request that has a ``;charset=utf-8`` stanza on its
``Content-Type`` header.
.. note:: The behavior that form values are decoded from UTF-8 to
Unicode implicitly was introduced in :mod:`repoze.bfg` 0.7.0.
Previous versions of :mod:`repoze.bfg` performed no implicit
decoding of form values (the default was to treat values as
bytestrings).
.. note:: Only the *values* of request params obtained via
``request.params``, ``request.GET`` or ``request.POST`` are decoded
to Unicode objects implicitly in :mod:`repoze.bfg`'s default
configuration. The keys are still strings.
.. index::
single: view configuration
.. _view_configuration:
View Configuration: Mapping a Context to a View
-----------------------------------------------
A developer makes a :term:`view callable` available for use within a
:mod:`repoze.bfg` application via :term:`view configuration`. A view
configuration associates a view callable with a set of statements
about the set of circumstances which must be true for the view
callable to be invoked.
A view configuration statement is made about information present in
the :term:`context` and in the :term:`request`, as well as the
:term:`view name`. These three pieces of information are known,
collectively, as a :term:`triad`.
View configuration is performed in one of three ways:
- by adding a ``<view>`` declaration to :term:`ZCML` used by your
application as per :ref:`mapping_views_using_zcml_section` and
:ref:`view_directive`.
- by running a :term:`scan` against application source code which has
a :class:`repoze.bfg.view.bfg_view` decorator attached to a Python
object as per :class:`repoze.bfg.view.bfg_view` and
:ref:`mapping_views_using_a_decorator_section`.
- by using the :meth:`repoze.bfg.configuration.Configurator.add_view`
method as per :meth:`repoze.bfg.configuration.Configurator.add_view`
and :ref:`mapping_views_using_imperative_config_section`.
Each of these mechanisms is completely equivalent to the other.
A view configuration might also be performed by virtue of :term:`route
configuration`. View configuration via route configuration is
performed in one of the following two ways:
- by using the :meth:`repoze.bfg.configuration.Configurator.add_route`
method to create a route with a ``view`` argument.
- by adding a ``<route>`` declaration that uses a ``view`` attribute to
:term:`ZCML` used by your application as per :ref:`route_directive`.
.. _view_configuration_parameters:
View Configuration Parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All forms of view configuration accept the same general types of
arguments.
Many arguments supplied during view configuration are :term:`view
predicate` arguments. View predicate arguments used during view
configuration are used to narrow the set of circumstances in which
:term:`view lookup` will find a particular view callable. In general,
the fewer predicates supplied to a particular view configuration, the
more likely it is that the associated view callable will be invoked;
the greater the number supplied, the less likely.
Some view configuration arguments are non-predicate arguments. These
tend to modify the response of the view callable or prevent the view
callable from being invoked due to an authorization policy. The
presence of non-predicate arguments in a view configuration does not
narrow the circumstances in which the view callable will be invoked.
Non-Predicate Arguments
+++++++++++++++++++++++
``permission``
The name of a :term:`permission` that the user must possess in order
to invoke the :term:`view callable`. See
:ref:`view_security_section` for more information about view
security and permissions.
If ``permission`` is not supplied, no permission is registered for
this view (it's accessible by any caller).
``attr``
The view machinery defaults to using the ``__call__`` method of the
:term:`view callable` (or the function itself, if the view callable
is a function) to obtain a response. The ``attr`` value allows you
to vary the method attribute used to obtain the response. For
example, if your view was a class, and the class has a method named
``index`` and you wanted to use this method instead of the class'
``__call__`` method to return the response, you'd say
``attr="index"`` in the view configuration for the view. This is
most useful when the view definition is a class.
If ``attr`` is not supplied, ``None`` is used (implying the function
itself if the view is a function, or the ``__call__`` callable
attribute if the view is a class).
``renderer``
This is either a single string term (e.g. ``json``) or a string
implying a path or :term:`resource specification`
(e.g. ``templates/views.pt``) naming a :term:`renderer`
implementation. If the ``renderer`` value does not contain a dot
(``.``), the specified string will be used to look up a renderer
implementation, and that renderer implementation will be used to
construct a response from the view return value. If the
``renderer`` value contains a dot (``.``), the specified term will
be treated as a path, and the filename extension of the last element
in the path will be used to look up the renderer implementation,
which will be passed the full path. The renderer implementation
will be used to construct a :term:`response` from the view return
value.
When the renderer is a path, although a path is usually just a
simple relative pathname (e.g. ``templates/foo.pt``, implying that a
template named "foo.pt" is in the "templates" directory relative to
the directory of the current :term:`package`), a path can be
absolute, starting with a slash on UNIX or a drive letter prefix on
Windows. The path can alternately be a :term:`resource
specification` in the form
``some.dotted.package_name:relative/path``, making it possible to
address template resources which live in a separate package.
The ``renderer`` attribute is optional. If it is not defined, the
"null" renderer is assumed (no rendering is performed and the value
is passed back to the upstream :mod:`repoze.bfg` machinery
unmolested). Note that if the view callable itself returns a
:term:`response` (see :ref:`the_response`), the specified renderer
implementation is never called.
``wrapper``
The :term:`view name` of a different :term:`view configuration`
which will receive the response body of this view as the
``request.wrapped_body`` attribute of its own :term:`request`, and
the :term:`response` returned by this view as the
``request.wrapped_response`` attribute of its own request. Using a
wrapper makes it possible to "chain" views together to form a
composite response. The response of the outermost wrapper view will
be returned to the user. The wrapper view will be found as any view
is found: see :ref:`view_lookup`. The "best" wrapper view will be
found based on the lookup ordering: "under the hood" this wrapper
view is looked up via
``repoze.bfg.view.render_view_to_response(context, request,
'wrapper_viewname')``. The context and request of a wrapper view is
the same context and request of the inner view.
If ``wrapper`` is not supplied, no wrapper view is used.
Predicate Arguments
+++++++++++++++++++
``name``
The :term:`view name` required to match this view callable. Read
:ref:`traversal_chapter` to understand the concept of a view name.
If ``name`` is not supplied, the empty string is used (implying the
default view).
``context``
An object representing the Python class that the :term:`context` must be
an instance of, *or* the :term:`interface` that the :term:`context`
must provide in order for this view to be found and called. This
predicate is true when the :term:`context` is an instance of the
represented class or if the :term:`context` provides the represented
interface; it is otherwise false.
If ``context`` is not supplied, the value ``None``, which matches
any model, is used.
``route_name``
If ``route_name`` is supplied, the view callable will be invoked
only when the named route has matched.
This value must match the ``name`` of a :term:`route configuration`
declaration (see :ref:`urldispatch_chapter`) that must match before
this view will be called. Note that the ``route`` configuration
referred to by ``route_name`` usually has a ``*traverse`` token in
the value of its ``path``, representing a part of the path that will
be used by :term:`traversal` against the result of the route's
:term:`root factory`.
If ``route_name`` is not supplied, the view callable will have a
chance of being invoked only when the :term:`triad` includes a
request object that does not indicate it matched a route.
``request_type``
This value should be an :term:`interface` that the :term:`request`
must provide in order for this view to be found and called.
If ``request_type`` is not supplied, the value ``None`` is used,
implying any request type.
*This is an advanced feature, not often used by "civilians"*.
``request_method``
This value can either be one of the strings ``GET``, ``POST``,
``PUT``, ``DELETE``, or ``HEAD`` representing an HTTP
``REQUEST_METHOD``. A view declaration with this argument ensures
that the view will only be called when the request's ``method``
attribute (aka the ``REQUEST_METHOD`` of the WSGI environment)
string matches the supplied value.
If ``request_method`` is not supplied, the view will be invoked
regardless of the ``REQUEST_METHOD`` of the :term:`WSGI`
environment.
``request_param``
This value can be any string. A view declaration with this argument
ensures that the view will only be called when the :term:`request`
has a key in the ``request.params`` dictionary (an HTTP ``GET`` or
``POST`` variable) that has a name which matches the supplied value.
If the value supplied has a ``=`` sign in it,
e.g. ``request_param="foo=123"``, then the key (``foo``) must both
exist in the ``request.params`` dictionary, *and* the value must
match the right hand side of the expression (``123``) for the view
to "match" the current request.
If ``request_param`` is not supplied, the view will be invoked
without consideration of keys and values in the ``request.params``
dictionary.
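The ``request_param`` matching rule can be sketched in plain Python
(a hypothetical helper, not the actual predicate implementation):

```python
# Sketch of the request_param predicate rule described above: a bare
# name requires the key to exist in the params mapping; a "key=value"
# form requires the key to exist with exactly that value.

def request_param_matches(spec, params):
    if '=' in spec:
        key, _, expected = spec.partition('=')
        return params.get(key) == expected
    return spec in params
```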
``containment``
This value should be a reference to a Python class or
:term:`interface` that a parent object in the :term:`lineage` must
provide in order for this view to be found and called. The nodes in
your object graph must be "location-aware" to use this feature.
If ``containment`` is not supplied, the interfaces and classes in
the lineage are not considered when deciding whether or not to
invoke the view callable.
See :ref:`location_aware` for more information about
location-awareness.
``xhr``
This value should be either ``True`` or ``False``. If this value is
specified and is ``True``, the :term:`WSGI` environment must possess
an ``HTTP_X_REQUESTED_WITH`` (aka ``X-Requested-With``) header that
has the value ``XMLHttpRequest`` for the associated view callable to
be found and called. This is useful for detecting AJAX requests
issued from jQuery, Prototype, and other JavaScript libraries.
If ``xhr`` is not specified, the ``HTTP_X_REQUESTED_WITH`` HTTP
header is not taken into consideration when deciding whether or not
to invoke the associated view callable.
``accept``
The value of this argument represents a match query for one or more
mimetypes in the ``Accept`` HTTP request header. If this value is
specified, it must be in one of the following forms: a mimetype
match token in the form ``text/plain``, a wildcard mimetype match
token in the form ``text/*`` or a match-all wildcard mimetype match
token in the form ``*/*``. If any of the forms matches the
``Accept`` header of the request, this predicate will be true.
If ``accept`` is not specified, the ``HTTP_ACCEPT`` HTTP header is
not taken into consideration when deciding whether or not to invoke
the associated view callable.
``header``
This value represents an HTTP header name or a header name/value
pair.
If ``header`` is specified, it must be a header name or a
``headername:headervalue`` pair.
If ``header`` is specified without a value (a bare header name only,
e.g. ``If-Modified-Since``), the view will only be invoked if the
HTTP header exists with any value in the request.
If ``header`` is specified, and possesses a name/value pair
(e.g. ``User-Agent:Mozilla/.*``), the view will only be invoked if
the HTTP header exists *and* the HTTP header matches the value
requested. When the ``headervalue`` contains a ``:`` (colon), it
will be considered a name/value pair (e.g. ``User-Agent:Mozilla/.*``
or ``Host:localhost``). The value portion should be a regular
expression.
Whether or not the value represents a header name or a header
name/value pair, the case of the header name is not significant.
If ``header`` is not specified, the composition, presence or absence
of HTTP headers is not taken into consideration when deciding
whether or not to invoke the associated view callable.
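The ``header`` matching rules can be sketched in plain Python (a
hypothetical helper, not the actual predicate implementation):

```python
import re

# Sketch of the header predicate rule described above: a bare name
# matches if the header exists with any value; a "Name:regex" pair
# matches if the header exists and its value matches the regular
# expression.  Header-name comparison is case-insensitive.

def header_matches(spec, headers):
    name, sep, pattern = spec.partition(':')
    lowered = {k.lower(): v for k, v in headers.items()}
    if name.lower() not in lowered:
        return False
    if not sep:
        return True  # bare header name: presence is enough
    return re.match(pattern, lowered[name.lower()]) is not None
```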
``path_info``
This value represents a regular expression pattern that will be
tested against the ``PATH_INFO`` WSGI environment variable to decide
whether or not to call the associated view callable. If the regex
matches, this predicate will be ``True``.
If ``path_info`` is not specified, the WSGI ``PATH_INFO`` is not
taken into consideration when deciding whether or not to invoke the
associated view callable.
``custom_predicates``
If ``custom_predicates`` is specified, it must be a sequence of
references to custom predicate callables. Use custom predicates
when no set of predefined predicates do what you need. Custom
predicates can be combined with predefined predicates as necessary.
Each custom predicate callable should accept two arguments:
``context`` and ``request`` and should return either ``True`` or
``False`` after doing arbitrary evaluation of the context and/or the
request. If all callables return ``True``, the associated view
callable will be considered viable for a given request.
If ``custom_predicates`` is not specified, no custom predicates are
used.
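For example, a custom predicate restricting a view to a particular
subdomain might look like the following. Both the predicate and the
``FakeRequest`` class used to exercise it are invented for
illustration; :mod:`repoze.bfg` would pass the real context and
request:

```python
# An illustrative custom predicate (not part of repoze.bfg): True
# only when the request arrived on the "api" subdomain.

def api_subdomain(context, request):
    host = request.headers.get('Host', '')
    return host.split('.')[0] == 'api'

# Minimal stand-in for a request object, for demonstration only.
class FakeRequest:
    def __init__(self, headers):
        self.headers = headers
```

Such a callable would then be referenced as
``custom_predicates=(api_subdomain,)`` in a view configuration.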
.. note:: This feature is new as of :mod:`repoze.bfg` 1.2.
.. index::
single: ZCML view configuration
.. _mapping_views_using_zcml_section:
View Configuration Via ZCML
~~~~~~~~~~~~~~~~~~~~~~~~~~~
You may associate a view with a URL by adding :ref:`view_directive`
declarations via :term:`ZCML` in a ``configure.zcml`` file. An
example of a view declaration in ZCML is as follows:
.. code-block:: xml
:linenos:
<view
context=".models.Hello"
view=".views.hello_world"
name="hello.html"
/>
The above maps the ``.views.hello_world`` view callable function to
the following set of :term:`context finding` results:
- A :term:`context` object which is an instance (or subclass) of the
Python class represented by ``.models.Hello``
- A :term:`view name` equalling ``hello.html``.
.. note:: Values prefixed with a period (``.``) for the ``context``
and ``view`` attributes of a ``view`` declaration (such as those
above) mean "relative to the Python package directory in which this
:term:`ZCML` file is stored". So if the above ``view`` declaration
was made inside a ``configure.zcml`` file that lived in the
``hello`` package, you could replace the relative ``.models.Hello``
with the absolute ``hello.models.Hello``; likewise you could
replace the relative ``.views.hello_world`` with the absolute
``hello.views.hello_world``. Either the relative or absolute form
is functionally equivalent. It's often useful to use the relative
form, in case your package's name changes. It's also shorter to
type.
You can also declare a *default view callable* for a :term:`model`
type:
.. code-block:: xml
:linenos:
<view
context=".models.Hello"
view=".views.hello_world"
/>
A *default view callable* simply has no ``name`` attribute. For the
above registration, when a :term:`context` is found that is of the
type ``.models.Hello`` and there is no :term:`view name` associated
with the result of :term:`context finding`, the *default view
callable* will be used. In this case, it's the view at
``.views.hello_world``.
A default view callable can alternately be defined by using the empty
string as its ``name`` attribute:
.. code-block:: xml
:linenos:
<view
context=".models.Hello"
view=".views.hello_world"
name=""
/>
You may also declare that a view callable is good for any context type
by using the special ``*`` character as the value of the ``context``
attribute:
.. code-block:: xml
:linenos:
<view
context="*"
view=".views.hello_world"
name="hello.html"
/>
This indicates that when :mod:`repoze.bfg` identifies that the
:term:`view name` is ``hello.html`` and the context is of any type,
the ``.views.hello_world`` view callable will be invoked.
A ZCML ``view`` declaration's ``view`` attribute can also name a
class. In this case, the rules described in :ref:`class_as_view`
apply for the class which is named.
See :ref:`view_directive` for complete ZCML directive documentation.
.. index::
single: bfg_view decorator
.. _mapping_views_using_a_decorator_section:
View Configuration Using the ``@bfg_view`` Decorator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For better locality of reference, you may use the
:class:`repoze.bfg.view.bfg_view` decorator to associate your view
functions with URLs instead of using :term:`ZCML` or imperative
configuration for the same purpose.
.. warning::
Using this feature tends to slow down application startup
slightly, as more work is performed at application startup to scan
for view declarations. Additionally, if you use decorators, it
means that other people will not be able to override your view
declarations externally using ZCML: this is a common requirement if
you're developing an extensible application (e.g. a framework).
See :ref:`extending_chapter` for more information about building
extensible applications.
Usage of the ``bfg_view`` decorator is a form of :term:`declarative
configuration`, like ZCML, but in decorator form.
:class:`repoze.bfg.view.bfg_view` can be used to associate :term:`view
configuration` information -- as done via the equivalent ZCML -- with
a function that acts as a :mod:`repoze.bfg` view callable. All ZCML
:ref:`view_directive` attributes (save for the ``view`` attribute) are
available in decorator form and mean precisely the same thing.
An example of the :class:`repoze.bfg.view.bfg_view` decorator might
reside in a :mod:`repoze.bfg` application module ``views.py``:
.. ignore-next-block
.. code-block:: python
:linenos:
from models import MyModel
from repoze.bfg.view import bfg_view
from repoze.bfg.chameleon_zpt import render_template_to_response
@bfg_view(name='my_view', request_method='POST', context=MyModel,
permission='read', renderer='templates/my.pt')
def my_view(request):
return {'a':1}
Using this decorator as above replaces the need to add this ZCML to
your application registry:
.. code-block:: xml
:linenos:
<view
context=".models.MyModel"
view=".views.my_view"
name="my_view"
permission="read"
request_method="POST"
renderer="templates/my.pt"
/>
Or replaces the need to add this imperative configuration stanza:
.. ignore-next-block
.. code-block:: python
config.add_view('.views.my_view', name='my_view', request_method='POST',
context=MyModel, permission='read', renderer='templates/my.pt')
All arguments to ``bfg_view`` may be omitted. For example:
.. code-block:: python
:linenos:
from webob import Response
from repoze.bfg.view import bfg_view
@bfg_view()
def my_view(request):
""" My view """
return Response()
Such a registration as the one directly above implies that the view
name will be ``my_view``, registered with a ``context`` argument that
matches any model type, using no permission, registered against
requests with any request method / request type / request param /
route name / containment.
The mere existence of a ``@bfg_view`` decorator doesn't suffice to
perform view configuration. To make :mod:`repoze.bfg` process your
:class:`repoze.bfg.view.bfg_view` declarations, you *must* do one of
the following:
- If you are using :term:`ZCML`, insert the following boilerplate into
your application's ``configure.zcml``:
.. code-block:: xml
<scan package="."/>
- If you are using :term:`imperative configuration`, use the ``scan``
method of a :class:`repoze.bfg.configuration.Configurator`:
.. code-block:: python
# config is assumed to be an instance of the
# repoze.bfg.configuration.Configurator class
config.scan()
Please see :ref:`decorations_and_code_scanning` for detailed
information about what happens when code is scanned for configuration
declarations resulting from use of decorators like
:class:`repoze.bfg.view.bfg_view`.
See :ref:`configuration_module` for additional API arguments to the
:meth:`repoze.bfg.configuration.Configurator.scan` method. For
example, the method allows you to supply a ``package`` argument to
better control exactly *which* code will be scanned. This is the same
value implied by the ``package`` attribute of the ZCML ``<scan>``
directive (see :ref:`scan_directive`).
``@bfg_view`` Placement
+++++++++++++++++++++++
A :class:`repoze.bfg.view.bfg_view` decorator can be placed at various
points in your application.
If your view callable is a function, it may be used as a function
decorator:
.. code-block:: python
:linenos:
from repoze.bfg.view import bfg_view
from webob import Response
@bfg_view(name='edit')
def edit(request):
return Response('edited!')
If your view callable is a class, the decorator can also be used as a
class decorator in Python 2.6 and better (Python 2.5 and below do not
support class decorators). All the arguments to the decorator are the
same when applied against a class as when they are applied against a
function. For example:
.. code-block:: python
:linenos:
from webob import Response
from repoze.bfg.view import bfg_view
@bfg_view()
class MyView(object):
def __init__(self, request):
self.request = request
def __call__(self):
return Response('hello')
You can use the :class:`repoze.bfg.view.bfg_view` decorator as a
simple callable to manually decorate classes in Python 2.5 and below
without the decorator syntactic sugar, if you wish:
.. code-block:: python
:linenos:
from webob import Response
from repoze.bfg.view import bfg_view
class MyView(object):
def __init__(self, request):
self.request = request
def __call__(self):
return Response('hello')
my_view = bfg_view()(MyView)
More than one :class:`repoze.bfg.view.bfg_view` decorator can be
stacked on top of any number of others. Each decorator creates a
separate view registration. For example:
.. code-block:: python
:linenos:
from repoze.bfg.view import bfg_view
from webob import Response
@bfg_view(name='edit')
@bfg_view(name='change')
def edit(request):
return Response('edited!')
This registers the same view under two different names.
.. note:: :class:`repoze.bfg.view.bfg_view` decorator stacking is a
feature new in :mod:`repoze.bfg` 1.1. Previously, these decorators
could not be stacked without the effect of the "upper" decorator
cancelling the effect of the decorator "beneath" it.
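The stacking behavior works because each decorator returns the wrapped function unchanged while recording a registration as a side effect. A minimal sketch of the idea (the ``toy_view_config`` registry below is illustrative, not repoze.bfg's internals):

```python
# Illustrative toy registry, not repoze.bfg's actual machinery.
registrations = []

def toy_view_config(name):
    def decorator(func):
        # record the registration, then return func unchanged so
        # another decorator above this one can do the same
        registrations.append((name, func))
        return func
    return decorator

@toy_view_config(name='edit')
@toy_view_config(name='change')
def edit(request):
    return 'edited!'

# The decorator nearest the function runs first, so 'change' is
# recorded before 'edit'; both names map to the same callable.
```

Because each decorator returns the original function, stacking any number of them leaves the callable itself untouched.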
The decorator can also be used against class methods:
.. code-block:: python
:linenos:
from webob import Response
from repoze.bfg.view import bfg_view
class MyView(object):
def __init__(self, request):
self.request = request
@bfg_view(name='hello')
def amethod(self):
return Response('hello')
When the decorator is used against a class method, a view is
registered for the *class*, so the class constructor must accept an
argument list in one of two forms: either it must accept a single
argument ``request`` or it must accept two arguments, ``context,
request`` as per :ref:`request_and_context_view_definitions`.
The method which is decorated must return a :term:`response` or it
must rely on a :term:`renderer` to generate one.
Using the decorator against a particular method of a class is
equivalent to using the ``attr`` parameter in a decorator attached to
the class itself. For example, the above registration implied by the
decorator being used against the ``amethod`` method could be spelled
equivalently as the below:
.. code-block:: python
:linenos:
from webob import Response
from repoze.bfg.view import bfg_view
@bfg_view(attr='amethod', name='hello')
class MyView(object):
def __init__(self, request):
self.request = request
def amethod(self):
return Response('hello')
.. note:: The ability to use the :class:`repoze.bfg.view.bfg_view`
decorator as a method decorator is new in :mod:`repoze.bfg`
version 1.1. Previously it could only be used as a class or
function decorator.
.. index::
single: add_view
.. _mapping_views_using_imperative_config_section:
View Configuration Using the ``add_view`` Method of a Configurator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :meth:`repoze.bfg.configuration.Configurator.add_view` method
within :ref:`configuration_module` is used to configure a view
imperatively. The arguments to this method are very similar to the
arguments that you provide to the ``@bfg_view`` decorator. For
example:
.. code-block:: python
:linenos:
from webob import Response
def hello_world(request):
return Response('hello!')
# config is assumed to be an instance of the
# repoze.bfg.configuration.Configurator class
config.add_view(hello_world, name='hello.html')
The first argument, ``view``, is required. It must either be a Python
object which is the view itself or a :term:`dotted Python name` to
such an object. All other arguments are optional. See
:meth:`repoze.bfg.configuration.Configurator.add_view` for more
information.
.. index::
single: model interfaces
.. _using_model_interfaces:
Using Model Interfaces In View Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Instead of registering your views with a ``context`` that names a
Python model *class*, you can optionally register a view callable with
a ``context`` which is an :term:`interface`. An interface can be
attached arbitrarily to any model instance. View lookup treats
context interfaces specially, and therefore the identity of a model
can be divorced from that of the class which implements it. As a
result, associating a view with an interface can provide more
flexibility for sharing a single view between two or more different
implementations of a model type. For example, if two model object
instances of different Python class types share the same interface,
you can use the same view against each of them.
In order to make use of interfaces in your application during view
dispatch, you must create an interface and mark up your model classes
or instances with interface declarations that refer to this interface.
To attach an interface to a model *class*, you define the interface
and use the :func:`zope.interface.implements` function to associate
the interface with the class.
.. code-block:: python
:linenos:
from zope.interface import Interface
from zope.interface import implements
class IHello(Interface):
""" A marker interface """
class Hello(object):
implements(IHello)
To attach an interface to a model *instance*, you define the interface
and use the :func:`zope.interface.alsoProvides` function to associate
the interface with the instance. This function mutates the instance
in such a way that the interface is attached to it.
.. code-block:: python
:linenos:
from zope.interface import Interface
from zope.interface import alsoProvides
class IHello(Interface):
""" A marker interface """
class Hello(object):
pass
def make_hello():
hello = Hello()
alsoProvides(hello, IHello)
return hello
Regardless of how you associate an interface with a model instance or
a model class, the resulting ZCML to associate that interface with a
view callable is the same. Assuming the above code that defines an
``IHello`` interface lives in the root of your application, and its
module is named "models.py", the below interface declaration will
associate the ``.views.hello_world`` view with models that implement
(aka provide) this interface.
.. code-block:: xml
:linenos:
<view
context=".models.IHello"
view=".views.hello_world"
name="hello.html"
/>
Any time a model that is determined to be the :term:`context` provides
this interface, and a view named ``hello.html`` is looked up against
it as per the URL, the ``.views.hello_world`` view callable will be
invoked.
Note that views registered against a model class take precedence over
views registered for any interface the model class implements when an
ambiguity arises. If a view is registered for both the class type of
the context and an interface implemented by the context's class, the
view registered for the context's class will "win".
For more information about defining models with interfaces for use
within view configuration, see
:ref:`models_which_implement_interfaces`.
.. index::
single: view security
pair: security; view
.. _view_security_section:
Configuring View Security
~~~~~~~~~~~~~~~~~~~~~~~~~
If an :term:`authorization policy` is active, any :term:`permission`
attached to a :term:`view configuration` found during view lookup will
be consulted to ensure that the currently authenticated user possesses
that permission against the :term:`context` before the view function
is actually called. Here's an example of specifying a permission in a
view configuration declaration in ZCML:
.. code-block:: xml
:linenos:
<view
context=".models.IBlog"
view=".views.add_entry"
name="add.html"
permission="add"
/>
When an authentication policy is enabled, this view will be protected
with the ``add`` permission. The view will *not be called* if the
user does not possess the ``add`` permission relative to the current
:term:`context` and an authorization policy is enabled. Instead the
:term:`forbidden view` result will be returned to the client as per
:ref:`protecting_views`.
.. index::
single: view lookup
.. _view_lookup:
View Lookup and Invocation
--------------------------
:term:`View lookup` is the :mod:`repoze.bfg` subsystem responsible for
finding and invoking a :term:`view callable`.  The view lookup
subsystem is passed a :term:`context`, a :term:`view name`, and the
:term:`request` object. These three bits of information are referred
to within this chapter as a :term:`triad`.
:term:`View configuration` information stored within the
:term:`application registry` is compared against a triad by the view
lookup subsystem in order to find the "best" view callable for the set
of circumstances implied by the triad.
Predicate attributes of view configuration can be thought of as
"narrowers".  In general, the greater the number of predicate attributes
possessed by a view's configuration, the more specific the
circumstances need to be before the registered view callable will be
invoked.
For any given request, a view with five predicates will always be
found and evaluated before a view with two, for example. All
predicates must match for the associated view to be called.
This does not mean however, that :mod:`repoze.bfg` "stops looking"
when it finds a view registration with predicates that don't match.
If one set of view predicates does not match, the "next most specific"
view (if any) is consulted for predicates, and so on, until a
view is found, or no view can be matched up with the request. The
first view with a set of predicates all of which match the request
environment will be invoked.
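The narrowing behavior described above can be sketched with a toy lookup routine; the registration format and predicate callables below are hypothetical stand-ins, not repoze.bfg's actual data structures:

```python
# Toy model of predicate-based view lookup: registrations are tried
# from most to fewest predicates, and the first registration whose
# predicates ALL match the request wins.
def lookup(registrations, request):
    ordered = sorted(registrations,
                     key=lambda reg: len(reg['predicates']),
                     reverse=True)
    for reg in ordered:
        if all(pred(request) for pred in reg['predicates']):
            return reg['view']
    return None  # no match: the framework would render its "not found" view

views = [
    {'view': 'post_view',
     'predicates': [lambda r: r['method'] == 'POST',
                    lambda r: 'form.submitted' in r['params']]},
    {'view': 'default_view', 'predicates': []},
]
```

A POST with ``form.submitted`` reaches ``post_view``; any other request falls through to the predicate-free ``default_view``.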
If no view can be found which has predicates which allow it to be
matched up with the request, :mod:`repoze.bfg` will return an error to
the user's browser, representing a "not found" (404) page. See
:ref:`changing_the_notfound_view` for more information about changing
the default notfound view.
.. index::
single: debugging not found errors
single: not found error (debugging)
.. _debug_notfound_section:
:exc:`NotFound` Errors
~~~~~~~~~~~~~~~~~~~~~~
It's useful to be able to debug :exc:`NotFound` error responses when
they occur unexpectedly due to an application registry
misconfiguration. To debug these errors, use the
``BFG_DEBUG_NOTFOUND`` environment variable or the ``debug_notfound``
configuration file setting. Details of why a view was not found will
be printed to ``stderr``, and the browser representation of the error
will include the same information. See :ref:`environment_chapter` for
more information about how and where to set these values.
.. _wiki2_adding_authorization:
====================
Adding Authorization
====================
Our application currently allows anyone with access to the server to
view, edit, and add pages to our wiki. For purposes of demonstration
we'll change our application to allow only people who possess a
specific username (`editor`) to add and edit wiki pages but we'll
continue allowing anyone with access to the server to view pages.
:mod:`repoze.bfg` provides facilities for *authorization* and
*authentication*. We'll make use of both features to provide security
to our application.
The source code for this tutorial stage can be browsed at
`docs.repoze.org
<http://docs.repoze.org/bfgwiki2-1.3/authorization>`_.
Adding A Root Factory
---------------------
We're going to start to use a custom :term:`root factory` within our
``run.py`` file. The objects generated by the root factory will be
used as the :term:`context` of each request to our application. In
order for :mod:`repoze.bfg` declarative security to work properly, the
context object generated during a request must be decorated with
security declarations; when we begin to use a custom root factory to
generate our contexts, we can begin to make use of the declarative
security features of :mod:`repoze.bfg`.
Let's modify our ``run.py``, passing in a :term:`root factory` to our
:term:`Configurator` constructor. We'll point it at a new class we
create inside our ``models.py`` file. Add the following statements to
your ``models.py`` file:
.. code-block:: python
from repoze.bfg.security import Allow
from repoze.bfg.security import Everyone
class RootFactory(object):
__acl__ = [ (Allow, Everyone, 'view'),
(Allow, 'group:editors', 'edit') ]
def __init__(self, request):
self.__dict__.update(request.matchdict)
The ``RootFactory`` class we've just added will be used by
:mod:`repoze.bfg` to construct a ``context`` object. The context is
attached to the request object passed to our view callables as the
``context`` attribute.
All of our context objects will possess an ``__acl__`` attribute that
allows :data:`repoze.bfg.security.Everyone` (a special principal) to
view all pages, while allowing only a :term:`principal` named
``group:editors`` to edit and add pages. The ``__acl__`` attribute
attached to a context is interpreted specially by :mod:`repoze.bfg` as
an access control list during view callable execution. See
:ref:`assigning_acls` for more information about what an :term:`ACL`
represents.
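As a rough sketch of what that interpretation looks like, an ACL can be thought of as a list of access control entries consulted in order; the ``permits`` function and string constants below are illustrative stand-ins for repoze.bfg's ``ACLAuthorizationPolicy``, not its real code:

```python
# Hypothetical sketch of ACL consultation, not repoze.bfg's implementation.
Allow = 'Allow'
Everyone = 'system.Everyone'

acl = [(Allow, Everyone, 'view'),
       (Allow, 'group:editors', 'edit')]

def permits(acl, principals, permission):
    """True if any ACE grants `permission` to one of `principals`."""
    for action, principal, perm in acl:
        if action == Allow and principal in principals and perm == permission:
            return True
    return False

# Anonymous users can view but not edit; members of group:editors can edit.
assert permits(acl, [Everyone], 'view')
assert not permits(acl, [Everyone], 'edit')
assert permits(acl, [Everyone, 'group:editors'], 'edit')
```
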
.. note:: Although we don't use the functionality here, the ``factory``
used to create route contexts may differ per-route as opposed to
globally. See the ``factory`` attribute in
:ref:`route_zcml_directive` for more info.
We'll pass the ``RootFactory`` we created in the step above in as the
``root_factory`` argument to a :term:`Configurator`. When we're done,
your application's ``run.py`` will look like this.
.. literalinclude:: src/authorization/tutorial/run.py
:linenos:
:language: python
Configuring a ``repoze.bfg`` Authorization Policy
-------------------------------------------------
For any :mod:`repoze.bfg` application to perform authorization, we
need to add a ``security.py`` module and we'll need to change our
``configure.zcml`` file to add an :term:`authentication policy` and an
:term:`authorization policy`.
Changing ``configure.zcml``
~~~~~~~~~~~~~~~~~~~~~~~~~~~
We'll change our ``configure.zcml`` file to enable an
``AuthTktAuthenticationPolicy`` and an ``ACLAuthorizationPolicy`` to
enable declarative security checking. We'll also change
``configure.zcml`` to add a view stanza which points at our ``login``
:term:`view callable`, also known as a :term:`forbidden view`. This
configures our newly created login view to show up when
:mod:`repoze.bfg` detects that a view invocation can not be
authorized. Also, we'll add ``view_permission`` attributes with the
value ``edit`` to the ``edit_page`` and ``add_page`` route
declarations. This indicates that the view callables which these
routes reference cannot be invoked without the authenticated user
possessing the ``edit`` permission with respect to the current
context.
This makes the assertion that only users who possess the effective
``edit`` permission at the time of the request may invoke those two
views. We've granted the ``group:editors`` principal the ``edit``
permission at the root model via its ACL, so only a user who is a
member of the group named ``group:editors`` will be able to invoke the
views associated with the ``add_page`` or ``edit_page`` routes.
When you're done, your ``configure.zcml`` will look like so:
.. literalinclude:: src/authorization/tutorial/configure.zcml
:linenos:
:language: xml
Note that the ``authtktauthenticationpolicy`` tag has two attributes:
``secret`` and ``callback``. ``secret`` is a string representing an
encryption key used by the "authentication ticket" machinery
represented by this policy: it is required. The ``callback`` is a
string, representing a :term:`dotted Python name`, which points at the
``groupfinder`` function in the current directory's ``security.py``
file. We haven't added that module yet, but we're about to.
Adding ``security.py``
~~~~~~~~~~~~~~~~~~~~~~
Add a ``security.py`` module within your package (in the same
directory as "run.py", "views.py", etc) with the following content:
.. literalinclude:: src/authorization/tutorial/security.py
:linenos:
:language: python
The groupfinder defined here is an :term:`authentication policy`
"callback"; it is a callable that accepts a userid and a request. If
the userid exists in the system, the callback will return a sequence
of group identifiers (or an empty sequence if the user isn't a member
of any groups). If the userid *does not* exist in the system, the
callback will return ``None``. In a production system, user and group
data will most often come from a database, but here we use "dummy"
data to represent user and groups sources. Note that the ``editor``
user is a member of the ``group:editors`` group in our dummy group
data (the ``GROUPS`` data structure).
We've given the ``editor`` user membership in the ``group:editors``
group by mapping that user to it in the ``GROUPS`` data structure (``GROUPS =
{'editor':['group:editors']}``). Since the ``groupfinder`` function
consults the ``GROUPS`` data structure, this will mean that, as a
result of the ACL attached to the root returned by the root factory,
and the permission associated with the ``add_page`` and ``edit_page``
views, the ``editor`` user should be able to add and edit pages.
Adding Login and Logout Views
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We'll add a ``login`` view callable which renders a login form and
processes the post from the login form, checking credentials.
We'll also add a ``logout`` view callable to our application and
provide a link to it. This view will clear the credentials of the
logged in user and redirect back to the front page.
We'll add a different file (for presentation convenience) to add login
and logout view callables. Add a file named ``login.py`` to your
application (in the same directory as ``views.py``) with the following
content:
.. literalinclude:: src/authorization/tutorial/login.py
:linenos:
:language: python
Changing Existing Views
~~~~~~~~~~~~~~~~~~~~~~~
Then we need to change each of our ``view_page``, ``edit_page`` and
``add_page`` views in ``views.py`` to pass a "logged in" parameter to
its template. We'll add something like this to each view body:
.. ignore-next-block
.. code-block:: python
:linenos:
from repoze.bfg.security import authenticated_userid
logged_in = authenticated_userid(request)
We'll then change the return value of these views to pass the
resulting ``logged_in`` value to the template, e.g.:
.. ignore-next-block
.. code-block:: python
:linenos:
return dict(page = context,
content = content,
logged_in = logged_in,
edit_url = edit_url)
Adding the ``login.pt`` Template
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Add a ``login.pt`` template to your templates directory. It's
referred to within the login view we just added to ``login.py``.
.. literalinclude:: src/authorization/tutorial/templates/login.pt
:linenos:
:language: xml
Change ``view.pt`` and ``edit.pt``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We'll also need to change our ``edit.pt`` and ``view.pt`` templates to
display a "Logout" link if someone is logged in. This link will
invoke the logout view.
To do so we'll add this to both templates within the ``<div
class="main_content">`` div:
.. code-block:: xml
:linenos:
<span tal:condition="logged_in">
<a href="${request.application_url}/logout">Logout</a>
</span>
Viewing the Application in a Browser
------------------------------------
We can finally examine our application in a browser. The views we'll
try are as follows:
- Visiting ``http://localhost:6543/`` in a browser invokes the
``view_wiki`` view. This always redirects to the ``view_page`` view
of the FrontPage page object. It is executable by any user.
- Visiting ``http://localhost:6543/FrontPage`` in a browser invokes
the ``view_page`` view of the FrontPage page object.
- Visiting ``http://localhost:6543/FrontPage/edit_page`` in a browser
invokes the edit view for the FrontPage object. It is executable by
only the ``editor`` user. If a different user (or the anonymous
user) invokes it, a login form will be displayed. Supplying the
credentials with the username ``editor``, password ``editor`` will
display the edit page form.
- Visiting ``http://localhost:6543/add_page/SomePageName`` in a
browser invokes the add view for a page. It is executable by only
the ``editor`` user. If a different user (or the anonymous user)
invokes it, a login form will be displayed. Supplying the
credentials with the username ``editor``, password ``editor`` will
display the edit page form.
Seeing Our Changes To ``views.py`` and our Templates
----------------------------------------------------
Our ``views.py`` module will look something like this when we're done:
.. literalinclude:: src/authorization/tutorial/views.py
:linenos:
:language: python
Our ``edit.pt`` template will look something like this when we're done:
.. literalinclude:: src/authorization/tutorial/templates/edit.pt
:linenos:
:language: xml
Our ``view.pt`` template will look something like this when we're done:
.. literalinclude:: src/authorization/tutorial/templates/view.pt
:linenos:
:language: xml
Revisiting the Application
---------------------------
When we revisit the application in a browser, and log in (as a result
of hitting an edit or add page and submitting the login form with the
``editor`` credentials), we'll see a Logout link in the upper right
hand corner. When we click it, we're logged out, and redirected back
to the front page.
==============
Defining Views
==============
A :term:`view callable` in a :term:`url dispatch` -based
:mod:`repoze.bfg` application is typically a simple Python function
that accepts a single parameter named :term:`request`. A view
callable is assumed to return a :term:`response` object.
.. note:: A :mod:`repoze.bfg` view can also be defined as callable
which accepts *two* arguments: a :term:`context` and a
:term:`request`. You'll see this two-argument pattern used in
other :mod:`repoze.bfg` tutorials and applications. Either calling
convention will work in any :mod:`repoze.bfg` application; the
calling conventions can be used interchangeably as necessary. In
:term:`url dispatch` based applications, however, the context
object is rarely used in the view body itself, so within this
tutorial we define views as callables that accept only a request to
avoid the visual "noise". If you do need the ``context`` within a
view function that only takes the request as a single argument, you
can obtain it via ``request.context``.
The request passed to every view that is called as the result of a
route match has an attribute named ``matchdict`` that contains the
elements placed into the URL by the ``path`` of a ``route`` statement.
For instance, if a route statement in ``configure.zcml`` had the path
``:one/:two``, and the URL at ``http://example.com/foo/bar`` was
invoked, matching this path, the matchdict dictionary attached to the
request passed to the view would have a ``one`` key with the value
``foo`` and a ``two`` key with the value ``bar``.
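The translation from a path pattern to a matchdict can be sketched with a regular expression; ``compile_path`` below is a hypothetical helper, not repoze.bfg's actual route compiler:

```python
# Illustrative sketch: turn each ":name" segment of a route path into
# a named regex group, then match a URL path against it.
import re

def compile_path(path):
    pattern = re.sub(r':(\w+)', r'(?P<\1>[^/]+)', path)
    return re.compile('^%s$' % pattern)

route = compile_path(':one/:two')
match = route.match('foo/bar')
matchdict = match.groupdict()
# matchdict == {'one': 'foo', 'two': 'bar'}
```
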
The source code for this tutorial stage can be browsed at
`docs.repoze.org <http://docs.repoze.org/bfgwiki2-1.3/views>`_.
Declaring Dependencies in Our ``setup.py`` File
===============================================
The view code in our application will depend on a package which is not
a dependency of the original "tutorial" application. The original
"tutorial" application was generated by the ``paster create`` command;
it doesn't know about our custom application requirements. We need to
add a dependency on the ``docutils`` package to our ``tutorial``
package's ``setup.py`` file by assigning this dependency to the
``install_requires`` parameter in the ``setup`` function.
Our resulting ``setup.py`` should look like so:
.. literalinclude:: src/views/setup.py
:linenos:
:language: python
.. note:: After these new dependencies are added, you will need to
rerun ``python setup.py develop`` inside the root of the
``tutorial`` package to obtain and register the newly added
dependency package.
Adding View Functions
=====================
We'll get rid of our ``my_view`` view function in our ``views.py``
file. It's only an example and isn't relevant to our application.
Then we're going to add four :term:`view callable` functions to our
``views.py`` module. One view callable (named ``view_wiki``) will
display the wiki itself (it will answer on the root URL), another
named ``view_page`` will display an individual page, another named
``add_page`` will allow a page to be added, and a final view callable
named ``edit_page`` will allow a page to be edited. We'll describe
each one briefly and show the resulting ``views.py`` file afterward.
.. note::
There is nothing special about the filename ``views.py``. A project
may have many view callables throughout its codebase in
arbitrarily-named files. Files implementing view callables often
have ``view`` in their filenames (or may live in a Python subpackage
of your application package named ``views``), but this is only by
convention.
The ``view_wiki`` view function
-------------------------------
The ``view_wiki`` function will respond as the :term:`default view` of
a ``Wiki`` model object. It always redirects to a URL which
represents the path to our "FrontPage". It returns an instance of the
:class:`webob.exc.HTTPFound` class (instances of which implement the
WebOb :term:`response` interface).  It will use the
:func:`repoze.bfg.url.route_url` API to construct a URL to the
``FrontPage`` page (e.g. ``http://localhost:6543/FrontPage``), and
will use it as the "location" of the HTTPFound response, forming an
HTTP redirect.
The ``view_page`` view function
-------------------------------
The ``view_page`` function will respond as the :term:`default view` of
a ``Page`` object. The ``view_page`` function renders the
:term:`ReStructuredText` body of a page (stored as the ``data``
attribute of a Page object) as HTML. Then it substitutes an HTML
anchor for each *WikiWord* reference in the rendered HTML using a
compiled regular expression.
The curried function named ``check`` is used as the first argument to
``wikiwords.sub``, indicating that it should be called to provide a
value for each WikiWord match found in the content. If the wiki
already contains a page with the matched WikiWord name, the ``check``
function generates a view link to be used as the substitution value
and returns it.  If the wiki does not already contain a page with
the matched WikiWord name, the function generates an "add" link as the
substitution value and returns it.
As a result, the ``content`` variable is now a fully formed bit of
HTML containing various view and add links for WikiWords based on the
content of our current page object.
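The substitution mechanism described above boils down to ``re.sub`` with a callable replacement. A self-contained sketch, with an illustrative ``pages`` dict standing in for wiki storage (the tutorial's real code consults the database instead):

```python
import re

# a WikiWord: a capitalized word containing at least one more capital
wikiwords = re.compile(r'\b([A-Z]\w+[A-Z]+\w*)\b')

pages = {'FrontPage': '...'}  # illustrative stand-in for wiki storage

def check(match):
    # called once per WikiWord found by re.sub
    word = match.group(1)
    if word in pages:
        return '<a href="/%s">%s</a>' % (word, word)       # view link
    return '<a href="/add_page/%s">%s</a>' % (word, word)  # add link

content = wikiwords.sub(check, 'See FrontPage and AnotherPage.')
# existing pages become view links; unknown WikiWords become "add" links
```
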
We then generate an edit URL (because it's easier to do here than in
the template), and we return a dictionary with a number of arguments.
The fact that this view returns a dictionary (as opposed to a
:term:`response` object) is a cue to :mod:`repoze.bfg` that it should
try to use a :term:`renderer` associated with the view configuration
to render a template. In our case, the template which will be
rendered will be the ``templates/view.pt`` template, as per the
configuration put into effect in ``configure.zcml``.
The ``add_page`` view function
------------------------------
The ``add_page`` function will be invoked when a user clicks on a
*WikiWord* which isn't yet represented as a page in the system. The
``check`` function within the ``view_page`` view generates URLs to
this view. It also acts as a handler for the form that is generated
when we want to add a page object. The ``matchdict`` attribute of the
request passed to the ``add_page`` view will have the values we need
to construct URLs and find model objects.
The matchdict will have a ``pagename`` key that matches the name of
the page we'd like to add. If our add view is invoked via,
e.g. ``http://localhost:6543/add_page/SomeName``, the ``pagename``
value in the matchdict will be ``SomeName``.
If the view execution is *not* a result of a form submission (if the
expression ``'form.submitted' in request.params`` is ``False``), the
view callable renders a template. To do so, it generates a "save url"
which the template uses as the form post URL during rendering.  We're
lazy here, so we're trying to use the same template
(``templates/edit.pt``) for the add view as well as the page edit
view, so we create a dummy Page object in order to satisfy the edit
form's desire to have *some* page object exposed as ``page``, and
:mod:`repoze.bfg` will render the template associated with this view
to a response.
If the view execution *is* a result of a form submission (if the
expression ``'form.submitted' in request.params`` is ``True``), we
scrape the page body from the form data, create a Page object using
the name in the matchdict's ``pagename`` key, and save it into the
database using ``session.add``.  We
then redirect back to the ``view_page`` view (the :term:`default view`
for a Page) for the newly created page.
The ``edit_page`` view function
-------------------------------
The ``edit_page`` function will be invoked when a user clicks the
"Edit this Page" button on the view form. It renders an edit form but
it also acts as the handler for the form it renders. The
``matchdict`` attribute of the request passed to the ``edit_page`` view
will have a ``pagename`` key matching the name of the page the user
wants to edit.
If the view execution is *not* a result of a form submission (if the
expression ``'form.submitted' in request.params`` is ``False``), the
view simply renders the edit form, passing the request, the page
object, and a save_url which will be used as the action of the
generated form.
If the view execution *is* a result of a form submission (if the
expression ``'form.submitted' in request.params`` is ``True``), the
view grabs the ``body`` element of the request parameters and sets it
as the ``data`` attribute of the page object.  It then redirects to the
default view of the wiki page, which will always be the ``view_page``
view.
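The two-branch shape that both ``add_page`` and ``edit_page`` follow can be sketched with plain dictionaries standing in for the WebOb request and the database (all names below are illustrative, not the tutorial's exact code):

```python
# Minimal sketch of the "render form vs. handle submission" pattern.
pages = {'FrontPage': {'data': 'initial text'}}  # stand-in for the database

def edit_page(request):
    pagename = request['matchdict']['pagename']
    page = pages[pagename]
    if 'form.submitted' in request['params']:
        # submission branch: store the new body, then redirect to the
        # page's default view
        page['data'] = request['params']['body']
        return ('redirect', '/%s' % pagename)
    # display branch: hand the template the page and the form's post target
    return {'page': page, 'save_url': '/%s/edit_page' % pagename}

form_view = edit_page({'matchdict': {'pagename': 'FrontPage'},
                       'params': {}})
# form_view['save_url'] == '/FrontPage/edit_page'
```
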
Viewing the Result of Our Edits to ``views.py``
===============================================
The result of all of our edits to ``views.py`` will leave it looking
like this:
.. literalinclude:: src/views/tutorial/views.py
:linenos:
:language: python
Adding Templates
================
The views we've added all reference a :term:`template`. Each template
is a :term:`Chameleon` template. The default templating system in
:mod:`repoze.bfg` is a variant of :term:`ZPT` provided by
:term:`Chameleon`. These templates will live in the ``templates``
directory of our tutorial package.
The ``view.pt`` Template
------------------------
The ``view.pt`` template is used for viewing a single wiki page. It
is used by the ``view_page`` view function. It should have a div that
is "structure replaced" with the ``content`` value provided by the
view. It should also have a link on the rendered page that points at
the "edit" URL (the URL which invokes the ``edit_page`` view for the
page being viewed).
Once we're done with the ``view.pt`` template, it will look a lot like
the below:
.. literalinclude:: src/views/tutorial/templates/view.pt
:linenos:
:language: xml
.. note:: The names available for our use in a template are always
those that are present in the dictionary returned by the view
callable. But our templates make use of a ``request`` object that
none of our tutorial views return in their dictionary. This value
appears as if "by magic". However, ``request`` is one of several
names that are available "by default" in a template when a template
renderer is used. See :ref:`chameleon_template_renderers` for more
information about other names that are available by default in a
template when a Chameleon template is used as a renderer.
The ``edit.pt`` Template
------------------------
The ``edit.pt`` template is used for adding and editing a wiki page.
It is used by the ``add_page`` and ``edit_page`` view functions. It
should display a page containing a form that POSTs back to the
"save_url" argument supplied by the view. The form should have a
"body" textarea field (the page data), and a submit button that has
the name "form.submitted". The textarea in the form should be filled
with any existing page data when it is rendered.
Once we're done with the ``edit.pt`` template, it will look a lot like
the below:
.. literalinclude:: src/views/tutorial/templates/edit.pt
:linenos:
:language: xml
Static Resources
----------------
Our templates name a single static resource named ``style.css``. We
need to create this and place it in a file named ``style.css`` within
our package's ``templates/static`` directory. This file is a little
too long to replicate within the body of this guide, however it is
available `online
<http://docs.repoze.org/bfgwiki2-1.2/views/tutorial/templates/static/style.css>`_.
This CSS file will be accessed via
e.g. ``http://localhost:6543/static/style.css`` by virtue of the
``<static>`` directive we've defined in the ``configure.zcml`` file.
Any number and type of static resources can be placed in this
directory (or subdirectories) and are just referred to by URL within
templates.
Mapping Views to URLs in ``configure.zcml``
===========================================
The ``configure.zcml`` file contains ``route`` declarations (and a
lone ``view`` declaration) which serve to map URLs via :term:`url
dispatch` to view functions. First, we'll get rid of the existing
``route`` created by the template using the name ``home``. It's only
an example and isn't relevant to our application.
We then need to add four ``route`` declarations to ``configure.zcml``.
Note that the *ordering* of these declarations is very important.
``route`` declarations are matched in the order they're found in the
``configure.zcml`` file.
#. Add a declaration which maps the empty path (signifying the root
URL) to the view named ``view_wiki`` in our ``views.py`` file with
the name ``view_wiki``. This is the :term:`default view` for the
wiki.
#. Add a declaration which maps the path pattern ``:pagename`` to the
view named ``view_page`` in our ``views.py`` file with the view
name ``view_page``. This is the regular view for a page.
#. Add a declaration which maps the path pattern
``:pagename/edit_page`` to the view named ``edit_page`` in our
``views.py`` file with the name ``edit_page``. This is the edit view
for a page.
#. Add a declaration which maps the path pattern
``add_page/:pagename`` to the view named ``add_page`` in our
``views.py`` file with the name ``add_page``. This is the add view
for a new page.
As a result of our edits, the ``configure.zcml`` file should look
something like so:
.. literalinclude:: src/views/tutorial/configure.zcml
:linenos:
:language: xml
The WSGI Pipeline
-----------------
Within ``tutorial.ini``, note the existence of a ``[pipeline:main]``
section which specifies our WSGI pipeline. This "pipeline" will be
served up as our WSGI application. As far as the WSGI server is
concerned the pipeline *is* our application. Simpler configurations
don't use a pipeline: instead they expose a single WSGI application as
"main". Our setup is more complicated, so we use a pipeline.
``egg:repoze.tm2#tm`` is at the "top" of the pipeline. This is a
piece of middleware which commits a transaction if no exception
occurs; if an exception occurs, the transaction will be aborted. This
is the piece of software that allows us to forget about needing to do
manual commits and aborts of our database connection in view code.
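A pipeline section of the general shape being described looks something like the following. The application name is an illustrative assumption; the full ``tutorial.ini`` appears in the next section.

```ini
; Illustrative shape of the WSGI pipeline described above; the app
; name "tutorial" is an assumption, not the authoritative file.
[pipeline:main]
pipeline =
    egg:repoze.tm2#tm
    tutorial

[app:tutorial]
use = egg:tutorial#app
```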
Adding an Element to the Pipeline
---------------------------------
Let's add a piece of middleware to the WSGI pipeline. We'll add
``egg:Paste#evalerror`` middleware which displays debuggable errors in
the browser while you're developing (this is *not* recommended for
deployment as it is a security risk). Let's insert evalerror into the
pipeline right above ``egg:repoze.tm2#tm``, making our resulting
``tutorial.ini`` file look like so:
.. literalinclude:: src/views/tutorial.ini
:linenos:
:language: ini
Viewing the Application in a Browser
====================================
Once we've set up the WSGI pipeline properly, we can finally examine
our application in a browser. The views we'll try are as follows:
- Visiting ``http://localhost:6543`` in a browser invokes the
``view_wiki`` view. This always redirects to the ``view_page`` view
of the FrontPage page object.
- Visiting ``http://localhost:6543/FrontPage`` in a browser invokes
the ``view_page`` view of the front page page object.
- Visiting ``http://localhost:6543/FrontPage/edit_page`` in a browser
invokes the edit view for the front page object.
- Visiting ``http://localhost:6543/add_page/SomePageName`` in a
browser invokes the add view for a page.
Try generating an error within the body of a view by adding code to
the top of it that generates an exception (e.g. ``raise
Exception('Forced Exception')``). Then visit the error-raising view
in a browser. You should see an interactive exception handler in the
browser which allows you to examine values in a post-mortem mode.
Adding Tests
============
Since we've added a good bit of imperative code here, it's useful to
define tests for the views we've created. We'll change our tests.py
module to look like this:
.. literalinclude:: src/views/tutorial/tests.py
:linenos:
:language: python
We can then run the tests using something like:
.. code-block:: text
:linenos:
$ python setup.py test -q
The expected output is something like:
.. code-block:: text
:linenos:
running test
running egg_info
writing requirements to tutorial.egg-info/requires.txt
writing tutorial.egg-info/PKG-INFO
writing top-level names to tutorial.egg-info/top_level.txt
writing dependency_links to tutorial.egg-info/dependency_links.txt
writing entry points to tutorial.egg-info/entry_points.txt
unrecognized .svn/entries format in
reading manifest file 'tutorial.egg-info/SOURCES.txt'
writing manifest file 'tutorial.egg-info/SOURCES.txt'
running build_ext
......
----------------------------------------------------------------------
Ran 6 tests in 0.181s
OK
====================
Adding Authorization
====================
Our application currently allows anyone with access to the server to
view, edit, and add pages to our wiki. For purposes of demonstration
we'll change our application to allow people who are members of a
*group* named ``group:editors`` to add and edit wiki pages but we'll
continue allowing anyone with access to the server to view pages.
:mod:`repoze.bfg` provides facilities for *authorization* and
*authentication*. We'll make use of both features to provide security
to our application.
The source code for this tutorial stage can be browsed at
`docs.repoze.org <http://docs.repoze.org/bfgwiki-1.3/authorization>`_.
Configuring a ``repoze.bfg`` Authentication Policy
--------------------------------------------------
For any :mod:`repoze.bfg` application to perform authorization, we
need to add a ``security.py`` module and we'll need to change our
:term:`application registry` to add an :term:`authentication policy`
and an :term:`authorization policy`.
Changing ``configure.zcml``
~~~~~~~~~~~~~~~~~~~~~~~~~~~
We'll change our ``configure.zcml`` file to enable an
``AuthTktAuthenticationPolicy`` and an ``ACLAuthorizationPolicy`` to
enable declarative security checking. We'll also add a new view
stanza, which specifies a :term:`forbidden view`. This configures our
login view to show up when :mod:`repoze.bfg` detects that a view
invocation cannot be authorized. When you're done, your
``configure.zcml`` will look like so:
.. literalinclude:: src/authorization/tutorial/configure.zcml
:linenos:
:language: xml
Note that the ``authtktauthenticationpolicy`` tag has two attributes:
``secret`` and ``callback``. ``secret`` is a string representing an
encryption key used by the "authentication ticket" machinery
represented by this policy: it is required. The ``callback`` is a
string, representing a :term:`dotted Python name`, which points at the
``groupfinder`` function in the current directory's ``security.py``
file. We haven't added that module yet, but we're about to.
Adding ``security.py``
~~~~~~~~~~~~~~~~~~~~~~
Add a ``security.py`` module within your package (in the same
directory as ``run.py``, ``views.py``, etc) with the following
content:
.. literalinclude:: src/authorization/tutorial/security.py
:linenos:
:language: python
The ``groupfinder`` function defined here is an authorization policy
"callback"; it is a callable that accepts a userid and a request. If
the userid exists in the set of users known by the system, the
callback will return a sequence of group identifiers (or an empty
sequence if the user isn't a member of any groups). If the userid
*does not* exist in the system, the callback will return ``None``. In
a production system this data will most often come from a database,
but here we use "dummy" data to represent user and groups
sources. Note that the ``editor`` user is a member of the
``group:editors`` group in our dummy group data (the ``GROUPS`` data
structure).
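The callback contract just described is easy to model in plain Python. ``GROUPS`` mirrors the tutorial's dummy data; ``USERS`` is an assumed stand-in for "the set of users known by the system".

```python
# Plain-Python model of the groupfinder callback contract described
# above.  GROUPS mirrors the tutorial's dummy data; USERS is an
# assumed stand-in for the system's known users.
USERS = {'editor': 'editor', 'viewer': 'viewer'}
GROUPS = {'editor': ['group:editors']}

def groupfinder(userid, request):
    if userid in USERS:
        # known user: return their groups (possibly an empty sequence)
        return GROUPS.get(userid, [])
    return None  # unknown user
```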
Adding Login and Logout Views
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We'll add a ``login`` view which renders a login form and processes
the post from the login form, checking credentials.
We'll also add a ``logout`` view to our application and provide a link
to it. This view will clear the credentials of the logged in user and
redirect back to the front page.
We'll add a different file (for presentation convenience) to add login
and logout views. Add a file named ``login.py`` to your application
(in the same directory as ``views.py``) with the following content:
.. literalinclude:: src/authorization/tutorial/login.py
:linenos:
:language: python
Changing Existing Views
~~~~~~~~~~~~~~~~~~~~~~~
Then we need to change each of our ``view_page``, ``edit_page`` and
``add_page`` views in ``views.py`` to pass a "logged in" parameter
into its template. We'll add something like this to each view body:
.. ignore-next-block
.. code-block:: python
:linenos:
from repoze.bfg.security import authenticated_userid
logged_in = authenticated_userid(request)
We'll then change the return value of each view that has an associated
``renderer`` to pass the resulting ``logged_in`` value to the
template. For example:
.. ignore-next-block
.. code-block:: python
:linenos:
return dict(page = context,
content = content,
logged_in = logged_in,
edit_url = edit_url)
Adding the ``login.pt`` Template
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Add a ``login.pt`` template to your templates directory. It's
referred to within the login view we just added to ``login.py``.
.. literalinclude:: src/authorization/tutorial/templates/login.pt
:linenos:
:language: xml
Change ``view.pt`` and ``edit.pt``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We'll also need to change our ``edit.pt`` and ``view.pt`` templates to
display a "Logout" link if someone is logged in. This link will
invoke the logout view.
To do so we'll add this to both templates within the ``<div
class="main_content">`` div:
.. code-block:: xml
:linenos:
<span tal:condition="logged_in">
<a href="${request.application_url}/logout">Logout</a>
</span>
Giving Our Root Model Object an ACL
-----------------------------------
We need to give our root model object an :term:`ACL`. This ACL will
be sufficient to provide enough information to the :mod:`repoze.bfg`
security machinery to challenge a user who doesn't have appropriate
credentials when he attempts to invoke the ``add_page`` or
``edit_page`` views.
We need to perform some imports at module scope in our ``models.py``
file:
.. code-block:: python
:linenos:
from repoze.bfg.security import Allow
from repoze.bfg.security import Everyone
Our root model is a ``Wiki`` object. We'll add the following line at
class scope to our ``Wiki`` class:
.. code-block:: python
:linenos:
__acl__ = [ (Allow, Everyone, 'view'),
(Allow, 'group:editors', 'edit') ]
It's only happenstance that we're assigning this ACL at class scope.
An ACL can be attached to an object *instance* too; this is how "row
level security" can be achieved in :mod:`repoze.bfg` applications. We
actually only need *one* ACL for the entire system, however, because
our security requirements are simple, so this feature is not
demonstrated.
Our resulting ``models.py`` file will now look like so:
.. literalinclude:: src/authorization/tutorial/models.py
:linenos:
:language: python
Adding ``permission`` Declarations to our ``bfg_view`` Decorators
-----------------------------------------------------------------
To protect each of our views with a particular permission, we need to
pass a ``permission`` argument to each of our
:class:`repoze.bfg.view.bfg_view` decorators. To do so, within
``views.py``:
- We add ``permission='view'`` to the decorator attached to the
``view_wiki`` view function. This makes the assertion that only
users who possess the effective ``view`` permission at the time of
the request may invoke this view. We've granted
:data:`repoze.bfg.security.Everyone` the view permission at the root
model via its ACL, so everyone will be able to invoke the
``view_wiki`` view.
- We add ``permission='view'`` to the decorator attached to the
``view_page`` view function. This makes the assertion that only
users who possess the effective ``view`` permission at the time of
the request may invoke this view. We've granted
:data:`repoze.bfg.security.Everyone` the view permission at the root
model via its ACL, so everyone will be able to invoke the
``view_page`` view.
- We add ``permission='edit'`` to the decorator attached to the
``add_page`` view function. This makes the assertion that only
users who possess the effective ``edit`` permission at the time of
the request may invoke this view. We've granted the
``group:editors`` principal the ``edit`` permission at the root
  model via its ACL, so only a user who is a member of the group
  named ``group:editors`` will be able to invoke the ``add_page`` view.
  We've likewise given the ``editor`` user membership to this group
  via the ``security.py`` file by mapping him to the
``group:editors`` group in the ``GROUPS`` data structure (``GROUPS =
{'editor':['group:editors']}``); the ``groupfinder`` function
consults the ``GROUPS`` data structure. This means that the
``editor`` user can add pages.
- We add ``permission='edit'`` to the ``bfg_view`` decorator attached
to the ``edit_page`` view function. This makes the assertion that
only users who possess the effective ``edit`` permission at the time
of the request may invoke this view. We've granted the
``group:editors`` principal the ``edit`` permission at the root
  model via its ACL, so only a user who is a member of the group
  named ``group:editors`` will be able to invoke the ``edit_page`` view.
  We've likewise given the ``editor`` user membership to this group
  via the ``security.py`` file by mapping him to the
``group:editors`` group in the ``GROUPS`` data structure (``GROUPS =
{'editor':['group:editors']}``); the ``groupfinder`` function
consults the ``GROUPS`` data structure. This means that the
``editor`` user can edit pages.
Viewing the Application in a Browser
------------------------------------
We can finally examine our application in a browser. The views we'll
try are as follows:
- Visiting ``http://localhost:6543/`` in a browser invokes the
``view_wiki`` view. This always redirects to the ``view_page`` view
of the FrontPage page object. It is executable by any user.
- Visiting ``http://localhost:6543/FrontPage/`` in a browser invokes
the ``view_page`` view of the front page page object. This is
because it's the :term:`default view` (a view without a ``name``)
for ``Page`` objects. It is executable by any user.
- Visiting ``http://localhost:6543/FrontPage/edit_page`` in a browser
invokes the edit view for the front page object. It is executable
by only the ``editor`` user. If a different user (or the anonymous
user) invokes it, a login form will be displayed. Supplying the
credentials with the username ``editor``, password ``editor`` will
show the edit page form being displayed.
- Visiting ``http://localhost:6543/add_page/SomePageName`` in a
browser invokes the add view for a page. It is executable by only
the ``editor`` user. If a different user (or the anonymous user)
invokes it, a login form will be displayed. Supplying the
credentials with the username ``editor``, password ``editor`` will
show the edit page form being displayed.
Seeing Our Changes To ``views.py`` and our Templates
----------------------------------------------------
Our ``views.py`` module will look something like this when we're done:
.. literalinclude:: src/authorization/tutorial/views.py
:linenos:
:language: python
Our ``edit.pt`` template will look something like this when we're done:
.. literalinclude:: src/authorization/tutorial/templates/edit.pt
:linenos:
:language: xml
Our ``view.pt`` template will look something like this when we're done:
.. literalinclude:: src/authorization/tutorial/templates/view.pt
:linenos:
:language: xml
Revisiting the Application
---------------------------
When we revisit the application in a browser, and log in (as a result
of hitting an edit or add page and submitting the login form with the
``editor`` credentials), we'll see a Logout link in the upper right
hand corner. When we click it, we're logged out, and redirected back
to the front page.
==============
Defining Views
==============
A :term:`view callable` in a traversal-based :mod:`repoze.bfg`
application is typically a simple Python function that accepts two
parameters: :term:`context`, and :term:`request`. A view callable is
assumed to return a :term:`response` object.
.. note:: A :mod:`repoze.bfg` view can also be defined as a callable
   which accepts *one* argument: a :term:`request`. You'll see this
   one-argument pattern used in other :mod:`repoze.bfg` tutorials and
   applications. Either calling convention will work in any
   :mod:`repoze.bfg` application; the calling conventions can be used
   interchangeably as necessary. In :term:`traversal` based
   applications, such as this tutorial, the context is used frequently
   within the body of a view method, so it makes sense to use the
   two-argument syntax in this application. In :term:`url dispatch`
   based applications, however, the context object is rarely used in
   the view body itself, so within code that uses URL-dispatch-only,
   it's common to define views as callables that accept only a request
   to avoid the visual "noise".
We're going to define several :term:`view callable` functions then
wire them into :mod:`repoze.bfg` using some :term:`view
configuration` via :term:`ZCML`.
The source code for this tutorial stage can be browsed at
`docs.repoze.org <http://docs.repoze.org/bfgwiki-1.3/views>`_.
Adding View Functions
=====================
We're going to add four :term:`view callable` functions to our
``views.py`` module. One view (named ``view_wiki``) will display the
wiki itself (it will answer on the root URL), another named
``view_page`` will display an individual page, another named
``add_page`` will allow a page to be added, and a final view named
``edit_page`` will allow a page to be edited.
.. note::
There is nothing automagically special about the filename
``views.py``. A project may have many views throughout its codebase
in arbitrarily-named files. Files implementing views often have
``view`` in their filenames (or may live in a Python subpackage of
your application package named ``views``), but this is only by
convention.
The ``view_wiki`` view function
-------------------------------
The ``view_wiki`` function will be configured to respond as the
default view of a ``Wiki`` model object. It always redirects to the
``Page`` object named "FrontPage". It returns an instance of the
:class:`webob.exc.HTTPFound` class (instances of which implement the
WebOb :term:`response` interface), built with the help of the
:func:`repoze.bfg.url.model_url` API.
:func:`repoze.bfg.url.model_url` constructs a URL to the ``FrontPage``
page (e.g. ``http://localhost:6543/FrontPage``), and uses it as the
"location" of the HTTPFound response, forming an HTTP redirect.
The ``view_page`` view function
-------------------------------
The ``view_page`` function will be configured to respond as the
default view of a ``Page`` object. The ``view_page`` function renders
the :term:`ReStructuredText` body of a page (stored as the ``data``
attribute of the context passed to the view; the context will be a
Page object) as HTML. Then it substitutes an HTML anchor for each
*WikiWord* reference in the rendered HTML using a compiled regular
expression.
The curried function named ``check`` is used as the first argument to
``wikiwords.sub``, indicating that it should be called to provide a
value for each WikiWord match found in the content. If the wiki (our
page's ``__parent__``) already contains a page with the matched
WikiWord name, the ``check`` function generates a view link to be used
as the substitution value and returns it. If the wiki does not
already contain a page with the matched WikiWord name, the
function generates an "add" link as the substitution value and returns
it.
As a result, the ``content`` variable is now a fully formed bit of
HTML containing various view and add links for WikiWords based on the
content of our current page object.
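The substitution mechanism just described can be sketched with plain ``re`` code. Here ``wiki`` is a simple dict standing in for the page's ``__parent__``, and the URLs are illustrative.

```python
import re

# Sketch of the WikiWord substitution described above.  ``wiki`` is a
# plain dict standing in for the page's __parent__; URLs are
# illustrative.
wikiwords = re.compile(r"\b([A-Z]\w+[A-Z]+\w+)")
wiki = {'FrontPage': '...'}

def check(match):
    word = match.group(1)
    if word in wiki:
        # existing page: substitute a "view" link
        return '<a href="/%s">%s</a>' % (word, word)
    # missing page: substitute an "add" link
    return '<a href="/add_page/%s">%s</a>' % (word, word)

content = wikiwords.sub(check, 'See FrontPage and AnotherPage.')
```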
We then generate an edit URL (because it's easier to do here than in
the template), and we wrap up a number of arguments in a dictionary
and return it.
The arguments we wrap into a dictionary include ``page``, ``content``,
and ``edit_url``. As a result, the *template* associated with this
view callable will be able to use these names to perform various
rendering tasks. The template associated with this view callable will
be a template which lives in ``templates/view.pt``, which we'll
associate with this view via the :term:`view configuration` which
lives in the ``configure.zcml`` file.
Note the contrast between this view callable and the ``view_wiki``
view callable. In the ``view_wiki`` view callable, we return a
:term:`response` object. In the ``view_page`` view callable, we
return a *dictionary*. It is *always* fine to return a
:term:`response` object from a :mod:`repoze.bfg` view. Returning a
dictionary is allowed only when there is a :term:`renderer` associated
with the view callable in the view configuration.
The ``add_page`` view function
------------------------------
The ``add_page`` function will be invoked when a user clicks on a
WikiWord which isn't yet represented as a page in the system. The
``check`` function within the ``view_page`` view generates URLs to
this view. It also acts as a handler for the form that is generated
when we want to add a page object. The ``context`` of the
``add_page`` view is always a Wiki object (*not* a Page object).
The request :term:`subpath` in :mod:`repoze.bfg` is the sequence of
names that are found *after* the view name in the URL segments given
in the ``PATH_INFO`` of the WSGI request as the result of
:term:`traversal`. If our add view is invoked via,
e.g. ``http://localhost:6543/add_page/SomeName``, the :term:`subpath`
will be a tuple: ``('SomeName',)``.
The add view takes the zeroth element of the subpath (the wiki page
name), and aliases it to the name attribute in order to know the name
of the page we're trying to add.
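The subpath mechanics can be illustrated with plain string handling. This is a simplification: real :term:`traversal` consumes context segments before the view name, but for a root-level view the arithmetic is the same.

```python
# Simplified illustration of the subpath described above: for a URL
# like /add_page/SomeName, traversal consumes the view name
# ('add_page') and the remaining segments form the subpath tuple.
path_info = '/add_page/SomeName'
segments = [s for s in path_info.split('/') if s]
view_name, subpath = segments[0], tuple(segments[1:])
name = subpath[0]   # the wiki page name the view will add
```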
If the view rendering is *not* a result of a form submission (if the
expression ``'form.submitted' in request.params`` is ``False``), the
view renders a template. To do so, it generates a "save url" which
the template uses as the form post URL during rendering. We're lazy
here, so we're trying to use the same template (``templates/edit.pt``)
for the add view as well as the page edit view. To do so, we create a
dummy Page object in order to satisfy the edit form's desire to have
*some* page object exposed as ``page``, and we'll render the template
to a response.
If the view rendering *is* a result of a form submission (if the
expression ``'form.submitted' in request.params`` is ``True``), we
scrape the page body from the form data, create a Page object using
the name in the subpath and the page body, and save it into "our
context" (the wiki) using the ``__setitem__`` method of the
context. We then redirect back to the ``view_page`` view (the default
view for a page) for the newly created page.
The ``edit_page`` view function
-------------------------------
The ``edit_page`` function will be invoked when a user clicks the
"Edit this Page" button on the view form. It renders an edit form but
it also acts as the handler for the form it renders. The ``context``
of the ``edit_page`` view will *always* be a Page object (never a Wiki
object).
If the view execution is *not* a result of a form submission (if the
expression ``'form.submitted' in request.params`` is ``False``), the
view simply renders the edit form, passing the request, the page
object, and a save_url which will be used as the action of the
generated form.
If the view execution *is* a result of a form submission (if the
expression ``'form.submitted' in request.params`` is ``True``), the
view grabs the ``body`` element of the request parameter and sets it
as the ``data`` attribute of the page context. It then redirects to
the default view of the context (the page), which will always be the
``view_page`` view.
Viewing the Result of Our Edits to ``views.py``
===============================================
The result of all of our edits to ``views.py`` will leave it looking
like this:
.. literalinclude:: src/views/tutorial/views.py
:linenos:
:language: python
Adding Templates
================
Most of the view callables we've added expect to be rendered via a
:term:`template`. Each template is a :term:`Chameleon` template. The
default templating system in :mod:`repoze.bfg` is a variant of
:term:`ZPT` provided by Chameleon. These templates will live in the
``templates`` directory of our tutorial package.
The ``view.pt`` Template
------------------------
The ``view.pt`` template is used for viewing a single wiki page. It
is used by the ``view_page`` view function. It should have a div that
is "structure replaced" with the ``content`` value provided by the
view. It should also have a link on the rendered page that points at
the "edit" URL (the URL which invokes the ``edit_page`` view for the
page being viewed).
Once we're done with the ``view.pt`` template, it will look a lot like
the below:
.. literalinclude:: src/views/tutorial/templates/view.pt
:linenos:
:language: xml
.. note:: The names available for our use in a template are always
those that are present in the dictionary returned by the view
callable. But our templates make use of a ``request`` object that
none of our tutorial views return in their dictionary. This value
appears as if "by magic". However, ``request`` is one of several
names that are available "by default" in a template when a template
renderer is used. See :ref:`chameleon_template_renderers` for more
information about other names that are available by default in a
template when a Chameleon template is used as a renderer.
The ``edit.pt`` Template
------------------------
The ``edit.pt`` template is used for adding and editing a wiki page.
It is used by the ``add_page`` and ``edit_page`` view functions. It
should display a page containing a form that POSTs back to the
"save_url" argument supplied by the view. The form should have a
"body" textarea field (the page data), and a submit button that has
the name "form.submitted". The textarea in the form should be filled
with any existing page data when it is rendered.
Once we're done with the ``edit.pt`` template, it will look a lot like
the below:
.. literalinclude:: src/views/tutorial/templates/edit.pt
:linenos:
:language: xml
Static Resources
----------------
Our templates name a single static resource named ``style.css``. We
need to create this and place it in a file named ``style.css`` within
our package's ``templates/static`` directory. This file is a little
too long to replicate within the body of this guide, however it is
available `online
<http://docs.repoze.org/bfgwiki-1.2/views/tutorial/templates/static/style.css>`_.
This CSS file will be accessed via
e.g. ``http://localhost:6543/static/style.css`` by virtue of the
``static`` directive we've defined in the ``configure.zcml`` file.
Any number and type of static resources can be placed in this
directory (or subdirectories) and are just referred to by URL within
templates.
Testing the Views
=================
We'll modify our ``tests.py`` file, adding tests for each view
function we added above. As a result, we'll *delete* the
``ViewTests`` test in the file, and add four other test classes:
``ViewWikiTests``, ``ViewPageTests``, ``AddPageTests``, and
``EditPageTests``. These test the ``view_wiki``, ``view_page``,
``add_page``, and ``edit_page`` views respectively.
Once we're done with the ``tests.py`` module, it will look a lot like
the below:
.. literalinclude:: src/views/tutorial/tests.py
:linenos:
:language: python
Running the Tests
=================
We can run these tests by using ``setup.py test`` in the same way we
did in :ref:`running_tests`. Assuming our shell's current working
directory is the "tutorial" distribution directory:
On UNIX:
.. code-block:: text
$ ../bin/python setup.py test -q
On Windows:
.. code-block:: text
c:\bigfntut\tutorial> ..\Scripts\python setup.py test -q
The expected result looks something like:
.. code-block:: text
.........
----------------------------------------------------------------------
Ran 9 tests in 0.203s
OK
Mapping Views to URLs in ``configure.zcml``
===========================================
The ``configure.zcml`` file contains ``view`` declarations which serve
to map URLs (via :term:`traversal`) to view functions. This is also
known as :term:`view configuration`. You'll need to add four ``view``
declarations to ``configure.zcml``.
#. Add a declaration which maps the "Wiki" class in our ``models.py``
file to the view named ``view_wiki`` in our ``views.py`` file with
no view name. This is the default view for a Wiki. It does not
use a ``renderer`` because the ``view_wiki`` view callable always
returns a *response* object rather than a dictionary.
#. Add a declaration which maps the "Wiki" class in our ``models.py``
file to the view named ``add_page`` in our ``views.py`` file with
the view name ``add_page``. Associate this view with the
``templates/edit.pt`` template file via the ``renderer`` attribute.
This view will use the :term:`Chameleon` ZPT renderer configured
with the ``templates/edit.pt`` template to render non-*response*
return values from the ``add_page`` view. This is the add view for
a new Page.
#. Add a declaration which maps the "Page" class in our ``models.py``
file to the view named ``view_page`` in our ``views.py`` file with
no view name. Associate this view with the ``templates/view.pt``
template file via the ``renderer`` attribute. This view will use
the :term:`Chameleon` ZPT renderer configured with the
``templates/view.pt`` template to render non-*response* return
values from the ``view_page`` view. This is the default view for a
Page.
#. Add a declaration which maps the "Page" class in our ``models.py``
file to the view named ``edit_page`` in our ``views.py`` file with
the view name ``edit_page``. Associate this view with the
``templates/edit.pt`` template file via the ``renderer`` attribute.
This view will use the :term:`Chameleon` ZPT renderer configured
with the ``templates/edit.pt`` template to render non-*response*
return values from the ``edit_page`` view. This is the edit view
for a page.
As a result of our edits, the ``configure.zcml`` file should look
something like so:
.. literalinclude:: src/views/tutorial/configure.zcml
:linenos:
:language: xml
Examining ``tutorial.ini``
==========================
Let's take a look at our ``tutorial.ini`` file. The contents of the
file are as follows:
.. literalinclude:: src/models/tutorial.ini
:linenos:
:language: ini
The WSGI Pipeline
-----------------
Within ``tutorial.ini``, note the existence of a ``[pipeline:main]``
section which specifies our WSGI pipeline. This "pipeline" will be
served up as our WSGI application. As far as the WSGI server is
concerned the pipeline *is* our application. Simpler configurations
don't use a pipeline: instead they expose a single WSGI application as
"main". Our setup is more complicated, so we use a pipeline.
``egg:repoze.zodbconn#closer`` is at the "top" of the pipeline. This
is a piece of middleware which closes the ZODB connection opened by
the PersistentApplicationFinder at the end of the request.
``egg:repoze.tm#tm`` is the second piece of middleware in the
pipeline. This commits a transaction near the end of the request
unless there's an exception raised.
Adding an Element to the Pipeline
---------------------------------
Let's add a piece of middleware to the WSGI pipeline:
``egg:Paste#evalerror`` middleware which displays debuggable errors in
the browser while you're developing (not recommended for deployment).
Let's insert evalerror into the pipeline right below
"egg:repoze.zodbconn#closer", making our resulting ``tutorial.ini``
file look like so:
.. literalinclude:: src/views/tutorial.ini
:linenos:
:language: ini
Viewing the Application in a Browser
====================================
Once we've set up the WSGI pipeline properly, we can finally examine
our application in a browser. The views we'll try are as follows:
- Visiting ``http://localhost:6543/`` in a browser invokes the
``view_wiki`` view. This always redirects to the ``view_page`` view
of the FrontPage page object.
- Visiting ``http://localhost:6543/FrontPage/`` in a browser invokes
the ``view_page`` view of the front page page object. This is
because it's the *default view* (a view without a ``name``) for Page
objects.
- Visiting ``http://localhost:6543/FrontPage/edit_page`` in a browser
invokes the edit view for the front page object.
- Visiting ``http://localhost:6543/add_page/SomePageName`` in a
browser invokes the add view for a page.
- To generate an error, visit ``http://localhost:6543/add_page`` which
will generate an ``IndexError`` for the expression
``request.subpath[0]``. You'll see an interactive traceback
facility provided by evalerror.
==========================================================
Using View Decorators Rather than ZCML ``view`` directives
==========================================================
So far we've been using :term:`ZCML` to map model types to views.
It's often easier to use the ``bfg_view`` view decorator to do this
mapping. Using view decorators provides better locality of reference
for the mapping, because you can see which model types and view names
the view will serve right next to the view function itself. In this
mode, however, you lose the ability for some views to be overridden
"from the outside" (by someone using your application as a framework,
as explained in the :ref:`extending_chapter`). Since this application
is not meant to be a framework, it makes sense for us to switch over
to using view decorators.
Adding View Decorators
======================
We're going to import the :class:`repoze.bfg.view.bfg_view` callable.
This callable can be used as a function, class, or method decorator.
We'll use it to decorate our ``view_wiki``, ``view_page``,
``add_page`` and ``edit_page`` view functions.
The :class:`repoze.bfg.view.bfg_view` callable accepts a number of
arguments:
``context``
The model type which the :term:`context` of our view will be, in our
case a class.
``name``
The name of the view.
``renderer``
The renderer (usually a *template name*) that will be used when the
view returns a non-:term:`response` object.
There are other arguments which this callable accepts, but these are
the ones we're going to use.
The ``view_wiki`` view function
-------------------------------
The decorator above the ``view_wiki`` function will be:
.. ignore-next-block
.. code-block:: python
:linenos:
@bfg_view(context=Wiki)
This indicates that the view is for the Wiki class and has the *empty*
view_name (indicating the :term:`default view` for the Wiki class).
After injecting this decorator, we can now *remove* the following from
our ``configure.zcml`` file:
.. code-block:: xml
:linenos:
<view
context=".models.Wiki"
view=".views.view_wiki"
/>
Our new decorator takes its place.
The ``view_page`` view function
-------------------------------
The decorator above the ``view_page`` function will be:
.. ignore-next-block
.. code-block:: python
:linenos:
@bfg_view(context=Page, renderer='templates/view.pt')
This indicates that the view is for the Page class and has the *empty*
view_name (indicating the :term:`default view` for the Page class).
After injecting this decorator, we can now *remove* the following from
our ``configure.zcml`` file:
.. code-block:: xml
:linenos:
<view
context=".models.Page"
view=".views.view_page"
renderer="templates/view.pt"
/>
Our new decorator takes its place.
The ``add_page`` view function
------------------------------
The decorator above the ``add_page`` function will be:
.. ignore-next-block
.. code-block:: python
:linenos:
@bfg_view(context=Wiki, name='add_page', renderer='templates/edit.pt')
This indicates that the view is for the Wiki class and has the
``add_page`` view_name. After injecting this decorator, we can now
*remove* the following from our ``configure.zcml`` file:
.. code-block:: xml
:linenos:
<view
context=".models.Wiki"
name="add_page"
view=".views.add_page"
renderer="templates/edit.pt"
/>
Our new decorator takes its place.
The ``edit_page`` view function
-------------------------------
The decorator above the ``edit_page`` function will be:
.. ignore-next-block
.. code-block:: python
:linenos:
@bfg_view(context=Page, name='edit_page', renderer='templates/edit.pt')
This indicates that the view is for the Page class and has the
``edit_page`` view_name. After injecting this decorator, we can now
*remove* the following from our ``configure.zcml`` file:
.. code-block:: xml
:linenos:
<view
context=".models.Page"
name="edit_page"
view=".views.edit_page"
renderer="templates/edit.pt"
/>
Our new decorator takes its place.
Adding a Scan Directive
=======================
In order for our decorators to be recognized, we must add a bit of
boilerplate to our ``configure.zcml`` file which tells
:mod:`repoze.bfg` to kick off a :term:`scan` at startup time. Add the
following tag anywhere beneath the ``<include
package="repoze.bfg.includes">`` tag but before the ending
``</configure>`` tag within ``configure.zcml``:
.. code-block:: xml
:linenos:
<scan package="."/>
Viewing the Result of Our Edits to ``views.py``
===============================================
The result of all of our edits to ``views.py`` will leave it looking
like this:
.. literalinclude:: src/viewdecorators/tutorial/views.py
:linenos:
:language: python
Viewing the Results of Our Edits to ``configure.zcml``
======================================================
The result of all of our edits to ``configure.zcml`` will leave it
looking like this:
.. literalinclude:: src/viewdecorators/tutorial/configure.zcml
:linenos:
:language: xml
Running the Tests
=================
We can run these tests by using ``setup.py test`` in the same way we
did in :ref:`running_tests`. Assuming our shell's current working
directory is the "tutorial" distribution directory:
On UNIX:
.. code-block:: text
$ ../bin/python setup.py test -q
On Windows:
.. code-block:: text
c:\bigfntut\tutorial> ..\Scripts\python setup.py test -q
Hopefully nothing will have changed. The expected result looks
something like:
.. code-block:: text
.........
----------------------------------------------------------------------
Ran 9 tests in 0.203s
OK
Viewing the Application in a Browser
====================================
Once we've set up the WSGI pipeline properly, we can finally examine
our application in a browser. We'll make sure that we didn't break
any views by trying each of them.
- Visiting ``http://localhost:6543/`` in a
browser invokes the ``view_wiki`` view. This always redirects to
the ``view_page`` view of the FrontPage page object.
- Visiting ``http://localhost:6543/FrontPage/`` in a browser invokes
the ``view_page`` view of the front page page object. This is
because it's the *default view* (a view without a ``name``) for Page
objects.
- Visiting ``http://localhost:6543/FrontPage/edit_page`` in a browser
invokes the edit view for the front page object.
- Visiting ``http://localhost:6543/add_page/SomePageName`` in a
browser invokes the add view for a page.
from zope import interface
from repoze.bfg.interfaces import IView
class ISkinObject(interface.Interface):
name = interface.Attribute(
"""Component name.""")
def refresh():
"""Refresh object from file on disk (reload)."""
class ISkinObjectFactory(interface.Interface):
def component_name(relative_path):
"""Returns a component name. This method must be a
class- or static method."""
class ISkinTemplate(interface.Interface):
"""Skin templates are page templates which reside in a skin
directory. These are registered as named components adapting on
(context, request)."""
name = interface.Attribute(
"""This is the basename of the template filename relative to
the skin directory. Note that the OS-level path separator
character is replaced with a forward slash ('/').""")
path = interface.Attribute(
"""Full path to the template. This attribute is available to
allow applications to get a reference to files that relate to the
template file, e.g. metadata or additional resources. An example
could be a title and a description, or an icon that gives a visual
representation for the template.""")
def __call__(context, request):
"""Returns a bound skin template instance."""
def get_api(name):
"""Look up skin api by name."""
def get_macro(name):
"""Look up skin macro by name."""
class IBoundSkinTemplate(ISkinTemplate):
"""Bound to a context and request."""
def __call__(**kwargs):
"""Renders template to a response object."""
def render(**kwargs):
"""Renders template to a unicode string, passing optional
keyword-arguments."""
class ISkinTemplateView(IView):
"""When skin templates are set to provide one or more interfaces,
a component providing this interface will be registered."""
template = interface.Attribute(
"""The skin template object.""")
def __call__():
"""Renders template to a response object."""
def render(**kwargs):
"""Renders template to a unicode string, passing optional
keyword-arguments."""
class ISkinApi(interface.Interface):
"""A helper component available to skin templates. Skin APIs
should be registered as named components adapting on (context,
request, template)."""
class ISkinApiMethod(interface.Interface):
"""Skin API methods are an alternative to a generic API and get
the chance of having arguments passed."""
import xmlrpclib
import webob
def xmlrpc_marshal(data):
""" Marshal a Python data structure into an XML document suitable
for use as an XML-RPC response and return the document. If
``data`` is an ``xmlrpclib.Fault`` instance, it will be marshalled
into a suitable XML-RPC fault response."""
if isinstance(data, xmlrpclib.Fault):
return xmlrpclib.dumps(data)
else:
return xmlrpclib.dumps((data,), methodresponse=True)
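A rough sketch of what ``xmlrpc_marshal`` does, using the Python 3 module
name ``xmlrpc.client`` (the original targets Python 2's ``xmlrpclib``;
behavior is otherwise the same):

```python
from xmlrpc.client import Fault, dumps, loads

def marshal(data):
    # Faults become a fault response; anything else is wrapped in a
    # one-tuple and marked as a method response.
    if isinstance(data, Fault):
        return dumps(data)
    return dumps((data,), methodresponse=True)

xml = marshal({'say': 'Hello!'})
params, method = loads(xml)
# a method *response* document carries no method name
assert method is None
assert params == ({'say': 'Hello!'},)
```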
def xmlrpc_response(data):
""" Marshal a Python data structure into a webob ``Response``
object with a body that is an XML document suitable for use as an
XML-RPC response with a content-type of ``text/xml`` and return
the response."""
xml = xmlrpc_marshal(data)
response = webob.Response(xml)
response.content_type = 'text/xml'
response.content_length = len(xml)
return response
def parse_xmlrpc_request(request):
""" Deserialize the body of a request from an XML-RPC request
document into a set of params and return a two-tuple. The first
element in the tuple is the method params as a sequence, the
second element in the tuple is the method name."""
if request.content_length > (1 << 23):
# protect from DOS (> 8MB body)
raise ValueError('Body too large (%s bytes)' % request.content_length)
params, method = xmlrpclib.loads(request.body)
return params, method
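``parse_xmlrpc_request`` is essentially ``xmlrpclib.loads`` plus a size
guard. The client/server round trip looks like this (Python 3 names; the
``say`` method name is invented for illustration):

```python
from xmlrpc.client import dumps, loads

# A client serializes a call; the server recovers params and method name.
body = dumps(('hello',), methodname='say')
params, method = loads(body)
assert method == 'say'
assert params == ('hello',)

# The guard in parse_xmlrpc_request rejects bodies over 1 << 23 bytes,
# i.e. 8 MB:
assert (1 << 23) == 8 * 1024 * 1024
```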
def xmlrpc_view(wrapped):
""" This decorator turns functions which accept params and return Python
structures into functions suitable for use as bfg views that speak XML-RPC.
The decorated function must accept a ``context`` argument and zero or
more positional arguments (conventionally named ``*params``).
E.g.::
from repoze.bfg.xmlrpc import xmlrpc_view
@xmlrpc_view
def say(context, what):
if what == 'hello':
return {'say':'Hello!'}
else:
return {'say':'Goodbye!'}
Equates to::
from repoze.bfg.xmlrpc import parse_xmlrpc_request
from repoze.bfg.xmlrpc import xmlrpc_response
def say_view(context, request):
params, method = parse_xmlrpc_request(request)
return say(context, *params)
def say(context, what):
if what == 'hello':
return {'say':'Hello!'}
else:
return {'say':'Goodbye!'}
Note that if you use :class:`~repoze.bfg.view.bfg_view`, you must
decorate your view function in the following order for it to be
recognized by the convention machinery as a view::
@bfg_view(name='say')
@xmlrpc_view
def say(context, what):
if what == 'hello':
return {'say':'Hello!'}
else:
return {'say':'Goodbye!'}
In other words, do *not* place :func:`~repoze.bfg.xmlrpc.xmlrpc_view`
above :class:`~repoze.bfg.view.bfg_view`; it won't work.
"""
def _curried(context, request):
params, method = parse_xmlrpc_request(request)
value = wrapped(context, *params)
return xmlrpc_response(value)
_curried.__name__ = wrapped.__name__
_curried.__grok_module__ = wrapped.__module__ # r.bfg.convention support
return _curried
class XMLRPCView:
"""A base class for a view that serves multiple methods by XML-RPC.
Subclass and add your methods as described in the documentation.
"""
def __init__(self, context, request):
self.context = context
self.request = request
def __call__(self):
"""
This method de-serializes the XML-RPC request and
dispatches the resulting method call to the correct
method on the :class:`~repoze.bfg.xmlrpc.XMLRPCView`
subclass instance.
.. warning::
Do not override this method in any subclass if you
want XML-RPC to continue to work!
"""
params, method = parse_xmlrpc_request(self.request)
return xmlrpc_response(getattr(self,method)(*params))
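The dispatch performed by ``__call__`` can be sketched without ``webob``
(Python 3 names; ``Adder`` and its ``add`` method are invented for
illustration, not part of the library):

```python
from xmlrpc.client import dumps, loads

class Dispatcher:
    # Minimal stand-in for XMLRPCView: deserialize the body, look up
    # the method by name on self, serialize the result as a response.
    def dispatch(self, body):
        params, method = loads(body)
        result = getattr(self, method)(*params)
        return dumps((result,), methodresponse=True)

class Adder(Dispatcher):
    def add(self, a, b):
        return a + b

body = dumps((2, 3), methodname='add')
response = Adder().dispatch(body)
assert loads(response) == ((5,), None)
```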
import hmac
import os
import random
import StringIO
import time
import threading
try:
from hashlib import sha1 as sha
except ImportError: # Python < 2.5
from sha import new as sha
from paste.request import get_cookies
_RANDS = []
_CURRENT_PERIOD = None
_LOCK = threading.Lock()
class BrowserIdMiddleware(object):
def __init__(self, app,
secret_key,
cookie_name,
cookie_path='/',
cookie_domain=None,
cookie_lifetime=None,
cookie_secure=False,
vary=(),
):
"""
Construct an object suitable for use as WSGI middleware that
implements a browser id manager.
``app``
A WSGI application object. Required.
``secret_key``
A string that will be used as a component of the browser id
tamper key. Required.
``cookie_name``
The cookie name used for the browser id cookie. Defaults
to ``repoze.browserid``.
``cookie_path``
The cookie path used for the browser id cookie. Defaults
to ``/``.
``cookie_domain``
The domain of the browser id key cookie. Defaults to ``None``,
meaning do not include a domain in the cookie.
``cookie_lifetime``
An integer number of seconds used to compute the expires time
for the browser id cookie. Defaults to ``None``, meaning
include no Expires time in the cookie.
``cookie_secure``
Boolean. If ``True``, set the Secure flag of the browser
id cookie.
``vary``
A sequence of string header names on which to vary.
"""
self.app = app
self.secret_key = secret_key
self.cookie_name = cookie_name
self.cookie_path = cookie_path
self.cookie_domain = cookie_domain
self.cookie_lifetime = cookie_lifetime
self.cookie_secure = cookie_secure
self.vary = vary
self.randint = random.randint # tests override
self.time = time.time # tests override
try:
self.pid = os.getpid()
except AttributeError: # pragma: no cover
# no getpid in Jython
self.pid = 1
def __call__(self, environ, start_response):
"""
If the remote browser has a cookie that claims to contain a
browser id value, and that value hasn't been tampered with,
set the browser id portion of the cookie value as
'repoze.browserid' in the environ and call the downstream
application.
Otherwise, create one and set that as 'repoze.browserid' in
the environ, then call the downstream application. On egress,
set a Set-Cookie header with the value+hmac so we can retrieve
it next time around.
We use the secret key and the values in self.vary to compose
the 'tamper key' when creating a browser id, which is used as
the hmac key. This allows a configurer to vary the tamper key
on, e.g. 'REMOTE_ADDR' if he believes that the same browser id
should always be sent from the same IP address, or
'HTTP_USER_AGENT' if he believes it should always come from
the same user agent, or some arbitrary combination thereof
made out of environ keys.
"""
cookies = get_cookies(environ)
cookie = cookies.get(self.cookie_name)
if cookie is not None:
# this browser returned a cookie value that claims to be
# a browser id
browser_id = self.from_cookieval(environ, cookie.value)
if browser_id is not None:
# cookie hasn't been tampered with
environ['repoze.browserid'] = browser_id
return self.app(environ, start_response)
# no browser id cookie or cookie value was tampered with
now = self.time()
browser_id = self.new(now)
environ['repoze.browserid'] = browser_id
wrapper = StartResponseWrapper(start_response)
app_iter = self.app(environ, wrapper.wrap_start_response)
cookie_value = self.to_cookieval(environ, browser_id)
set_cookie = '%s=%s; ' % (self.cookie_name, cookie_value)
if self.cookie_path:
set_cookie += 'Path=%s; ' % self.cookie_path
if self.cookie_domain:
set_cookie += 'Domain=%s; ' % self.cookie_domain
if self.cookie_lifetime:
expires = time.gmtime(now + self.cookie_lifetime)
expires = time.strftime('%a %d-%b-%Y %H:%M:%S GMT', expires)
set_cookie += 'Expires=%s; ' % expires
if self.cookie_secure:
set_cookie += 'Secure;'
wrapper.finish_response([('Set-Cookie', set_cookie)])
return app_iter
def from_cookieval(self, environ, cookie_value):
try:
browser_id, provided_hmac = cookie_value.split('!')
except ValueError:
return None
key = self._get_tamper_key(environ)
computed_hmac = hmac.new(key, browser_id).hexdigest()
if computed_hmac != provided_hmac:
return None
return browser_id
def to_cookieval(self, environ, browser_id):
key = self._get_tamper_key(environ)
h = hmac.new(key, browser_id).hexdigest()
val = '%s!%s' % (browser_id, h)
return val
def _get_tamper_key(self, environ):
key = self.secret_key
for name in self.vary:
key = key + environ.get(name, '')
return key
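The sign/verify round trip implemented by ``to_cookieval`` and
``from_cookieval`` boils down to the following Python 3 sketch. Note two
assumptions: Python 3's ``hmac.new`` requires bytes and an explicit digest
(SHA-1 is chosen here, whereas the Python 2 code above relies on the old
MD5 default), and ``hmac.compare_digest`` replaces the plain ``!=`` check.

```python
import hmac
from hashlib import sha1

def sign(key, browser_id):
    mac = hmac.new(key.encode(), browser_id.encode(), sha1).hexdigest()
    return '%s!%s' % (browser_id, mac)

def verify(key, cookie_value):
    try:
        browser_id, provided = cookie_value.split('!')
    except ValueError:
        return None
    computed = hmac.new(key.encode(), browser_id.encode(), sha1).hexdigest()
    # constant-time comparison avoids leaking timing information
    if not hmac.compare_digest(computed, provided):
        return None
    return browser_id

cookie = sign('s3cret', 'abc123')
assert verify('s3cret', cookie) == 'abc123'
assert verify('wrong-key', cookie) is None       # tampered key -> reject
assert verify('s3cret', 'tampered!' + 'f' * 40) is None
```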
def new(self, when):
""" Returns opaque 40-character browser id
An example is: e193a01ecf8d30ad0affefd332ce934e32ffce72
"""
rand = self._get_rand_for(when)
source = '%s%s%s' % (rand, when, self.pid)
browser_id = sha(source).hexdigest()
return browser_id
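``new`` hashes a random number, the timestamp, and the process id into an
opaque 40-character hex id. A standalone sketch of the same recipe (SHA-1,
as the module imports at the top):

```python
import os
import random
import time
from hashlib import sha1

def new_browser_id(when=None):
    # same recipe as BrowserIdMiddleware.new: rand + time + pid
    when = time.time() if when is None else when
    rand = random.randint(0, 99999999)
    source = '%s%s%s' % (rand, when, os.getpid())
    return sha1(source.encode()).hexdigest()

bid = new_browser_id()
assert len(bid) == 40
assert all(c in '0123456789abcdef' for c in bid)
```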
def _get_rand_for(self, when):
"""
There is a good chance that two simultaneous callers will
obtain the same random number when the system first starts, as
all Python threads/interpreters will start with the same
random seed (the time) when they come up on platforms that
don't have an entropy generator.
We'd really like to be sure that two callers never get the
same browser id, so this is a problem. But since our browser
id has a time component and a random component, the random
component only needs to be unique within the resolution of the
time component to ensure browser id uniqueness.
We keep around a set of recently-generated random numbers at a
global scope for the past second, only returning numbers that
aren't in this set. The lowest-known-resolution time.time
timer is on Windows, which changes 18.2 times per second, so
using a period of one second should be conservative enough.
"""
period = 1
this_period = int(when - (when % period))
_LOCK.acquire()
try:
while 1:
rand = self.randint(0, 99999999)
global _CURRENT_PERIOD
if this_period != _CURRENT_PERIOD:
_CURRENT_PERIOD = this_period
_RANDS[:] = []
if rand not in _RANDS:
_RANDS.append(rand)
return rand
finally:
_LOCK.release()
class StartResponseWrapper(object):
def __init__(self, start_response):
self.start_response = start_response
self.status = None
self.headers = []
self.exc_info = None
self.buffer = StringIO.StringIO()
def wrap_start_response(self, status, headers, exc_info=None):
self.headers = headers
self.status = status
self.exc_info = exc_info
return self.buffer.write
def finish_response(self, extra_headers):
if not extra_headers:
extra_headers = []
headers = self.headers + extra_headers
write = self.start_response(self.status, headers, self.exc_info)
if write:
self.buffer.seek(0)
value = self.buffer.getvalue()
if value:
write(value)
if hasattr(write, 'close'):
write.close()
def asbool(val):
if isinstance(val, int):
return bool(val)
val = str(val)
if val.lower() in ('y', 'yes', 'true', 't'):
return True
return False
def make_middleware(app, global_conf, secret_key,
cookie_name='repoze.browserid',
cookie_path='/', cookie_domain=None,
cookie_lifetime=None, cookie_secure=False,
vary=None):
"""
Return an object suitable for use as WSGI middleware that
implements a browser id manager. Usually used as a PasteDeploy
filter_app_factory callback.
``app``
A WSGI application object. Required.
``global_conf``
A dictionary representing global configuration (PasteDeploy).
Required.
``secret_key``
A string that will be used as a component of the browser id
tamper key. Required.
``cookie_name``
The cookie name used for the browser id cookie. Defaults
to ``repoze.browserid``.
``cookie_path``
The cookie path used for the browser id cookie. Defaults
to ``/``.
``cookie_domain``
The domain of the browser id key cookie. Defaults to ``None``,
meaning do not include a domain in the cookie.
``cookie_lifetime``
An integer number of seconds used to compute the expires time
for the browser id cookie. Defaults to ``None``, meaning
include no Expires time in the cookie.
``cookie_secure``
Boolean. If ``true``, set the Secure flag of the browser id cookie.
``vary``
A space-separated string including the header names on which to vary.
"""
if cookie_lifetime:
cookie_lifetime = int(cookie_lifetime)
cookie_secure = asbool(cookie_secure)
if vary:
vary = tuple([ x.strip() for x in vary.split() ])
else:
vary = ()
return BrowserIdMiddleware(app, secret_key, cookie_name, cookie_path,
cookie_domain, cookie_lifetime, cookie_secure,
vary)
import random
from persistent import Persistent
from BTrees.IOBTree import IOBTree
from BTrees.OIBTree import OIBTree
from BTrees.OOBTree import OOBTree
import BTrees
_marker = ()
class DocumentMap(Persistent):
""" A two-way map between addresses (e.g. location paths) and document ids.
The map is a persistent object meant to live in a ZODB storage.
Additionally, the map is capable of mapping 'metadata' to docids.
"""
_v_nextid = None
family = BTrees.family32
_randrange = random.randrange
docid_to_metadata = None # latch for b/c
def __init__(self):
self.docid_to_address = IOBTree()
self.address_to_docid = OIBTree()
self.docid_to_metadata = IOBTree()
def docid_for_address(self, address):
""" Retrieve a document id for a given address.
``address`` is a string or other hashable object which represents
a token known by the application.
Return the integer document id corresponding to ``address``.
If ``address`` doesn't exist in the document map, return None.
"""
return self.address_to_docid.get(address)
def address_for_docid(self, docid):
""" Retrieve an address for a given document id.
``docid`` is an integer document id.
Return the address corresponding to ``docid``.
If ``docid`` doesn't exist in the document map, return None.
"""
return self.docid_to_address.get(docid)
def add(self, address, docid=_marker):
""" Add a new document to the document map.
``address`` is a string or other hashable object which represents
a token known by the application.
``docid``, if passed, must be an int. In this case, remove
any previous address stored for it before mapping it to the
new address. Passing an explicit ``docid`` also removes any
metadata associated with that docid.
If ``docid`` is not passed, generate a new docid.
Return the integer document id mapped to ``address``.
"""
if docid is _marker:
docid = self.new_docid()
self.remove_docid(docid)
self.remove_address(address)
self.docid_to_address[docid] = address
self.address_to_docid[address] = docid
return docid
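The two-way invariant that ``add`` maintains — exactly one entry per
docid/address pair in each direction — can be sketched with plain dicts
(no ZODB/BTrees; ``SimpleDocumentMap`` is an illustration, not the real
class):

```python
import itertools

class SimpleDocumentMap:
    def __init__(self):
        self.docid_to_address = {}
        self.address_to_docid = {}
        self._ids = itertools.count(1)

    def add(self, address, docid=None):
        if docid is None:
            docid = next(self._ids)
        # drop any stale entries for either key before remapping,
        # mirroring remove_docid/remove_address in the real class
        old_addr = self.docid_to_address.pop(docid, None)
        if old_addr is not None:
            self.address_to_docid.pop(old_addr, None)
        old_docid = self.address_to_docid.pop(address, None)
        if old_docid is not None:
            self.docid_to_address.pop(old_docid, None)
        self.docid_to_address[docid] = address
        self.address_to_docid[address] = docid
        return docid

m = SimpleDocumentMap()
docid = m.add('/front_page')
assert m.address_to_docid['/front_page'] == docid
m.add('/front_page', docid=42)   # remap: the old docid entry is removed
assert m.docid_to_address == {42: '/front_page'}
assert m.address_to_docid == {'/front_page': 42}
```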
def remove_docid(self, docid):
""" Remove a document from the document map for the given document ID.
``docid`` is an integer document id.
Remove any corresponding metadata for ``docid`` as well.
Return a True if ``docid`` existed in the map, else return False.
"""
# It should be an invariant that if one entry exists in
# docid_to_address for a docid/address pair, exactly one
# corresponding entry exists in address_to_docid for the same
# docid/address pair. However, versions of this code before
# r.catalog 0.7.3 had a bug which, if this method was called
# multiple times, each time with the same address but a
# different docid, the ``docid_to_address`` mapping could
# contain multiple entries for the same address each with a
# different docid, causing this invariant to be violated. The
# symptom: in systems that used r.catalog 0.7.2 and lower,
# there might be more entries in docid_to_address than there
# are in address_to_docid. The conditional fuzziness in the
# code directly below is a runtime kindness to systems in that
# state. Technically, the administrator of a system in such a
# state should normalize the two data structures by running a
# script after upgrading to 0.7.3. If we made the admin do
# this, some of the code fuzziness below could go away,
# replaced with something simpler. But there's no sense in
# breaking systems at runtime through being a hardass about
# consistency if an unsuspecting upgrader has not yet run the
# data fixer script. The "fix the data" mantra rings a
# little hollow when you weren't the one who broke the data in
# the first place ;-)
self._check_metadata()
address = self.docid_to_address.get(docid, _marker)
if address is _marker:
return False
old_docid = self.address_to_docid.get(address, _marker)
if (old_docid is not _marker) and (old_docid != docid):
self.remove_docid(old_docid)
if docid in self.docid_to_address:
del self.docid_to_address[docid]
if address in self.address_to_docid:
del self.address_to_docid[address]
if docid in self.docid_to_metadata:
del self.docid_to_metadata[docid]
return True
def remove_address(self, address):
""" Remove a document from the document map using an address.
``address`` is a string or other hashable object which represents
a token known by the application.
Remove any corresponding metadata for ``address`` as well.
Return a True if ``address`` existed in the map, else return False.
"""
# See the comment in remove_docid for complexity rationalization
self._check_metadata()
docid = self.address_to_docid.get(address, _marker)
if docid is _marker:
return False
old_address = self.docid_to_address.get(docid, _marker)
if (old_address is not _marker) and (old_address != address):
self.remove_address(old_address)
if docid in self.docid_to_address:
del self.docid_to_address[docid]
if address in self.address_to_docid:
del self.address_to_docid[address]
if docid in self.docid_to_metadata:
del self.docid_to_metadata[docid]
return True
def _check_metadata(self):
# backwards compatibility
if self.docid_to_metadata is None:
self.docid_to_metadata = IOBTree()
def add_metadata(self, docid, data):
""" Add metadata related to a given document id.
``data`` must be a mapping, such as a dictionary.
For each key/value pair in ``data`` insert a metadata key/value pair
into the metadata stored for ``docid``.
Overwrite any existing values for the keys in ``data``, leaving values
unchanged for other existing keys.
Raise a KeyError If ``docid`` doesn't relate to an address in the
document map.
"""
if not docid in self.docid_to_address:
raise KeyError(docid)
if len(list(data.keys())) == 0:
return
self._check_metadata()
meta = self.docid_to_metadata.setdefault(docid, OOBTree())
for k in data:
meta[k] = data[k]
def remove_metadata(self, docid, *keys):
""" Remove metadata related to a given document id.
If ``docid`` doesn't exist in the metadata map, raise a KeyError.
For each key in ``keys``, remove the metadata value for the
docid related to that key.
Do not raise any error if no value exists for a given key.
If no keys are specified, remove all metadata related to the docid.
"""
self._check_metadata()
if keys:
meta = self.docid_to_metadata.get(docid, _marker)
if meta is _marker:
raise KeyError(docid)
for k in keys:
if k in meta:
del meta[k]
if not meta:
del self.docid_to_metadata[docid]
else:
if not (docid in self.docid_to_metadata):
raise KeyError(docid)
del self.docid_to_metadata[docid]
def get_metadata(self, docid):
""" Return the metadata for ``docid``.
Return a mapping of the keys and values set using ``add_metadata``.
Raise a KeyError If metadata does not exist for ``docid``.
"""
if self.docid_to_metadata is None:
raise KeyError(docid)
meta = self.docid_to_metadata[docid]
return meta
def new_docid(self):
""" Return a new document id.
The returned value is guaranteed not to be used already in this
document map.
"""
while True:
if self._v_nextid is None:
self._v_nextid = self._randrange(self.family.minint,
self.family.maxint)
uid = self._v_nextid
self._v_nextid += 1
if uid not in self.docid_to_address:
return uid
self._v_nextid = None
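The metadata methods above maintain a per-docid mapping of key/value pairs. A rough, plain-dict sketch of those semantics (the ``MiniDocMap`` name and dict storage are illustrative only; repoze.catalog itself uses BTrees for ZODB persistence):

```python
# Illustrative plain-dict analogue of the metadata API above.
class MiniDocMap:
    def __init__(self):
        self.docid_to_address = {}    # docid -> address
        self.docid_to_metadata = {}   # docid -> {key: value}

    def add(self, docid, address):
        self.docid_to_address[docid] = address

    def add_metadata(self, docid, data):
        # KeyError for unknown docids, mirroring the real implementation
        if docid not in self.docid_to_address:
            raise KeyError(docid)
        if not data:
            return
        # overwrite keys present in ``data``, leave other keys unchanged
        self.docid_to_metadata.setdefault(docid, {}).update(data)

    def remove_metadata(self, docid, *keys):
        if keys:
            meta = self.docid_to_metadata.get(docid)
            if meta is None:
                raise KeyError(docid)
            for k in keys:
                meta.pop(k, None)      # missing keys are not an error
            if not meta:
                del self.docid_to_metadata[docid]
        else:
            del self.docid_to_metadata[docid]   # drop all metadata

m = MiniDocMap()
m.add(1, '/a/b')
m.add_metadata(1, {'title': 'A', 'size': 3})
m.add_metadata(1, {'size': 4})   # 'size' overwritten, 'title' kept
m.remove_metadata(1, 'size')     # {'title': 'A'} remains
```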
import BTrees
from persistent.mapping import PersistentMapping
import transaction
from zope.interface import implementer
from repoze.catalog.interfaces import ICatalog
from repoze.catalog.interfaces import ICatalogIndex
from repoze.catalog.compat import text_type
@implementer(ICatalog)
class Catalog(PersistentMapping):
family = BTrees.family32
def __init__(self, family=None):
PersistentMapping.__init__(self)
if family is not None:
self.family = family
def clear(self):
""" Clear all indexes in this catalog. """
for index in self.values():
index.clear()
def index_doc(self, docid, obj):
"""Register the document represented by ``obj`` in indexes of
this catalog using docid ``docid``."""
assertint(docid)
for index in self.values():
index.index_doc(docid, obj)
def unindex_doc(self, docid):
"""Unregister the document id from indexes of this catalog."""
assertint(docid)
for index in self.values():
index.unindex_doc(docid)
def reindex_doc(self, docid, obj):
""" Reindex the document referenced by docid using the object
passed in as ``obj`` (typically just does the equivalent of
``unindex_doc``, then ``index_doc``, but specialized indexes
can override the method that this API calls to do less work). """
assertint(docid)
for index in self.values():
index.reindex_doc(docid, obj)
def __setitem__(self, name, index):
""" Add an object which implements
``repoze.catalog.interfaces.ICatalogIndex`` to the catalog.
No other type of object may be added to a catalog."""
if not ICatalogIndex.providedBy(index):
raise ValueError('%s does not provide ICatalogIndex' % (index,))
return PersistentMapping.__setitem__(self, name, index)
def search(self, **query):
""" Use the query terms to perform a query. Return a tuple of
(num, resultseq) based on the merging of results from
individual indexes.
.. note::
this method is deprecated as of :mod:`repoze.catalog`
version 0.8. Use :meth:`repoze.catalog.Catalog.query`
instead.
"""
sort_index = None
reverse = False
limit = None
sort_type = None
index_query_order = None
if 'sort_index' in query:
sort_index = query.pop('sort_index')
if 'reverse' in query:
reverse = query.pop('reverse')
if 'limit' in query:
limit = query.pop('limit')
if 'sort_type' in query:
sort_type = query.pop('sort_type')
if 'index_query_order' in query:
index_query_order = query.pop('index_query_order')
if index_query_order is None:
# unordered query (use apply)
results = []
for index_name, index_query in query.items():
index = self.get(index_name)
if index is None:
raise ValueError('No such index %s' % index_name)
r = index.apply(index_query)
if not r:
# empty results, bail early; intersect will be null
return EMPTY_RESULT
results.append((len(r), r))
if not results:
return EMPTY_RESULT
results.sort(key=lambda x: x[0]) # order from smallest to largest
_, result = results.pop(0)
for _, r in results:
_, result = self.family.IF.weightedIntersection(result, r)
if not result:
return EMPTY_RESULT
else:
# ordered query (use apply_intersect)
result = None
_marker = object()
for index_name in index_query_order:
index_query = query.get(index_name, _marker)
if index_query is _marker:
continue
index = self.get(index_name)
if index is None:
raise ValueError('No such index %s' % index_name)
result = index.apply_intersect(index_query, result)
if not result:
# empty results
return EMPTY_RESULT
return self.sort_result(result, sort_index, limit, sort_type, reverse)
def sort_result(self, result, sort_index=None, limit=None, sort_type=None,
reverse=False):
numdocs = total = len(result)
if sort_index:
index = self[sort_index]
result = index.sort(result, reverse=reverse, limit=limit,
sort_type=sort_type)
if limit:
numdocs = min(numdocs, limit)
return ResultSetSize(numdocs, total), result
def query(self, queryobject, sort_index=None, limit=None, sort_type=None,
reverse=False, names=None):
""" Use the arguments to perform a query. Return a tuple of
(num, resultseq)."""
try:
from repoze.catalog.query import parse_query
if isinstance(queryobject, text_type):
queryobject = parse_query(queryobject)
except ImportError: # pragma NO COVERAGE
pass
results = queryobject._apply(self, names)
return self.sort_result(results, sort_index, limit, sort_type, reverse)
def apply(self, query):
return self.search(**query)
def assertint(docid):
if not isinstance(docid, int):
raise ValueError('%r is not an integer value; document ids must be '
'integers' % docid)
class CatalogFactory(object):
def __call__(self, connection_handler=None):
conn = self.db.open()
if connection_handler:
connection_handler(conn)
root = conn.root()
if root.get(self.appname) is None:
root[self.appname] = Catalog()
return root[self.appname]
class FileStorageCatalogFactory(CatalogFactory):
def __init__(self, filename, appname, **kw):
""" ``filename`` is a filename to the FileStorage storage,
``appname`` is a key name in the root of the FileStorage in
which to store the catalog, and ``**kw`` is passed as extra
keyword arguments to :class:`ZODB.DB.DB` when creating a
database. Note that when we create a :class:`ZODB.DB.DB`
instance, if a ``cache_size`` is not passed in ``**kw``, we
override the default ``cache_size`` value with ``50000`` in
order to provide a more realistic cache size for modern apps"""
cache_size = kw.get('cache_size')
if cache_size is None:
kw['cache_size'] = 50000
from ZODB.FileStorage.FileStorage import FileStorage
from ZODB.DB import DB
f = FileStorage(filename)
self.db = DB(f, **kw)
self.appname = appname
def __del__(self):
self.db.close()
class ConnectionManager(object):
def __call__(self, conn):
self.conn = conn
def close(self):
self.conn.close()
def __del__(self):
self.close()
def commit(self, transaction=transaction):
transaction.commit()
class ResultSetSizeClass(int):
def __repr__(self):
return 'ResultSetSize(%d, %d)' % (self, self.total)
def ResultSetSize(i, total):
size = ResultSetSizeClass(i)
size.total = total
return size
EMPTY_RESULT = ResultSetSize(0, 0), ()
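The ``ResultSetSize`` helper above is a small but handy pattern: an ``int`` subclass that compares and prints like the (possibly limit-truncated) hit count while also carrying the pre-limit total. A self-contained copy for experimentation:

```python
# Self-contained copy of the ResultSetSize pattern defined above.
class ResultSetSizeClass(int):
    def __repr__(self):
        return 'ResultSetSize(%d, %d)' % (self, self.total)

def ResultSetSize(i, total):
    size = ResultSetSizeClass(i)
    size.total = total
    return size

size = ResultSetSize(10, 250)   # e.g. limit=10 applied to 250 matches
```

Because it subclasses ``int``, existing callers that treat the first element of a ``(num, resultseq)`` tuple as a plain count keep working unchanged.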
from persistent import Persistent
from ZODB.broken import Broken
import BTrees
from repoze.catalog.compat import text_type
_marker = ()
class CatalogIndex(object):
""" Abstract class for interface-based lookup """
family = BTrees.family32
def __init__(self, discriminator):
if not callable(discriminator):
if not isinstance(discriminator, text_type):
raise ValueError('discriminator value must be callable or a '
'string')
self.discriminator = discriminator
self._not_indexed = self.family.IF.Set()
def index_doc(self, docid, object):
if callable(self.discriminator):
value = self.discriminator(object, _marker)
else:
value = getattr(object, self.discriminator, _marker)
if value is _marker:
# unindex the previous value
super(CatalogIndex, self).unindex_doc(docid)
# Store docid in set of unindexed docids
self._not_indexed.add(docid)
return None
if isinstance(value, Persistent):
raise ValueError('Catalog cannot index persistent object %s' %
value)
if isinstance(value, Broken):
raise ValueError('Catalog cannot index broken object %s' %
value)
if docid in self._not_indexed:
# Remove from set of unindexed docs if it was in there.
self._not_indexed.remove(docid)
return super(CatalogIndex, self).index_doc(docid, value)
def unindex_doc(self, docid):
_not_indexed = self._not_indexed
if docid in _not_indexed:
_not_indexed.remove(docid)
super(CatalogIndex, self).unindex_doc(docid)
def reindex_doc(self, docid, object):
""" Default reindex_doc implementation """
self.unindex_doc(docid)
self.index_doc(docid, object)
def docids(self):
not_indexed = self._not_indexed
indexed = self._indexed()
if len(not_indexed) == 0:
return self.family.IF.Set(indexed)
elif len(indexed) == 0:
return not_indexed
indexed = self.family.IF.Set(indexed)
return self.family.IF.union(not_indexed, indexed)
def apply_intersect(self, query, docids):
""" Default apply_intersect implementation """
result = self.apply(query)
if docids is None:
return result
return self.family.IF.weightedIntersection(result, docids)[1]
def _negate(self, assertion, *args, **kw):
positive = assertion(*args, **kw)
all = self.docids()
if len(positive) == 0:
return all
return self.family.IF.difference(all, positive)
def applyContains(self, *args, **kw):
raise NotImplementedError(
"Contains is not supported for %s" % type(self).__name__)
def applyDoesNotContain(self, *args, **kw):
return self._negate(self.applyContains, *args, **kw)
def applyEq(self, *args, **kw):
raise NotImplementedError(
"Eq is not supported for %s" % type(self).__name__)
def applyNotEq(self, *args, **kw):
return self._negate(self.applyEq, *args, **kw)
def applyGt(self, *args, **kw):
raise NotImplementedError(
"Gt is not supported for %s" % type(self).__name__)
def applyLt(self, *args, **kw):
raise NotImplementedError(
"Lt is not supported for %s" % type(self).__name__)
def applyGe(self, *args, **kw):
raise NotImplementedError(
"Ge is not supported for %s" % type(self).__name__)
def applyLe(self, *args, **kw):
raise NotImplementedError(
"Le is not supported for %s" % type(self).__name__)
def applyAny(self, *args, **kw):
raise NotImplementedError(
"Any is not supported for %s" % type(self).__name__)
def applyNotAny(self, *args, **kw):
return self._negate(self.applyAny, *args, **kw)
def applyAll(self, *args, **kw):
raise NotImplementedError(
"All is not supported for %s" % type(self).__name__)
def applyNotAll(self, *args, **kw):
return self._negate(self.applyAll, *args, **kw)
def applyInRange(self, *args, **kw):
raise NotImplementedError(
"InRange is not supported for %s" % type(self).__name__)
def applyNotInRange(self, *args, **kw):
return self._negate(self.applyInRange, *args, **kw)
def _migrate_to_0_8_0(self, docids):
"""
I'm sorry.
"""
docids = self.family.IF.Set(docids)
indexed = self.family.IF.Set(self._indexed())
self._not_indexed = self.family.IF.difference(docids, indexed)
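The ``_negate`` helper above implements all the ``Not*`` comparators (``NotEq``, ``NotAny``, ...) as "every indexed docid minus the positive result". A plain-``set`` sketch of that idea (names are illustrative; the real code uses ``family.IF`` sets):

```python
# Plain-set sketch of the _negate helper above.
def negate(assertion, all_docids, *args):
    positive = assertion(*args)
    if not positive:
        # nothing matched positively, so the negation is everything
        return set(all_docids)
    return set(all_docids) - positive

indexed = {1, 2, 3, 4}

def apply_eq(parity):
    # stand-in for an index's applyEq: docids whose value matches
    return {d for d in indexed if d % 2 == parity}

odd = negate(apply_eq, indexed, 0)   # "NotEq even" -> odd docids
```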
try:
from hashlib import md5
except ImportError: # pragma no cover
from md5 import new as md5
from persistent import Persistent
from zope.interface import implementer
from repoze.catalog.indexes.keyword import CatalogKeywordIndex
from repoze.catalog.interfaces import ICatalogIndex
from repoze.catalog.compat import text_type
_marker = ()
@implementer(ICatalogIndex)
class CatalogFacetIndex(CatalogKeywordIndex):
"""Facet index.
Query types supported:
- Eq
- NotEq
- In
- NotIn
- Any
- NotAny
- All
- NotAll
"""
def __init__(self, discriminator, facets, family=None):
if not callable(discriminator):
if not isinstance(discriminator, text_type):
raise ValueError('discriminator value must be callable or a '
'string')
self.discriminator = discriminator
if family is not None:
self.family = family
self.facets = self.family.OO.Set(facets)
self._not_indexed = self.family.IF.Set()
self.clear()
def index_doc(self, docid, object):
""" Pass in an integer document id and an object supporting a
sequence of facet specifiers ala ['style:gucci:handbag'] via
the discriminator"""
if callable(self.discriminator):
value = self.discriminator(object, _marker)
else:
value = getattr(object, self.discriminator, _marker)
if value is _marker:
# unindex the previous value
self.unindex_doc(docid)
self._not_indexed.add(docid)
return None
if isinstance(value, Persistent):
raise ValueError('Catalog cannot index persistent object %s' %
value)
if docid in self._not_indexed:
self._not_indexed.remove(docid)
old = self._rev_index.get(docid)
if old is not None:
self.unindex_doc(docid)
changed = False
for facet in value:
L = []
categories = facet.split(':')
for category in categories:
L.append(category)
facet_candidate = ':'.join(L)
for fac in self.facets:
if fac == facet_candidate:
changed = True
fwset = self._fwd_index.get(fac)
if fwset is None:
fwset = self.family.IF.Set()
self._fwd_index[fac] = fwset
fwset.insert(docid)
revset = self._rev_index.get(docid)
if revset is None:
revset = self.family.OO.Set()
self._rev_index[docid] = revset
revset.insert(fac)
if changed:
self._num_docs.change(1)
return value
def counts(self, docids, omit_facets=()):
""" Given a set of docids (usually returned from query),
provide count information for further facet narrowing.
Optionally omit count information for facets and their
ancestors that are in 'omit_facets' (a sequence of facets)"""
effective_omits = self.family.OO.Set()
for omit_facet in omit_facets:
L = []
categories = omit_facet.split(':')
for category in categories:
L.append(category)
effective_omits.insert(':'.join(L))
include_facets = self.family.OO.difference(self.facets,
effective_omits)
counts = {}
isect_cache = {}
for docid in docids:
available_facets = self._rev_index.get(docid)
ck = cachekey(available_facets)
appropriate_facets = isect_cache.get(ck)
if appropriate_facets is None:
appropriate_facets = self.family.OO.intersection(
include_facets, available_facets)
isect_cache[ck] = appropriate_facets
for facet in appropriate_facets:
count = counts.get(facet, 0)
count += 1
counts[facet] = count
return counts
def cachekey(set):
h = md5()
for item in sorted(list(set)):
h.update(item.encode('utf-8'))
return h.hexdigest()
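``index_doc`` and ``counts`` above both expand a facet such as ``'style:gucci:handbag'`` into its chain of prefixes, and ``cachekey`` hashes a *sorted* facet set so that equal sets share a cache slot regardless of iteration order. A minimal sketch of both ideas (the ``facet_prefixes`` name is illustrative):

```python
from hashlib import md5

# Facet prefix expansion used by index_doc/counts above:
# 'style:gucci:handbag' -> 'style', 'style:gucci', 'style:gucci:handbag'.
def facet_prefixes(facet):
    parts = facet.split(':')
    return [':'.join(parts[:i + 1]) for i in range(len(parts))]

# Hash the sorted items so equal sets produce identical keys no matter
# what order iteration yields.
def cachekey(items):
    h = md5()
    for item in sorted(items):
        h.update(item.encode('utf-8'))
    return h.hexdigest()
```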
from zope.interface import implementer
import BTrees
from repoze.catalog.interfaces import ICatalogIndex
from repoze.catalog.indexes.common import CatalogIndex
from repoze.catalog.compat import text_type
from six.moves import range
_marker = object()
@implementer(ICatalogIndex)
class CatalogPathIndex2(CatalogIndex): # pragma NO COVERAGE
"""
DEPRECATED
Index for model paths (tokens separated by '/' characters or
tuples representing a model path).
A path index may be queried to obtain all subobjects (optionally
limited by depth) of a certain path.
This index differs from the original
``repoze.catalog.indexes.path.CatalogPath`` index inasmuch as it
actually retains a graph representation of the objects in the path
space instead of relying on 'level' information; query results
relying on this level information may or may not be correct for
any given tree. Use of this index is suggested rather than the
``path`` index.
Query types supported:
Eq
"""
attr_discriminator = None # b/w compat
family = BTrees.family32
def __init__(self, discriminator, attr_discriminator=None):
if not callable(discriminator):
if not isinstance(discriminator, text_type):
raise ValueError('discriminator value must be callable or a '
'string')
self.discriminator = discriminator
if attr_discriminator is not None and not callable(attr_discriminator):
if not isinstance(attr_discriminator, text_type):
raise ValueError('attr_discriminator value must be callable '
'or a string')
self.attr_discriminator = attr_discriminator
self.clear()
def clear(self):
self.docid_to_path = self.family.IO.BTree()
self.path_to_docid = self.family.OI.BTree()
self.adjacency = self.family.IO.BTree()
self.disjoint = self.family.OO.BTree()
self.docid_to_attr = self.family.IO.BTree()
def __len__(self):
return len(self.docid_to_path)
def __nonzero__(self):
return True
def __bool__(self):
return True
def _getPathTuple(self, path):
if not path:
raise ValueError('path must be nonempty (not %s)' % str(path))
if isinstance(path, text_type):
path = path.rstrip('/')
path = tuple(path.split('/'))
if path[0] != '':
raise ValueError('Path must be absolute (not %s)' % str(path))
return tuple(path)
def _getObjectPath(self, object):
if callable(self.discriminator):
path = self.discriminator(object, _marker)
else:
path = getattr(object, self.discriminator, _marker)
return path
def _getObjectAttr(self, object):
if callable(self.attr_discriminator):
attr = self.attr_discriminator(object, _marker)
else:
attr = getattr(object, self.attr_discriminator, _marker)
return attr
def index_doc(self, docid, object):
path = self._getObjectPath(object)
if path is _marker:
self.unindex_doc(docid)
return None
path = self._getPathTuple(path)
if self.attr_discriminator is not None:
attr = self._getObjectAttr(object)
if attr is not _marker:
self.docid_to_attr[docid] = attr
self.docid_to_path[docid] = path
self.path_to_docid[path] = docid
if path in self.disjoint:
self.adjacency[docid] = self.disjoint[path]
del self.disjoint[path]
if len(path) > 1:
parent_path = tuple(path[:-1])
parent_docid = self.path_to_docid.get(parent_path)
if parent_docid is None:
theset = self.disjoint.get(parent_path)
if theset is None:
theset = self.family.IF.Set()
self.disjoint[parent_path] = theset
else:
theset = self.adjacency.get(parent_docid)
if theset is None:
theset = self.family.IF.Set()
self.adjacency[parent_docid] = theset
theset.insert(docid)
def unindex_doc(self, docid):
path = self.docid_to_path.get(docid)
if path is None:
return
if len(path) > 1:
parent_path = tuple(path[:-1])
parent_docid = self.path_to_docid.get(parent_path)
if parent_docid is not None: # might be disjoint
self.adjacency[parent_docid].remove(docid)
if not self.adjacency[parent_docid]:
del self.adjacency[parent_docid]
else:
self.disjoint[parent_path].remove(docid)
if not self.disjoint[parent_path]:
del self.disjoint[parent_path]
stack = [docid]
while stack:
docid = stack.pop()
path = self.docid_to_path[docid]
del self.path_to_docid[path]
del self.docid_to_path[docid]
if docid in self.docid_to_attr:
del self.docid_to_attr[docid]
next_docids = self.adjacency.get(docid)
if next_docids is None:
next_docids = self.disjoint.get(path)
if next_docids is not None:
del self.disjoint[path]
stack.extend(next_docids)
else:
del self.adjacency[docid]
stack.extend(next_docids)
def reindex_doc(self, docid, object):
path = self._getPathTuple(self._getObjectPath(object))
if self.docid_to_path.get(docid) != path:
self.unindex_doc(docid)
self.index_doc(docid, object)
return True
else:
if self.attr_discriminator is not None:
attr = self._getObjectAttr(object)
if docid in self.docid_to_attr:
if attr is _marker:
del self.docid_to_attr[docid]
return True
elif attr != self.docid_to_attr[docid]:
self.docid_to_attr[docid] = attr
return True
else:
if attr is not _marker:
self.docid_to_attr[docid] = attr
return True
return False
def _indexed(self):
return list(self.docid_to_path.keys())
def search(self, path, depth=None, include_path=False, attr_checker=None):
""" Provided a path string (e.g. ``/path/to/object``) or a
path tuple (e.g. ``('', 'path', 'to', 'object')``, or a path
list (e.g. ``['', 'path', 'to' object'])``), search the index
for document ids representing subelements of the path
specified by the path argument.
If the ``path`` argument is specified as a tuple or list, its
first element must be the empty string. If the ``path``
argument is specified as a string, it must begin with a ``/``
character. In other words, paths passed to the ``search``
method must be absolute.
If the ``depth`` argument is specified, return only documents
at this depth and below. Depth ``0`` will return the empty
set (or only the docid for the ``path`` specified if
``include_path`` is also True). Depth ``1`` will return
docids related to direct subobjects of the path (plus the
docid for the ``path`` specified if ``include_path`` is also
True). Depth ``2`` will return docids related to direct
subobjects and the docids of the children of those subobjects,
and so on.
If ``include_path`` is False, the docid of the object
specified by the ``path`` argument is *not* returned as part
of the search results. If ``include_path`` is True, the
object specified by the ``path`` argument *is* returned as
part of the search results.
If ``attr_checker`` is not None, it must be a callback that
accepts two arguments: the first argument will be the
attribute value found, the second argument is a sequence of
all previous attributes encountered during this search (in
path order). If ``attr_checker`` returns True, traversal will
continue; otherwise, traversal will cease.
"""
if attr_checker is None:
return self._simple_search(path, depth, include_path)
else:
return self._attr_search(path, depth, include_path, attr_checker)
def _simple_search(self, path, depth, include_path):
""" Codepath taken when no attr checker is used """
path = self._getPathTuple(path)
sets = []
if include_path:
try:
docid = self.path_to_docid[path]
except KeyError:
pass # XXX should we just return an empty set?
else:
sets.append(self.family.IF.Set([docid]))
stack = [path]
plen = len(path)
while stack:
nextpath = stack.pop()
if depth is not None and len(nextpath) - plen >= depth:
continue
try:
docid = self.path_to_docid[nextpath]
except KeyError:
continue # XXX we can't search from an unindexed root path?
try:
theset = self.adjacency[docid]
except KeyError:
pass
else:
sets.append(theset)
for docid in theset:
try:
newpath = self.docid_to_path[docid]
except KeyError:
continue
stack.append(newpath)
return self.family.IF.multiunion(sets)
def _attr_search(self, path, depth, include_path, attr_checker):
""" Codepath taken when an attr checker is used """
path = self._getPathTuple(path)
leading_attrs = []
result = {}
plen = len(path)
# make sure we get "leading" attrs
for p in range(plen-1):
subpath = path[:p+1]
try:
docid = self.path_to_docid[subpath]
except KeyError:
continue # XXX should we just return an empty set?
attr = self.docid_to_attr.get(docid, _marker)
if attr is not _marker:
remove_from_closest(result, subpath, docid)
leading_attrs.append(attr)
result[subpath] = ((docid, leading_attrs[:]),
self.family.IF.Set())
stack = [(path, leading_attrs)]
attrset = self.family.IF.Set()
while stack:
nextpath, attrs = stack.pop()
try:
docid = self.path_to_docid[nextpath]
except KeyError:
continue # XXX we can't search from an unindexed root path?
attr = self.docid_to_attr.get(docid, _marker)
if attr is _marker:
if include_path and nextpath == path:
add_to_closest(
result, nextpath, self.family.IF.Set([docid]))
if depth is not None and len(nextpath) - plen >= depth:
continue
else:
remove_from_closest(result, nextpath, docid)
attrs.append(attr)
if nextpath == path:
if include_path:
attrset = self.family.IF.Set([docid])
else:
attrset = self.family.IF.Set()
else:
attrset = self.family.IF.Set([docid])
result[nextpath] = ((docid, attrs), attrset)
if depth is not None and len(nextpath) - plen >= depth:
continue
try:
theset = self.adjacency[docid]
except KeyError:
pass
else:
add_to_closest(result, nextpath, theset)
for docid in theset:
try:
newpath = self.docid_to_path[docid]
except KeyError:
continue
stack.append((newpath, attrs[:]))
return attr_checker(list(result.values()))
def apply_intersect(self, query, docids):
""" Default apply_intersect implementation """
result = self.apply(query)
if docids is None:
return result
return self.family.IF.weightedIntersection(result, docids)[1]
def apply(self, query):
""" Search the path index using the query. If ``query`` is a
string, a tuple, or a list, it is treated as the ``path``
argument to use to search. If it is any other object, it is
assumed to be a dictionary with at least a value for the
``query`` key, which is treated as a path. The dictionary can
also optionally specify the ``depth`` and whether to include
the docid referenced by the path argument (the ``query`` key)
in the set of docids returned (``include_path``). See the
documentation for the ``search`` method of this class to
understand paths, depths, and the ``include_path`` argument.
"""
if isinstance(query, (text_type, tuple, list)):
path = query
depth = None
include_path = False
attr_checker = None
else:
path = query['query']
depth = query.get('depth', None)
include_path = query.get('include_path', False)
attr_checker = query.get('attr_checker', None)
return self.search(path, depth, include_path, attr_checker)
def applyEq(self, query):
return self.apply(query)
def add_to_closest(sofar, thispath, theset):
paths = sorted(list(sofar.keys()), reverse=True)
for path in paths:
pathlen = len(path)
if thispath[:pathlen] == path:
sofar[path][1].update(theset)
break
def remove_from_closest(sofar, thispath, docid):
paths = sorted(list(sofar.keys()), reverse=True)
for path in paths:
pathlen = len(path)
if thispath[:pathlen] == path:
theset = sofar[path][1]
if docid in theset:
theset.remove(docid)
break
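``_getPathTuple`` above normalizes string and tuple paths to one canonical form. A standalone sketch of that normalization (assuming Python 3 ``str`` in place of ``text_type``):

```python
# Standalone sketch of _getPathTuple: '/a/b/' and ('', 'a', 'b')
# normalize identically, and relative paths are rejected.
def path_tuple(path):
    if not path:
        raise ValueError('path must be nonempty (not %s)' % str(path))
    if isinstance(path, str):
        path = tuple(path.rstrip('/').split('/'))
    if path[0] != '':
        raise ValueError('Path must be absolute (not %s)' % str(path))
    return tuple(path)
```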
from zope.interface import implementer
from zope.index.interfaces import IIndexSort
from zope.index.text import TextIndex
from repoze.catalog.interfaces import ICatalogIndex
from repoze.catalog.indexes.common import CatalogIndex
from repoze.catalog.compat import text_type
@implementer(ICatalogIndex, IIndexSort)
class CatalogTextIndex(CatalogIndex, TextIndex):
""" Full-text index.
Query types supported:
- Contains
- DoesNotContain
- Eq
- NotEq
"""
def __init__(self, discriminator, lexicon=None, index=None):
if not callable(discriminator):
if not isinstance(discriminator, text_type):
raise ValueError('discriminator value must be callable or a '
'string')
self.discriminator = discriminator
self._not_indexed = self.family.IF.Set()
TextIndex.__init__(self, lexicon, index)
self.clear()
def reindex_doc(self, docid, object):
# index_doc knows enough about reindexing to do the right thing
return self.index_doc(docid, object)
def _indexed(self):
return list(self.index._docwords.keys())
def sort(self, result, reverse=False, limit=None, sort_type=None):
"""Sort by text relevance.
This only works if the query includes at least one text query,
leading to a weighted result. This method raises TypeError
if the result is not weighted.
A weighted result is a dictionary-ish object that has docids
as keys and floating point weights as values. This method
sorts the dictionary by weight and returns the sorted
docids as a list.
"""
if not result:
return result
if not hasattr(result, 'items'):
raise TypeError(
"Unable to sort by relevance because the search "
"result does not contain weights. To produce a weighted "
"result, include a text search in the query.")
items = [(weight, docid) for (docid, weight) in result.items()]
# when reverse is false, output largest weight first.
# when reverse is true, output smallest weight first.
items.sort(reverse=not reverse)
result = [docid for (weight, docid) in items]
if limit:
result = result[:limit]
return result
def applyContains(self, value):
return self.apply(value)
applyEq = applyContains
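The ``sort`` method above orders a weighted result (docid → float weight) by relevance. A dict-based sketch of the same ordering logic (the function name is illustrative):

```python
# Dict-based sketch of the relevance sort above: highest weight first
# unless reverse=True.
def sort_by_relevance(weighted, reverse=False, limit=None):
    items = [(weight, docid) for (docid, weight) in weighted.items()]
    # reverse=False -> largest weight first; reverse=True -> smallest first
    items.sort(reverse=not reverse)
    docids = [docid for (weight, docid) in items]
    return docids[:limit] if limit else docids

order = sort_by_relevance({1: 0.2, 2: 0.9, 3: 0.5})   # most relevant first
```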
from zope.interface import implementer
from persistent import Persistent
import BTrees
from BTrees.Length import Length
from repoze.catalog.interfaces import ICatalogIndex
from repoze.catalog.indexes.common import CatalogIndex
from repoze.catalog.compat import text_type
from six.moves import range
_marker = ()
@implementer(ICatalogIndex)
class CatalogPathIndex(CatalogIndex):
"""Index for model paths (tokens separated by '/' characters)
A path index stores all path components of the physical path of an object.
Internal datastructure:
- a physical path of an object is split into its components
- every component is kept as a key of a OOBTree in self._indexes
- the value is a mapping 'level of the path component' to
'all docids with this path component on this level'
Query types supported:
- Eq
- NotEq
"""
useOperator = 'or'
family = BTrees.family32
def __init__(self, discriminator):
if not callable(discriminator):
if not isinstance(discriminator, text_type):
raise ValueError('discriminator value must be callable or a '
'string')
self.discriminator = discriminator
self._not_indexed = self.family.IF.Set()
self.clear()
def clear(self):
self._depth = 0
self._index = self.family.OO.BTree()
self._unindex = self.family.IO.BTree()
self._length = Length(0)
def insertEntry(self, comp, id, level):
"""Insert an entry.
comp is a path component
id is the docid
level is the level of the component inside the path
"""
if comp not in self._index:
self._index[comp] = self.family.IO.BTree()
if level not in self._index[comp]:
self._index[comp][level] = self.family.IF.TreeSet()
self._index[comp][level].insert(id)
if level > self._depth:
self._depth = level
def index_doc(self, docid, object):
if callable(self.discriminator):
value = self.discriminator(object, _marker)
else:
value = getattr(object, self.discriminator, _marker)
if value is _marker:
# unindex the previous value
self.unindex_doc(docid)
# Store docid in set of unindexed docids
self._not_indexed.add(docid)
return None
if isinstance(value, Persistent):
raise ValueError('Catalog cannot index persistent object %s' %
value)
if docid in self._not_indexed:
# Remove from set of unindexed docs if it was in there.
self._not_indexed.remove(docid)
path = value
if isinstance(path, (list, tuple)):
path = '/' + '/'.join(path[1:])
comps = [_f for _f in path.split('/') if _f]
if docid not in self._unindex:
self._length.change(1)
for idx, comp in enumerate(comps):
self.insertEntry(comp, docid, idx)
self._unindex[docid] = path
return 1
def unindex_doc(self, docid):
_not_indexed = self._not_indexed
if docid in _not_indexed:
_not_indexed.remove(docid)
if docid not in self._unindex:
return
comps = self._unindex[docid].split('/')
for level in range(len(comps[1:])):
comp = comps[level + 1]
try:
self._index[comp][level].remove(docid)
if not self._index[comp][level]:
del self._index[comp][level]
if not self._index[comp]:
del self._index[comp]
except KeyError:
pass
self._length.change(-1)
del self._unindex[docid]
def _indexed(self):
return list(self._unindex.keys())
def search(self, path, default_level=0):
"""
path is either a string representing a
relative URL or a part of a relative URL or
a tuple (path,level).
level >= 0 starts searching at the given level
level < 0 not implemented yet
"""
if isinstance(path, text_type):
level = default_level
else:
level = int(path[1])
path = path[0]
comps = [_f for _f in path.split('/') if _f]
if len(comps) == 0:
return self.family.IF.Set(list(self._unindex.keys()))
results = None
if level >= 0:
for i, comp in enumerate(comps):
if comp not in self._index:
return self.family.IF.Set()
if (level + i) not in self._index[comp]:
return self.family.IF.Set()
results = self.family.IF.intersection(
results, self._index[comp][level+i])
else:
for level in range(self._depth + 1):
ids = None
for i, comp in enumerate(comps):
try:
ids = self.family.IF.intersection(
ids, self._index[comp][level+i])
except KeyError:
break
else:
results = self.family.IF.union(results, ids)
return results
def numObjects(self):
""" return the number distinct values """
return len(self._unindex)
def getEntryForObject(self, docid):
""" Takes a document ID and returns all the information
we have on that specific object.
"""
return self._unindex.get(docid)
def apply(self, query):
"""
"""
level = 0
operator = self.useOperator
if isinstance(query, text_type):
paths = [query]
elif isinstance(query, (tuple, list)):
paths = query
else:
paths = query.get('query', [])
if isinstance(paths, text_type):
paths = [paths]
level = query.get('level', 0)
operator = query.get('operator', self.useOperator).lower()
sets = []
for path in paths:
sets.append(self.search(path, level))
if operator == 'or':
rs = self.family.IF.multiunion(sets)
else:
rs = None
for set in sorted(sets, key=lambda x: len(x)):
rs = self.family.IF.intersection(rs, set)
if not rs:
break
if rs:
return rs
else:
return self.family.IF.Set()
applyEq = apply
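As the class docstring above describes, this index records every path component keyed by its level (depth) in the path. A tiny sketch of that decomposition (the helper name is illustrative):

```python
# Sketch of the component/level decomposition this index stores:
# '/a/b/c' -> ('a', 0), ('b', 1), ('c', 2).
def component_levels(path):
    comps = [c for c in path.split('/') if c]
    return [(comp, level) for level, comp in enumerate(comps)]
```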
A Tour of :mod:`repoze.catalog`
===============================

:mod:`repoze.catalog` borrows heavily from ``zope.app.catalog`` and
depends wholly on the ``zope.index`` package for its index
implementations, but assumes less about how you want your indexing and
querying to behave.  In this spirit, you can index any Python object;
it needn't implement any particular interface except perhaps one you
define yourself conventionally.  :mod:`repoze.catalog` does less than
any of its predecessors, in order to make it more useful for arbitrary
Python applications.  It's implemented in terms of ZODB objects, and
the ZODB will store the derived index data, but it assumes little
else.  You should be able to use it in any Python application.  The
fact that it uses ZODB is ancillary: it's akin to Xapian using "flint"
or "quartz" backends.
Indexing
--------

To perform indexing of objects, you set up a catalog with some number
of indexes, each of which is capable of calling a callback function to
obtain data about an object being cataloged::

  from repoze.catalog.indexes.field import CatalogFieldIndex
  from repoze.catalog.indexes.text import CatalogTextIndex
  from repoze.catalog.catalog import Catalog

  def get_flavor(object, default):
      return getattr(object, 'flavor', default)

  def get_description(object, default):
      return getattr(object, 'description', default)

  catalog = Catalog()
  catalog['flavors'] = CatalogFieldIndex(get_flavor)
  catalog['description'] = CatalogTextIndex(get_description)

Note that ``get_flavor`` and ``get_description`` will be called for each
object you attempt to index.  Each of them attempts to grab an
attribute from the object being indexed, and returns a default if no
such attribute exists.
Once you've got a catalog set up, you can begin to index Python
objects (aka "documents")::

  class IceCream(object):
      def __init__(self, flavor, description):
          self.flavor = flavor
          self.description = description

  peach = IceCream('peach', 'This ice cream has a peachy flavor')
  catalog.index_doc(1, peach)

  pistachio = IceCream('pistachio', 'This ice cream tastes like pistachio nuts')
  catalog.index_doc(2, pistachio)

Note that when you call ``index_doc``, you pass in a ``docid`` as the
first argument, and the object you want to index as the second
argument.  When we index the ``peach`` object above, we index it with
the docid ``1``.  Each docid must be unique within a catalog.  When you
query a :mod:`repoze.catalog` catalog, you'll get back a sequence of
document ids that match the query you supplied, which you'll
presumably need to map back to content objects in order to make
sense of the response; keeping track of which objects map to which
document id is your responsibility.
Querying
--------

Once you've got some number of documents indexed, you can perform queries
against an existing catalog.  A query is performed by passing a query argument
and optional keyword arguments to the ``query`` method of the catalog object::

  from repoze.catalog.query import Eq
  catalog.query(Eq('flavor', 'peach'))

The argument passed to ``query`` above is a :term:`query object`.
This particular query object is a :class:`repoze.catalog.query.Eq`
object, which is a *comparator* meaning "equals".  The first argument
to the ``Eq`` object is an index name, the second argument is a value.
In English, this query represents "a document indexed in the
``flavor`` index with the value ``peach``".  Other arguments to
:meth:`repoze.catalog.Catalog.query` may be special values that
specify sort ordering and query limiting.
In the above example, we specified no particular sort ordering or
limit; we're essentially asking the catalog to return all the
documents that match the word ``peach`` as a field within the field
index named ``flavor``.  Other types of indexes can be queried
similarly::

  from repoze.catalog.query import Contains
  catalog.query(Contains('description', 'nuts'))

The result of calling the ``query`` method is a two-tuple.  The first
element of the tuple is the number of document ids in the catalog
which match the query.  The second element is an iterable: each
iteration over this iterable returns a document id.  The result of
``catalog.query(Contains('description', 'nuts'))`` might be::

  (1, [2])

The first element in the tuple indicates that there is one document in
the catalog that matches the description 'nuts'.  The second element
in the tuple (here represented as a list, although it's more typically
a generator) is a sequence of document ids that match the query.
You can combine search parameters to further limit a query::

  from repoze.catalog.query import Contains, Eq
  catalog.query(Eq('flavor', 'peach') & Contains('description', 'nuts'))

This would return a result representing all the documents indexed
within the catalog with the flavor of peach and a description of nuts.
Index Types
-----------

Out of the box, ``repoze.catalog`` supports five index types: field indexes,
keyword indexes, text indexes, facet indexes, and path indexes.  Field indexes
are meant to index single discrete values.  Keys are stored in order, allowing
for the full suite of range and comparison operators to be used.  Keyword
indexes index sequences of values which can be queried for any of the values
in each sequence indexed.  Text indexes index text using the
``zope.index.text`` index type, and can be queried with arbitrary textual
terms.  Text indexes can use various splitting and normalizing strategies to
collapse indexed texts for better querying.  Facet indexes are much like
keyword indexes, but also allow for "faceted" indexing and searching, useful
for performing narrowing searches when there is a well-known set of allowable
values (the "facets").  Path indexes allow you to index documents as part of a
graph, and return documents that are contained in a portion of the graph.

.. note:: The existing facet index implementation's narrowing support is
   naive.  For performance reasons, it is not meant to be used in catalogs
   that must use it to get count information for over, say, 30K documents.
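Conceptually, a field index is just a mapping from each discrete value to the set of document ids indexed under it, and because the keys are comparable, range queries reduce to a filtered union.  A minimal pure-Python sketch of the idea (illustrative names only, not the real implementation):

```python
class MiniFieldIndex:
    """Toy field index: value -> set of docids, with range queries."""

    def __init__(self, discriminator):
        self.discriminator = discriminator  # callable(obj, default) -> value
        self.values = {}                    # value -> set of docids

    def index_doc(self, docid, obj):
        value = self.discriminator(obj, None)
        if value is not None:
            self.values.setdefault(value, set()).add(docid)

    def eq(self, value):
        return self.values.get(value, set())

    def range(self, lo, hi):
        # Keys are comparable, so a range scan is a simple filtered union
        result = set()
        for value, ids in self.values.items():
            if lo <= value <= hi:
                result |= ids
        return result

class Cone:
    def __init__(self, flavor):
        self.flavor = flavor

idx = MiniFieldIndex(lambda obj, default: getattr(obj, 'flavor', default))
idx.index_doc(1, Cone('peach'))
idx.index_doc(2, Cone('pistachio'))
```

The real field index keeps its keys in a BTree rather than scanning a dict, which is what makes range and comparison operators efficient at scale.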
Helper Facilities
-----------------

:mod:`repoze.catalog` provides some helper facilities which help you
integrate a catalog into an arbitrary Python application.  The most
obvious is a ``FileStorageCatalogFactory``, which makes it reasonably
easy to create a Catalog object within an arbitrary Python
application.  Using this facility, you don't have to know anything
about ZODB to use :mod:`repoze.catalog`.  If you have an existing ZODB
application, however, you can ignore this facility entirely and use
the Catalog implementation directly.

:mod:`repoze.catalog` also provides a ``DocumentMap`` object which can be
used to map document ids to "addresses".  An address is any value that
can be used to resolve the document id back into a Python object.
In Zope, an address is typically a traversal path.  This facility
lives in ``repoze.catalog.document.DocumentMap``.
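The essential job of a document map is two-way bookkeeping between docids and addresses.  A minimal pure-Python sketch of that bookkeeping (hypothetical names; the real ``DocumentMap`` API has more methods and persists in ZODB):

```python
import itertools

class MiniDocumentMap:
    """Toy two-way docid <-> address map."""

    def __init__(self):
        self._counter = itertools.count(1)   # docid allocator
        self.docid_to_address = {}
        self.address_to_docid = {}

    def add(self, address):
        """Allocate a docid for an address and record both directions."""
        docid = next(self._counter)
        self.docid_to_address[docid] = address
        self.address_to_docid[address] = docid
        return docid

    def address_for_docid(self, docid):
        return self.docid_to_address.get(docid)

    def docid_for_address(self, address):
        return self.address_to_docid.get(address)

docmap = MiniDocumentMap()
docid = docmap.add('/desserts/peach-ice-cream')
```

After a query returns docids, a map like this is what lets you turn each docid back into something resolvable, such as a traversal path.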
Using :mod:`repoze.component` as a Component System
===================================================

We've seen the basic registration and lookup facilities of a
:mod:`repoze.component` registry.  You can provide additional
functionality to your applications if you use it as a "component
registry".  Using a registry as a component registry makes it possible
to use the :mod:`repoze.component` registry for the same purposes as
something like :term:`zope.component`.

The extra method exposed by a :mod:`repoze.component` registry that
allows you to treat it as a component registry is ``resolve``.  The
``resolve`` method simply accepts a provides value and a sequence of
*objects that supply component types*.  When called with a list of
objects that supply component types, ``resolve`` returns a matching
component.
"Requires" Objects Supply Component Types
-----------------------------------------

Objects used as "requires" arguments to the ``resolve`` method of a
component registry usually supply a component type.  This is done by
using the ``provides`` function on objects passed to these methods.

The ``provides`` function may be used within a class definition, it
may be used as a class decorator, or it may be applied against an
instance.  Within a class definition and as a class decorator, it
accepts as positional arguments the component types.

.. code-block:: python

   from repoze.component import provides

   class MyObject(object):
       provides('mytype')
The following is also legal.

.. code-block:: python

   from repoze.component import provides

   class MyObject(object):
       provides('mytype', 'anothertype')

Likewise, it's also legal to do (in Python 2.6):

.. code-block:: python

   from repoze.component import provides

   @provides('mytype', 'anothertype')
   class MyObject(object):
       pass
You can also use the ``provides`` function against an instance, in
which case the first argument is assumed to be the object you're
trying to assign component types to:

.. code-block:: python

   from repoze.component import provides

   class MyObject(object):
       pass

   obj = MyObject()
   provides(obj, 'mytype', 'anothertype')

Note that objects don't explicitly need to have a ``provides``
attribute for simple usage; the class of an object is an
implicit component type that can be used in registrations.

"Under the hood", the ``provides`` function sets the
``__component_types__`` attribute on an instance or the
``__inherited_component_types__`` attribute on a class.
There is also an ``onlyprovides`` API which performs the same action,
except that inherited types are overwritten with the values passed to
the API.  For example:

.. code-block:: python

   from repoze.component import onlyprovides

   class MyObject(object):
       pass

   obj = MyObject()
   onlyprovides(obj, 'mytype', 'anothertype')
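For the instance case, the bookkeeping ``provides`` performs can be sketched in a few lines of plain Python.  This is a simplification of the real function (which also handles class bodies and decorators); the name ``provides_sketch`` is ours:

```python
def provides_sketch(obj, *types):
    """Prepend new component types to the instance's direct types."""
    existing = getattr(obj, '__component_types__', ())
    obj.__component_types__ = tuple(types) + tuple(existing)

class MyObject(object):
    pass

obj = MyObject()
provides_sketch(obj, 'i')
provides_sketch(obj, 'i2')
```

Note that later calls prepend: after the two calls above, the instance's direct component types are ``('i2', 'i')``, matching the ordering described in the next section.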
How :mod:`repoze.component` Computes an Effective Component Type for a Requires Object
--------------------------------------------------------------------------------------

When a component type is computed for an object, the object is
searched in the following order.  All values are collected and used to
construct the final "requires" argument used.

- The object is checked for a ``__component_types__`` attribute
  (usually stored directly on the instance); if it does not provide
  one we use the empty tuple.

- The object is checked for an ``__inherited_component_types__``
  attribute (found usually via an attribute of one of the object's
  base classes).  If it does not provide one we use the empty tuple.

- The values of ``__component_types__`` and
  ``__inherited_component_types__`` are concatenated together (in that
  order).

- The object's class and the value ``None`` are appended to the
  resulting tuple as unconditional component types.
We'll use the following set of objects as examples:

.. code-block:: python

   from repoze.component import provides

   class A(object):
       provides('a', 'hello')

   class B(A):
       provides('b')

   class C(B):
       provides('c')

   instance = C()
   provides(instance, 'i')
   provides(instance, 'i2')
When the preceding set of statements are made:

- The class statement defining ``A`` is executed, and the ``provides``
  function assigns the ``__inherited_component_types__`` attribute of
  the ``A`` object to ``('a', 'hello')``.  Since the ``A`` object has no
  base classes with the ``__inherited_component_types__`` attribute on
  them, only the types directly fed to ``provides`` (``a`` and
  ``hello``) are assigned to the ``__inherited_component_types__``
  attribute of ``A``.

- The class statement defining ``B`` is executed, and the ``provides``
  function assigns the ``__inherited_component_types__`` attribute of
  the ``B`` object to ``('b', 'a', 'hello')``.  "``b``" is an argument
  to the ``provides`` function itself, but the ``provides`` function also
  appends ``a`` and ``hello`` to the ``__inherited_component_types__``
  attribute because these are found within the
  ``__inherited_component_types__`` attribute of the base class object
  ``A``.

- The class statement defining ``C`` is executed, and the ``provides``
  function assigns the ``__inherited_component_types__`` attribute of
  the ``C`` object to ``('c', 'b', 'a', 'hello')``.  "``c``" is an
  argument to the ``provides`` function itself, but the ``provides``
  function also appends ``b``, ``a`` and ``hello`` to the
  ``__inherited_component_types__`` attribute because they are found
  within the ``__inherited_component_types__`` attribute of the base
  class object ``B``.

- An instance of ``C`` is created via the ``instance = C()``
  statement.

- The ``provides`` function is called with the ``C`` instance named
  ``instance`` as an argument as well as the ``i`` type.  This causes
  the ``__component_types__`` attribute of the ``instance`` object to
  be set to ``('i',)``.

- The ``provides`` function is called with the ``C`` instance named
  ``instance`` as an argument as well as the ``i2`` type.  This causes
  the ``__component_types__`` attribute of the ``instance`` object to
  be set to ``('i2', 'i')``.  "``i2``" is an argument to the ``provides``
  function itself, but the ``provides`` function also appends ``i``
  to the ``__component_types__`` attribute because it is found within
  the ``__component_types__`` attribute of the instance as a result of
  the previous ``provides`` statement.
If ``instance`` is subsequently used as an argument to the ``resolve``
method of a component registry:

- We first look at the instance to find its direct component types.
  This finds component types ``('i2', 'i')`` as the
  ``__component_types__`` attribute via standard Python attribute
  lookup.

- We look at the instance to find its inherited component types.  This
  finds inherited component types ``('c', 'b', 'a', 'hello')`` as the
  ``__inherited_component_types__`` attribute via standard Python
  attribute lookup.

- We find the object's class (``C``).

- We concatenate ``__component_types__`` and
  ``__inherited_component_types__`` into the sequence ``('i2', 'i',
  'c', 'b', 'a', 'hello')`` (the direct component types are first,
  then the derived ones).

- To this list we append the class of the instance (``C``) and the
  value ``None``.

Thus our "requires" argument for this particular object is ``('i2',
'i', 'c', 'b', 'a', 'hello', C, None)``.  Every object supplied as a
"requires" argument to the ``resolve`` method of a component registry
has its requires values computed this way.  We then find a component
based on the set of requires arguments passed in, as described in
:ref:`lookup_ordering`.
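The computation described above is simple enough to sketch directly.  The code below is illustrative, not the library's internals; the attributes are set by hand to mirror the ``C``/``instance`` example:

```python
def effective_types(obj):
    """Collect the 'requires' tuple for an object, in lookup order:
    direct types, then inherited types, then the class and None."""
    direct = tuple(getattr(obj, '__component_types__', ()))
    inherited = tuple(getattr(obj, '__inherited_component_types__', ()))
    return direct + inherited + (type(obj), None)

# Mirror the state the provides() calls in the example would produce
class C(object):
    __inherited_component_types__ = ('c', 'b', 'a', 'hello')

instance = C()
instance.__component_types__ = ('i2', 'i')
```

Calling ``effective_types(instance)`` reproduces the "requires" tuple derived in the walkthrough.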
Comparing :mod:`repoze.component` to :term:`zope.component`
-----------------------------------------------------------

Zope and Twisted developers (and any other developer who has used
:term:`zope.component`) will find :mod:`repoze.component` familiar.
:mod:`repoze.component` steals concepts shamelessly from
:term:`zope.component`.  :mod:`repoze.component` differs primarily from
:term:`zope.component` by abandoning the high-level concept of an
:term:`interface`.  In :term:`zope.component`, component lookups and
registrations are done in terms of interfaces, which are very specific
kinds of Python objects.  In :mod:`repoze.component`, interfaces are not
used.  Instead, components (such as "adapters" and "utilities") are
registered using marker "component types", which are usually just
strings although they can be any hashable type.

One major difference between :mod:`repoze.component` and
:mod:`zope.component` is that :mod:`repoze.component` has no real
support for the concept of an "adapter".  The things that you register
into a component registry are simply components.  You can register a
callable against some set of arguments, but :mod:`repoze.component`
will not *call* it for you.  You have to retrieve it and call it
yourself.

.. note::

   In the examples below, where a :term:`zope.component` API might
   expect an interface object (e.g. the interface ``ISomething``), the
   :mod:`repoze.component` API expects a component type (e.g. the string
   ``something``).  Also, in the examples below, whereas
   :term:`zope.component` users typically rely on APIs that consult a
   "global registry", :mod:`repoze.component` provides no such facility.
   Thus examples that refer to ``registry`` below refer to a plugin
   registry created by parsing a configuration file (or constructed
   manually).
The :mod:`repoze.component` equivalent of ``utility =
zope.component.getUtility(ISomething)`` is the following:

.. code-block:: python

   utility = registry.lookup('something')

The :mod:`repoze.component` equivalent of ``implementation =
zope.component.getAdapter(context, ISomething, name='foo')`` is the
following:

.. code-block:: python

   adapter = registry.resolve('something', context, name='foo')
   implementation = adapter(context)

The :mod:`repoze.component` equivalent of ``implementation =
getMultiAdapter((context1, context2), ISomething, name='foo')`` is the
following:

.. code-block:: python

   adapter = registry.resolve('something', context1, context2, name='foo')
   implementation = adapter(context1, context2)

Likewise, the :mod:`repoze.component` equivalent of ``implementation =
getMultiAdapter((context1, context2, context3), ISomething,
name='foo')`` is the following:

.. code-block:: python

   adapter = registry.resolve('something', context1, context2, context3,
                              name='foo')
   implementation = adapter(context1, context2, context3)
from repoze.configuration.exceptions import ConfigurationError  # API
from repoze.configuration.exceptions import ConfigurationConflict  # API
from repoze.configuration.context import Context  # API


def load(filename='configure.yml', package=None, context=None, loader=None):
    """
    You can load configuration without executing it (without calling
    any callbacks) by using the ``load`` function.  ``load`` accepts a
    filename argument and a package argument.  The ``package``
    argument is optional.  If it is not specified, the filename is
    found in the current working directory.

    .. code-block:: python
       :linenos:

       >>> # load configuration without a package via an absolute path
       >>> from repoze.configuration import load
       >>> context = load('/path/to/configure.yml')

    After using ``load`` you can subsequently execute the directive
    actions using the ``execute()`` method of the returned context
    object.  Using ``repoze.configuration.load``, then an immediately
    subsequent ``context.execute()``, is exactly equivalent to calling
    ``repoze.configuration.execute``.

    See the ``execute`` documentation for the meanings of the
    arguments passed to this function.
    """
    if context is None:
        context = Context(_loader=loader)
    context.load(filename, package)
    return context


def execute(filename='configure.yml', package=None, context=None, loader=None):
    """
    ``execute`` loads the configuration, executes the actions implied
    by the configuration, and returns a context.  After successful
    execution, the context object's state will be modified.  The
    context object is a mapping object.

    ``execute`` accepts a ``filename`` argument and a ``package``
    argument.  The ``package`` argument is optional.  If it is not
    specified, the filename is found in the current working directory.
    For example:

    .. code-block:: python
       :linenos:

       >>> # load configuration without a package via an absolute path
       >>> from repoze.configuration import execute
       >>> context = execute('/path/to/configure.yml')

       >>> # load configuration from the 'configure.yml' file within
       >>> # 'somepackage'
       >>> from repoze.configuration import execute
       >>> import somepackage
       >>> context = execute('configure.yml', package=somepackage)
    """
    context = load(filename, package, context, loader)
    context.execute()
    return context
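The load/execute split above is a classic two-phase pattern: collect deferred actions while parsing, then run them all only once loading has succeeded.  A minimal pure-Python sketch of that shape (hypothetical names, not the real ``Context`` class):

```python
class MiniContext(dict):
    """Toy context: records deferred actions during load,
    runs them against the mapping on execute."""

    def __init__(self):
        super().__init__()
        self.actions = []

    def load(self, directives):
        # Phase 1: only queue callbacks/values; nothing is executed yet.
        for name, value in directives:
            self.actions.append((name, value))

    def execute(self):
        # Phase 2: perform every queued action against the context mapping.
        for name, value in self.actions:
            self[name] = value
        return self

ctx = MiniContext()
ctx.load([('greeting', 'hello'), ('answer', 42)])
```

After ``load`` the mapping is still empty; only ``execute`` mutates it, which is why a failed load can be abandoned without side effects.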
import re

# Python 2.3 compatibility: fall back to the sets module when the
# builtin set type is unavailable.
try:
    set
except NameError:
    from sets import Set as set
class CSSSelectorAbstract(object):
    """Outlines the interface between CSSParser and its rule-builder for
    selectors.

    CSSBuilderAbstract.selector and CSSBuilderAbstract.combineSelectors must
    return concrete implementations of this abstract.

    See css.CSSMutableSelector for an example implementation.
    """

    def addHashId(self, hashId):
        """Modify the selector to respond to hashId.

        #hashId {...}

        HashId is a string."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def addClass(self, className):
        """Modify the selector to respond to className classes.

        .className {...}

        className is a string."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def addAttribute(self, attrName):
        """Modify the selector to respond to objects with an attrName
        attribute.

        [attrName] {...}

        attrName is a string."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def addAttributeOperation(self, attrName, attrOp, attrValue):
        """Modify the selector to respond to objects with an attrName
        attribute related to attrValue by attrOp.

        [attrName attrOp attrValue] {...}

        attrName and attrValue are strings.
        attrOp is one of '=', '~=', '|=', '&=', '^=', '!=', '<>'
        """
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def addPseudo(self, pseudo):
        """Modify the selector to respond to pseudo-classes.

        :pseudo {...}

        pseudo is a string."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def addPseudoFunction(self, pseudoFn, value):
        """Modify the selector to respond to pseudo-class functions.

        :pseudoFn(value) {...}

        pseudoFn and value are strings."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))
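A trivial concrete implementation makes the selector contract tangible.  The sketch below (self-contained, standing in for a subclass of CSSSelectorAbstract) simply records each part the parser reports back into CSS-like source text; the real css.CSSMutableSelector does far more:

```python
class RecordingSelector(object):
    """Implements the CSSSelectorAbstract interface by collecting
    selector parts back into CSS-like source text."""

    def __init__(self, name):
        self.parts = [name]

    def addHashId(self, hashId):
        self.parts.append('#' + hashId)

    def addClass(self, className):
        self.parts.append('.' + className)

    def addAttribute(self, attrName):
        self.parts.append('[%s]' % attrName)

    def addAttributeOperation(self, attrName, attrOp, attrValue):
        self.parts.append('[%s%s%s]' % (attrName, attrOp, attrValue))

    def addPseudo(self, pseudo):
        self.parts.append(':' + pseudo)

    def addPseudoFunction(self, pseudoFn, value):
        self.parts.append(':%s(%s)' % (pseudoFn, value))

    def __str__(self):
        return ''.join(self.parts)

# Parsing "div.warning:hover" would drive the selector like this:
sel = RecordingSelector('div')
sel.addClass('warning')
sel.addPseudo('hover')
```

Each ``add*`` call corresponds to one qualifier token the parser consumed, so the recorded parts reconstruct the original selector text.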
class CSSBuilderAbstract(object):
    """Outlines the interface between CSSParser and its rule-builder.

    Compose CSSParser with a concrete implementation of the builder to get
    usable results from the CSS parser.

    See css.CSSBuilder for an example implementation.
    """

    # css results

    def beginStylesheet(self, context):
        """Called at the beginning of a full stylesheet parse.

        Context is simply passed through the parser to the builder."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def stylesheet(self, rulesets, imports):
        """Should return a stylesheet suitable for the subclass.

        Rulesets is a list of results from ruleset() and @directives from
        at*() methods.  Imports is a list of imports returned from
        atImport()."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def endStylesheet(self, context):
        """Called at the end of a full stylesheet parse.

        Context is simply passed through the parser to the builder."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def beginInline(self, context):
        """Called at the beginning of an inline stylesheet parse.

        Context is simply passed through the parser to the builder."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def inline(self, declarations):
        """Should return an inline declaration result suitable for the
        subclass.

        Declarations is a list of properties returned by property()."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def endInline(self, context):
        """Called at the end of an inline stylesheet parse.

        Context is simply passed through the parser to the builder."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def ruleset(self, selectors, declarations):
        """Should return the ruleset suitable for the subclass.

        Selectors is a list of selectors returned by either selector() or
        combineSelectors().  Declarations is a list of properties returned
        by property().
        """
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    # css namespaces

    def resolveNamespacePrefix(self, nsPrefix, name):
        """Should return a single name correlating to the namespace prefix
        and the name.  This affects the name passed to termIdent(),
        selector(), and CSSSelectorAbstract's addAttribute() and
        addAttributeOperation().

        See atNamespace and the CSS spec.
        """
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    # css @ directives

    def atCharset(self, charset):
        """Charset is a string from the @charset directive."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def atImport(self, import_, filterMediums, cssParser):
        """Should return the result of importing the 'import_' reference
        string if the current medium matches the filterMediums.  An
        implementation may choose to return a callback for this method
        instead.  The list of results from this method is passed to
        stylesheet().

        cssParser is an instance implementation of CSSParser."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def atNamespace(self, nsPrefix, uri):
        """Called for each @namespace directive to inform the builder of
        nsPrefix's associated URI."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def atMedia(self, filterMediums, rulesets):
        """Should return rulesets if the current medium matches the
        filterMediums.

        rulesets is a list of results from ruleset()."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def atPage(self, page, pseudoPage, declarations, margins):
        """Should return a list of rulesets (possibly empty) to be passed
        to stylesheet().

        Page and PseudoPage are strings.  Declarations is a list of
        properties.  Margins is a list of results from atPageMargin().

        See atPageMargin() and the extended `CSS 3 candidate recommendation`__

        .. __: http://www.w3.org/TR/css3-page/
        """
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def atPageMargin(self, page, pseudoPage, marginName, declarations):
        """Should return a margin result suitable for atPage()'s margins
        argument.

        Page, PseudoPage, and MarginName are strings.  Declarations is a
        list of properties.

        See atPage() and the extended `CSS 3 candidate recommendation`__

        .. __: http://www.w3.org/TR/css3-page/
        """
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def atFontFace(self, declarations):
        """Parses an @font-face directive.  Should return a list of
        rulesets to be passed to stylesheet()."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    # css selectors

    def combineSelectors(self, selectorA, combiner, selectorB):
        """Should combine a selector suitable to the subclass
        implementation.

        Combiner is typically one of " ", "+", or ">".  Please see the CSS
        spec for the definition of these combiners.

        The result must implement CSSSelectorAbstract."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def selector(self, name):
        """Should return a selector suitable to the subclass
        implementation.

        The result must implement CSSSelectorAbstract."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    # css declarations

    def property(self, name, value, important=False):
        """Should return what the subclass defines as a property binding
        of name, value and importance.

        Name is a string; value is the result of either a term*() or
        combineTerms() method."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def combineTerms(self, termA, combiner, termB):
        """Needs to return the appropriate combination result of termA and
        termB using combiner.  Combiner is usually one of '/', '+', ','
        and the terms are results from the term*() methods provided
        here."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def termIdent(self, value):
        """Should return what the subclass defines as an ident terminal.

        Value is a string."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def termNumber(self, value, units=None):
        """Should return what the subclass defines as a number terminal.

        Value and units are strings."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def termColor(self, value):
        """Should return what the subclass defines as a color terminal.

        Value is a string."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def termURI(self, value):
        """Should return what the subclass defines as a URI terminal.

        Value is a string."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def termString(self, value):
        """Should return what the subclass defines as a string terminal.

        Value is a string."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def termUnicodeRange(self, value):
        """Should return what the subclass defines as a unicode range
        terminal.

        Value is a string."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def termFunction(self, name, value):
        """Should return what the subclass defines as a function terminal.

        Name and value are strings."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))

    def termUnknown(self, src):
        """Should return what the subclass decides to do with an unknown
        terminal.

        Src is a string."""
        raise NotImplementedError('Subclass Responsibility: %r' % (self,))
# CSS Parser

class CSSParseError(Exception):
    src = None
    fullsrc = None
    inline = False
    _baseSrcRef = ('<unknown>', 1)

    def __init__(self, msg, src):
        Exception.__init__(self, msg)
        self.src = src

    def __str__(self):
        return '%s (%s)' % (Exception.__str__(self), self.getSrcRefString())

    def setFullCSSSource(self, fullsrc, inline=False):
        self.fullsrc = fullsrc
        if inline:
            self.inline = inline

    def _getFullSrcIndex(self):
        return len(self.fullsrc) - len(self.src)

    def getLineSrc(self):
        lineIdx = 1 + self.fullsrc.rfind('\n', 0, self._getFullSrcIndex())
        lineSrc = self.fullsrc[lineIdx:].split('\n', 1)[0]
        return lineSrc

    def getLine(self):
        return self._getLineOffset() + self.fullsrc.count('\n', 0, self._getFullSrcIndex())

    def getColumn(self):
        linesrc = self.getLineSrc()
        return linesrc.index(self.src.split('\n', 1)[0]) + 1

    def getSrcRef(self):
        return (self.getFilename(), self.getLine())

    def getBaseSrcRef(self, srcRef):
        return self._baseSrcRef

    def setBaseSrcRef(self, srcRef):
        self._baseSrcRef = srcRef

    def getFilename(self):
        return self._baseSrcRef[0]

    def setFilename(self, filename):
        self._baseSrcRef = (filename, 1)

    def _getLineOffset(self):
        return self._baseSrcRef[1]

    def getSrcRefString(self):
        return '"%s" line: %d col: %d' % (self.getFilename(), self.getLine(), self.getColumn())

    def raiseFromSrc(self, srcref=None):
        from TG.introspection.stack import traceSrcrefExec
        if srcref is not None:
            self.setBaseSrcRef(srcref)
        traceSrcrefExec(self.getSrcRef(), 'raise cssError', cssError=self)
class CSSParser(object):
    """CSS-2.1 parser dependent only upon the re module.

    Implemented directly from http://www.w3.org/TR/CSS21/grammar.html
    Tested with some existing CSS stylesheets for portability.

    CSS Parsing API:
      * setCSSBuilder()
        To set a concrete instance implementing CSSBuilderAbstract

      * parseFile()
        Use to parse external stylesheets using a file-like object

        >>> cssFile = open('test.css', 'r')
        >>> stylesheets = myCSSParser.parseFile(cssFile)

      * parse()
        Use to parse embedded stylesheets using a source string

        >>> cssSrc = '''
        ... body,body.body {
        ...     font: 110%, "Times New Roman", Arial, Verdana, Helvetica, serif;
        ...     background: White;
        ...     color: Black;
        ... }
        ... a {text-decoration: underline;}
        ... '''
        >>> stylesheets = myCSSParser.parse(cssSrc)

      * parseInline()
        Use to parse inline stylesheets using an attribute source string

        >>> style = 'font: 110%, "Times New Roman", Arial, Verdana, Helvetica, serif; background: White; color: Black'
        >>> stylesheets = myCSSParser.parseInline(style)

      * parseAttributes()
        Use to parse attribute string values into inline stylesheets

        >>> stylesheets = myCSSParser.parseAttributes(
        ...     font='110%, "Times New Roman", Arial, Verdana, Helvetica, serif',
        ...     background='White',
        ...     color='Black')

      * parseSingleAttr()
        Use to parse a single string value into a CSS expression

        >>> fontValue = myCSSParser.parseSingleAttr('110%, "Times New Roman", Arial, Verdana, Helvetica, serif')
    """

    # Constants / Variables / Etc.
    ParseError = CSSParseError

    bParseStrict = False

    AttributeOperators = set(('=', '~=', '|=', '&=', '^=', '!=', '<>'))
    SelectorQualifiers = set(('#', '.', '[', ':'))
    SelectorCombiners = set(('+', '>'))
    ExpressionOperators = set(('/', '+', ','))
    DeclarationSetters = set((':', '='))
    DeclarationBoundry = set(('', ',', '{', '}', '[', ']', '(', ')'))

    # atKeywordHandlers is a class-level dictionary to enable extending
    # @-directives in a standard way.  See _parseAtKeyword for details.
    atKeywordHandlers = {}

    # Regular expressions
    if True:  # makes the following code foldable
        _orRule = lambda *args: '|'.join(args)
        _reflags = re.I | re.M | re.U

        i_hex = '[0-9a-fA-F]'
        i_nonascii = u'[\200-\377]'
        i_unicode = '\\\\(?:%s){1,6}\s?' % i_hex
        i_escape = _orRule(i_unicode, u'\\\\[ -~\200-\377]')
        i_nmstart = _orRule('[-A-Za-z_]', i_nonascii, i_escape)
        i_nmchar = _orRule('[-0-9A-Za-z_]', i_nonascii, i_escape)
        i_ident = '((?:%s)(?:%s)*)' % (i_nmstart, i_nmchar)
        re_ident = re.compile(i_ident, _reflags)

        i_element_name = '((?:%s)|\*)' % (i_ident[1:-1],)
        re_element_name = re.compile(i_element_name, _reflags)

        i_namespace_selector = '((?:%s)|\*|)\|(?!=)' % (i_ident[1:-1],)
        re_namespace_selector = re.compile(i_namespace_selector, _reflags)

        i_class = '\\.' + i_ident
        re_class = re.compile(i_class, _reflags)

        i_hash = '#((?:%s)+)' % i_nmchar
        re_hash = re.compile(i_hash, _reflags)

        i_rgbcolor = '(#%s{6}|#%s{3})' % (i_hex, i_hex)
        re_rgbcolor = re.compile(i_rgbcolor, _reflags)

        i_nl = u'\n|\r\n|\r|\f'
        i_escape_nl = u'\\\\(?:%s)' % i_nl
        i_string_content = _orRule(u'[\t !#$%&(-~]', i_escape_nl, i_nonascii, i_escape)
        i_string1 = u'\"((?:%s|\')*)\"' % i_string_content
        i_string2 = u'\'((?:%s|\")*)\'' % i_string_content
        i_string = _orRule(i_string1, i_string2)
        re_string = re.compile(i_string, _reflags)

        i_string1_unexpectedEnd = i_string1[:-1]
        i_string2_unexpectedEnd = i_string2[:-1]
        i_string_unexpectedEnd = _orRule(i_string1_unexpectedEnd, i_string2_unexpectedEnd)
        re_string_unexpectedEnd = re.compile(i_string_unexpectedEnd, _reflags)

        i_uri = (u'url\\(\s*(?:(?:%s)|((?:%s)+))\s*\\)'
                 % (i_string, _orRule('[!#$%&*-~]', i_nonascii, i_escape)))
        re_uri = re.compile(i_uri, _reflags)

        i_num = u'([-+]?[0-9]+(?:\\.[0-9]+)?)|([-+]?\\.[0-9]+)'
        re_num = re.compile(i_num, _reflags)

        i_unit = '(%%|%s)?' % i_ident
        re_unit = re.compile(i_unit, _reflags)

        i_function = i_ident + '\\('
        re_function = re.compile(i_function, _reflags)

        i_functionterm = u'[-+]?' + i_function
        re_functionterm = re.compile(i_functionterm, _reflags)

        i_unicoderange1 = "(?:U\\+%s{1,6}-%s{1,6})" % (i_hex, i_hex)
        i_unicoderange2 = "(?:U\\+\?{1,6}|{h}(\?{0,5}|{h}(\?{0,4}|{h}(\?{0,3}|{h}(\?{0,2}|{h}(\??|{h}))))))"
        i_unicoderange = i_unicoderange1  # u'(%s|%s)' % (i_unicoderange1, i_unicoderange2)
re_unicoderange = re.compile(i_unicoderange, _reflags)
i_important = u'!\s*(important)'
re_important = re.compile(i_important, _reflags)
i_comment = u'\\s*(?:\\s*\\/\\*[^*]*\\*+([^/*][^*]*\\*+)*\\/\\s*)*'
re_comment = re.compile(i_comment, _reflags)
i_declarationError = u'((?:[^;{}]*(?:{[^}]*})?)*)'
re_declarationError = re.compile(i_declarationError, _reflags)
i_atKeywordErrorStart = u'([^;{]*[;{])'
re_atKeywordErrorStart = re.compile(i_atKeywordErrorStart, _reflags)
i_atKeywordErrorGroup = u'([^{}]*[{}])'
re_atKeywordErrorGroup = re.compile(i_atKeywordErrorGroup, _reflags)
del _orRule
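As a rough, standalone illustration of how two of these patterns cooperate to tokenize a value such as ``110%`` (the ``re_unit`` below is a simplified stand-in for illustration, not the class's actual ``i_unit`` pattern):

```python
import re

# Simplified sketch of the number + unit tokenization done by re_num /
# re_unit above; this re_unit is an illustrative stand-in for i_unit.
re_num = re.compile(r'([-+]?[0-9]+(?:\.[0-9]+)?)|([-+]?\.[0-9]+)')
re_unit = re.compile(r'(%|[A-Za-z]+)?')

src = '110% sans-serif'
m = re_num.match(src)
number = m.group(0)                   # '110'
rest = src[m.end():]
unit = re_unit.match(rest).group(0)   # '%'
```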
# Public
def __init__(self, cssBuilder=None):
self.setCSSBuilder(cssBuilder)
# CSS Builder to delegate to
def getCSSBuilder(self):
"""A concrete instance implementing CSSBuilderAbstract"""
return self._cssBuilder
def setCSSBuilder(self, cssBuilder):
"""A concrete instance implementing CSSBuilderAbstract"""
self._cssBuilder = cssBuilder
cssBuilder = property(getCSSBuilder, setCSSBuilder)
#Public CSS Parsing API
def parseFile(self, srcFile, context=None, closeFile=False):
"""Parses CSS file-like objects using the current cssBuilder.
Use for external stylesheets.
>>> cssFile = open('test.css', 'r')
>>> stylesheets = myCSSParser.parseFile(cssFile)
Context is a pass-through variable to the CSSBuilder.
"""
try:
result = self.parse(srcFile.read(), context)
finally:
if closeFile:
srcFile.close()
return result
def parse(self, src, context=None):
"""Parses CSS string source using the current cssBuilder.
Use for embedded stylesheets.
>>> cssSrc = '''
body,body.body {
font: 110%, "Times New Roman", Arial, Verdana, Helvetica, serif;
background: White;
color: Black;
}
a {text-decoration: underline;}
'''
>>> stylesheets = myCSSParser.parse(cssSrc)
Context is just a pass-through variable to the CSSBuilder.
"""
self.cssBuilder.beginStylesheet(context)
try:
try:
src, stylesheet = self._parseStylesheet(src)
except self.ParseError, err:
err.setFullCSSSource(src)
raise
finally:
self.cssBuilder.endStylesheet(context)
return stylesheet
def parseInline(self, src, context=None):
"""Parses CSS inline source string using the current cssBuilder.
Use to parse inline stylesheets using attribute source string
        Use to parse a tag's 'style'-like attribute.
>>> style = 'font: 110%, "Times New Roman", Arial, Verdana, Helvetica, serif; background: White; color: Black'
>>> stylesheets = myCSSParser.parseInline(style)
Context is just a pass-through variable to the CSSBuilder.
"""
self.cssBuilder.beginInline(context)
try:
try:
src, declarations = self._parseDeclarationGroup(self._stripCSS(src), braces=False)
except self.ParseError, err:
err.setFullCSSSource(src, inline=True)
raise
result = self.cssBuilder.inline(declarations)
finally:
self.cssBuilder.endInline(context)
return result
def parseAttributes(self, attributes={}, context=None, **kwAttributes):
"""Parses CSS attribute source strings and return an inline stylesheet.
Use to parse a tag's highly CSS-based attributes like 'font'.
Use to parse attribute string values into inline stylesheets
>>> stylesheets = myCSSParser.parseAttributes(
font='110%, "Times New Roman", Arial, Verdana, Helvetica, serif',
background='White',
color='Black')
Context is just a pass-through variable to the CSSBuilder.
See also: parseSingleAttr
"""
if attributes:
kwAttributes.update(attributes)
self.cssBuilder.beginInline(context)
try:
properties = []
try:
for propertyName, src in kwAttributes.iteritems():
src, property = self._parseDeclarationProperty(self._stripCSS(src), propertyName)
if property is not None:
properties.append(property)
except self.ParseError, err:
err.setFullCSSSource(src, inline=True)
raise
result = self.cssBuilder.inline(properties)
finally:
self.cssBuilder.endInline(context)
return result
def parseSingleAttr(self, attrValue):
"""Parse a single CSS attribute source string and return the built CSS expression.
Use to parse a tag's highly CSS-based attributes like 'font'.
Use to parse a single string value into a CSS expression
>>> fontValue = myCSSParser.parseSingleAttr('110%, "Times New Roman", Arial, Verdana, Helvetica, serif')
See also: parseAttributes
"""
attributes = self.parseAttributes(singleAttr=attrValue)
return attributes['singleAttr']
# Internal _parse methods
def _parseStylesheet(self, src):
"""Parses a CSS stylesheet into imports and rulesets, returning the
result of cssBuilder.stylesheet()
::
stylesheet
: [ CHARSET_SYM S* STRING S* ';' ]?
[S|CDO|CDC]* [ import [S|CDO|CDC]* ]*
[ [ ruleset | media | page | font_face ] [S|CDO|CDC]* ]*
;
"""
# [ CHARSET_SYM S* STRING S* ';' ]?
src = self._parseAtCharset(src)
# [S|CDO|CDC]*
src = self._parseSCDOCDC(src)
# [ import [S|CDO|CDC]* ]*
src, imports = self._parseAtImports(src)
# [ namespace [S|CDO|CDC]* ]*
src = self._parseAtNamespace(src)
rulesets = []
# [ [ ruleset | atkeywords ] [S|CDO|CDC]* ]*
while src: # due to ending with ]*
if src.startswith('@'):
# @media, @page, @font-face
src, atResults = self._parseAtKeyword(src)
if atResults is not None:
rulesets.extend(atResults)
else:
# ruleset
src, ruleset = self._parseRuleset(src)
rulesets.append(ruleset)
# [S|CDO|CDC]*
src = self._parseSCDOCDC(src)
stylesheet = self.cssBuilder.stylesheet(rulesets, imports)
return src, stylesheet
def _parseSCDOCDC(self, src):
"""[S|CDO|CDC]*"""
while 1:
src = self._stripCSS(src)
if src.startswith('<!--'):
src = src[4:]
elif src.startswith('-->'):
src = src[3:]
else:
break
return src
# CSS @ directives
def _parseAtCharset(self, src):
"""Parses @charset directives.
::
[ CHARSET_SYM S* STRING S* ';' ]?
"""
if src.startswith('@charset '):
src = self._stripCSS(src[9:])
src, charset = self._getString(src)
src = self._stripCSS(src)
if src[:1] != ';':
raise self.ParseError('@charset expected a terminating \';\'', src)
src = self._stripCSS(src[1:])
self.cssBuilder.atCharset(charset)
return src
def _parseAtImports(self, src):
"""Returns a list of imports returned by cssBuilder.atImport().
::
[ import [S|CDO|CDC]* ]*"""
result = []
while src.startswith('@import '):
src = self._stripCSS(src[8:])
src, import_ = self._getStringOrURI(src)
if import_ is None:
raise self.ParseError('Import expecting string or url', src)
filterMediums = []
src, medium = self._getIdent(self._stripCSS(src))
while medium is not None:
filterMediums.append(medium)
if src[:1] == ',':
src = self._stripCSS(src[1:])
src, medium = self._getIdent(src)
else:
break
if src[:1] != ';':
raise self.ParseError('@import expected a terminating \';\'', src)
src = self._stripCSS(src[1:])
stylesheet = self.cssBuilder.atImport(import_, filterMediums, self)
if stylesheet is not None:
result.append(stylesheet)
src = self._parseSCDOCDC(src)
return src, result
def _parseAtNamespace(self, src):
"""Parses @namespace directives.
Calls cssBuilder.atNamespace for each directive.
::
namespace :
@namespace S* [IDENT S*]? [STRING|URI] S* ';' S*
"""
src = self._parseSCDOCDC(src)
while src.startswith('@namespace'):
src = self._stripCSS(src[len('@namespace'):])
src, namespace = self._getStringOrURI(src)
if namespace is None:
src, nsPrefix = self._getIdent(src)
if nsPrefix is None:
raise self.ParseError('@namespace expected an identifier or a URI', src)
src, namespace = self._getStringOrURI(self._stripCSS(src))
if namespace is None:
raise self.ParseError('@namespace expected a URI', src)
else:
nsPrefix = None
src = self._stripCSS(src)
if src[:1] != ';':
raise self.ParseError('@namespace expected a terminating \';\'', src)
src = self._stripCSS(src[1:])
self.cssBuilder.atNamespace(nsPrefix, namespace)
src = self._parseSCDOCDC(src)
return src
def _parseAtKeyword(self, src):
"""[media | page | font_face | unknown_keyword]"""
if src.startswith('@'):
src = src[1:]
else:
raise self.ParseError('atKeyword missing @ sign', src)
src, atDirective = self._getIdent(src)
src = self._stripCSS(src)
directiveHandler = self.atKeywordHandlers.get(atDirective,
self.__class__._parseAtUnknownHandler)
return directiveHandler(self, atDirective, src)
def _parseAtUnknownHandler(self, atDirective, src):
if self.bParseStrict:
raise self.ParseError('Unknown @-Directive \"%s\"' % atDirective, src)
src, content = self._getMatchResult(self.re_atKeywordErrorStart, src)
src = self._stripCSS(src)
content = content.lstrip()
if content[0] == '{': n = 1
elif content[-1] == '{': n = 2
else: n = 0
while n:
src, content = self._getMatchResult(self.re_atKeywordErrorGroup, src)
src = self._stripCSS(src)
n += {'{':1, '}':-1}.get(content[-1], 0)
return src, []
def _parseAtMedia(self, atDirective, src):
"""media
: MEDIA_SYM S* medium [ ',' S* medium ]* '{' S* ruleset* '}' S*
;
"""
filterMediums = []
while src and src[0] != '{':
src, medium = self._getIdent(src)
if medium is None:
raise self.ParseError('@media rule expected media identifier', src)
filterMediums.append(medium)
if src[0] == ',':
src = self._stripCSS(src[1:])
else:
src = self._stripCSS(src)
if not src.startswith('{'):
raise self.ParseError('Ruleset opening \'{\' not found', src)
src = self._stripCSS(src[1:])
rulesets = []
while src and not src.startswith('}'):
src, ruleset = self._parseRuleset(src)
rulesets.append(ruleset)
src = self._stripCSS(src)
if not src.startswith('}'):
if self.bParseStrict or src:
raise self.ParseError('Ruleset closing \'}\' not found', src)
elif src:
src = self._stripCSS(src[1:])
result = self.cssBuilder.atMedia(filterMediums, rulesets)
if result is None:
result = []
return src, result
atKeywordHandlers['media'] = _parseAtMedia
def _parseAtPage(self, atDirective, src):
"""@page directive. Returns result from cssBuilder.atPage
Supports extended CSS 3 candidate recommendation
http://www.w3.org/TR/css3-page/
::
@page
: PAGE_SYM S* IDENT? pseudo_page? S*
'{' S* [ declaration | @margin ] [ ';'
S* [ declaration | @margin ]? ]* '}' S*
;
"""
if not src.startswith('{'):
src, page = self._getIdent(src)
else: page = ''
if src[:1] == ':':
src, pseudoPage = self._getIdent(src[1:])
src = src[1:]
else: pseudoPage = ''
src = self._stripCSS(src)
if not src.startswith('{'):
raise self.ParseError('Ruleset opening \'{\' not found', src)
src = self._stripCSS(src[1:])
declarations, margins = [], []
while src[:1] not in self.DeclarationBoundry:
# declaration group while loop.
if src.startswith('@'):
# @ specific margin
src, margin = self._parseAtPageMargin(src[1:], page, pseudoPage)
margins.append(margin)
else:
# declaration
src, property = self._parseDeclaration(src)
if property is not None:
declarations.append(property)
if src.startswith(';'):
src = self._stripCSS(src[1:])
# [S|CDO|CDC]*
src = self._parseSCDOCDC(src)
if not src.startswith('}'):
raise self.ParseError('Ruleset closing \'}\' not found', src)
else:
src = self._stripCSS(src[1:])
result = self.cssBuilder.atPage(page, pseudoPage, declarations, margins)
if result is None:
result = []
return self._stripCSS(src), result
atKeywordHandlers['page'] = _parseAtPage
def _parseAtPageMargin(self, src, page, pseudoPage):
"""@page margin directive. Returns result from cssBuilder.atPageMargin
Supports extended CSS 3 candidate recommendation
http://www.w3.org/TR/css3-page/
::
page margin
: margin_sym S* '{' declaration [ ';' S* declaration? ]* '}' S*
;
;
See _parseAtPage()
"""
src, margin = self._getIdent(src)
if margin is None:
raise self.ParseError('At-margin rule received an unknown margin', src)
src, declarations = self._parseDeclarationGroup(self._stripCSS(src))
result = self.cssBuilder.atPageMargin(page, pseudoPage, margin.lower(), declarations)
return src, result
def _parseAtFontFace(self, atDirective, src):
src, declarations = self._parseDeclarationGroup(src)
result = self.cssBuilder.atFontFace(declarations)
if result is None:
result = []
return src, result
atKeywordHandlers['font-face'] = _parseAtFontFace
# ruleset - see selector and declaration groups
def _parseRuleset(self, src):
"""Parses a CSS ruleset by parsing the selectors and declarations.
Returns the result of cssBuilder.ruleset() from the list of selectors
and list of declarations.
::
ruleset
: selector [ ',' S* selector ]*
'{' S* declaration [ ';' S* declaration ]* '}' S*
;
"""
src, selectors = self._parseSelectorGroup(src)
src, declarations = self._parseDeclarationGroup(self._stripCSS(src))
result = self.cssBuilder.ruleset(selectors, declarations)
return src, result
# selector parsing
def _parseSelectorGroup(self, src):
"""Returns a list of selectors, complex or simple, returned from cssBuilder.selector().
Each element must implement CSSSelectorAbstract."""
selectors = []
while src[:1] not in ('{','}', ']','(',')', ';', ''):
src, selector = self._parseSelector(src)
if selector is None:
break
selectors.append(selector)
if src.startswith(','):
src = self._stripCSS(src[1:])
return src, selectors
def _parseSelector(self, src):
"""Parses a complex selector.
Selectors are combined using cssBuilder.combineSelectors() as necessary.
Returns a modified selector from cssBuilder.selector() which must implement
CSSSelectorAbstract.
::
selector
: simple_selector [ combinator simple_selector ]*
;
"""
src, selector = self._parseSimpleSelector(src)
while src[:1] not in ('', ',', ';', '{','}', '[',']','(',')'):
for combiner in self.SelectorCombiners:
if src.startswith(combiner):
src = self._stripCSS(src[len(combiner):])
break
else:
combiner = ' '
src, selectorB = self._parseSimpleSelector(src)
selector = self.cssBuilder.combineSelectors(selector, combiner, selectorB)
return self._stripCSS(src), selector
def _parseSimpleSelector(self, src):
"""Parses a single selector.
Complex selectors are handled by _parseSelector. Returns a modified
selector from cssBuilder.selector() which must implement
CSSSelectorAbstract.
::
simple_selector
: [ namespace_selector ]? element_name?
[ HASH | class | attrib | pseudo ]* S*
;
"""
src = self._stripCSS(src)
src, nsPrefix = self._getMatchResult(self.re_namespace_selector, src)
src = self._stripCSS(src)
src, name = self._getMatchResult(self.re_element_name, src)
src = self._stripCSS(src)
if name:
pass # already *successfully* assigned
elif src[:1] in self.SelectorQualifiers:
name = '*'
else:
raise self.ParseError('Selector name or qualifier expected', src)
name = self.cssBuilder.resolveNamespacePrefix(nsPrefix, name)
selector = self.cssBuilder.selector(name)
while src and src[:1] in self.SelectorQualifiers:
src, hash_ = self._getMatchResult(self.re_hash, src)
src = self._stripCSS(src)
if hash_ is not None:
selector.addHashId(hash_)
continue
src, class_ = self._getMatchResult(self.re_class, src)
src = self._stripCSS(src)
if class_ is not None:
selector.addClass(class_)
continue
if src.startswith('['):
src, selector = self._parseSelectorAttribute(src, selector)
elif src.startswith(':'):
src, selector = self._parseSelectorPseudo(src, selector)
else:
break
return self._stripCSS(src), selector
def _parseSelectorAttribute(self, src, selector):
"""Parses a attribute selector.
Selector argument must implement CSSSelectorAbstract.
Please see CSS spec for definition.
::
attrib
: '[' S* [ namespace_selector ]? IDENT S*
[ [ '=' | INCLUDES | DASHMATCH ] S* [ IDENT | STRING ] S* ]? ']'
;
"""
if not src.startswith('['):
raise self.ParseError('Selector Attribute opening \'[\' not found', src)
src = self._stripCSS(src[1:])
src, nsPrefix = self._getMatchResult(self.re_namespace_selector, src)
src = self._stripCSS(src)
src, attrName = self._getIdent(src)
if attrName is None:
raise self.ParseError('Expected a selector attribute name', src)
if nsPrefix is not None:
attrName = self.cssBuilder.resolveNamespacePrefix(nsPrefix, attrName)
for attrOp in self.AttributeOperators:
if src.startswith(attrOp):
break
else:
attrOp = ''
src = self._stripCSS(src[len(attrOp):])
if attrOp:
src, attrValue = self._getIdent(src)
if attrValue is None:
src, attrValue = self._getString(src)
if attrValue is None:
raise self.ParseError('Expected a selector attribute value', src)
else:
attrValue = None
if not src.startswith(']'):
raise self.ParseError('Selector Attribute closing \']\' not found', src)
else:
src = src[1:]
if attrOp:
selector.addAttributeOperation(attrName, attrOp, attrValue)
else:
selector.addAttribute(attrName)
return src, selector
def _parseSelectorPseudo(self, src, selector):
"""Parses a pseudo selector.
Selector argument must implement CSSSelectorAbstract.
Please see CSS spec for definition.
::
pseudo
: ':' [ IDENT | function ]
;
"""
if not src.startswith(':'):
raise self.ParseError('Selector Pseudo \':\' not found', src)
src = src[1:]
src, name = self._getIdent(src)
if not name:
raise self.ParseError('Selector Pseudo identifier not found', src)
if src.startswith('('):
# function
src = self._stripCSS(src[1:])
src, term = self._parseExpression(src, True)
if not src.startswith(')'):
raise self.ParseError('Selector Pseudo Function closing \')\' not found', src)
src = src[1:]
selector.addPseudoFunction(name, term)
else:
selector.addPseudo(name)
return src, selector
# declaration and expression parsing
def _parseDeclarationGroup(self, src, braces=True):
"""Returns a list of properties returned from cssBuilder.property"""
if src.startswith('{'):
src, braces = src[1:], True
elif braces:
raise self.ParseError('Declaration group opening \'{\' not found', src)
properties = []
src = self._stripCSS(src)
while src[:1] not in self.DeclarationBoundry:
src, property = self._parseDeclaration(src)
if property is not None:
properties.append(property)
if src.startswith(';'):
src = self._stripCSS(src[1:])
if braces:
if not src.startswith('}'):
if self.bParseStrict or src:
raise self.ParseError('Declaration group closing \'}\' not found', src)
src = src[1:]
return self._stripCSS(src), properties
def _parseDeclaration(self, src):
"""Returns a property or None.
Parses only the property name and the declaration setter.
_parseDeclarationProperty completes the property by parsing the
expression.
::
declaration
: ident S* ':' S* expr prio?
| /* empty */
;
"""
# property
src, propertyName = self._getIdent(src)
property = None
if propertyName is not None:
src = self._stripCSS(src)
# S* : S*
if self.bParseStrict:
if src[:1] != ':':
raise self.ParseError('Malformed declaration missing ":" before the value', src)
src, property = self._parseDeclarationProperty(self._stripCSS(src[1:]), propertyName)
elif src[:1] in self.DeclarationSetters:
# Note: we are being fairly flexible here... technically, the
# ":" is *required*, but in the name of flexibility we support
# an "=" transition
src, property = self._parseDeclarationProperty(self._stripCSS(src[1:]), propertyName)
else:
# dump characters to next ; or }
src, dumpText = self._getMatchResult(self.re_declarationError, src)
src = self._stripCSS(src)
elif self.bParseStrict:
raise self.ParseError('Property name not present', src)
return src, property
def _parseDeclarationProperty(self, src, propertyName):
"""Returns the result from cssBuilder.property(), combining name and value for the declaration"""
# expr
src, expr = self._parseExpression(src)
if expr is NotImplemented:
return src, None
# prio?
src, important = self._getMatchResult(self.re_important, src)
src = self._stripCSS(src)
property = self.cssBuilder.property(propertyName, expr, important)
return src, property
def _parseExpression(self, src, returnList=False):
"""Returns the terms (combined if necessary) for the property's value expression.
::
expr
: term [ operator term ]*
;
"""
src, term = self._parseExpressionTerm(src)
if term is NotImplemented:
return src, term
operator = None
while src[:1] not in ('', ';', '{','}', '[',']', ')'):
for operator in self.ExpressionOperators:
if src.startswith(operator):
src = src[len(operator):]
break
else:
operator = ' '
src, term2 = self._parseExpressionTerm(self._stripCSS(src))
if term2 is NotImplemented:
break
else:
term = self.cssBuilder.combineTerms(term, operator, term2)
if operator is None and returnList:
term = self.cssBuilder.combineTerms(term, None, None)
return src, term
else:
return src, term
def _parseExpressionTerm(self, src):
"""Returns the result from the applicable cssBuilder.term*() method.
::
term
: unary_operator? [ NUMBER S* | PERCENTAGE S* | LENGTH S* | EMS S*
| EXS S* | ANGLE S* | TIME S* | FREQ S* | function ] | STRING S*
| IDENT S* | URI S* | RGB S* | UNICODERANGE S* | hexcolor
;
"""
src, result = self._getMatchResult(self.re_num, src)
if result is not None:
src, units = self._getMatchResult(self.re_unit, src)
term = self.cssBuilder.termNumber(result, units)
return self._stripCSS(src), term
src, result = self._getString(src, self.re_uri)
if result is not None:
term = self.cssBuilder.termURI(result)
return self._stripCSS(src), term
src, result = self._getString(src)
if result is not None:
term = self.cssBuilder.termString(result)
return self._stripCSS(src), term
src, result = self._getMatchResult(self.re_functionterm, src)
if result is not None:
src, params = self._parseExpression(src, True)
if src[0] != ')':
raise self.ParseError('Terminal function expression expected closing \')\'', src)
src = self._stripCSS(src[1:])
term = self.cssBuilder.termFunction(result, params)
return src, term
src, result = self._getMatchResult(self.re_rgbcolor, src)
if result is not None:
term = self.cssBuilder.termColor(result)
return self._stripCSS(src), term
src, result = self._getMatchResult(self.re_unicoderange, src)
if result is not None:
term = self.cssBuilder.termUnicodeRange(result)
return self._stripCSS(src), term
src, nsPrefix = self._getMatchResult(self.re_namespace_selector, src)
src, result = self._getIdent(src)
if result is not None:
if nsPrefix is not None:
result = self.cssBuilder.resolveNamespacePrefix(nsPrefix, result)
term = self.cssBuilder.termIdent(result)
return self._stripCSS(src), term
src2, result = self._getString(src, self.re_string_unexpectedEnd)
if result:
if self.bParseStrict:
raise self.ParseError('Unexpected end of string literal', src)
src2 = self._stripCSS(src2)
if not src2:
# Special case where the CSS file was truncated, and we
# should actually return the unterminated string
term = self.cssBuilder.termString(result)
else:
# Per section 4.2 of the CSS21 spec, unterminated strings
# should be ignored
term = NotImplemented
return src2, term
if self.bParseStrict:
raise self.ParseError('Malformed declaration missing value', src)
else:
return self.cssBuilder.termUnknown(src)
# utility methods
def _getIdent(self, src, default=None):
return self._getMatchResult(self.re_ident, src, default)
def _getString(self, src, rexpression=None, default=None):
if rexpression is None:
rexpression = self.re_string
result = rexpression.match(src)
if result:
strres = filter(None, result.groups())
if strres:
strres = strres[0]
else:
strres = ''
return src[result.end():], strres
else:
return src, default
def _getStringOrURI(self, src):
src, result = self._getString(src, self.re_uri)
if result is None:
src, result = self._getString(src)
return src, result
def _getMatchResult(self, rexpression, src, default=None, group=1):
result = rexpression.match(src)
if result:
return src[result.end():], result.group(group)
else:
return src, default
def _stripCSS(self, src):
# Get rid of the comments
match = self.re_comment.match(src)
        return src[match.end():]

| /repoze.cssutils-1.0a3.tar.gz/repoze.cssutils-1.0a3/src/repoze/cssutils/parser.py | 0.744006 | 0.234966 | parser.py | pypi |
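Every ``_parse*`` / ``_get*`` method above follows the same ``(remaining_src, result)`` convention: consume a prefix of the source string and return the rest alongside the parsed value. A minimal self-contained sketch of that pattern (an illustration, not the class's actual code):

```python
import re

# Each step consumes a prefix and hands the remaining source to the
# next step, mirroring the parser's (remaining_src, result) convention.
_ident = re.compile(r'[A-Za-z][A-Za-z0-9-]*')

def get_ident(src):
    m = _ident.match(src)
    if m:
        return src[m.end():].lstrip(), m.group(0)
    return src, None

src, name = get_ident('color : Black')   # name='color', src=': Black'
```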
class DependencyInjector(object):
def __init__(self):
self.factories = {}
self.lookups = {}
self.factory_results = {}
def inject_factory(self, fixture, real):
""" Inject a testing dependency factory. ``fixture`` is the
factory used for testing purposes. ``real`` is the actual
factory implementation when the system is not used under test.
Returns a ``promise`` callable, which accepts no arguments.
When called, the promise callable returns the instance created
during a test run."""
def thunk():
return self.factory_results[(thunk, real)]
these_factories = self.factories.setdefault(real, [])
these_factories.append((thunk, fixture))
return thunk
def inject(self, fixture, real):
""" Inject a testing dependency object. ``fixture`` is the
object used for testing purposes. ``real`` is the
actual object when the system is not used under test."""
these_lookups = self.lookups.setdefault(_key(real), [])
these_lookups.append(fixture)
def construct(self, real, *arg, **kw):
""" Return the result of a testing factory related to ``real``
when the system is under test or the result of the ``real``
factory when the system is not under test. ``*arg`` and
``**kw`` will be passed to either factory."""
if real in self.factories:
these_factories = self.factories[real]
if these_factories:
thunk, fake = these_factories.pop(0)
result = fake(*arg, **kw)
self.factory_results[(thunk, real)] = result
return result
return real(*arg, **kw)
def lookup(self, real):
""" Return a testing object related to ``real`` if the system
is under test or the ``real`` when the system is not under
test."""
key = _key(real)
if key in self.lookups:
these_lookups = self.lookups[key]
if these_lookups:
fake = these_lookups.pop(0)
return fake
return real
def clear(self):
""" Clear the dependency injection registry """
self.__init__()
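The inject/lookup flow can be pictured with a condensed re-implementation (an illustration of the consume-once semantics, not the class above):

```python
# Illustrative mini-injector: each injected fixture is consumed once
# (FIFO); once the queue is empty, lookups return the real object.
class MiniInjector:
    def __init__(self):
        self.lookups = {}

    def inject(self, fixture, real):
        self.lookups.setdefault(real, []).append(fixture)

    def lookup(self, real):
        queue = self.lookups.get(real)
        if queue:
            return queue.pop(0)
        return real

inj = MiniInjector()
inj.inject('fake-db', 'real-db')
first = inj.lookup('real-db')    # fixture consumed
second = inj.lookup('real-db')   # queue exhausted: real object
```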
def _key(obj):
"""
Makes a key from an object suitable for looking up. If object is hashable
just returns the object, otherwise returns id.
"""
try:
hash(obj)
return obj
except TypeError:
return id(obj)
injector = DependencyInjector()
lookup = injector.lookup
construct = injector.construct
inject_factory = injector.inject_factory
inject = injector.inject
clear = injector.clear

| /repoze.depinj-0.3.tar.gz/repoze.depinj-0.3/repoze/depinj/__init__.py | 0.84858 | 0.338911 | __init__.py | pypi |
:mod:`repoze.dvselect` Documentation
====================================
Overview
--------
:mod:`repoze.dvselect` provides a `WSGI`_ middleware component which reads
assertions set by :mod:`repoze.urispace` into the WSGI environment, and
uses them to select themes and rules for `Deliverance`_ based on the URI of
the request.
Because it reads the assertions created by the :mod:`repoze.urispace`
middleware, the :mod:`repoze.dvselect` middleware must be configured
"downstream" from the :mod:`repoze.urispace` middleware (i.e., nearer to
the application); otherwise there will be no assertions to use.
Because it tweaks the WSGI environment, the :mod:`repoze.dvselect`
middleware must be configured "upstream" from the `Deliverance`_ middleware
/ proxy (i.e., nearer to the server); otherwise Deliverance will have
already selected the theme / rules to use.
.. toctree::
:maxdepth: 2
Example: Using different theme / rules for sections of a site
--------------------------------------------------------------
This example assumes that the site being themed is something like a
newspaper site, where different "sections" have different layouts.
.. literalinclude:: etc/dv_news.xml
:linenos:
:language: xml
The purpose of this file is to compute two values (called "assertions" by
the `URISpace`_ spec) based on the request URI:
- the ``theme`` assertion is the URI to be used by `Deliverance`_ as the
theme for this request. It should be a URI for a static HTML page. This
  example assumes that the theme pages are served from a server separate
  from the main application; see the `Deliverance`_ docs for
details about serving the theme from within the main application.
- the ``rules`` assertion is the URI to be used by `Deliverance`_ as the
rules mapping the content onto the theme for this request. It should be
a URI for an XML document. This example assumes that the rules are
  served from a server separate from the main application; see the
  `Deliverance`_ docs for details about serving the rules from within the
main application.
Prolog
++++++
- Line 1 is the stock XML prolog.
- Lines 2 - 5 define the root element (its element name is irrelevant to
`URISpace`_) and the namespaces used in the document.
- The ``uri:`` namespace defined in line 3 is the stock namespace for
`URISpace`_.
- The ``uriext:`` namespace in line 4 is used for extensions defined
    by :mod:`repoze.urispace`: this document uses the ``uriext:pathlast``
extension element.
For details of the syntax of this file, please see the
`repoze.urispace docs <http://packages.python.org/repoze.urispace/>`_.
Default Assertions
++++++++++++++++++
Lines 9 and 10 define the default theme and rule assertions: they will
be used if no other rule matches the request URI.
Assertions for Sections
+++++++++++++++++++++++
- Lines 11 - 22 are conditioned by a match on ``news`` as the first
element of the path in the URI.
- Line 12 overrides the theme for requests in the ``news`` section.
- Lines 13 - 15 override the theme further, for items which are in the
``news/world`` subsection. Likewise, lines 16 - 18 override it for the
``news/national`` subsection, and lines 19 - 21 for the ``news/local``
subsection.
- Lines 24 - 26 are conditioned by a match on ``lifestyle`` as the first
element of the URI path; they override the theme accordingly.
- Lines 28 - 30 are conditioned by a match on ``sports`` as the first
element of the URI path; they override the theme accordingly.
Note that none of these `URISpace`_ assertions override the ``rules``
assertion, which means that the default assertion applies.
Assertions for Pages
++++++++++++++++++++
- Lines 33 - 35 match URIs whose **last** path element matches the glob,
``*.html``: they override the ``rules`` assertion, likely because the
layout of the content resource is substantially different for stories.
- Lines 37 - 39 re-override the ``rules`` assertion for the pages whose
last path element is ``index.html`` (the "section front").
Note that these last two matches vary the ``rules`` assertion independently
from the ``theme`` assertion, which will have been set by the earlier,
section-based matches.
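The last-path-element matching used above (the ``uriext:pathlast`` extension) can be pictured with a rough Python sketch; this illustrates the idea only and is not the library's implementation:

```python
from fnmatch import fnmatch

def path_last_matches(uri_path, pattern):
    # Compare only the final path segment against the glob, which is
    # the intent of the uriext:pathlast extension element.
    last = uri_path.rstrip('/').rsplit('/', 1)[-1]
    return fnmatch(last, pattern)

path_last_matches('/news/world/story1.html', '*.html')   # True
path_last_matches('/news/index.html', 'index.html')      # True
```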
Configuring :mod:`repoze.dvselect` via Paste
--------------------------------------------
To configure the middleware via a :mod:`Paste` config file, you can just
add it as a filter to the WSGI pipeline: it doesn't require any separate
configuration.
You also need to configure the :mod:`repoze.urispace` middleware as a
filter, supplying the filename of the XML file defining the `URISpace`_
rules. E.g., assuming we put the `URISpace`_ configuration from the example
above into the same directory as our Paste config file, and call it
``urispace.xml``, the filter configuration would be::
[filter:urispace]
use = egg:repoze.urispace#urispace
urispace = %(here)s/urispace.xml
Assuming that our Paste configuration defines a setup where the
`Deliverance`_ proxy is the application, we would configure it as::
[app:deliverance]
use = egg:Deliverance#proxy
wrap_ref = http://example.com/
theme_uri = http://static.example.com/themes/default.html
rule_uri = http://static.example.com/rules/default.xml
We would then set up the pipeline with the :mod:`repoze.urispace` filter
first in line, followed by the :mod:`repoze.dvselect` filter, followed by
the proxy::
[pipeline:main]
pipeline =
urispace
egg:repoze.dvselect#dvselect
deliverance
Configuring :mod:`repoze.dvselect` via Python
---------------------------------------------
Defining the same middleware via imperative Python code would look something
like the following:
.. code-block:: python
from deliverance.proxyapp import ProxyDeliveranceApp
from repoze.dvselect import DeliveranceSelect
from repoze.urispace.middleware import URISpaceMiddleware
proxy = ProxyDeliveranceApp(
theme_uri = 'http://static.example.com/themes/default.html',
rule_uri = 'http://static.example.com/rules/default.xml',
proxy = 'http://example.com/',
)
dvselect = DeliveranceSelect(proxy)
urispace = URISpaceMiddleware(dvselect, 'file://etc/urispace.xml')
application = urispace
This example builds the `WSGI`_ pipeline by composing the application
and the filters together via their constructors.
.. _WSGI: http://www.wsgi.org/
.. _URISpace: http://www.w3.org/TR/urispace.html
.. _Deliverance: http://www.coactivate.org/projects/deliverance/introduction
from pkg_resources import EntryPoint
from zope.interface import implementer
from repoze.evolution.interfaces import IEvolutionManager
_marker = object()
@implementer(IEvolutionManager)
class ZODBEvolutionManager:
key = 'repoze.evolution'
def __init__(self,
context,
evolve_packagename,
sw_version,
initial_db_version=None,
txn=_marker,
):
""" Initialize a ZODB evolution manager. ``context`` is an
object which must inherit from ``persistent.Persistent`` that
will be passed in to each evolution step. evolve_packagename
is the Python dotted package name of a package which contains
evolution scripts. ``sw_version`` is the current software
version of the software represented by this manager.
``initial_db_version`` indicates the presumed version of a database
which doesn't already have a version set. If not supplied or set
to ``None``, the evolution manager will not attempt to construe the
version of an unversioned db. ``txn``, if passed as ``None``, tells the
manager not to begin or commit any transactions; by default the
``transaction`` module is used.
"""
if txn is _marker:
import transaction
self.transaction = transaction
else:
self.transaction = txn
self.context = context
self.package_name = evolve_packagename
self.sw_version = sw_version
self.initial_db_version = initial_db_version
@property
def root(self):
return self.context._p_jar.root()
def get_sw_version(self):
return self.sw_version
def get_db_version(self):
registry = self.root.setdefault(self.key, {})
db_version = registry.get(self.package_name)
if db_version is None:
return self.initial_db_version
return db_version
def evolve_to(self, version):
scriptname = '%s.evolve%s' % (self.package_name, version)
evmodule = EntryPoint.parse('x=%s' % scriptname).load(False)
if self.transaction is not None:
self.transaction.begin()
evmodule.evolve(self.context)
self.set_db_version(version, commit=False)
if self.transaction is not None:
self.transaction.commit()
def set_db_version(self, version, commit=True):
registry = self.root.setdefault(self.key, {})
registry[self.package_name] = version
self.root[self.key] = registry
if commit and self.transaction is not None:
self.transaction.commit()
# b/w compatibility
_set_db_version = set_db_version
def evolve_to_latest(manager):
""" Evolve the database to the latest software version using the
``manager`` object. """
db_version = manager.get_db_version()
sw_version = manager.get_sw_version()
if not isinstance(sw_version, int):
raise ValueError('software version %s is not an integer' %
sw_version)
if db_version is None:
raise ValueError('database version has not been set and no initial '
'value has been provided.')
if not isinstance(db_version, int):
raise ValueError('database version %s is not an integer' %
db_version)
if db_version < sw_version:
for version in range(db_version+1, sw_version+1):
manager.evolve_to(version)
return version
return db_version
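To make the stepping contract of ``evolve_to_latest`` concrete, here is a minimal, self-contained sketch with a hypothetical in-memory manager (``DummyManager`` is invented for illustration; the stepping loop mirrors the function above, minus the type checks):

```python
class DummyManager:
    """Hypothetical in-memory stand-in for ZODBEvolutionManager."""

    def __init__(self, db_version, sw_version):
        self.db_version = db_version
        self.sw_version = sw_version
        self.evolved = []  # records each evolution step that ran

    def get_db_version(self):
        return self.db_version

    def get_sw_version(self):
        return self.sw_version

    def evolve_to(self, version):
        self.evolved.append(version)
        self.db_version = version

def evolve_to_latest(manager):
    # Same stepping logic as above: walk the db version up to the
    # software version, one integer step at a time.
    db_version = manager.get_db_version()
    sw_version = manager.get_sw_version()
    if db_version < sw_version:
        for version in range(db_version + 1, sw_version + 1):
            manager.evolve_to(version)
        return version
    return db_version

manager = DummyManager(db_version=1, sw_version=4)
result = evolve_to_latest(manager)  # runs steps 2, 3 and 4 in order
```

An up-to-date database (``db_version == sw_version``) passes through untouched.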
import os
import transaction
from zope import component
from webob.exc import HTTPNotFound
from ore.xapian.interfaces import IOperationFactory
from ore.xapian.interfaces import IResolver
from repoze.filecat.resource import FileSystemResource
from repoze.filecat.json import JSONResponse
from repoze.filecat.index import search
def get_path(context, request, _raise=True):
relative_path = request.params.get('path')
if relative_path is None:
raise ValueError("Must provide ``path``.")
absolute_path = os.path.join(context.path, relative_path)
if not os.path.exists(absolute_path) and _raise:
raise HTTPNotFound
return relative_path, absolute_path
def add_view(context, request):
""" add view
Add a file to the index. The path of the file is taken from the request.
The path is assumed to be *relative* to the file pool.
@context This is a ``RoutesContext`` instance
@request A webob request object
"""
relative_path, absolute_path = get_path(context, request)
# ADD
IOperationFactory(FileSystemResource(
relative_path)).add()
transaction.commit()
return JSONResponse({})
def update_view(context, request):
""" update view
Update a file already in the index. The path of the file is taken from the
request. The path is assumed to be *relative* to the file pool.
@context This is a ``RoutesContext`` instance
@request A webob request object
"""
relative_path, absolute_path = get_path(context, request)
# MODIFY
IOperationFactory(FileSystemResource(
relative_path)).modify()
transaction.commit()
return JSONResponse({})
def remove_view(context, request):
""" remove view
Remove a file from the index. The path of the file is taken from the
request. The path is assumed to be *relative* to the file pool.
@context This is a ``RoutesContext`` instance
@request A webob request object
"""
relative_path, absolute_path = get_path(context, request, _raise=False)
# REMOVE
IOperationFactory(FileSystemResource(
relative_path)).remove()
transaction.commit()
return JSONResponse({})
def query_view(context, request):
""" query view
This view is called for queries. The query is constructed from the
request, a search using xapian is done and the results are sent back to the
caller in JSON format.
@context This is a ``RoutesContext`` instance
@request A webob request object
The result is a JSON encoded list of dicts as in::
[{
"url": "http://localhost:1234/static/fashion.jpg"
"mimetype": "image/jpg",
"metadata": {
"creation_date": "2008-10-02 17:43",
"keywords": ["new york", "fashion"],
...
},
}, ... ]
"""
start = 0
limit = 100
try:
limit = int(request.params.get("limit"))
except (TypeError, ValueError):
pass
try:
start = int(request.params.get("start"))
except (TypeError, ValueError):
pass
# construct query
search_result = search(request.params.get("query"), start, limit)
# format results
data = []
matches_estimated = search_result.matches_estimated
resolver = component.getUtility(IResolver)
for brain in search_result:
resource = resolver.resolve(brain.id)
rel_path = resource.path[len(resolver.path)+1:]
data.append(
dict(
url=os.path.join(context.host, rel_path),
mimetype=resource.mimetype,
metadata=brain.data,
)
)
return JSONResponse((matches_estimated, data))
def purge_view(context, request):
raise NotImplementedError("Purge not yet implemented.")
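The ``start``/``limit`` handling in ``query_view`` above silently falls back to defaults on bad input. That pattern can be factored into a small helper; this is an illustrative sketch (the helper name is invented, not part of repoze.filecat):

```python
def int_param(params, name, default):
    # Mirrors the try/except fallback used in query_view: an absent
    # parameter raises TypeError (int(None)), a malformed one ValueError.
    try:
        return int(params.get(name))
    except (TypeError, ValueError):
        return default

params = {"limit": "25", "start": "oops"}
limit = int_param(params, "limit", 100)  # well-formed: 25
start = int_param(params, "start", 0)    # malformed: falls back to 0
```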
import os
import re
import time
import datetime
import mimetypes
import unicodedata
from zope import interface
from zope import component
from ore.xapian.interfaces import IIndexer
import xappy
import interfaces
import extraction
def trim_join(strings):
return "".join(map(trim, strings))
def trim(string):
if string is not None:
return string.replace('\n', ' ').replace('  ', ' ').strip()
def normalize(value):
"""
Normalizes string, converts to lowercase, removes non-alpha characters,
and converts spaces to hyphens.
"""
value = unicode(value)
value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore')
value = unicode(re.sub('[^\w\s-]', '', value).strip().lower())
return re.sub('[-\s]+', '-', value)
def create_indexer(database):
indexer = xappy.IndexerConnection(database)
# indexes
indexer.add_field_action('short_name', xappy.FieldActions.INDEX_EXACT)
indexer.add_field_action('searchable_text', xappy.FieldActions.INDEX_FREETEXT)
indexer.add_field_action('author', xappy.FieldActions.INDEX_EXACT)
indexer.add_field_action('keywords', xappy.FieldActions.TAG)
indexer.add_field_action('modified_date', xappy.FieldActions.SORTABLE, type='date')
indexer.add_field_action('creation_date', xappy.FieldActions.SORTABLE, type='date')
indexer.add_field_action('mime_type', xappy.FieldActions.INDEX_EXACT)
# metadata
indexer.add_field_action('title', xappy.FieldActions.STORE_CONTENT)
indexer.add_field_action('short_name', xappy.FieldActions.STORE_CONTENT)
indexer.add_field_action('description', xappy.FieldActions.STORE_CONTENT)
indexer.add_field_action('author', xappy.FieldActions.STORE_CONTENT)
indexer.add_field_action('modified_date', xappy.FieldActions.STORE_CONTENT)
indexer.add_field_action('creation_date', xappy.FieldActions.STORE_CONTENT)
indexer.add_field_action('keywords', xappy.FieldActions.STORE_CONTENT)
indexer.add_field_action('mime_type', xappy.FieldActions.STORE_CONTENT)
return indexer
def search(query=None, start=None, limit=None, **kw):
searcher = component.getUtility(interfaces.IXapianConnection).get_connection()
if query is not None:
query = searcher.query_parse(query)
elif kw:
assert len(kw) == 1, \
"Single keyword-argument supported only."
query = searcher.query_field(*(kw.items()[0]))
else:
query = searcher.query_all()
return searcher.search(query, start, start+limit)
class Indexer(object):
interface.implements(IIndexer)
def __init__( self, resource):
locator = component.getUtility(interfaces.IResourceLocator)
self.path = locator.get_path(resource)
def document(self, connection=None):
mimetype, encoding = mimetypes.guess_type(self.path)
try:
method = getattr(self, 'document_%s' % (
mimetype.replace('/', '_').replace('-', '_')))
except AttributeError:
raise ValueError("Unable to handle file-type: %s." % mimetype)
doc = method(connection)
# add modified date
doc.fields.append(xappy.Field('modified_date', datetime.date(
*time.localtime(os.path.getmtime(self.path))[:3])))
# add mimetype
doc.fields.append(xappy.Field('mime_type', mimetype))
return doc
def document_text_x_rst(self, connection):
doc = xappy.UnprocessedDocument()
metadata = extraction.extract_dc_from_rst(file(self.path))
doc.fields.append(xappy.Field('title', metadata['title']))
doc.fields.append(xappy.Field('author', metadata['author']))
doc.fields.append(xappy.Field('creation_date', metadata['creation_date']))
doc.fields.append(xappy.Field('searchable_text', metadata['searchable_text']))
doc.fields.append(xappy.Field('short_name', normalize(metadata['title'])))
return doc
def document_image_jpeg(self, connection):
doc = xappy.UnprocessedDocument()
metadata = extraction.extract_xmp_from_jpeg(file(self.path))
# iterate over all metadata text
searchable_text = " ".join(
text.strip('\n ') for text in metadata.itertext())
# set up XML namespaces for queries
namespaces = {
'dc': 'http://purl.org/dc/elements/1.1/',
'xmp': 'http://ns.adobe.com/xap/1.0/',
'rdf': 'http://www.w3.org/1999/02/22-rdf-syntax-ns#',
'exif': 'http://ns.adobe.com/exif/1.0/'}
def xpath(query):
return metadata.xpath(query, namespaces=namespaces)
try:
title = trim_join(xpath('.//dc:title')[0].itertext())
except IndexError:
title = u""
try:
description = trim_join(xpath('.//dc:description')[0].itertext())
except IndexError:
description = u""
try:
author = trim_join(xpath('.//dc:creator')[0].itertext())
except IndexError:
author = u""
try:
meta = xpath('.//rdf:Description')[0]
creation_date = trim(
meta.attrib.get(
'{%(exif)s}DateTimeOriginal' % namespaces) or
meta.attrib.get(
'{%(xmp)s}CreateDate' % namespaces))
except IndexError:
creation_date = None
keywords = map(trim, xpath('.//dc:subject//rdf:li/text()'))
# fill document fields
if title:
doc.fields.append(xappy.Field('title', title))
if description:
doc.fields.append(xappy.Field('description', description))
if keywords:
for keyword in keywords:
doc.fields.append(xappy.Field('keywords', keyword))
if author:
doc.fields.append(xappy.Field('author', author))
if creation_date:
try:
date = datetime.datetime(*map(int, re.split('[^\d]', creation_date)[:-1]))
doc.fields.append(xappy.Field('creation_date', date))
except ValueError:
pass
if searchable_text:
doc.fields.append(xappy.Field('searchable_text', searchable_text))
short_name, ext = os.path.splitext(os.path.basename(self.path))
doc.fields.append(xappy.Field('short_name', normalize(short_name)))
return doc
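The ``normalize`` helper above slugifies titles for the ``short_name`` field. A Python 3 rendering of the same steps (the module itself targets Python 2) behaves like this:

```python
import re
import unicodedata

def normalize(value):
    # NFKD-decompose, drop non-ASCII (e.g. combining accents), strip
    # non-word characters, lowercase, and collapse whitespace/hyphen
    # runs into single hyphens.
    value = unicodedata.normalize("NFKD", str(value))
    value = value.encode("ascii", "ignore").decode("ascii")
    value = re.sub(r"[^\w\s-]", "", value).strip().lower()
    return re.sub(r"[-\s]+", "-", value)

slug = normalize("Ne\u0301w York  Fashion!")  # combining accent dropped
```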
from zope.component.interfaces import IObjectEvent
from zope.interface import Interface
from zope.interface import Attribute
marker = object()
class IObjectWillBeAddedEvent(IObjectEvent):
""" An event type sent when an before an object is added """
object = Attribute('The object being added')
parent = Attribute('The folder to which the object is being added')
name = Attribute('The name which the object is being added to the folder '
'with')
class IObjectAddedEvent(IObjectEvent):
""" An event type sent when an object is added """
object = Attribute('The object being added')
parent = Attribute('The folder to which the object is being added')
name = Attribute('The name of the object within the folder')
class IObjectWillBeRemovedEvent(IObjectEvent):
""" An event type sent before an object is removed """
object = Attribute('The object being removed')
parent = Attribute('The folder from which the object is being removed')
name = Attribute('The name of the object within the folder')
class IObjectRemovedEvent(IObjectEvent):
""" An event type sent when an object is removed """
object = Attribute('The object being removed')
parent = Attribute('The folder from which the object is being removed')
name = Attribute('The name of the object within the folder')
class IFolder(Interface):
""" A Folder which stores objects using Unicode keys.
All methods which accept a ``name`` argument expect the
name to either be Unicode or a byte string decodable using the
default system encoding or the UTF-8 encoding."""
order = Attribute("""Order of items within the folder
(Optional) If not set on the instance, objects are iterated in an
arbitrary order based on the underlying data store.""")
def keys():
""" Return an iterable sequence of object names present in the folder.
Respect ``order``, if set.
"""
def __iter__():
""" An alias for ``keys``
"""
def values():
""" Return an iterable sequence of the values present in the folder.
Respect ``order``, if set.
"""
def items():
""" Return an iterable sequence of (name, value) pairs in the folder.
Respect ``order``, if set.
"""
def get(name, default=None):
""" Return the object named by ``name`` or the default.
``name`` must be a Unicode object or a bytestring object.
If ``name`` is a bytestring object, it must be decodable using the
system default encoding or the UTF-8 encoding.
"""
def __contains__(name):
""" Does the container contains an object named by name?
``name`` must be a Unicode object or a bytestring object.
If ``name`` is a bytestring object, it must be decodable using the
system default encoding or the UTF-8 encoding.
"""
def __nonzero__():
""" Always return True
"""
def __len__():
""" Return the number of subobjects in this folder.
"""
def __setitem__(name, other):
""" Set object ``other' into this folder under the name ``name``.
``name`` must be a Unicode object or a bytestring object.
If ``name`` is a bytestring object, it must be decodable using the
system default encoding or the UTF-8 encoding.
``name`` cannot be the empty string.
When ``other`` is seated into this folder, it will also be
decorated with a ``__parent__`` attribute (a reference to the
folder into which it is being seated) and a ``__name__``
attribute (the name passed in to this function).
If a value already exists in the folder under the name ``name``, raise
:exc:`KeyError`.
When this method is called, emit an ``IObjectWillBeAddedEvent`` event
before the object obtains a ``__name__`` or ``__parent__`` value.
Emit an ``IObjectAddedEvent`` after the object obtains a ``__name__``
and ``__parent__`` value.
"""
def add(name, other, send_events=True):
""" Same as ``__setitem__``.
If ``send_events`` is false, suppress the sending of folder events.
"""
def pop(name, default=marker):
""" Remove the item stored in the under ``name`` and return it.
If ``name`` doesn't exist in the folder, and ``default`` **is not**
passed, raise a :exc:`KeyError`.
If ``name`` doesn't exist in the folder, and ``default`` **is**
passed, return ``default``.
When the object stored under ``name`` is removed from this folder,
remove its ``__parent__`` and ``__name__`` values.
When this method is called, emit an ``IObjectWillBeRemovedEvent`` event
before the object loses its ``__name__`` or ``__parent__`` values.
Emit an ``IObjectRemovedEvent`` after the object loses its ``__name__``
and ``__parent__`` values.
This method is new in repoze.folder 0.5.
"""
def __delitem__(name):
""" Remove the object from this folder stored under ``name``.
``name`` must be a Unicode object or a bytestring object.
If ``name`` is a bytestring object, it must be decodable using the
system default encoding or the UTF-8 encoding.
If no object is stored in the folder under ``name``, raise a
:exc:`KeyError`.
When the object stored under ``name`` is removed from this folder,
remove its ``__parent__`` and ``__name__`` values.
When this method is called, emit an ``IObjectWillBeRemovedEvent`` event
before the object loses its ``__name__`` or ``__parent__`` values.
Emit an ``IObjectRemovedEvent`` after the object loses its ``__name__``
and ``__parent__`` values.
"""
def remove(name, send_events=True):
""" Same thing as ``__delitem__``.
If ``send_events`` is false, suppress the sending of folder events.
""" | /repoze.folder-1.0.tar.gz/repoze.folder-1.0/repoze/folder/interfaces.py | 0.754915 | 0.463626 | interfaces.py | pypi |
from repoze.formapi.parser import parse
from repoze.formapi.parser import missing
import types
import re
def get_instances_of(type, *bases):
for base in bases:
for value in base.__dict__.values():
if isinstance(value, type):
yield value
for value in get_instances_of(type, *base.__bases__):
yield value
class Validator(object):
"""Wrapper for validators.
This calls a validator object (usually a method) and collects all its
errors. It also sets a flag that the form library can use to know that it
is a validator."""
def __init__(self, func, *fields):
self.func = func
# Fields can be empty; in that case we want to have an empty path
if not fields:
self.fieldpaths = ((),)
else:
self.fieldpaths = [f.split('.') for f in fields]
def __call__(self, form):
for error in self.func(form):
for fieldpath in self.fieldpaths:
yield (fieldpath, error)
class Action(object):
def __init__(self, action, name=None, submitted=False):
self.action = action
self.name = name
self.submitted = submitted
@property
def __call__(self):
return self.action
def __nonzero__(self):
return self.submitted
def __repr__(self):
return '<%s name="%s" submitted="%s">' % (
type(self).__name__, self.name or "", str(bool(self.submitted)))
class metaclass(type):
def __init__(kls, name, bases, dict):
kls.validators = tuple(get_instances_of(Validator, kls))
kls.actions = tuple(get_instances_of(Action, kls))
class Form(object):
"""Base form class. Optionally pass a dictionary as ``data`` and a
WebOb-like request object as ``request``."""
__metaclass__ = metaclass
fields = {}
status = None
prefix = None
action = None
def __init__(self, data=None, context=None, request=None, params=None, prefix=None):
self.context = context
self.request = request
if context is not None:
if data is not None:
raise ValueError(
"Can't provide both ``data`` and ``context``.")
# proxy the context object
data = Proxy(context)
self.data = Data(data)
if prefix is None:
prefix = self.prefix
if request is not None:
if params is not None:
raise ValueError(
"Can't provide both ``params`` and ``request``.")
params = request.params.items()
# find action parameters
action_params = {}
if prefix is not None and params is not None:
re_prefix = re.compile(r'^%s[._-](?P<name>.*)' % prefix)
for key, value in params:
if key == prefix:
action_params[None] = value
else:
m = re_prefix.search(key)
if m is not None:
action_params[m.group('name')] = value
# initialize form actions
actions = self.actions = []
for action in type(self).actions:
name = action.name
action = Action(action.__call__, name, name in action_params)
actions.append(action)
if action:
self.action = action
# conditionally apply request parameters if:
# 1. no prefix has been set
# 2. there is a submitted action
# 3. there are no defined actions, but a default action was submitted
if params is not None and (
prefix is None or \
filter(None, actions) or \
len(actions) == 0 and action_params.get(None) is not None):
params = list(params)
else:
params = ()
# Parse parameter input
data, errors = parse(params, self.fields)
if len(params):
self.data.update(data)
self.errors = errors
self.prefix = prefix
def __call__(self):
"""Calls the first submitted action and returns the value."""
if self.action is not None:
self.status = self.action(self, self.data)
return self.status
def validate(self):
"""Validates the request against the form fields. Returns
``True`` if all fields validate, else ``False``."""
for validator in self.validators:
for field_path, validation_error in validator(self):
errors = self.errors
for field in field_path:
errors = errors[field]
errors += validation_error
return not bool(self.errors)
class ValidationError(Exception):
"""Represents a field validation error."""
def __init__(self, field, msg):
if not isinstance(msg, unicode):
msg = unicode(msg)
self.field = field
self.msg = msg
def __repr__(self):
return '<%s field="%s" %s>' % (
type(self).__name__, self.field, repr(self.msg))
def __str__(self):
return str(unicode(self))
def __unicode__(self):
return self.msg
class Data(list):
"""Form data object with dictionary-like interface. If initialized
with a ``data`` object, this will be used to provide default
values, if not set in the ``request``. Updates to the object are
transient until the ``save`` method is invoked."""
def __init__(self, data):
if data is not None:
self.append(data)
self.append({})
def __getitem__(self, name):
for data in reversed(self):
try:
value = data[name]
except KeyError:
continue
if value is not missing:
return value
def __setitem__(self, name, value):
self.tail[name] = value
@property
def tail(self):
return list.__getitem__(self, -1)
@property
def head(self):
return list.__getitem__(self, 0)
def update(self, data):
"""Updates the dictionary by appending ``data`` to the list at
the position just before the current dictionary."""
self.insert(-1, data)
def save(self):
"""Flattens the dictionary, saving changes to the data object."""
while len(self) > 1:
for name, value in self.pop(1).items():
self.head[name] = value
self.append({})
class Proxy(object):
"""Proxy object; reads and writes to attributes are forwarded to
the provided object. Descriptors are supported: they must read and
write to ``self.context``. Note, that all attribute names are
supported, including the name 'context'.
Any object can be the context of a proxy.
>>> class Content(object):
... pass
>>> from repoze.formapi import Proxy
>>> class ContentProxy(Proxy):
... def get_test_descriptor(self):
... return self.test_descriptor
...
... def set_test_descriptor(self, value):
... self.test_descriptor = value + 1
...
... def get_get_only(self):
... return self.test_get_only
...
... test_descriptor = property(get_test_descriptor, set_test_descriptor)
... test_get_only = property(get_get_only)
>>> context = Content()
>>> proxy = ContentProxy(context)
We can read and write to the ``context`` attribute.
>>> proxy.test = 42
>>> proxy.test
42
Descriptors have access to the original context.
>>> proxy.test_descriptor = 41
>>> proxy.test_descriptor
42
Descriptors that only define a getter are supported.
>>> proxy.test_get_only = 41
>>> proxy.test_get_only
41
Proxies provide dictionary-access to attributes.
>>> proxy['test']
42
>>> proxy['test'] = 41
>>> proxy.test
41
"""
def __init__(self, context):
# instantiate a base proxy object with this context
serf = object.__new__(Proxy)
object.__setattr__(serf, '_context', context)
object.__setattr__(self, '_context', context)
object.__setattr__(self, '_serf', serf)
def __getattribute__(self, name):
prop = object.__getattribute__(type(self), '__dict__').get(name)
if prop is not None:
# call getter in the context of the proxy object
serf = object.__getattribute__(self, '_serf')
return prop.fget(serf)
else:
return getattr(
object.__getattribute__(self, '_context'), name)
def __setattr__(self, name, value):
prop = object.__getattribute__(type(self), '__dict__').get(name)
if prop is not None:
# property might be read-only (e.g. does not define a
# setter); in this case we just set attribute on context.
setter = prop.fset
if setter is not None:
# call setter in the context of the proxy object
serf = object.__getattribute__(self, '_serf')
return prop.fset(serf, value)
setattr(
object.__getattribute__(self, '_context'), name, value)
__getitem__ = __getattribute__
__setitem__ = __setattr__
def action(name):
if isinstance(name, types.FunctionType):
return Action(name)
else:
def decorator(action):
return Action(action, name)
return decorator
def validator(*args):
# If the first (and only) argument is a callable process it
if len(args) == 1 and callable(args[0]):
return Validator(args[0])
# Treat the args as field names and prepare a wrapper
else:
return lambda func: Validator(func, *args)
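The layered lookup performed by ``Data`` can be hard to picture. The following condensed sketch (a simplification: it drops the ``missing`` sentinel and integer indexing) shows the update/save dance:

```python
class LayeredData(list):
    """Later layers shadow earlier ones; writes go to a transient tail
    until save() flattens everything into the head (default) dict."""

    def __init__(self, defaults=None):
        super().__init__()
        if defaults is not None:
            self.append(defaults)
        self.append({})  # transient tail for direct writes

    def __getitem__(self, name):
        for layer in reversed(self):
            if name in layer:
                return layer[name]
        return None

    def __setitem__(self, name, value):
        list.__getitem__(self, -1)[name] = value

    def update(self, data):
        self.insert(-1, data)  # just below the tail

    def save(self):
        head = list.__getitem__(self, 0)
        while len(self) > 1:
            head.update(self.pop(1))
        self.append({})

defaults = {"title": "untitled", "author": "anon"}
data = LayeredData(defaults)
data.update({"title": "request title"})  # e.g. parsed form input
data["author"] = "edited"                # direct, transient write
data.save()                              # flatten into defaults
```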
from repoze.formapi.py24 import defaultdict
from repoze.formapi.py24 import any
class Errors(object):
"""Container for errors.
Each error will be present in its `messages` list. Dictionary lookup can
be used to get at errors which are specific to a field.
Note that structure will automatically create entries for non-existing
keys. This is done to make access from templates etc. easier and less
fragile.
>>> from repoze.formapi.error import Errors
>>> errors = Errors()
The `errors` object can easily be converted to unicode and exhibits
dict-like behavior, too.
>>> unicode(errors)
u''
Any dict-entry returns a new `errors`-object.
>>> isinstance(errors[None], Errors)
True
Errors may be appended to the object using the add operator.
>>> errors += "Abc."
>>> errors += "Def."
>>> errors[0]
'Abc.'
We can iterate through the errors object.
>>> tuple(errors)
('Abc.', 'Def.')
The string representation of the object is a concatenation of the
individual error messages.
>>> unicode(errors)
u'Abc. Def.'
>>> len(errors)
9
The truth value of the errors object is based on the error messages it or
its sub-errors contain.
>>> bool(errors)
True
If there are no error messages, the truth value will be false even when
there are error keys.
>>> errors = Errors()
>>> name_error = errors['name']
>>> bool(errors)
False
Two errors instances are considered equal when they have the same keys with
the same messages.
>>> a = Errors()
>>> a['foo'].append('Error')
>>> b = Errors()
>>> b['foo'].append('Error')
>>> a == b
True
Adding an error to one makes them unequal.
>>> a['bar'].append('Error')
>>> a == b
False
We can use the standard dictionary ``get`` method.
>>> a.get('foo')
<Errors: ['Error'], defaultdict(<class 'repoze.formapi.error.Errors'>, {})>
>>> a.get('boo', False)
False
"""
_messages = _dict = None
def __init__(self, *args, **kwargs):
self._dict = defaultdict(Errors, *args, **kwargs)
self._messages = []
def __nonzero__(self):
return bool(self._messages) or any(self._dict.itervalues())
def __repr__(self):
return '<Errors: %r, %r>' % (self._messages, self._dict)
def __getitem__(self, key):
if isinstance(key, int):
return self._messages[key]
return self._dict[key]
def __contains__(self, key):
return key in self._dict
has_key = __contains__
def __unicode__(self):
return u" ".join(self._messages)
def __str__(self):
return str(unicode(self))
def __iter__(self):
return iter(self._messages)
def __len__(self):
return len(unicode(self))
def __add__(self, error):
self.append(error)
return self
def __getattr__(self, name):
if name in type(self).__dict__:
return object.__getattribute__(self, name)
raise AttributeError(name)
def __eq__(self, other):
if type(other) != type(self):
return False
return self._messages == other._messages and self._dict == other._dict
def append(self, error):
self._messages.append(error)
def get(self, key, default=None):
assert isinstance(key, basestring), "Key must be a string."
return self._dict.get(key, default)
from zope.component import getSiteManager
from zope.component import getAdapter
from zope.interface import directlyProvides
from zope.interface import providedBy
from repoze.lemonade.interfaces import IContentFactory
from repoze.lemonade.interfaces import IContent
from repoze.lemonade.interfaces import IContentTypeCache
class provides:
def __init__(self, iface):
directlyProvides(self, iface)
_marker = ()
def create_content(iface, *arg, **kw):
""" Create an instance of the content type related to ``iface``,
by calling its factory, passing ``*arg`` and ``**kw`` to the factory.
Raise a ComponentLookupError if there is no content type related to
``iface`` """
factory = getAdapter(provides(iface), IContentFactory)
return factory(*arg, **kw)
def get_content_types(context=_marker):
""" Return a sequence of interface objects that have been
registered as content types. If ``context`` is used, return only
the content_type interfaces which are provided by the context."""
sm = getSiteManager()
cache = sm.queryUtility(IContentTypeCache)
if cache is None:
# use a utility to cache the result of calling sm.registeredAdapters
cache = set()
sm.registerUtility(cache, IContentTypeCache)
for reg in sm.registeredAdapters():
if reg.provided is IContentFactory:
iface = reg.required[0]
cache.add(iface)
if context is _marker:
return list(cache)
return [iface for iface in providedBy(context) if iface in cache]
def get_content_type(context):
types = get_content_types(context)
if len(types) > 1:
raise ValueError('%r has more than one content type (%s)' %
(context, types))
if not types:
raise ValueError('%s is not content' % context)
return types[0]
def is_content(model):
""" Return True if the model object provides any interface
registered as a content type, False otherwise. """
return IContent.providedBy(model)
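For readers without a zope.component background, the registry pattern above can be approximated with plain dictionaries. This is a hypothetical analogue, not the repoze.lemonade API: marker classes stand in for interfaces, and the single-type check mirrors ``get_content_type``:

```python
_factories = {}  # marker class -> factory

def register_content(marker, factory):
    _factories[marker] = factory

def create_content(marker, *args, **kw):
    # analogous to calling the IContentFactory adapter
    return _factories[marker](*args, **kw)

def get_content_types(obj=None):
    if obj is None:
        return list(_factories)
    return [marker for marker in _factories if isinstance(obj, marker)]

def get_content_type(obj):
    types = get_content_types(obj)
    if len(types) > 1:
        raise ValueError("%r has more than one content type" % (obj,))
    if not types:
        raise ValueError("%r is not content" % (obj,))
    return types[0]

class Document:
    pass

register_content(Document, Document)  # the class is its own factory
doc = create_content(Document)
kind = get_content_type(doc)
```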
from zope.interface import Interface
class IMessageStore(Interface):
""" Plugin interface for append-only storage of RFC822 messages.
"""
def __getitem__(message_id):
""" Retrieve a message.
- Return an instance of 'email.Message'. (XXX text?)
- Raise KeyError if no message with the given ID is found.
"""
def __setitem__(message_id, message):
""" Store a message.
- 'message' should be an instance of 'email.Message'. (XXX text?)
- Raise KeyError if a message with the given ID is already stored.
"""
def iterkeys():
""" Return an interator over the message IDs in the store.
"""
class IPendingQueue(Interface):
""" Plugin interface for a FIFO queue of messages awaiting processing.
"""
def push(message_id):
""" Append 'message_id' to the queue.
"""
def pop(how_many=1):
""" Retrieve the next 'how_many' message IDs to be processed.
- If 'how_many' is None, then return all available message IDs.
- May return fewer than 'how_many' IDs, if the queue is emptied.
- Popped messages are no longer present in the queue.
"""
def remove(message_id):
""" Remove the given message ID from the queue.
- Raise KeyError if not found.
"""
def quarantine(message_id, error_msg=None):
""" Adds 'message_id' to quarantine for this queue. Message must be
moved out of quarantine before it can be processed. May optionally
pass in error_msg string as reason for the quarantine.
"""
def iter_quarantine():
""" Returns an iterator for message_ids that are in the quaratine.
"""
def get_error_message(message_id):
""" Returns the error message for the quarantined message_id.
"""
def clear_quarantine():
""" Moves all messages out of quarantine to retry processing.
"""
def __nonzero__():
""" Return True if no message IDs are in the queue, else False.
"""
class StopProcessing(Exception):
""" Raised by IMessageFilter instances to halt procesing of a message.
o The application may still commit the current transaction.
"""
class CancelProcessing(StopProcessing):
""" Raised by IMessageFilter instances to halt procesing of a message.
o The application must abort the current transaction.
"""
class IBlackboardFactory(Interface):
""" Utility for creating a pre-initialized blackboard.
"""
def __call__(message):
""" Return an IBlackboard instance for the message.
"""
class IBlackboard(Interface):
""" Mapping for recording the results of message processing.
- API is that of a Python dict.
"""
class IMessageFilter(Interface):
""" Plugin interface for processing messages.
"""
def __call__(message, blackboard):
""" Process / extract information from mesage and add to blackboard.
- 'message' will be an instance of 'email.Message'.
- 'blackboard' will be an 'IBlackboard'.
- Raise 'StopProcessing' to cancel further processing of 'message'.
""" | /repoze.mailin-0.4.tar.gz/repoze.mailin-0.4/repoze/mailin/interfaces.py | 0.810854 | 0.240295 | interfaces.py | pypi |
""" repoze.obob publisher: perform policy-driven graph traversal.
"""
import sys
class DefaultHelper:
""" Default traversal policy helper.
Simple applications may just use this class directly. More complex
apps can either subclass it, or else supply a different 'helper_factory'
to the ObobPublisher constructor, implementing all the same methods
on the returned object.
"""
def __init__(self, environ, **extras):
""" Initialize helper.
@param object environ WSGI environment
@param dict extras Extra application configuration
"""
self.environ = environ
self.extras = extras.copy()
path_info = self.environ.get('PATH_INFO', '')
self.names = [x for x in path_info.split('/') if x.strip()]
self.names.reverse()
def setup(self):
""" Perform any initializtion require before request processing.
@return None
"""
pass
def next_name(self):
""" Return the name of the next element to find
@return string element Path element
"""
if self.names:
return self.names.pop()
def before_traverse(self, current):
""" Called before traversing each path element.
@param object current Object being traversed
@return None
"""
pass
def traverse(self, current, name):
""" Traverse the next path element.
@param object current Object being traversed
@param object name Name of next object
@return object Next object in traversal chain
"""
return current[name]
def before_invoke(self, published):
""" Called just before invoking the published object.
@param object published Published object (end of traversal chain)
@return None
"""
pass
def invoke(self, published):
""" Invoke the published object.
@param object published Published object (end of traversal chain)
@return object Result of call
"""
return published()
def map_result(self, result):
""" Map the call result onto a triple for WSGI.
@param object result Result of calling published object
@return tuple WSGI triple: (status, headers, body_iter)
"""
if isinstance(result, basestring):
result = [result]
return '200 OK', [('Content-Type', 'text/html')], result
def teardown(self):
""" Perform any cleanup required end of request processing
@return None
"""
pass
def handle_exception(self, exc_info):
""" Handle any exceptions that happen during helper consultation.
Reraise or return a WSGI triple.
@return tuple WSGI triple: (status, headers, body_iter)
"""
t, v, tb = exc_info
try:
raise t, v, tb
finally:
del tb
class ObobPublisher:
""" repoze graph-traversal publisher.
o Plug points include a callable to find the root object for traversal,
plus one to return a traversal policy helper.
"""
def __init__(self,
initializer=None,
helper_factory=None,
get_root=None,
dispatchable=None,
extras=None,
):
if helper_factory is not None:
self.helper_factory = helper_factory
if get_root is not None:
self.get_root = get_root
else:
if dispatchable is None:
dispatchable = {}
self._default_root = _DefaultRoot(dispatchable)
if extras is None:
extras = {}
self.extras = extras
if initializer is not None:
initializer(**extras)
def __call__(self, environ, start_response):
""" Application dispatch via graph traversal.
0. Construct a traversal policy helper.
1. Get traversal root via self.get_root().
2. Iterate over items in request's path:
a. Notify 'self.before_traverse' if not None.
b. Get next object via 'self.traverse'.
3. Notify 'self.before_invoke', if not None.
4. Call the terminal ("published") object, applying request
parameters.
5. Map result onto WSGI 'start_response' + iteration.
"""
helper = self.helper_factory(environ, **self.extras)
try:
try:
helper.setup()
root = current = self.get_root(helper)
while 1:
helper.before_traverse(current)
name = helper.next_name()
if name is None:
break
current = helper.traverse(current, name)
published = current
helper.before_invoke(published)
result = helper.invoke(published)
status, headers, body_iter = helper.map_result(result)
except:
exc_info = sys.exc_info()
status, headers, body_iter = helper.handle_exception(exc_info)
start_response(status, headers)
return body_iter
finally:
helper.teardown()
def get_root(self, helper):
return self._default_root
def helper_factory(self, environ, **kw):
return DefaultHelper(environ, **kw)
def initializer(self):
pass
class _DefaultRoot:
""" Default root object, configured as a callable mapping.
"""
def __init__(self, dispatchable):
self._dispatchable = dispatchable
def keys(self):
return self._dispatchable.keys()
def __getitem__(self, key):
return self._dispatchable[key]
def __call__(self, *args, **kw):
lines = ['<html>',
'<body>',
'<ul>',
]
keys = self._dispatchable.keys()
keys.sort()
for key in keys:
lines.append('<li><a href="%s">%s</a></li>' % (key, key))
lines.extend(['</ul>',
'</body>',
'</html>',
])
return lines
_PLUGPOINTS = ('get_root',
'helper_factory',
'initializer',
)
def _resolve(dotted_or_ep):
""" Resolve a dotted name or setuptools entry point to a callable.
"""
from pkg_resources import EntryPoint
return EntryPoint.parse('x=%s' % dotted_or_ep).load(False)
def make_obob(global_config, **kw):
""" WSGI application factory.
"""
PREFIX = 'repoze.obob.'
dispatchable = {}
extras = {}
new_kw = {'dispatchable': dispatchable, 'extras': extras}
merged = global_config.copy()
merged.update(kw)
for k, v in merged.items():
if k.startswith(PREFIX):
trimmed = k[len(PREFIX):]
callable = _resolve(v)
if trimmed in _PLUGPOINTS:
new_kw[trimmed] = callable
else:
dispatchable[trimmed] = callable
else:
extras[k] = v
return ObobPublisher(**new_kw)

| /repoze.obob-0.4.tar.gz/repoze.obob-0.4/repoze/obob/publisher.py | 0.61555 | 0.319944 | publisher.py | pypi |
from zope.interface import Attribute
from zope.interface import Interface
class IWeightedText(Interface):
"""An indexable text value with optional weighted components.
This interface can be implemented by a subclass of unicode.
Applications can return an object that implements this interface
from the discriminator function attached to a PGTextIndex.
PostgreSQL supports up to 4 text weights per document, labeled A,
B, C, and D, where D is the weight applied by default. Applications
can provide the numeric values of the weights at search time. To
make use of weights, applications should index fields of the
document using different weights. For example, a document's title
could be indexed with the A weight and its description could be
indexed with the B weight, while the body is indexed with the D
weight.
If no weights are assigned at search time, here are the default
weights assigned by PostgreSQL:
D = 0.1
C = 0.2
B = 0.4
A = 1.0
See:
http://www.postgresql.org/docs/9.0/interactive/textsearch-controls.html
"""
def __str__():
"""Required: get the default indexable text.
The text will be assigned the D weight.
"""
A = Attribute("Optional: text to index with the A weight.")
B = Attribute("Optional: text to index with the B weight.")
C = Attribute("Optional: text to index with the C weight.")
coefficient = Attribute("""Optional: a floating point score multiplier.
PGTextIndex multiplies each text match score by the coefficient
after all weighting is computed.
Use this to influence the document's score in text searches.
For example, if a document is known to be provided by a reputable
source, a coefficient of 1.5 would increase its score by 50%. The
default coefficient is 1.
""")
marker = Attribute("Optional: a string marker value or sequence of string "
"marker values.")
class IWeightedQuery(Interface):
"""A text query that optionally controls text weights and filtering.
This interface can be implemented by a subclass of unicode.
"""
def __str__():
"""Required: the human-provided query text.
The text will be assigned the D weight.
"""
text = Attribute(
"""Deprecated: the human-provided query text.
This takes precedence over __str__() when it is provided.
""")
A = Attribute("Optional: the weight to apply to A text, default 1.0.")
B = Attribute("Optional: the weight to apply to B text, default 0.4.")
C = Attribute("Optional: the weight to apply to C text, default 0.2.")
D = Attribute(
"Optional: the weight to apply to D (default) text, default 0.1.")
marker = Attribute(
"""Optional: find only documents with the given marker value.
If empty or missing, the search is not constrained by markers.
""")
limit = Attribute(
"""Optional: limit the number of documents returned.
This is more efficient than asking repoze.catalog to limit the
number of results because this method gives PostgreSQL a chance
to employ more query optimizations. It also reduces the amount
of data sent by PostgreSQL in response to a query.
""")
offset = Attribute(
"""Optional: skip the specified number of rows in the result set.
Used for paging/batching. The limit and offset attributes are normally
used together.
""")
cache_enabled = Attribute(
"""Optional boolean: if true, pgtextindex will cache the result.
The result will be stored as the 'cache' attribute of this query
object.
""")
cache = Attribute(
"""Optional: a dict of cached query results.""") | /repoze.pgtextindex-1.4.tar.gz/repoze.pgtextindex-1.4/repoze/pgtextindex/interfaces.py | 0.898661 | 0.766578 | interfaces.py | pypi |
:mod:`repoze.postoffice`
========================
:mod:`repoze.postoffice` provides a centralized depot for collecting
incoming email for consumption by multiple applications. Incoming mail is
sorted into queues according to rules with the expectation that each
application will then consume its own queue. Each queue is a
first-in-first-out (FIFO) queue, so messages are processed in the order
received.
ZODB is used for storage and is also used to provide the client interface.
:mod:`repoze.postoffice` clients create a ZODB connection and manipulate
models. This makes consuming the message queue in the context of a
transaction, relatively simple.
Setting up the depot
--------------------
:mod:`repoze.postoffice` assumes that a message transport agent (MTA), such
as Postfix, has been configured to deliver messages to a folder using the
Maildir format. Configuring the MTA is outside of the scope of this
document.
Configuration File
++++++++++++++++++
The depot is configured via a configuration file in ini format. The ini
file consists of a single 'post office' section followed by one or more
named queue sections. The 'post office' section contains information about
the ZODB set up as well as the location of the incoming Maildir:
.. code-block:: ini
[post office]
# Required parameters
zodb_uri = zconfig://%(here)s/zodb.conf#main
maildir = %(here)s/incoming/Maildir
# Optional parameters
zodb_path = /postoffice
ooo_loop_frequency = 60 # 1 Hertz
ooo_loop_headers = To,Subject
ooo_throttle_period = 300 # 5 minutes
max_message_size = 500m
`zodb_uri` is interpreted using :mod:`repoze.zodbconn` and follows the
format laid out there. See: http://docs.repoze.org/zodbconn/narr.html
`zodb_path` is the path in the db to the postoffice queues. This parameter
is optional and defaults to '/postoffice'.
`maildir` is the path to the incoming Maildir format folder from which
messages are pulled.
`ooo_loop_frequency` specifies the threshold frequency of incoming messages
from the same user to the same queue, in messages per minute. When the
threshold is reached by a particular user, messages from that user will be
marked as rejected for period of time in an attempt to break a possible out
of office auto-reply loop. If not specified, no check is performed on
frequency of incoming messages.
`ooo_loop_headers` optionally causes loop detection to use the specified
email headers as discriminators. If specified, these headers must match
for incoming messages to trigger the ooo throttle. If not specified, no
header matching is done, and messages need only be sent from the same user
to the same queue to trigger the throttle.
`ooo_throttle_period` specifies the amount of time, in minutes, for which a
user's incoming mail will be marked as rejected if loop detection is in use
and the user reaches the `ooo_loop_frequency` threshold. Defaults to 5
minutes. If `ooo_loop_frequency` is not set, this setting has no effect.
`max_message_size` sets the maximum size, in bytes, of incoming messages.
Messages which exceed this limit will have their payloads discarded and
will be marked as rejected. The suffixes 'k', 'm' or 'g' may be used to
specify that the number of bytes is expressed in kilobytes, megabytes or
gigabytes, respectively. A number without suffix will be interpreted as
bytes. If not set, no limit will be imposed on incoming message size.
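The suffix handling described above can be sketched as follows (a hypothetical helper using 1024-based multipliers, which the actual repoze.postoffice parser may or may not use):

```python
def parse_size(value):
    """Parse a byte count such as '500m', with an optional k/m/g suffix."""
    value = value.strip().lower()
    multipliers = {'k': 1 << 10, 'm': 1 << 20, 'g': 1 << 30}
    if value and value[-1] in multipliers:
        # e.g. '500m' -> 500 * 2**20 bytes
        return int(value[:-1]) * multipliers[value[-1]]
    # A bare number is interpreted as bytes.
    return int(value)
```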
Each message queue is configured in a section with the prefix 'queue:':
.. code-block:: ini
[queue:Customer A]
filters =
to_hostname: app.customera.com app.aliasa.com
[queue:Customer B]
filters =
to_hostname: .customerb.com
Filters
+++++++
Filters are used to determine which messages land in which queues. When a new
message enters the system each queue is tried in the order specified in the
ini file until a match is found or until all of the queues have been tried.
For each queue each filter for that queue is processed. In order to match for
a queue a message must match all filters for that queue.
At the time of writing, the following filters are implemented:
- `to_hostname`: This filter matches the hostname of the email address in the
specified headers of the message. Hostnames which begin with a period will
match any hostname that ends with the specified name, i.e. '.example.com'
matches 'example.com' and 'app.example.com'. If the hostname does not begin
with a period it must match exactly. Multiple hostnames, delimited by
whitespace, may be listed. If multiple hostnames are used, an incoming
message need match only one.
By default, this filter matches against the following headers: ``To``,
``Cc``, and ``X-Original-To``. You can change this default by appending
``;headers=<comma-separated-list>`` to the value, e.g.:
.. code-block:: ini
[queue:only_bcc]
filters =
to_hostname: example.com;headers=X-Original-To
.. note::
The ``X-Original-To`` header is set by Postfix to match the envelope
recipient; considering it for a match allows accepting messages for which
the domain is BCC'ed or delivered from a mailing list.
- `header_regexp`: This filter allows the matching of arbitrary regular
expressions against the headers of a message. Only a single regular
expression can be specified. An example:
.. code-block:: ini
[queue:Parties]
filters =
header_regexp: Subject:.+[Pp]arty.+
- `header_regexp_file`: This filter is the same as `header_regexp` except that
multiple regular expressions can be written in a file. Regular expressions are
newline delimited in the file. The argument to this filter is the path to the
file:
.. code-block:: ini
[queue:Weddings]
filters =
header_regexp_file: %(here)s/wedding_invitation_header_checks.txt
- `body_regexp`: Like `header_regexp` except the regular expression must match
some text in one of the message part bodies.
- `body_regexp_file`: Like `header_regexp_file` except the regular expressions
must match some text in one of the message part bodies.
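The matching rules above can be sketched in a few lines; both functions below are simplified stand-ins for the real filters (the header set and the 'Name: value' line format are assumptions for illustration):

```python
import re
from email.message import Message

def hostname_matches(pattern, hostname):
    """to_hostname rule: a leading '.' matches the bare name and any
    subdomain; otherwise the hostname must match exactly."""
    if pattern.startswith('.'):
        return hostname == pattern[1:] or hostname.endswith(pattern)
    return hostname == pattern

def header_regexp_matches(pattern, message, headers=('To', 'Cc', 'Subject')):
    """header_regexp rule: search the regexp against 'Name: value' lines."""
    regexp = re.compile(pattern)
    for name in headers:
        value = message.get(name)
        if value is not None and regexp.search('%s: %s' % (name, value)):
            return True
    return False
```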
Global Reject Filters
+++++++++++++++++++++
In addition to defining filters for queues, filters can be defined globally
for rejection of messages before they can be assigned to queues. Any filter
that can be used for a queue can be used here. The basic difference, though,
is that for a queue, if a filter matches, the message goes into the queue.
Here, though, if a filter matches the message is rejected.:
.. code-block:: ini
[post office]
reject_filters =
header_regexp_file: reject_headers.txt
body_regexp_file: reject_body.txt
to_hostname: .partycentral.com # They need to change their MX
Populating Queues
-----------------
Queues are populated using the :cmd:`postoffice` console script that is
provided when the :mod:`repoze.postoffice` egg is installed. This script
reads messages from the incoming maildir and imports them into the
ZODB-based depot. Messages are matched and placed in appropriate queues.
Messages which do not match any queues are erased. There are no required
arguments to the script--if it can find its .ini file, it will work:
.. code-block:: sh
$ bin/postoffice
The :cmd:`postoffice` script will search for an ini file named
:file:`postoffice.ini` first in the current directory, then in an 'etc'
folder in the current directory, then an 'etc' folder that is a sibling of
the 'bin' folder which contains the `postoffice` script and then, finally,
in '/etc'. You can also use a non-standard location for the ini file by
passing the path as an argument to the script:
.. code-block:: sh
$ bin/postoffice -C path/to/config.ini
Use the '-h' or '--help' switch to see all of the options available.
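The search order described above can be sketched like this (a hypothetical helper; the real script's logic may differ in details):

```python
import os

def find_config(script_path, cwd, explicit=None):
    """Locate postoffice.ini per the search order described above."""
    if explicit is not None:
        # An explicit -C path always wins.
        return explicit
    bin_dir = os.path.dirname(os.path.abspath(script_path))
    candidates = [
        os.path.join(cwd, 'postoffice.ini'),
        os.path.join(cwd, 'etc', 'postoffice.ini'),
        os.path.join(os.path.dirname(bin_dir), 'etc', 'postoffice.ini'),
        '/etc/postoffice.ini',
    ]
    for candidate in candidates:
        if os.path.exists(candidate):
            return candidate
    return None
```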
Out of Office Loop Detection
----------------------------
:mod:`repoze.postoffice` does attempt to address out of office loops. An
out of office loop can occur when :mod:`repoze.postoffice` is used to
populate content in an application which generates an email to alert users
of the new content. Essentially, a poorly behaved email client will
respond to the new content alert email with an out of office reply which in
turn causes more content to be created and another alert email to be sent.
Without some form of loop detection, this can lead to a large amount of
junk content being generated very quickly.
When a new email enters the system, :mod:`repoze.postoffice` first checks
for some headers that could be set by well behaved MTA's to indicate
automated responses and marks as rejected messages which match these known
heuristics. First, the non-standard, but widely supported, 'Precedence'
header is checked and messages with a precedence of 'bulk', 'junk', or
'list' are marked as rejected. Next :mod:`repoze.postoffice` will check
for the presence of the 'Auto-Submitted' header which is described in
rfc3834 and is standard, but not yet widely supported. Messages
containing this header are marked. In either of these two cases, the
incoming message is marked by adding the header::
X-Postoffice-Rejected: Auto-response
Out of office messages sent by certain clients (Microsoft) will typically not
use either of the above standards to indicate an automated reply. As a last
line of defense, :mod:`repoze.postoffice` also tracks the frequency of incoming
mail by email address and, optionally, other headers specified by the
'ooo_loop_headers' configuration option. When the number of messages arriving
from the same user surpasses a particular, assumedly inhuman, threshold, a
temporary block is placed on messages from that user, such that all messages
from that user are marked as rejected for a certain period of time, hopefully
breaking the auto reply feedback loop. Messages which fall under the
throttle are marked with the header::
X-Postoffice-Rejected: Throttled
Messages marked with the 'X-Postoffice-Rejected' header are still conveyed to
the client. It is up to the client to check for this header and choose an
appropriate action, such as bouncing with a particular bounce message.
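The frequency throttle can be sketched with a sliding-window counter (a simplified, hypothetical model; the real implementation persists its state in ZODB and expresses the throttle period in minutes):

```python
import time

class LoopThrottle:
    """Mark senders whose message rate exceeds a per-minute threshold."""

    def __init__(self, max_per_minute, throttle_period_seconds, now=time.time):
        self.max_per_minute = max_per_minute
        self.throttle_period = throttle_period_seconds
        self.now = now                 # injectable clock for testing
        self._arrivals = {}            # sender -> arrival timestamps
        self._blocked_until = {}       # sender -> unblock timestamp

    def check(self, sender):
        """Record an arrival; return True if the message should be rejected."""
        t = self.now()
        blocked = self._blocked_until.get(sender)
        if blocked is not None:
            if t < blocked:
                return True            # still inside the throttle period
            del self._blocked_until[sender]
        # Keep only arrivals from the last 60 seconds.
        window = [s for s in self._arrivals.get(sender, []) if t - s < 60.0]
        window.append(t)
        self._arrivals[sender] = window
        if len(window) > self.max_per_minute:
            self._blocked_until[sender] = t + self.throttle_period
            return True
        return False
```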
Message Size Limit
------------------
If 'max_message_size' is specified in the configuration, messages which exceed
this size will have their payloads (body and any attachments) discarded and
will be marked with the header::
X-Postoffice-Rejected: Maximum Message Size Exceeded
The trimmed message is still conveyed to the client, which should check for
the 'X-Postoffice-Rejected' header and take appropriate action, possibly
including bouncing the message with an appropriate bounce message.
Consuming Queues
----------------
Client applications consume message queues by establishing a connection to
the ZODB which houses the depot and interacting with queue and message
objects. :mod:`repoze.postoffice.queue` contains a helper method,
`open_queue` which given connection information can open the connection for
you and return a Queue instance:
.. code-block:: python
from my.example import process_message
from my.example import validate_message
from repoze.postoffice.queue import open_queue
import sys
import transaction
ZODB_URI = 'zconfig://%(here)s/zodb.conf#main'
queue_name = 'my queue'
queue = open_queue(ZODB_URI, queue_name, path='/postoffice')
while queue:
message = queue.pop_next()
if not validate_message(message):
queue.bounce(message, 'Message is invalid.')
transaction.commit()
continue
try:
process_message(message)
transaction.commit()
except:
transaction.abort()
queue.quarantine(message, sys.exc_info())
transaction.commit()
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| /repoze.postoffice-0.25.tar.gz/repoze.postoffice-0.25/docs/index.rst | 0.853867 | 0.695881 | index.rst | pypi |
from email import utils
from email import header
from repoze.sendmail._compat import PY_2
from repoze.sendmail._compat import text_type
# From http://tools.ietf.org/html/rfc5322#section-3.6
ADDR_HEADERS = ('resent-from',
'resent-sender',
'resent-to',
'resent-cc',
'resent-bcc',
'from',
'sender',
'reply-to',
'to',
'cc',
'bcc')
PARAM_HEADERS = ('content-type',
'content-disposition')
def cleanup_message(message,
addr_headers=ADDR_HEADERS, param_headers=PARAM_HEADERS):
"""
Cleanup a `Message` handling header and payload charsets.
Headers are handled in the most sane way possible. Address names
are left in `ascii` if possible or encoded to `iso-8859-1` or `utf-8`
and finally encoded according to RFC 2047 without encoding the
address, something the `email` stdlib package doesn't do.
Parameterized headers such as `filename` in the
`Content-Disposition` header, have their values encoded properly
while leaving the rest of the header to be handled without
encoding. Finally, all other header are left in `ascii` if
possible or encoded to `iso-8859-1` or `utf-8` as a whole.
The message is modified in place and is also returned in such a
state that it can be safely encoded to ascii.
"""
for key, value in message.items():
if key.lower() in addr_headers:
addrs = []
for name, addr in utils.getaddresses([value]):
best, encoded = best_charset(name)
if PY_2:
name = encoded
name = header.Header(
name, charset=best, header_name=key).encode()
addrs.append(utils.formataddr((name, addr)))
value = ', '.join(addrs)
message.replace_header(key, value)
if key.lower() in param_headers:
for param_key, param_value in message.get_params(header=key):
if param_value:
best, encoded = best_charset(param_value)
if PY_2:
param_value = encoded
if best == 'ascii':
best = None
message.set_param(param_key, param_value,
header=key, charset=best)
else:
best, encoded = best_charset(value)
if PY_2:
value = encoded
value = header.Header(
value, charset=best, header_name=key).encode()
message.replace_header(key, value)
payload = message.get_payload()
if payload and isinstance(payload, text_type):
charset = message.get_charset()
if not charset:
charset, encoded = best_charset(payload)
message.set_payload(payload, charset=charset)
elif isinstance(payload, list):
for part in payload:
cleanup_message(part)
return message
def encode_message(message,
addr_headers=ADDR_HEADERS, param_headers=PARAM_HEADERS):
"""
Encode a `Message` handling headers and payloads.
Headers are handled in the most sane way possible. Address names
are left in `ascii` if possible or encoded to `iso-8859-1` or `utf-8`
and finally encoded according to RFC 2047 without encoding the
address, something the `email` stdlib package doesn't do.
Parameterized headers such as `filename` in the
`Content-Disposition` header, have their values encoded properly
while leaving the rest of the header to be handled without
encoding. Finally, all other header are left in `ascii` if
possible or encoded to `iso-8859-1` or `utf-8` as a whole.
The return is a byte string of the whole message.
"""
cleanup_message(message)
return message.as_string().encode('ascii')
def best_charset(text):
"""
Find the most human-readable and/or conventional encoding for unicode text.
Prefers `ascii` or `iso-8859-1` and falls back to `utf-8`.
"""
encoded = text
for charset in 'ascii', 'iso-8859-1', 'utf-8':
try:
encoded = text.encode(charset)
except UnicodeError:
pass
else:
return charset, encoded

| /repoze.sendmail-4.4.1.tar.gz/repoze.sendmail-4.4.1/repoze/sendmail/encoding.py | 0.591487 | 0.263136 | encoding.py | pypi |
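To illustrate `best_charset`'s fallback order, the same logic can be exercised standalone (a mirror written for this example, not the module itself):

```python
def pick_charset(text):
    """Standalone mirror of best_charset's ascii -> iso-8859-1 -> utf-8 order."""
    for charset in ('ascii', 'iso-8859-1', 'utf-8'):
        try:
            encoded = text.encode(charset)
        except UnicodeError:
            continue               # try the next, more permissive charset
        return charset, encoded
```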
from email.message import Message
from email.header import Header
from email.parser import Parser
from email.utils import formatdate
from email.utils import make_msgid
from zope.interface import implementer
from repoze.sendmail.interfaces import IMailDelivery
from repoze.sendmail.maildir import Maildir
from repoze.sendmail import encoding
import transaction
from transaction.interfaces import ISavepointDataManager
from transaction.interfaces import IDataManagerSavepoint
class MailDataManagerState(object):
"""MailDataManagerState consolidates all the possible MDM and TPC states.
Most of these are not needed and were removed from the actual logic.
This was modeled loosely after the zope.sqlalchemy extension.
"""
INIT = 0
NO_WORK = 1
COMMITTED = 2
ABORTED = 3
TPC_NONE = 11
TPC_BEGIN = 12
TPC_VOTED = 13
TPC_COMMITED = 14
TPC_FINISHED = 15
TPC_ABORTED = 16
@implementer(ISavepointDataManager)
class MailDataManager(object):
"""When creating a MailDataManager, we expect to :
1. NOT be in a transaction on creation
2. DO be joined into a transaction afterwards
__init__ is given a `callable` function and `args` to pass into it.
If everything goes as planned, during the tpc_finish phase we call:
self.callable(*self.args)
"""
def __init__(self, callable, args=(), onAbort=None,
transaction_manager=None):
self.callable = callable
self.args = args
self.onAbort = onAbort
if transaction_manager is None:
transaction_manager = transaction.manager
self.transaction_manager = transaction_manager
self.transaction = None
self.state = MailDataManagerState.INIT
self.tpc_phase = 0
def join_transaction(self, trans=None):
"""Join the object into a transaction.
If no transaction is specified, use ``transaction.manager.get()``.
Raise an error if the object is already in a different transaction.
"""
_before = self.transaction
if trans is not None:
_after = trans
else:
_after = self.transaction_manager.get()
if _before is not None and _before is not _after:
if self in _before._resources:
raise ValueError("Item is in the former transaction. "
"It must be removed before it can be added "
"to a new transaction")
if self not in _after._resources:
_after.join(self)
self.transaction = _after
def _finish(self, final_state):
if self.transaction is None:
raise ValueError("Not in a transaction")
self.state = final_state
self.tpc_phase = 0
def commit(self, trans):
if self.transaction is None:
raise ValueError("Not in a transaction")
if self.transaction is not trans:
raise ValueError("In a different transaction")
# OK to call ``commit`` w/ TPC underway
def abort(self, trans):
if self.transaction is None:
raise ValueError("Not in a transaction")
if self.transaction is not trans:
raise ValueError("In a different transaction")
if self.tpc_phase != 0:
raise ValueError("TPC in progress")
if self.onAbort:
self.onAbort()
def sortKey(self):
return str(id(self))
def savepoint(self):
"""Create a custom `MailDataSavepoint` object
Although it has a `rollback` method, the custom instance doesn't
actually do anything. `transaction` does it all.
"""
if self.transaction is None:
raise ValueError("Not in a transaction")
return MailDataSavepoint(self)
def tpc_begin(self, trans, subtransaction=False):
if self.transaction is None:
raise ValueError("Not in a transaction")
if self.transaction is not trans:
raise ValueError("In a different transaction")
if self.tpc_phase != 0:
raise ValueError("TPC in progress")
if subtransaction:
raise ValueError("Subtransactions not supported")
self.tpc_phase = 1
def tpc_vote(self, trans):
if self.transaction is None:
raise ValueError("Not in a transaction")
if self.transaction is not trans:
raise ValueError("In a different transaction")
if self.tpc_phase != 1:
raise ValueError("TPC phase error: %d" % self.tpc_phase)
self.tpc_phase = 2
def tpc_finish(self, trans):
if self.transaction is None:
raise ValueError("Not in a transaction")
if self.transaction is not trans:
raise ValueError("In a different transaction")
if self.tpc_phase != 2:
raise ValueError("TPC phase error: %d" % self.tpc_phase)
self.callable(*self.args)
self._finish(MailDataManagerState.TPC_FINISHED)
def tpc_abort(self, trans):
if self.transaction is None:
raise ValueError("Not in a transaction")
if self.transaction is not trans:
raise ValueError("In a different transaction")
if self.tpc_phase == 0:
raise ValueError("TPC phase error: %d" % self.tpc_phase)
if self.state is MailDataManagerState.TPC_FINISHED:
raise ValueError("TPC already finished")
self._finish(MailDataManagerState.TPC_ABORTED)
@implementer(IDataManagerSavepoint)
class MailDataSavepoint:
"""Don't actually do anything; transaction does it all.
"""
def __init__(self, mail_data_manager):
pass
def rollback(self):
pass
class AbstractMailDelivery(object):
"""Base class for mail delivery.
Calling ``send`` will create a managed message -- the result of
``self.createDataManager(fromaddr,toaddrs,message)``
The managed message should be an instance of `MailDataManager` or
another class that implements `IDataManager` or `ISavepointDataManager`
The managed message is immediately joined into the current transaction.
"""
def send(self, fromaddr, toaddrs, message):
if not isinstance(message, Message):
raise ValueError('Message must be email.message.Message')
encoding.cleanup_message(message)
messageid = message['Message-Id']
if messageid is None:
messageid = message['Message-Id'] = make_msgid('repoze.sendmail')
if message['Date'] is None:
message['Date'] = formatdate()
managedMessage = self.createDataManager(fromaddr, toaddrs, message)
managedMessage.join_transaction()
return messageid
@implementer(IMailDelivery)
class DirectMailDelivery(AbstractMailDelivery):
def __init__(self, mailer, transaction_manager=None):
self.mailer = mailer
if transaction_manager is None:
transaction_manager = transaction.manager
self.transaction_manager = transaction_manager
def createDataManager(self, fromaddr, toaddrs, message):
return MailDataManager(self.mailer.send,
args=(fromaddr, toaddrs, message),
transaction_manager=self.transaction_manager)
@implementer(IMailDelivery)
class QueuedMailDelivery(AbstractMailDelivery):
queuePath = property(lambda self: self._queuePath)
processor_thread = None
def __init__(self, queuePath, transaction_manager=None):
self._queuePath = queuePath
if transaction_manager is None:
transaction_manager = transaction.manager
self.transaction_manager = transaction_manager
def createDataManager(self, fromaddr, toaddrs, message):
message = copy_message(message)
message['X-Actually-From'] = Header(fromaddr, 'utf-8')
message['X-Actually-To'] = Header(','.join(toaddrs), 'utf-8')
maildir = Maildir(self.queuePath, True)
tx_message = maildir.add(message)
return MailDataManager(tx_message.commit, onAbort=tx_message.abort,
transaction_manager=self.transaction_manager)
def copy_message(message):
parser = Parser()
return parser.parsestr(message.as_string())

--- end of repoze/sendmail/delivery.py (repoze.sendmail-4.4.1) ---
import pprint
import time
from zope.interface import implements
from ZODB.POSException import ConflictError
from persistent.mapping import PersistentMapping
from repoze.session.interfaces import ISessionData
def manage_modified(wrapped):
""" Decorator which sets last modified time on session data
when a wrapped method is called """
def set_modified(self, *arg, **kw):
self.last_modified = time.time()
return wrapped(self, *arg, **kw)
return set_modified
class SessionData(PersistentMapping):
""" Dictionary-like object that supports additional methods and
attributes concerning invalidation, expiration, and conflict
resolution."""
implements(ISessionData)
# Note that we use short instance variable names here to reduce
# instance pickle sizes, as these items are written quite often to
# the database under typical usage and it's a material win to do
# so.
# _lm (last modified) indicates the last time that __setitem__,
# __delitem__, update, clear, setdefault, pop, or popitem was
# called on us.
_lm = None
# _iv indicates that this node is invalid if true.
_iv = False
def __init__(self, d=None):
# _ct is creation time
self._ct = time.time()
PersistentMapping.__init__(self, d)
# IMapping methods (overridden to manage modification time)
clear = manage_modified(PersistentMapping.clear)
update = manage_modified(PersistentMapping.update)
setdefault = manage_modified(PersistentMapping.setdefault)
pop = manage_modified(PersistentMapping.pop)
popitem = manage_modified(PersistentMapping.popitem)
__setitem__ = manage_modified(PersistentMapping.__setitem__)
__delitem__ = manage_modified(PersistentMapping.__delitem__)
# "Smarter" copy (part of IMapping interface)
def copy(self):
c = self.__class__(self.data)
if self._iv:
c._iv = self._iv
c._ct = self._ct
c._lm = self._lm
return c
# ISessionData
def invalidate(self):
"""
Invalidate (expire) the session data object.
"""
self._iv = True
def is_valid(self):
"""
Return true if session data object is still valid, false if not.
A session data object is valid if its invalidate method has not been
called.
"""
return not self._iv
def _get_last_modified(self):
return self._lm
def _set_last_modified(self, when=None): # 'when' is for testing
if when is None:
when = time.time()
self._lm = when
last_modified = property(_get_last_modified, _set_last_modified)
def _get_created(self):
return self._ct
created = property(_get_created)
# ZODB conflict resolution (to prevent write conflicts)
def _p_resolveConflict(self, old, committed, new):
# dict modifiers set '_lm'.
if committed['_lm'] != new['_lm']:
# we are operating against the PersistentMapping.__getstate__
# representation, which aliases '_container' to self.data.
if committed['_container'] != new['_container']:
msg = "Competing writes to session data: \n%s\n----\n%s" % (
pprint.pformat(committed['_container']),
pprint.pformat(new['_container']))
raise ConflictError(msg)
resolved = dict(new)
invalid = committed.get('_iv') or new.get('_iv')
if invalid:
resolved['_iv'] = True
resolved['_lm'] = max(committed['_lm'], new['_lm'])
return resolved

--- end of repoze/session/data.py (repoze.session-0.2) ---
from zope.interface import Interface
from zope.interface import Attribute
from zope.interface.common.mapping import IMapping
class ISessionDataManager(Interface):
def query(k, default=None):
"""
Return value associated with key k. If value associated with
k does not exist, return default.
"""
def has_key(k):
"""
Return true if manager has value associated with key k, else
return false.
"""
def get(k):
"""
If an object already exists in the manager with key "k", it
is returned.
Otherwise, create a new subobject of the type supported by this
container with key "k" and return it.
"""
class ISessionData(IMapping):
""" Supports a mapping interface plus expiration- and container-related
methods """
def invalidate():
"""
Invalidate (expire) the session data object.
"""
def is_valid():
"""
Return true if session data object is still valid, false if not.
A session data object is valid if its invalidate method has not been
called.
"""
last_modified = Attribute("""\
Return the time the session data was last modified in
integer seconds-since-the-epoch form. Modification generally implies
a call to one of the session data object's __setitem__ or __delitem__
methods, directly or indirectly as a result of a call to
update, clear, or other mutating data access methods. This value
can be assigned.
""")
created = Attribute("""\
Return the time the session data object was created in integer
seconds-since-the-epoch form. This attribute cannot be assigned.
""")
class ISessionBeginEvent(Interface):
""" An interface representing an event that happens when a session begins
"""
class ISessionEndEvent(Interface):
""" An interface representing an event that happens when a session ends
""" | /repoze.session-0.2.tar.gz/repoze.session-0.2/repoze/session/interfaces.py | 0.818193 | 0.287519 | interfaces.py | pypi |
Documentation for repoze.tm
===========================
Overview
--------
:mod:`repoze.tm` is WSGI middleware which uses the ``ZODB`` package's
transaction manager to wrap a call to its pipeline children inside a
transaction.
.. note:: :mod:`repoze.tm` is equivalent to the :mod:`repoze.tm2`
package (which was forked from :mod:`repoze.tm`), except it has a
dependency on the entire ``ZODB3`` package rather than only a a
dependency on the ``transaction`` package (``ZODB3`` ships with the
``transaction`` package right now). It is an error to install both
repoze.tm and repoze.tm2 into the same environment, as they provide
the same entry points and import points.
Behavior
--------
When this middleware is present in the WSGI pipeline, a new
transaction will be started once a WSGI request makes it to the
:mod:`repoze.tm` middleware. If any downstream application raises an
exception, the transaction will be aborted, otherwise the transaction
will be committed. Any "data managers" participating in the
transaction will be aborted or committed respectively. A ZODB
"connection" is an example of a data manager.
Since this is a tiny wrapper around the ZODB transaction module, and
the ZODB transaction module is "thread-safe" (in the sense that its
default policy is to create a new transaction for each thread), it
should be fine to use in either multiprocess or multithread
environments.
Purpose and Usage
-----------------
The ZODB transaction manager is a completely generic transaction
manager. It can be used independently of the actual "object database"
part of ZODB. One of the purposes of creating :mod:`repoze.tm` was to
allow for systems other than Zope to make use of two-phase commit
transactions in a WSGI context.
Let's pretend we have an existing system that places data into a
relational database when someone submits a form. The system has been
running for a while, and our code handles the database commit and
rollback for us explicitly; if the form processing succeeds, our code
commits the database transaction. If it fails, our code rolls back
the database transaction. Everything works fine.
Now our customer asks us if we can also place data into another
separate relational database when the form is submitted as well as
continuing to place data in the original database. We need to put
data in both databases, and if we want to ensure that no records exist
in one that don't exist in the other as a result of a form submission,
we're going to need to do a pretty complicated commit and rollback
dance in each place in our code which needs to write to both data
stores. We can't just blindly commit one, then commit the other,
because the second commit may fail and we'll be left with "orphan"
data in the first, and we'll either need to clean it up manually or
leave it there to trip over later.
A transaction manager helps us ensure that no data is committed to
either database unless both participating data stores can commit.
Once the transaction manager determines that both data stores are
willing to commit, it will commit them both in very quick succession,
so that there is only a minimal chance that the second data store will
fail to commit. If it does, the system will raise an error that makes
it impossible to begin another transaction until the system restarts,
so the damage is minimized. In practice, this error almost never
occurs unless the code that interfaces the database to the transaction
manager has a bug.
Adding :mod:`repoze.tm` To Your WSGI Pipeline
---------------------------------------------
Via ``PasteDeploy`` .INI configuration::
[pipeline:main]
pipeline =
egg:repoze.tm#tm
myapp
Via Python:
.. code-block:: python
from otherplace import mywsgiapp
from repoze.tm import TM
new_wsgiapp = TM(mywsgiapp)
Mocking Up A Data Manager
-------------------------
The piece of code you need to write in order to participate in ZODB
transactions is called a 'data manager'. It is typically a class.
Here's the interface that you need to implement in the code for a data
manager:
.. code-block:: python
class IDataManager(zope.interface.Interface):
"""Objects that manage transactional storage.
These objects may manage data for other objects, or they
may manage non-object storages, such as relational
databases. For example, a ZODB.Connection.
Note that when some data is modified, that data's data
manager should join a transaction so that data can be
committed when the user commits the transaction. """
transaction_manager = zope.interface.Attribute(
"""The transaction manager (TM) used by this data
manager.
This is a public attribute, intended for read-only
use. The value is an instance of ITransactionManager,
typically set by the data manager's constructor. """
)
def abort(transaction):
"""Abort a transaction and forget all changes.
Abort must be called outside of a two-phase commit.
Abort is called by the transaction manager to abort transactions
that are not yet in a two-phase commit.
"""
# Two-phase commit protocol. These methods are called by
# the ITransaction object associated with the transaction
# being committed. The sequence of calls normally follows
# this regular expression: tpc_begin commit tpc_vote
# (tpc_finish | tpc_abort)
def tpc_begin(transaction):
"""Begin commit of a transaction, starting the
two-phase commit.
transaction is the ITransaction instance associated with the
transaction being committed.
"""
def commit(transaction):
"""Commit modifications to registered objects.
Save changes to be made persistent if the transaction
commits (if tpc_finish is called later). If tpc_abort
is called later, changes must not persist.
This includes conflict detection and handling. If no
conflicts or errors occur, the data manager should be
prepared to make the changes persist when tpc_finish
is called. """
def tpc_vote(transaction):
"""Verify that a data manager can commit the transaction.
This is the last chance for a data manager to vote 'no'. A
data manager votes 'no' by raising an exception.
transaction is the ITransaction instance associated with the
transaction being committed.
"""
def tpc_finish(transaction):
"""Indicate confirmation that the transaction is done.
Make all changes to objects modified by this
transaction persist.
transaction is the ITransaction instance associated
with the transaction being committed.
This should never fail. If this raises an exception,
the database is not expected to maintain consistency;
it's a serious error. """
def tpc_abort(transaction):
"""Abort a transaction.
This is called by a transaction manager to end a
two-phase commit on the data manager. Abandon all
changes to objects modified by this transaction.
transaction is the ITransaction instance associated
with the transaction being committed.
This should never fail.
"""
def sortKey():
"""Return a key to use for ordering registered
DataManagers.
ZODB uses a global sort order to prevent deadlock when
it commits transactions involving multiple resource
managers. The resource manager must define a
sortKey() method that provides a global ordering for
resource managers. """
# Alternate version:
# """Return a consistent sort key for this connection.
#
# This allows ordering multiple connections that use the same
# storage in a consistent manner. This is unique for the lifetime
# of a connection, which is good enough to avoid ZEO deadlocks.
# """
Let's implement a mock data manager. Our mock data manager will write
data to a file if the transaction commits. It will not write data to
a file if the transaction aborts:
.. code-block:: python
class MockDataManager:
transaction_manager = None
def __init__(self, data, path):
self.data = data
self.path = path
def abort(self, transaction):
pass
def tpc_begin(self, transaction):
pass
def commit(self, transaction):
import tempfile
self.tempfn = tempfile.mktemp()
temp = open(self.tempfn, 'wb')
temp.write(self.data)
temp.flush()
temp.close()
def tpc_vote(self, transaction):
import os
if not os.path.exists(self.tempfn):
raise ValueError('%s doesnt exist' % self.tempfn)
if os.path.exists(self.path):
raise ValueError('file already exists')
def tpc_finish(self, transaction):
import os
os.rename(self.tempfn, self.path)
def tpc_abort(self, transaction):
import os
try:
os.remove(self.tempfn)
except OSError:
pass
We can create a datamanager and join it into the currently running
transaction:
.. code-block:: python
dm = MockDataManager('heres the data', '/tmp/file')
import transaction
t = transaction.get()
t.join(dm)
When the transaction commits, a file will be placed in '/tmp/file'
containing 'heres the data'. If the transaction aborts, no file will
be created.
If more than one data manager is joined to the transaction, all of
them must be willing to commit or the entire transaction is aborted
and none of them commit. If you can imagine creating two of the mock
data managers we've made within application code, if one has a problem
during "tpc_vote", neither will actually write a file to the ultimate
location, and thus your application consistency is maintained.
Integrating Your Data Manager With :mod:`repoze.tm`
---------------------------------------------------
The :mod:`repoze.tm` transaction management machinery has an implicit
policy. When it is in the WSGI pipeline, a transaction is started
when the middleware is invoked. Thus, in your application code,
calling "import transaction; transaction.get()" will return the
transaction object created by the :mod:`repoze.tm` middleware. You
needn't call t.commit() or t.abort() within your application code.
You only need to call t.join, to register your data manager with the
transaction. :mod:`repoze.tm` will abort the transaction if an
exception is raised by your application code or lower middleware
before it returns a WSGI response. If your application or lower
middleware raises an exception, the transaction is aborted.
Cleanup
-------
When a :mod:`repoze.tm` is in the WSGI pipeline, a boolean key is
present in the environment (``repoze.tm.active``). A utility function
named isActive can be imported from the :mod:`repoze.tm` package and
passed the WSGI environment to check for activation:
.. code-block:: python
from repoze.tm import isActive
tm_active = isActive(wsgi_environment)
If an application needs to perform an action after a transaction ends,
the "after_end" registry may be used to register a callback. The
after_end.register function accepts a callback (accepting no
arguments) and a transaction instance:
.. code-block:: python
from repoze.tm import after_end
import transaction
t = transaction.get() # the current transaction
def func():
pass # close a connection, etc
after_end.register(func, t)
"after_end" callbacks should only be registered when the transaction
manager is active, or a memory leak will result (registration cleanup
happens only on transaction commit or abort, which is managed by
:mod:`repoze.tm` while in the pipeline).
Further Documentation
---------------------
Many database adapters written for Zope (e.g. for Postgres, MySQL,
etc) use this transaction manager, so it should be possible to take a
look in these places to see how to implement a more real-world
transaction-aware database connector that uses this module in non-Zope
applications:
- http://svn.zope.org/ZODB/branches/3.7/src/transaction/
- http://svn.zope.org/ZODB/branches/3.8/src/transaction/
- http://mysql-python.sourceforge.net/ (ZMySQLDA)
- http://www.initd.org/svn/psycopg/psycopg2/trunk/ (ZPsycoPGDA)
Contacting
----------
The `repoze-dev maillist
<http://lists.repoze.org/mailman/listinfo/repoze-dev>`_ should be used
for communications about this software.
.. toctree::
:maxdepth: 2
changes
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
--- end of docs/index.rst (repoze.tm-1.0a5) ---
import re
from zope.interface import implements
from repoze.urispace.interfaces import IOperator
class SchemePredicate(object):
""" Test the URI scheme.
o See: http://www.w3.org/TR/urispace.html, section 3.1, "Scheme".
"""
def __init__(self, pattern):
self.pattern = pattern
self.schemes = pattern.split(' ')
def __call__(self, info):
scheme = info.get('scheme')
if scheme is not None and scheme in self.schemes:
return True, info
return False, None
class NethostPredicate(object):
""" Do match testing against the nethost.
o See: http://www.w3.org/TR/urispace.html, section 3.2.2, "Host".
"""
WC_SEARCH = re.compile(r'\?|\*')
wildcard = None
port = None
def __init__(self, pattern):
self.pattern = pattern
tokens = self.WC_SEARCH.split(pattern)
if len(tokens) > 2:
raise ValueError("Only one wildcard allowed in pattern: %s"
% pattern)
if len(tokens) == 2:
if tokens[0]:
raise ValueError("Wildcard must be at start of in pattern: %s"
% pattern)
wc = self.wildcard = pattern[0]
host = tokens[1]
if wc == '?':
if not host.startswith('.'):
raise ValueError("Wildcard '?' must be followed by '.': %s"
% pattern)
host = host[1:]
self.host = host
else:
self.host = pattern
tokens = self.host.split(':')
if len(tokens) > 2:
raise ValueError("No more than one colon in pattern: %s"
% pattern)
elif len(tokens) == 2:
self.host = tokens[0]
self.port = int(tokens[1])
def __call__(self, info):
hostport = info.get('nethost')
if hostport is None:
return False, None
host, port = hostport
if self.wildcard == '*':
hostmatch = host.endswith(self.host)
elif self.wildcard == '?':
# Guard against dotless hosts, which would make the unpack fail.
parts = host.split('.', 1)
hostmatch = len(parts) == 2 and parts[1] == self.host
else:
hostmatch = (host == self.host)
if hostmatch and port == self.port:
return True, info
return False, None
class PathElementPattern(object):
""" Glob-like pattern matching for path elements.
o Only a single '*' is allowed in our pattern, per spec.
"""
def __init__(self, pattern):
self.pattern = pattern
tokens = pattern.split('*')
if len(tokens) > 2:
raise ValueError('Only one asterisk allowed: %s' % pattern)
self.wildcard = len(tokens) == 2
if self.wildcard:
self.before, self.after = tokens
def __call__(self, element):
""" Return True if element matches our pattern.
"""
if self.wildcard:
return (element.startswith(self.before) and
element.endswith(self.after))
return element == self.pattern
class _PathPredicate(object):
def __init__(self, pattern):
self.pattern = PathElementPattern(pattern)
class PathFirstPredicate(_PathPredicate):
""" Implement the semantics of the '<path match="">...</path>' pattern:
o Test our pattern against the first element of the current path.
o See: http://www.w3.org/TR/urispace.html, section 3.3, "Path Segment".
"""
def __call__(self, info):
path = info.get('path')
if path and self.pattern(path[0]):
new_info = info.copy()
new_info['path'] = path[1:]
return True, new_info
return False, None
class PathLastPredicate(_PathPredicate):
""" Implement the semantics of the '<pathlast="">...</pathlast>' pattern:
o Test our pattern against the last element of the current path.
o These semantics are *not* called out in the URISpace spec; it should
probably be triggered by an extension element (e.g., <ext:path last="">).
"""
def __call__(self, info):
path = info.get('path')
hit = path and self.pattern(path[-1])
if hit:
new_info = info.copy()
new_info['path'] = ()
return True, new_info
return False, None
class PathAnyPredicate(_PathPredicate):
""" Implement the semantics of the '<pathany="">...</pathany>' pattern:
o Test our pattern against any element of the current path.
o These semantics are *not* called out in the URISpace spec; it should
probably be triggered by an extension element (e.g., <ext:path any="">).
"""
def __call__(self, info):
path = info.get('path')
if path is not None:
residue = list(path)
while residue:
element, residue = residue[0], residue[1:]
if self.pattern(element):
new_info = info.copy()
new_info['path'] = tuple(residue)
return True, new_info
return False, None
class QueryKeyPredicate(object):
""" Do match testing against the query string.
o See: http://www.w3.org/TR/urispace.html, section 3.4, "Query".
"""
def __init__(self, pattern):
self.pattern = pattern
if '=' in pattern:
raise ValueError("pattern must not contain '=': %s" % pattern)
def __call__(self, info):
query = info.get('query')
if query is not None and self.pattern in query:
return True, info
return False, None
class QueryValuePredicate(object):
""" Do match testing against the query string.
o See: http://www.w3.org/TR/urispace.html, section 3.4, "Query".
"""
def __init__(self, pattern):
self.pattern = pattern
tokens = pattern.split('=')
if len(tokens) != 2:
raise ValueError("'pattern' must be 'key=value', was %s"
% pattern)
self.key = tokens[0]
self.value = tokens[1]
def __call__(self, info):
query = info.get('query')
if query is not None and query.get(self.key, self) == self.value:
return True, info
return False, None

--- end of repoze/urispace/predicates.py (repoze.urispace-0.3.2) ---
from zope.interface import implements
from repoze.urispace.interfaces import IOperator
class ReplaceOperator:
""" Updates a given key in the assertion set with a new value.
"""
implements(IOperator)
def __init__(self, key, value):
self.key = key
self.value = value
def apply(self, assertions):
""" See IOperator.
"""
assertions[self.key] = self.value
class ClearOperator:
""" Clears a given key from the assertion set.
"""
implements(IOperator)
value = None
def __init__(self, key):
self.key = key
def apply(self, assertions):
""" See IOperator.
"""
try:
del assertions[self.key]
except KeyError:
pass
class SetOperatorBase:
""" Base class for operators on sets.
"""
def __init__(self, key, value):
self.key = key
self.value = value
def _setify(self, x):
if not isinstance(x, set):
x = set(x)
return x
def apply(self, assertions):
""" See IOperator.
"""
old = self._setify(assertions.setdefault(self.key, []))
new = self._setify(self.value)
assertions[self.key] = list(self._merge(old, new))
class UnionOperator(SetOperatorBase):
""" Assign the union of the old and new values.
"""
implements(IOperator)
def _merge(self, old, new):
return old | new
class IntersectionOperator(SetOperatorBase):
""" Assign the intersection of the old and new values.
"""
implements(IOperator)
def _merge(self, old, new):
return old & new
class RevIntersectionOperator(SetOperatorBase):
""" Assign the symmetric differences (XOR) of the old and new values.
"""
implements(IOperator)
def _merge(self, old, new):
return old ^ new
class DifferenceOperator(SetOperatorBase):
""" Assign the difference, old - new.
"""
implements(IOperator)
def _merge(self, old, new):
return old - new
class RevDifferenceOperator(SetOperatorBase):
""" Assign the differences, new - old.
"""
implements(IOperator)
def _merge(self, old, new):
return new - old
class SequenceOperatorBase:
""" Base class for operators on sequences.
"""
def __init__(self, key, value):
self.key = key
self.value = value
def _listify(self, x):
if x is None:
return []
if isinstance(x, basestring):
x = [x]
if not isinstance(x, list):
x = list(x)
return x
def apply(self, assertions):
""" See IOperator.
"""
old = self._listify(assertions.setdefault(self.key, []))
new = self._listify(self.value)
assertions[self.key] = self._merge(old, new)
class PrependOperator(SequenceOperatorBase):
""" Updates a given key in the assertion set, prepending a new value.
o Promotes existing scalar values to lists.
"""
implements(IOperator)
def _merge(self, old, new):
result = new[:]
result.extend(old)
return result
class AppendOperator(SequenceOperatorBase):
""" Updates a given key in the assertion set, appending a new value.
o Promotes existing scalar values to lists.
"""
implements(IOperator)
def _merge(self, old, new):
result = old[:]
result.extend(new)
return result

--- end of repoze/urispace/operators.py (repoze.urispace-0.3.2) ---
from zope.interface import implements
from repoze.urispace.interfaces import IOperator
from repoze.urispace.interfaces import ISelector
from repoze.urispace.interfaces import IURISpaceElement
class SelectorBase(object):
""" Base for objects which test URIs and return operators.
o Implements composite pattern to allow nesting.
"""
def listChildren(self):
""" See ISelector.
"""
return tuple(self.children)
def addChild(self, child):
""" See ISelector.
"""
if not IURISpaceElement.providedBy(child):
raise ValueError('child must provide IURISpaceElement')
self.children.append(child)
class TrueSelector(SelectorBase):
""" Always fire (e.g., for the root of the URISpace).
"""
implements(ISelector)
predicate = lambda *x: True
def __init__(self):
self.children = []
def collect(self, uri_info):
""" See ISelector.
"""
commands = []
commands.extend([x for x in self.children
if IOperator.providedBy(x)])
for child in [x for x in self.children
if ISelector.providedBy(x)]:
commands.extend(child.collect(uri_info))
return commands
class FalseSelector(SelectorBase):
""" Never fire (e.g., to comment out part of the URISpace).
"""
implements(ISelector)
predicate = lambda *x: False
def __init__(self):
self.children = []
def collect(self, uri_info):
""" See ISelector.
"""
return []
class PredicateSelector(SelectorBase):
""" Do match testing using a generic predicate against URI info.
"""
implements(ISelector)
def __init__(self, predicate):
self.predicate = predicate
self.children = []
def collect(self, uri_info):
""" See ISelector.
"""
commands = []
hit, new_info = self.predicate(uri_info)
if hit:
commands.extend([x for x in self.children
if IOperator.providedBy(x)])
for child in [x for x in self.children
if ISelector.providedBy(x)]:
commands.extend(child.collect(new_info))
return commands

--- end of repoze/urispace/selectors.py (repoze.urispace-0.3.2) ---
:mod:`repoze.urispace` -- Hierarchical URI-based metadata
*********************************************************
:Author: Tres Seaver
:Version: |version|
.. module:: repoze.urispace
:synopsis: Hierarchical URI-based metadata
.. topic:: Overview
:mod:`repoze.urispace` implements the URISpace_ 1.0 spec, as proposed
to the W3C by Akamai. Its aim is to provide an implementation of
that language as a vehicle for asserting declarative metadata about a
resource based on pattern matching against its URI.
Once asserted, such metadata can be used to guide the application in
serving the resource, with possible applications including:
- Setting cache control headers.
- Selecting externally applied themes, e.g. in :mod:`Deliverance`
- Restricting access, e.g. to emulate Zope's "placeful security."
URISpace Specification
----------------------
The URISpace_ specification provides for matching on the following
portions of a URI:
- scheme
- authority (see URIRFC_)
o host, including wildcarding (leading only) and port
o user (if specified in the URI)
- path elements, including nesting and wildcarding, as well as
parameters, where used.
- query elements, including test for presence or for specific value
- fragments (likely irrelevant for server-side applications)
.. Note:: :mod:`repoze.urispace` does not yet provide support for
fragment matching.
The asserted metadata can be scalar, or can use RDF Bag and Sequences
to indicate sets or ordered collections.
.. Note:: :mod:`repoze.urispace` does not yet provide support for
parsing multi-valued assertions using RDF.
Operators are provided to allow for incrementally updating or clearing
the value for a given metadata element. Specified operators include:
``replace``
Completely replace any previously asserted value with a new one.
This is the default operator.
``clear``
Remove any previously asserted value.
``union``
Perform a set union: ``old | new``
``intersection``
Perform a set intersection: ``old & new``
``rev-intersection``
Perform a set exclusion: ``old ^ new``
``difference``
Perform set subtraction: ``old - new``
``rev-difference``
Perform set subtraction: ``new - old``
``prepend``
Insert ``new`` values at the head of ``old`` values
``append``
Insert ``new`` values at the tail of ``old`` values
Example
-------
Suppose we want to select different Deliverance themes and/or rulesets
based on the URI of the resource being themed. In particular:
- The ``news``, ``lifestyle``, and ``sports`` sections of the site each get
custom themes, with the homepage and any other sections sharing the
default theme.
- Within the news section, the ``world``, ``national``, and ``local``
sections all use a different theme URL (one with the desired color
scheme name encoded as a query string).
- Within any section, the ``index.html`` page should use a different
ruleset than the stories in that section (whose final path element
will be ``<slug>.html``): the index page's HTML is structured very
differently from that used for stories.
A URISpace file specifying these policies would look like:
.. include:: examples/dv_news.xml
:literal:
Given that URISpace file, one can test how given URIs match using
the ``uri_test`` script::
$ /path/to/bin/uri_test examples/dv_news.xml \
http://example.com/ \
http://example.com/foo \
http://example.com/news/ \
http://example.com/news/index.html \
http://example.com/news/world/index.html \
http://example.com/sports/ \
http://example.com/sports/world_series_2008.html
------------------------------------------------------------------------------
URI: http://example.com/
------------------------------------------------------------------------------
rules = http://static.example.com/rules/default.xml
theme = http://themes.example.com/default.html
------------------------------------------------------------------------------
URI: http://example.com/foo
------------------------------------------------------------------------------
rules = http://static.example.com/rules/default.xml
theme = http://themes.example.com/default.html
------------------------------------------------------------------------------
URI: http://example.com/news/
------------------------------------------------------------------------------
rules = http://static.example.com/rules/default.xml
theme = http://themes.example.com/news.html
------------------------------------------------------------------------------
URI: http://example.com/news/index.html
------------------------------------------------------------------------------
rules = http://static.example.com/rules/default.xml
theme = http://themes.example.com/news.html
------------------------------------------------------------------------------
URI: http://example.com/news/world/index.html
------------------------------------------------------------------------------
rules = http://static.example.com/rules/default.xml
theme = http://themes.example.com/news.html?style=world
------------------------------------------------------------------------------
URI: http://example.com/sports/
------------------------------------------------------------------------------
rules = http://static.example.com/rules/default.xml
theme = http://themes.example.com/sports.html
------------------------------------------------------------------------------
URI: http://example.com/sports/world_series_2008.html
------------------------------------------------------------------------------
rules = http://static.example.com/rules/default.xml
theme = http://themes.example.com/sports.html
Using a URISpace parser in Python Code
--------------------------------------
Once parsing is complete, the URISpace is available as a tree-like object.
The canonical operators to extract metadata for a given URI are:
.. code-block:: python
    from urlparse import urlsplit, parse_qs
scheme, nethost, path, query, fragment = urlsplit(uri)
path = path.split('/')
if len(path) > 1 and path[0] == '':
path = path[1:]
info = {'scheme': scheme,
'nethost': nethost,
'path': path,
'query': parse_qs(query, keep_blank_values=1),
'fragment': fragment,
}
operators = urispace.collect(info)
assertions = {}
for operator in operators:
operator.apply(assertions)
At this point, ``assertions`` will contain keys and values for all
operators found while matching against the URI.
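To make the ``info`` dictionary concrete, here is what it would contain for
one of the example URIs above. The sketch uses the standard library only; the
try/except import keeps it runnable on both Python 2 and Python 3:

```python
# Build the "info" matching dict for a sample URI, as in the snippet above.
try:
    from urlparse import urlsplit, parse_qs          # Python 2
except ImportError:
    from urllib.parse import urlsplit, parse_qs      # Python 3

uri = 'http://example.com/news/index.html?style=world'
scheme, nethost, path, query, fragment = urlsplit(uri)
segments = path.split('/')
if len(segments) > 1 and segments[0] == '':
    segments = segments[1:]                          # drop the leading empty segment

info = {'scheme': scheme,
        'nethost': nethost,
        'path': segments,
        'query': parse_qs(query, keep_blank_values=1),
        'fragment': fragment}
# info['path'] is now ['news', 'index.html'] and
# info['query'] is {'style': ['world']}
```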
Using URISpace as WSGI Middleware
---------------------------------
One application of a URISpace might be to make assertions about the
URI of a WSGI request, in order to allow other parts of the application
to use those assertions. :mod:`repoze.urispace` provides a component
which can be used as middleware for this purpose.
To configure the middleware in a :mod:`PasteDeploy` config file::
[filter:urispace]
use = egg:repoze.urispace#urispace
file = %(here)s/urispace.xml
You should then be able to add the middleware to your pipeline::
[pipeline:main]
pipeline =
urispace
your_app
In your application, you can get to the assertions made by the middleware
using the :func:`repoze.urispace.middleware.getAssertions` API, e.g.:
.. code-block:: python
from repoze.urispace.middleware import getAssertions
def your_app(environ, start_response):
    assertions = getAssertions(environ)
    # ... use the assertions, then return a normal WSGI response:
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello world!']
Development Notes
-----------------
Extending :mod:`repoze.urispace`
++++++++++++++++++++++++++++++++
- Registering custom selectors (TBD)
- Registering operator converters (TBD)
.. toctree::
:maxdepth: 2
parser
.. _URISpace: http://www.w3.org/TR/urispace.html
.. _URIRFC: http://www.ietf.org/rfc/rfc2396.txt
.. target-notes::
import os
from hashlib import sha1
from datetime import datetime
from elixir import Entity, Field
from elixir import DateTime, Unicode
from elixir import using_options
from elixir import ManyToMany
class User(Entity):
"""Reasonably basic User definition. Probably would want additional
attributes.
"""
using_options(tablename="user", auto_primarykey="user_id")
user_name = Field(Unicode(16), required=True, unique=True)
_password = Field(Unicode(80), colname="password", required=True)
groups = ManyToMany(
"Group",
inverse="users",
tablename="user_group",
local_colname="group_id",
remote_colname="user_id",
)
def _set_password(self, password):
"""Hash password on the fly."""
hashed_password = password
if isinstance(password, unicode):
password_8bit = password.encode('UTF-8')
else:
password_8bit = password
salt = sha1()
salt.update(os.urandom(60))
hash = sha1()
hash.update(password_8bit + salt.hexdigest())
hashed_password = salt.hexdigest() + hash.hexdigest()
        # Make sure the hashed password is a UTF-8 object at the end of the
# process because SQLAlchemy _wants_ a unicode object for Unicode
# fields
if not isinstance(hashed_password, unicode):
hashed_password = hashed_password.decode('UTF-8')
self._password = hashed_password
def _get_password(self):
"""Return the password hashed"""
return self._password
    password = property(_get_password, _set_password)
def validate_password(self, password):
"""Check the password against existing credentials.
:param password: the password that was provided by the user to
try and authenticate. This is the clear text version that we will
need to match against the hashed one in the database.
:type password: unicode object.
:return: Whether the password is valid.
:rtype: bool
"""
hashed_pass = sha1()
hashed_pass.update(password + self.password[:40])
return self.password[40:] == hashed_pass.hexdigest()
class Group(Entity):
"""An ultra-simple group definition."""
using_options(tablename="group", auto_primarykey="group_id")
group_name = Field(Unicode(16), unique=True)
display_name = Field(Unicode(255))
created = Field(DateTime, default=datetime.now)
users = ManyToMany("User")
permissions = ManyToMany(
"Permission",
inverse="groups",
tablename="group_permission",
local_colname="group_id",
remote_colname="permission_id",
)
class Permission(Entity):
"""A relationship that determines what each Group can do"""
using_options(tablename="permission", auto_primarykey="permission_id")
permission_name = Field(Unicode(16), unique=True)
    groups = ManyToMany("Group")
import os
from hashlib import sha1
from datetime import datetime
from sqlalchemy import Table, ForeignKey, Column
from sqlalchemy.types import String, Unicode, UnicodeText, Integer, DateTime, \
Boolean, Float
from sqlalchemy.orm import relation, backref, synonym
from yourproject.model import DeclarativeBase, metadata
# This is the association table for the many-to-many relationship between
# groups and permissions.
group_permission_table = Table('group_permission', metadata,
Column('group_id', Integer, ForeignKey('group.group_id',
onupdate="CASCADE", ondelete="CASCADE")),
Column('permission_id', Integer, ForeignKey('permission.permission_id',
onupdate="CASCADE", ondelete="CASCADE"))
)
# This is the association table for the many-to-many relationship between
# groups and members - this is, the memberships.
user_group_table = Table('user_group', metadata,
Column('user_id', Integer, ForeignKey('user.user_id',
onupdate="CASCADE", ondelete="CASCADE")),
Column('group_id', Integer, ForeignKey('group.group_id',
onupdate="CASCADE", ondelete="CASCADE"))
)
# auth model
class Group(DeclarativeBase):
"""An ultra-simple group definition."""
__tablename__ = 'group'
group_id = Column(Integer, autoincrement=True, primary_key=True)
group_name = Column(Unicode(16), unique=True)
users = relation('User', secondary=user_group_table, backref='groups')
class User(DeclarativeBase):
"""
Reasonably basic User definition. Probably would want additional
attributes.
"""
__tablename__ = 'user'
user_id = Column(Integer, autoincrement=True, primary_key=True)
user_name = Column(Unicode(16), unique=True)
_password = Column('password', Unicode(80))
def _set_password(self, password):
"""Hash password on the fly."""
hashed_password = password
if isinstance(password, unicode):
password_8bit = password.encode('UTF-8')
else:
password_8bit = password
salt = sha1()
salt.update(os.urandom(60))
hash = sha1()
hash.update(password_8bit + salt.hexdigest())
hashed_password = salt.hexdigest() + hash.hexdigest()
        # Make sure the hashed password is a UTF-8 object at the end of the
# process because SQLAlchemy _wants_ a unicode object for Unicode
# fields
if not isinstance(hashed_password, unicode):
hashed_password = hashed_password.decode('UTF-8')
self._password = hashed_password
def _get_password(self):
"""Return the password hashed"""
return self._password
password = synonym('_password', descriptor=property(_get_password,
_set_password))
def validate_password(self, password):
"""
Check the password against existing credentials.
:param password: the password that was provided by the user to
try and authenticate. This is the clear text version that we will
need to match against the hashed one in the database.
:type password: unicode object.
:return: Whether the password is valid.
:rtype: bool
"""
hashed_pass = sha1()
hashed_pass.update(password + self.password[:40])
return self.password[40:] == hashed_pass.hexdigest()
class Permission(DeclarativeBase):
"""A relationship that determines what each Group can do"""
__tablename__ = 'permission'
permission_id = Column(Integer, autoincrement=True, primary_key=True)
permission_name = Column(Unicode(16), unique=True)
groups = relation(Group, secondary=group_permission_table,
                      backref='permissions')
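The salted SHA-1 scheme implemented by ``_set_password`` and
``validate_password`` above can be exercised on its own. The sketch below
uses the standard library only and is written in Python 3 syntax (the model
itself targets Python 2); the stored value is the 40-character salt hexdigest
followed by the 40-character hash hexdigest:

```python
import os
from hashlib import sha1

def hash_password(password):
    # 40-char salt hexdigest + 40-char sha1(password + salt) hexdigest,
    # mirroring _set_password above.
    salt = sha1(os.urandom(60)).hexdigest()
    digest = sha1(password.encode('utf-8') + salt.encode('ascii')).hexdigest()
    return salt + digest

def validate_password(stored, password):
    # Mirrors validate_password above: re-hash with the stored salt
    # and compare against the stored digest.
    salt, digest = stored[:40], stored[40:]
    return sha1(password.encode('utf-8') + salt.encode('ascii')).hexdigest() == digest
```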
from repoze.what.predicates import Predicate
from repoze.who.plugins.x509.utils import *
import re
__all__ = ['is_subject', 'is_issuer', 'X509Predicate', 'X509DNPredicate']
class X509Predicate(Predicate):
"""
Represents a predicate based on the X.509 protocol. It can be evaluated,
    although it can only check that the client certificate is valid.
    Users should use a subclass or inherit from it.
"""
def __init__(self, **kwargs):
"""
:param verify_key: The WSGI environment key that specify if the client
certificate is valid or not. A value of 'SUCCESS' will make it
valid. If you don't specify a key, then the value that it will take
is by default ``SSL_CLIENT_VERIFY``
:param validity_start_key: The WSGI environment key that specifies the
encoded datetime that indicates the start of the validity range.
If the timezone is not UTC (or GMT), it will fail.
:param validity_end_key: The WSGI environment key that specifies the
encoded datetime that indicates the end of the validity range.
If the timezone is not UTC (or GMT), it will fail.
"""
self.verify_key = kwargs.pop('verify_key', None) or VERIFY_KEY
self.validity_start_key = kwargs.pop('validity_start_key', None) or \
VALIDITY_START_KEY
self.validity_end_key = kwargs.pop('validity_end_key', None) or \
VALIDITY_END_KEY
super(X509Predicate, self).__init__(msg=kwargs.get('msg'))
def evaluate(self, environ, credentials):
"""
        Evaluates the predicate. A subclass should override this method,
        but call the parent implementation before running its custom checks.
:param environ: The WSGI environment.
:param credentials: The user credentials. These will not be used
:raise NotAuthorizedError: If the predicate is not met.
"""
# Cannot assume every environment will have all mod_ssl CGI vars.
if not verify_certificate(
environ,
self.verify_key,
self.validity_start_key,
self.validity_end_key
):
self.unmet()
class X509DNPredicate(X509Predicate):
"""
Represents a predicate that evaluates a distinguished name encoded in a
OpenSSL X.509 DN string. It evaluates according to the properties
specified.
"""
def __init__(self, common_name=None, organization=None,
organizational_unit=None, country=None,
state=None, locality=None, environ_key=None, **kwargs):
"""
:param common_name: The common name of the distinguished name.
:param organization: The organization of the distinguished name.
:param organizational_unit: The organization unit of the distinguished
name.
:param country: ISO-3166-1 alpha-2 encoding of the country of the
distinguished name.
:param state: The state within the country of the distinguished name.
:param locality: The locality or city of the distinguished name.
:param environ_key: The WSGI environment key of where the distinguished
name is located.
:param kwargs: You can specify a custom attribute type. The name of the
key will count as the type, and the value is what is going to be
checked against.
:raise ValueError: When you don't specify at least one value for the
parameters, including any custom one; or, when you don't specify an
``environ_key``.
"""
if common_name is None and organizational_unit is None and \
organization is None and country is None and state is None and \
locality is None and len(kwargs) == 0:
raise ValueError(('At least one of common_name, organizational_unit,'
' organization, country, state, locality, or one '
'custom parameter must have a value'))
super(X509DNPredicate, self).__init__(**kwargs)
field_and_values = (
('O', organization, 'organization'),
('CN', common_name, 'common_name'),
('OU', organizational_unit, 'organizational_unit'),
('C', country, 'country'),
('ST', state, 'state'),
('L', locality, 'locality')
)
self.log = kwargs.get('log')
self._prepare_dn_params_with_consistency(
field_and_values,
kwargs
)
if environ_key is None or len(environ_key) == 0:
raise ValueError('This predicate requires a WSGI environ key')
self.environ_key = environ_key
def _prepare_dn_params_with_consistency(self, check_params, kwargs):
# We prefer common_name over CN, for example
# It receives a 3-tuple:
# * The DN attribute type
# * The value of the constructor parameter
# * The name of the constructor parameter
self.dn_params = []
for param in check_params:
if param[0] in kwargs and param[1] is not None:
self.log and self.log.warn(
'Choosing %s over "%s"' % (param[0], param[1])
)
del kwargs[param[0]]
if param[1] is not None:
self.dn_params.append((param[0], param[1]))
for param in ('validity_start_key', 'validity_end_key', 'verify_key'):
try:
del kwargs[param]
            except KeyError:
pass
self.dn_params.extend(kwargs.iteritems())
def evaluate(self, environ, credentials):
"""
Evaluates a distinguished name or the server variables that represents
it, already parsed. First it checks for the server variables, and then
it tries to parse the distinguished name. See the documentation for
more information.
:param environ: The WSGI environment.
:param credentials: The user credentials. This parameter is not used.
:raise NotAuthorizedError: When the evaluation fails.
"""
super(X509DNPredicate, self).evaluate(environ, credentials)
# First let's try with Apache-like server variables, and last rely on
# the parsing of the DN itself.
try:
for suffix, value in self.dn_params:
self._check_server_variable(environ, '_' + suffix, value)
except KeyError:
pass
else:
# Every environ variable is valid
return
dn = environ.get(self.environ_key)
if dn is None:
self.unmet()
try:
parsed_dn = parse_dn(dn)
        except Exception:
self.unmet()
try:
for key, value in self.dn_params:
self._check_parsed_dict(parsed_dn, key, value)
except KeyError:
self.unmet()
def _check_parsed_dict(self, parsed, key, value):
parsed_value = parsed[key]
if isinstance(value, list) or isinstance(value, tuple):
for v in value:
if v not in parsed_value:
self.unmet()
elif value not in parsed_value:
self.unmet()
def _check_server_variable(self, environ, suffix, value):
key = self.environ_key + suffix
if isinstance(value, list) or isinstance(value, tuple):
environ_values = []
for n in range(len(value)):
environ_values.append(environ[key + '_' + str(n)])
for v in value:
if v not in environ_values:
self.unmet()
elif environ[key] != value:
self.unmet()
class is_issuer(X509DNPredicate):
"""
Represents a predicate that evaluates the issuer distinguished name.
"""
ISSUER_KEY_DN = 'SSL_CLIENT_I_DN'
message = 'Invalid SSL client issuer.'
def __init__(self, common_name=None, organization=None,
organizational_unit=None, country=None, state=None,
locality=None, issuer_key=None, **kwargs):
super(is_issuer, self).__init__(
common_name,
organization,
organizational_unit,
country,
state,
locality,
issuer_key or self.ISSUER_KEY_DN,
**kwargs
)
class is_subject(X509DNPredicate):
"""
    Represents a predicate that evaluates the subject distinguished name.
"""
SUBJECT_KEY_DN = 'SSL_CLIENT_S_DN'
message = 'Invalid SSL client subject.'
def __init__(self, common_name=None, organization=None,
organizational_unit=None, country=None, state=None,
locality=None, subject_key=None, **kwargs):
super(is_subject, self).__init__(
common_name,
organization,
organizational_unit,
country,
state,
locality,
subject_key or self.SUBJECT_KEY_DN,
**kwargs
        )
import os
from zope.interface import implements
from repoze.who.plugins.testutil import make_middleware
from repoze.who.classifiers import default_challenge_decider, \
default_request_classifier
from repoze.who.interfaces import IAuthenticator, IMetadataProvider
__all__ = ['AuthorizationMetadata', 'setup_auth']
class AuthorizationMetadata(object):
"""
repoze.who metadata provider to load groups and permissions data for
the current user.
    There's no need to include this class in the end-user documentation,
    as end users should never need it directly; it's only used by
    :func:`setup_auth`.
"""
implements(IMetadataProvider)
def __init__(self, group_adapters=None, permission_adapters=None):
"""
Fetch the groups and permissions of the authenticated user.
:param group_adapters: Set of adapters that retrieve the known groups
of the application, each identified by a keyword.
:type group_adapters: dict
:param permission_adapters: Set of adapters that retrieve the
permissions for the groups, each identified by a keyword.
:type permission_adapters: dict
"""
self.group_adapters = group_adapters
self.permission_adapters = permission_adapters
def _find_groups(self, identity):
"""
Return the groups to which the authenticated user belongs, as well as
the permissions granted to such groups.
"""
groups = set()
permissions = set()
if self.group_adapters is not None:
# repoze.what-2.X group adapters expect to find the
# 'repoze.what.userid' key in the credentials
credentials = identity.copy()
credentials['repoze.what.userid'] = identity['repoze.who.userid']
# It's using groups/permissions-based authorization
for grp_fetcher in self.group_adapters.values():
groups |= set(grp_fetcher.find_sections(credentials))
for group in groups:
for perm_fetcher in self.permission_adapters.values():
permissions |= set(perm_fetcher.find_sections(group))
return tuple(groups), tuple(permissions)
# IMetadataProvider
def add_metadata(self, environ, identity):
"""
Load the groups and permissions of the authenticated user.
It will load such data into the :mod:`repoze.who` ``identity`` and
the :mod:`repoze.what` ``credentials`` dictionaries.
:param environ: The WSGI environment.
:param identity: The :mod:`repoze.who`'s ``identity`` dictionary.
"""
logger = environ.get('repoze.who.logger')
# Finding the groups and permissions:
groups, permissions = self._find_groups(identity)
identity['groups'] = groups
identity['permissions'] = permissions
# Adding the groups and permissions to the repoze.what credentials for
# forward compatibility:
if 'repoze.what.credentials' not in environ:
environ['repoze.what.credentials'] = {}
environ['repoze.what.credentials']['groups'] = groups
environ['repoze.what.credentials']['permissions'] = permissions
# Adding the userid:
userid = identity['repoze.who.userid']
environ['repoze.what.credentials']['repoze.what.userid'] = userid
# Adding the adapters:
environ['repoze.what.adapters'] = {
'groups': self.group_adapters,
'permissions': self.permission_adapters
}
# Logging
logger and logger.info('User belongs to the following groups: %s' %
str(groups))
logger and logger.info('User has the following permissions: %s' %
str(permissions))
def setup_auth(app, group_adapters=None, permission_adapters=None, **who_args):
"""
Setup :mod:`repoze.who` with :mod:`repoze.what` support.
:param app: The WSGI application object.
:param group_adapters: The group source adapters to be used.
:type group_adapters: dict
:param permission_adapters: The permission source adapters to be used.
:type permission_adapters: dict
:param who_args: Authentication-related keyword arguments to be passed to
:mod:`repoze.who`.
:return: The WSGI application with authentication and authorization
middleware.
.. tip::
If you are looking for an easier way to get started, you may want to
use :mod:`the quickstart plugin <repoze.what.plugins.quickstart>` and
its :func:`setup_sql_auth()
<repoze.what.plugins.quickstart.setup_sql_auth>` function.
You must define the ``group_adapters`` and ``permission_adapters``
keyword arguments if you want to use the groups/permissions-based
authorization pattern.
Additional keyword arguments will be passed to
:func:`repoze.who.plugins.testutil.make_middleware` -- and
among those keyword arguments, you *must* define at least the identifier(s),
authenticator(s) and challenger(s) to be used. For example::
from repoze.who.plugins.basicauth import BasicAuthPlugin
from repoze.who.plugins.htpasswd import HTPasswdPlugin, crypt_check
from repoze.what.middleware import setup_auth
from repoze.what.plugins.xml import XMLGroupsAdapter
from repoze.what.plugins.ini import INIPermissionAdapter
        # Defining the group adapters; you may add as many as you need:
groups = {'all_groups': XMLGroupsAdapter('/path/to/groups.xml')}
        # Defining the permission adapters; you may add as many as you need:
permissions = {'all_perms': INIPermissionAdapter('/path/to/perms.ini')}
        # repoze.who identifiers; you may add as many as you need:
basicauth = BasicAuthPlugin('Private web site')
identifiers = [('basicauth', basicauth)]
        # repoze.who authenticators; you may add as many as you need:
htpasswd_auth = HTPasswdPlugin('/path/to/users.htpasswd', crypt_check)
authenticators = [('htpasswd', htpasswd_auth)]
        # repoze.who challengers; you may add as many as you need:
challengers = [('basicauth', basicauth)]
app_with_auth = setup_auth(
app,
groups,
permissions,
identifiers=identifiers,
authenticators=authenticators,
challengers=challengers)
.. attention::
Keep in mind that :mod:`repoze.who` must be configured `through`
:mod:`repoze.what` for authorization to work.
.. note::
If you want to skip authentication while testing your application,
you should pass the ``skip_authentication`` keyword argument with a
value that evaluates to ``True``.
.. versionchanged:: 1.0.5
:class:`repoze.who.middleware.PluggableAuthenticationMiddleware`
replaced with :func:`repoze.who.plugins.testutil.make_middleware`
internally.
"""
authorization = AuthorizationMetadata(group_adapters,
permission_adapters)
if 'mdproviders' not in who_args:
who_args['mdproviders'] = []
who_args['mdproviders'].append(('authorization_md', authorization))
if 'classifier' not in who_args:
who_args['classifier'] = default_request_classifier
if 'challenge_decider' not in who_args:
who_args['challenge_decider'] = default_challenge_decider
auth_log = os.environ.get('AUTH_LOG', '') == '1'
if auth_log:
import sys
who_args['log_stream'] = sys.stdout
skip_authn = who_args.pop('skip_authentication', False)
middleware = make_middleware(skip_authn, app, **who_args)
    return middleware
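The group/permission aggregation performed by
``AuthorizationMetadata._find_groups`` can be sketched with hypothetical
in-memory adapters. The ``DictAdapter`` class and its data below are made up
for illustration; only the ``find_sections`` calls mirror the adapter
protocol used in the code above:

```python
# Hypothetical in-memory adapters illustrating how groups and permissions
# are aggregated; not the real repoze.what adapter classes.
class DictAdapter(object):
    def __init__(self, sections):
        self.sections = sections  # section name -> set of member names

    def find_sections(self, credential):
        # Group adapters receive the credentials dict; permission adapters
        # receive a group name. Handle both, as _find_groups does.
        member = (credential['repoze.what.userid']
                  if isinstance(credential, dict) else credential)
        return set(name for name, members in self.sections.items()
                   if member in members)

group_adapters = {'all': DictAdapter({'admins': {'gustavo'},
                                      'devs': {'gustavo', 'rms'}})}
permission_adapters = {'all': DictAdapter({'edit-site': {'admins'},
                                           'commit': {'devs'}})}

credentials = {'repoze.what.userid': 'gustavo'}
groups = set()
for adapter in group_adapters.values():
    groups |= set(adapter.find_sections(credentials))
permissions = set()
for group in groups:
    for adapter in permission_adapters.values():
        permissions |= set(adapter.find_sections(group))
# groups == {'admins', 'devs'}; permissions == {'edit-site', 'commit'}
```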
from paste.request import parse_formvars, parse_dict_querystring
__all__ = ['Predicate', 'CompoundPredicate', 'All', 'Any',
'has_all_permissions', 'has_any_permission', 'has_permission',
'in_all_groups', 'in_any_group', 'in_group', 'is_user',
'is_anonymous', 'not_anonymous', 'PredicateError',
'NotAuthorizedError']
#{ Predicates
class Predicate(object):
"""
Generic predicate checker.
This is the base predicate class. It won't do anything useful for you,
unless you subclass it.
"""
def __init__(self, msg=None):
"""
Create a predicate and use ``msg`` as the error message if it fails.
:param msg: The error message, if you want to override the default one
defined by the predicate.
:type msg: str
You may use the ``msg`` keyword argument with any predicate.
"""
if msg:
self.message = msg
def evaluate(self, environ, credentials):
"""
Raise an exception if the predicate is not met.
:param environ: The WSGI environment.
:type environ: dict
:param credentials: The :mod:`repoze.what` ``credentials`` dictionary
as a short-cut.
:type credentials: dict
:raise NotImplementedError: When the predicate doesn't define this
method.
:raises NotAuthorizedError: If the predicate is not met (use
:meth:`unmet` to raise it).
This is the method that must be overridden by any predicate checker.
For example, if your predicate is "The current month is the specified
one", you may define the following predicate checker::
from datetime import date
from repoze.what.predicates import Predicate
class is_month(Predicate):
message = 'The current month must be %(right_month)s'
def __init__(self, right_month, **kwargs):
self.right_month = right_month
super(is_month, self).__init__(**kwargs)
def evaluate(self, environ, credentials):
today = date.today()
if today.month != self.right_month:
# Raise an exception because the predicate is not met.
self.unmet()
.. versionadded:: 1.0.2
.. attention::
Do not evaluate predicates by yourself using this method. See
:meth:`check_authorization` and :meth:`is_met`.
.. warning::
To make your predicates thread-safe, keep in mind that they may
be instantiated at module-level and then shared among many threads,
so avoid predicates from being modified after their evaluation.
This is, the ``evaluate()`` method should not add, modify or
delete any attribute of the predicate.
"""
self.eval_with_environ(environ)
def _eval_with_environ(self, environ):
"""
Check whether the predicate is met.
:param environ: The WSGI environment.
:type environ: dict
:return: Whether the predicate is met or not.
:rtype: bool
:raise NotImplementedError: This must be defined by the predicate
itself.
.. deprecated:: 1.0.2
Only :meth:`evaluate` will be used as of :mod:`repoze.what` v2.
"""
raise NotImplementedError
def eval_with_environ(self, environ):
"""
Make sure this predicate is met.
:param environ: The WSGI environment.
:raises NotAuthorizedError: If the predicate is not met.
.. versionchanged:: 1.0.1
In :mod:`repoze.what`<1.0.1, this method returned a ``bool`` and
set the ``error`` instance attribute of the predicate to the
predicate message.
.. deprecated:: 1.0.2
Define :meth:`evaluate` instead.
"""
from warnings import warn
msg = 'Predicate._eval_with_environ(environ) is deprecated ' \
'for forward compatibility with repoze.what v2; define ' \
'Predicate.evaluate(environ, credentials) instead'
warn(msg, DeprecationWarning, stacklevel=2)
if not self._eval_with_environ(environ):
self.unmet()
def unmet(self, msg=None, **placeholders):
"""
Raise an exception because this predicate is not met.
:param msg: The error message to be used; overrides the predicate's
default one.
:type msg: str
:raises NotAuthorizedError: If the predicate is not met.
``placeholders`` represent the placeholders for the predicate message.
The predicate's attributes will also be taken into account while
creating the message with its placeholders.
For example, if you have a predicate that checks that the current
month is the specified one, where the predicate message is defined with
two placeholders as in::
The current month must be %(right_month)s and it is %(this_month)s
and the predicate has an attribute called ``right_month`` which
represents the expected month, then you can use this method as in::
self.unmet(this_month=this_month)
Then :meth:`unmet` will build the message using the ``this_month``
keyword argument and the ``right_month`` attribute as the placeholders
for ``this_month`` and ``right_month``, respectively. So, if
``this_month`` equals ``3`` and ``right_month`` equals ``5``,
the message for the exception to be raised will be::
The current month must be 5 and it is 3
If you have a context-sensitive predicate checker and thus you want
to change the error message on evaluation, you can call :meth:`unmet`
as::
self.unmet('%(this_month)s is not a good month',
this_month=this_month)
The exception raised would contain the following message::
3 is not a good month
.. versionadded:: 1.0.2
.. versionchanged:: 1.0.4
Introduced the ``msg`` argument.
.. attention::
This method should only be called from :meth:`evaluate`.
"""
if msg:
message = msg
else:
message = self.message
# Let's convert it into unicode because it may be just a class, as a
# Pylons' "lazy" translation message:
message = unicode(message)
# Include the predicate attributes in the placeholders:
all_placeholders = self.__dict__.copy()
all_placeholders.update(placeholders)
raise NotAuthorizedError(message % all_placeholders)
def check_authorization(self, environ):
"""
Evaluate the predicate and raise an exception if it's not met.
:param environ: The WSGI environment.
:raise NotAuthorizedError: If it the predicate is not met.
Example::
>>> from repoze.what.predicates import is_user
>>> environ = gimme_the_environ()
>>> p = is_user('gustavo')
>>> p.check_authorization(environ)
# ...
repoze.what.predicates.NotAuthorizedError: The current user must be "gustavo"
.. versionadded:: 1.0.4
Backported from :mod:`repoze.what` v2; deprecates
:func:`repoze.what.authorize.check_authorization`.
"""
logger = environ.get('repoze.who.logger')
credentials = environ.get('repoze.what.credentials', {})
try:
self.evaluate(environ, credentials)
except NotAuthorizedError, error:
logger and logger.info(u'Authorization denied: %s' % error)
raise
logger and logger.info('Authorization granted')
def is_met(self, environ):
"""
Find whether the predicate is met or not.
:param environ: The WSGI environment.
:return: Whether the predicate is met or not.
:rtype: bool
Example::
>>> from repoze.what.predicates import is_user
>>> environ = gimme_the_environ()
>>> p = is_user('gustavo')
>>> p.is_met(environ)
False
.. versionadded:: 1.0.4
Backported from :mod:`repoze.what` v2.
"""
credentials = environ.get('repoze.what.credentials', {})
try:
self.evaluate(environ, credentials)
return True
except NotAuthorizedError, error:
return False
def parse_variables(self, environ):
"""
Return the GET and POST variables in the request, as well as
``wsgiorg.routing_args`` arguments.
:param environ: The WSGI environ.
:return: The GET and POST variables and ``wsgiorg.routing_args``
arguments.
:rtype: dict
This is a handy method for request-sensitive predicate checkers.
It will return a dictionary for the POST and GET variables, as well as
the `wsgiorg.routing_args
<http://www.wsgi.org/wsgi/Specifications/routing_args>`_'s
``positional_args`` and ``named_args`` arguments, in the ``post``,
``get``, ``positional_args`` and ``named_args`` items (respectively) of
the returned dictionary.
For example, if the user submits a form using the POST method to
``http://example.com/blog/hello-world/edit_post?wysiwyg_editor=Yes``,
this method may return::
{
'post': {'new_post_contents': 'These are the new contents'},
'get': {'wysiwyg_editor': 'Yes'},
'named_args': {'post_slug': 'hello-world'},
'positional_args': (),
}
But note that the ``named_args`` and ``positional_args`` items depend
completely on how you configured the dispatcher.
.. versionadded:: 1.0.4
"""
get_vars = parse_dict_querystring(environ) or {}
try:
post_vars = parse_formvars(environ, False) or {}
except KeyError:
post_vars = {}
routing_args = environ.get('wsgiorg.routing_args', ([], {}))
positional_args = routing_args[0] or ()
named_args = routing_args[1] or {}
variables = {
'post': post_vars,
'get': get_vars,
'positional_args': positional_args,
'named_args': named_args}
return variables
class CompoundPredicate(Predicate):
"""A predicate composed of other predicates."""
def __init__(self, *predicates, **kwargs):
super(CompoundPredicate, self).__init__(**kwargs)
self.predicates = predicates
class Not(Predicate):
"""
Negate the specified predicate.
:param predicate: The predicate to be negated.
Example::
# The user *must* be anonymous:
p = Not(not_anonymous())
"""
message = u"The condition must not be met"
def __init__(self, predicate, **kwargs):
super(Not, self).__init__(**kwargs)
self.predicate = predicate
def evaluate(self, environ, credentials):
try:
self.predicate.evaluate(environ, credentials)
        except NotAuthorizedError:
            return
self.unmet()
class All(CompoundPredicate):
"""
Check that all of the specified predicates are met.
:param predicates: All of the predicates that must be met.
Example::
# Grant access if the current month is July and the user belongs to
# the human resources group.
p = All(is_month(7), in_group('hr'))
"""
def evaluate(self, environ, credentials):
"""
Evaluate all the predicates it contains.
:param environ: The WSGI environment.
:param credentials: The :mod:`repoze.what` ``credentials``.
:raises NotAuthorizedError: If one of the predicates is not met.
"""
for p in self.predicates:
p.evaluate(environ, credentials)
class Any(CompoundPredicate):
"""
Check that at least one of the specified predicates is met.
:param predicates: Any of the predicates that must be met.
Example::
        # Grant access if the current user is Richard Stallman or Linus
# Torvalds.
p = Any(is_user('rms'), is_user('linus'))
"""
message = u"At least one of the following predicates must be met: " \
"%(failed_predicates)s"
def evaluate(self, environ, credentials):
"""
Evaluate all the predicates it contains.
:param environ: The WSGI environment.
:param credentials: The :mod:`repoze.what` ``credentials``.
:raises NotAuthorizedError: If none of the predicates is met.
"""
errors = []
for p in self.predicates:
try:
p.evaluate(environ, credentials)
return
except NotAuthorizedError, exc:
errors.append(unicode(exc))
failed_predicates = ', '.join(errors)
self.unmet(failed_predicates=failed_predicates)
class is_user(Predicate):
"""
Check that the authenticated user's username is the specified one.
:param user_name: The required user name.
:type user_name: str
Example::
p = is_user('linus')
"""
message = u'The current user must be "%(user_name)s"'
def __init__(self, user_name, **kwargs):
super(is_user, self).__init__(**kwargs)
self.user_name = user_name
def evaluate(self, environ, credentials):
if credentials and \
self.user_name == credentials.get('repoze.what.userid'):
return
self.unmet()
class in_group(Predicate):
"""
Check that the user belongs to the specified group.
:param group_name: The name of the group to which the user must belong.
:type group_name: str
Example::
p = in_group('customers')
"""
message = u'The current user must belong to the group "%(group_name)s"'
def __init__(self, group_name, **kwargs):
super(in_group, self).__init__(**kwargs)
self.group_name = group_name
def evaluate(self, environ, credentials):
        if credentials and self.group_name in credentials.get('groups', ()):
return
self.unmet()
class in_all_groups(All):
"""
Check that the user belongs to all of the specified groups.
:param groups: The name of all the groups the user must belong to.
Example::
p = in_all_groups('developers', 'designers')
"""
def __init__(self, *groups, **kwargs):
group_predicates = [in_group(g) for g in groups]
super(in_all_groups,self).__init__(*group_predicates, **kwargs)
class in_any_group(Any):
"""
Check that the user belongs to at least one of the specified groups.
:param groups: The name of any of the groups the user may belong to.
Example::
p = in_any_group('directors', 'hr')
"""
message = u"The member must belong to at least one of the following " \
"groups: %(group_list)s"
def __init__(self, *groups, **kwargs):
self.group_list = ", ".join(groups)
group_predicates = [in_group(g) for g in groups]
super(in_any_group,self).__init__(*group_predicates, **kwargs)
class is_anonymous(Predicate):
"""
Check that the current user is anonymous.
Example::
# The user must be anonymous!
p = is_anonymous()
.. versionadded:: 1.0.7
"""
message = u"The current user must be anonymous"
def evaluate(self, environ, credentials):
if credentials:
self.unmet()
class not_anonymous(Predicate):
"""
Check that the current user has been authenticated.
Example::
# The user must have been authenticated!
p = not_anonymous()
"""
message = u"The current user must have been authenticated"
def evaluate(self, environ, credentials):
if not credentials:
self.unmet()
class has_permission(Predicate):
"""
Check that the current user has the specified permission.
:param permission_name: The name of the permission that must be granted to
the user.
Example::
p = has_permission('hire')
"""
message = u'The user must have the "%(permission_name)s" permission'
def __init__(self, permission_name, **kwargs):
super(has_permission, self).__init__(**kwargs)
self.permission_name = permission_name
def evaluate(self, environ, credentials):
if credentials and \
                self.permission_name in credentials.get('permissions', ()):
return
self.unmet()
class has_all_permissions(All):
"""
Check that the current user has been granted all of the specified
permissions.
:param permissions: The names of all the permissions that must be
granted to the user.
Example::
p = has_all_permissions('view-users', 'edit-users')
"""
def __init__(self, *permissions, **kwargs):
permission_predicates = [has_permission(p) for p in permissions]
super(has_all_permissions, self).__init__(*permission_predicates,
**kwargs)
class has_any_permission(Any):
"""
Check that the user has at least one of the specified permissions.
:param permissions: The names of any of the permissions that have to be
granted to the user.
Example::
p = has_any_permission('manage-users', 'edit-users')
"""
message = u"The user must have at least one of the following " \
"permissions: %(permission_list)s"
def __init__(self, *permissions, **kwargs):
self.permission_list = ", ".join(permissions)
permission_predicates = [has_permission(p) for p in permissions]
super(has_any_permission,self).__init__(*permission_predicates,
**kwargs)
#{ Exceptions
class PredicateError(Exception):
"""
Former exception raised by a :class:`Predicate` if it's not met.
.. deprecated:: 1.0.4
Deprecated in favor of :class:`NotAuthorizedError`, for forward
compatibility with :mod:`repoze.what` v2.
"""
# Ugly workaround for Python < 2.6:
if not hasattr(Exception, '__unicode__'):
def __unicode__(self):
return unicode(self.args and self.args[0] or '')
class NotAuthorizedError(PredicateError):
"""
Exception raised by :meth:`Predicate.check_authorization` if the subject
is not allowed to access the requested source.
This exception deprecates :class:`PredicateError` as of v1.0.4, but
extends it to avoid breaking backwards compatibility.
.. versionchanged:: 1.0.4
This exception was defined at :mod:`repoze.what.authorize` until
version 1.0.3, but is still imported into that module to keep backwards
compatibility with v1.X releases -- but it won't work in
:mod:`repoze.what` v2.
"""
pass
#}
# (source file: repoze.what-1.0.9/repoze/what/predicates.py)
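The `Not`/`All`/`Any` combinators above all follow the same try/except protocol: `evaluate()` either returns or raises. A minimal, self-contained sketch of that pattern (the class and exception names here are illustrative stand-ins, not the repoze.what API):

```python
# Illustrative sketch of the predicate-combinator pattern; names are
# made up for this example and do not come from repoze.what.

class NotAuthorized(Exception):
    """Raised by a predicate when it is not met."""

class Pred:
    def evaluate(self, creds):
        raise NotImplementedError

    def is_met(self, creds):
        # Same shape as Predicate.is_met above: met == no exception.
        try:
            self.evaluate(creds)
            return True
        except NotAuthorized:
            return False

class IsUser(Pred):
    def __init__(self, name):
        self.name = name

    def evaluate(self, creds):
        if creds.get('userid') != self.name:
            raise NotAuthorized('user must be %s' % self.name)

class AnyOf(Pred):
    """Met if at least one child predicate is met (like Any above)."""
    def __init__(self, *preds):
        self.preds = preds

    def evaluate(self, creds):
        errors = []
        for p in self.preds:
            try:
                p.evaluate(creds)
                return  # first success wins
            except NotAuthorized as exc:
                errors.append(str(exc))
        raise NotAuthorized('; '.join(errors))

p = AnyOf(IsUser('rms'), IsUser('linus'))
print(p.is_met({'userid': 'linus'}))   # True
print(p.is_met({'userid': 'guido'}))   # False
```

The key design choice, mirrored from the code above, is that failure is an exception rather than a return value, which lets compound predicates short-circuit and collect error messages.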
from zope.interface import Interface
__all__ = ['BaseSourceAdapter', 'AdapterError', 'SourceError',
'ExistingSectionError', 'NonExistingSectionError',
'ItemPresentError', 'ItemNotPresentError']
class BaseSourceAdapter(object):
"""
Base class for :term:`source adapters <source adapter>`.
Please note that these abstract methods may only raise one exception:
:class:`SourceError`, which is raised if there was a problem while dealing
with the source. They may not raise other exceptions because they should not
validate anything but the source (not even the parameters they get).
.. attribute:: is_writable = True
:type: bool
Whether the adapter can write to the source.
        If the source type handled by your adapter doesn't support write
        access, or if your adapter itself doesn't support writing to the
        source (yet), then you should set this value to ``False`` in the class
        itself; it will get overridden if the ``writable`` parameter in
        :meth:`the constructor <BaseSourceAdapter.__init__>` is set, unless you
explicitly disable that parameter::
# ...
class MyFakeAdapter(BaseSourceAdapter):
def __init__():
super(MyFakeAdapter, self).__init__(writable=False)
# ...
.. note::
If it's ``False``, then you don't have to define the methods that
modify the source because they won't be used:
* :meth:`_include_items`
* :meth:`_exclude_items`
* :meth:`_create_section`
* :meth:`_edit_section`
* :meth:`_delete_section`
.. warning::
Do not ever cache the results -- that is :class:`BaseSourceAdapter`'s
job. It requests a given datum once, not multiple times, thanks to
its internal cache.
"""
def __init__(self, writable=True):
"""
Run common setup for source adapters.
:param writable: Whether the source is writable.
:type writable: bool
"""
# The cache for the sections loaded by the source adapter.
self.loaded_sections = {}
# Whether all of the existing items have been loaded
self.all_sections_loaded = False
# Whether the current source is writable:
self.is_writable = writable
def get_all_sections(self):
"""
Return all the sections found in the source.
:return: All the sections found in the source.
:rtype: dict
:raise SourceError: If there was a problem with the source.
"""
if not self.all_sections_loaded:
self.loaded_sections = self._get_all_sections()
self.all_sections_loaded = True
return self.loaded_sections
def get_section_items(self, section):
"""
Return the properties of ``section``.
:param section: The name of the section to be fetched.
:type section: unicode
:return: The items of the ``section``.
:rtype: tuple
:raise NonExistingSectionError: If the requested section doesn't exist.
:raise SourceError: If there was a problem with the source.
"""
if section not in self.loaded_sections:
self._check_section_existence(section)
# It does exist; let's load it:
self.loaded_sections[section] = self._get_section_items(section)
return self.loaded_sections[section]
def set_section_items(self, section, items):
"""
Set ``items`` as the only items of the ``section``.
:raise NonExistingSectionError: If the section doesn't exist.
:raise SourceError: If there was a problem with the source.
"""
old_items = self.get_section_items(section)
items = set(items)
# Finding what was added and what was removed:
added = set((i for i in items if i not in old_items))
removed = set((i for i in old_items if i not in items))
# Removing/adding as requested. We're removing first to avoid
# increasing the size of the source more than required.
self.exclude_items(section, removed)
self.include_items(section, added)
# The cache must have been updated by the two methods above.
def find_sections(self, hint):
"""
Return the sections that meet a given criteria.
:param hint: repoze.what's credentials dictionary or a group name.
:type hint: dict or unicode
:return: The sections that meet the criteria.
:rtype: tuple
:raise SourceError: If there was a problem with the source.
"""
return self._find_sections(hint)
def include_item(self, section, item):
"""
Include ``item`` in ``section``.
This is the individual (non-bulk) edition of :meth:`include_items`.
:param section: The ``section`` to contain the ``item``.
:type section: unicode
:param item: The new ``item`` of the ``section``.
:type item: tuple
:raise NonExistingSectionError: If the ``section`` doesn't exist.
:raise ItemPresentError: If the ``item`` is already included.
:raise SourceError: If there was a problem with the source.
"""
self.include_items(section, (item, ))
def include_items(self, section, items):
"""
Include ``items`` in ``section``.
This is the bulk edition of :meth:`include_item`.
:param section: The ``section`` to contain the ``items``.
:type section: unicode
:param items: The new ``items`` of the ``section``.
:type items: tuple
:raise NonExistingSectionError: If the ``section`` doesn't exist.
:raise ItemPresentError: If at least one of the items is already
present.
:raise SourceError: If there was a problem with the source.
"""
# Verifying that the section exists and doesn't already contain the
# items:
self._check_section_existence(section)
for i in items:
self._confirm_item_not_present(section, i)
# Verifying write permissions:
self._check_writable()
# Everything's OK, let's add it:
items = set(items)
self._include_items(section, items)
# Updating the cache, if necessary:
if section in self.loaded_sections:
self.loaded_sections[section] |= items
def exclude_item(self, section, item):
"""
Exclude ``item`` from ``section``.
This is the individual (non-bulk) edition of :meth:`exclude_items`.
:param section: The ``section`` that contains the ``item``.
:type section: unicode
:param item: The ``item`` to be removed from ``section``.
:type item: tuple
:raise NonExistingSectionError: If the ``section`` doesn't exist.
:raise ItemNotPresentError: If the item is not included in the section.
:raise SourceError: If there was a problem with the source.
"""
self.exclude_items(section, (item, ))
def exclude_items(self, section, items):
"""
Exclude items from section.
This is the bulk edition of :meth:`exclude_item`.
:param section: The ``section`` that contains the ``items``.
:type section: unicode
:param items: The ``items`` to be removed from ``section``.
:type items: tuple
:raise NonExistingSectionError: If the ``section`` doesn't exist.
:raise ItemNotPresentError: If at least one of the items is not
included in the section.
:raise SourceError: If there was a problem with the source.
"""
# Verifying that the section exists and already contains the items:
self._check_section_existence(section)
for i in items:
self._confirm_item_is_present(section, i)
# Verifying write permissions:
self._check_writable()
# Everything's OK, let's remove them:
items = set(items)
self._exclude_items(section, items)
# Updating the cache, if necessary:
if section in self.loaded_sections:
self.loaded_sections[section] -= items
def create_section(self, section):
"""
Add ``section`` to the source.
:param section: The section name.
:type section: unicode
:raise ExistingSectionError: If the section name is already in use.
:raise SourceError: If there was a problem with the source.
"""
self._check_section_not_existence(section)
self._check_writable()
self._create_section(section)
# Adding to the cache:
self.loaded_sections[section] = set()
def edit_section(self, section, new_section):
"""
Edit ``section``'s properties.
:param section: The current name of the section.
:type section: unicode
:param new_section: The new name of the section.
:type new_section: unicode
:raise NonExistingSectionError: If the section doesn't exist.
:raise SourceError: If there was a problem with the source.
"""
self._check_section_existence(section)
self._check_writable()
self._edit_section(section, new_section)
# Updating the cache too, if loaded:
if section in self.loaded_sections:
self.loaded_sections[new_section] = self.loaded_sections[section]
del self.loaded_sections[section]
def delete_section(self, section):
"""
Delete ``section``.
It removes the ``section`` from the source.
:param section: The name of the section to be deleted.
:type section: unicode
:raise NonExistingSectionError: If the section in question doesn't
exist.
:raise SourceError: If there was a problem with the source.
"""
self._check_section_existence(section)
self._check_writable()
self._delete_section(section)
# Removing from the cache too, if loaded:
if section in self.loaded_sections:
del self.loaded_sections[section]
def _check_writable(self):
"""
Raise an exception if the source is not writable.
:raise SourceError: If the source is not writable.
"""
if not self.is_writable:
raise SourceError('The source is not writable')
def _check_section_existence(self, section):
"""
Raise an exception if ``section`` is not defined in the source.
:param section: The name of the section to look for.
:type section: unicode
:raise NonExistingSectionError: If the section is not defined.
:raise SourceError: If there was a problem with the source.
"""
if not self._section_exists(section):
msg = u'Section "%s" is not defined in the source' % section
raise NonExistingSectionError(msg)
def _check_section_not_existence(self, section):
"""
Raise an exception if ``section`` is defined in the source.
:param section: The name of the section to look for.
:type section: unicode
:raise ExistingSectionError: If the section is defined.
:raise SourceError: If there was a problem with the source.
"""
if self._section_exists(section):
msg = u'Section "%s" is already defined in the source' % section
raise ExistingSectionError(msg)
def _confirm_item_is_present(self, section, item):
"""
Raise an exception if ``section`` doesn't contain ``item``.
:param section: The name of the section that may contain the item.
:type section: unicode
:param item: The name of the item to look for.
:type item: unicode
:raise NonExistingSectionError: If the section doesn't exist.
:raise ItemNotPresentError: If the item is not included.
:raise SourceError: If there was a problem with the source.
"""
self._check_section_existence(section)
if not self._item_is_included(section, item):
msg = u'Item "%s" is not defined in section "%s"' % (item, section)
raise ItemNotPresentError(msg)
def _confirm_item_not_present(self, section, item):
"""
Raise an exception if ``section`` already contains ``item``.
:param section: The name of the section that may contain the item.
:type section: unicode
:param item: The name of the item to look for.
:type item: unicode
:raise NonExistingSectionError: If the section doesn't exist.
:raise ItemPresentError: If the item is already included.
:raise SourceError: If there was a problem with the source.
"""
self._check_section_existence(section)
if self._item_is_included(section, item):
msg = u'Item "%s" is already defined in section "%s"' % (item,
section)
raise ItemPresentError(msg)
#{ Abstract methods
def _get_all_sections(self):
"""
Return all the sections found in the source.
:return: All the sections found in the source.
:rtype: dict
:raise SourceError: If there was a problem with the source while
retrieving the sections.
"""
raise NotImplementedError()
def _get_section_items(self, section):
"""
Return the items of the section called ``section``.
:param section: The name of the section to be fetched.
:type section: unicode
:return: The items of the section.
:rtype: set
:raise SourceError: If there was a problem with the source while
retrieving the section.
.. attention::
When implementing this method, don't check whether the
section really exists; that's already done when this method is
called.
"""
raise NotImplementedError()
def _find_sections(self, hint):
"""
Return the sections that meet a given criteria.
:param hint: repoze.what's credentials dictionary or a group name.
:type hint: dict or unicode
:return: The sections that meet the criteria.
:rtype: tuple
:raise SourceError: If there was a problem with the source while
retrieving the sections.
This method depends on the type of adapter that is implementing it:
* If it's a ``group`` source adapter, it returns the groups the
authenticated user belongs to. In this case, hint represents
          repoze.what's credentials dict. Please note that hint is not a
          user name because some adapters may need something else to find the
groups the authenticated user belongs to. For example, LDAP adapters
need the full Distinguished Name (DN) in the credentials dict, or a
given adapter may only need the email address, so the user name alone
would be useless in both situations.
* If it's a ``permission`` source adapter, it returns the name of the
permissions granted to the group in question; here hint represents
the name of such a group.
"""
raise NotImplementedError()
def _include_items(self, section, items):
"""
Add ``items`` to the ``section``, in the source.
:param section: The section to contain the items.
:type section: unicode
:param items: The new items of the section.
:type items: tuple
:raise SourceError: If the items could not be added to the section.
.. attention::
When implementing this method, don't check whether the
section really exists or the items are already included; that's
already done when this method is called.
"""
raise NotImplementedError()
def _exclude_items(self, section, items):
"""
Remove ``items`` from the ``section``, in the source.
:param section: The section that contains the items.
:type section: unicode
:param items: The items to be removed from section.
:type items: tuple
:raise SourceError: If the items could not be removed from the section.
.. attention::
When implementing this method, don't check whether the
section really exists or the items are already included; that's
already done when this method is called.
"""
raise NotImplementedError()
def _item_is_included(self, section, item):
"""
Check whether ``item`` is included in ``section``.
:param section: The name of the item to look for.
:type section: unicode
:param section: The name of the section that may include the item.
:type section: unicode
:return: Whether the item is included in section or not.
:rtype: bool
:raise SourceError: If there was a problem with the source.
.. attention::
When implementing this method, don't check whether the
section really exists; that's already done when this method is
called.
"""
raise NotImplementedError()
def _create_section(self, section):
"""
Add ``section`` to the source.
:param section: The section name.
:type section: unicode
:raise SourceError: If the section could not be added.
.. attention::
When implementing this method, don't check whether the
section already exists; that's already done when this method is
called.
"""
raise NotImplementedError()
def _edit_section(self, section, new_section):
"""
Edit ``section``'s properties.
:param section: The current name of the section.
:type section: unicode
:param new_section: The new name of the section.
:type new_section: unicode
:raise SourceError: If the section could not be edited.
.. attention::
When implementing this method, don't check whether the
section really exists; that's already done when this method is
called.
"""
raise NotImplementedError()
def _delete_section(self, section):
"""
Delete ``section``.
It removes the ``section`` from the source.
:param section: The name of the section to be deleted.
:type section: unicode
:raise SourceError: If the section could not be deleted.
.. attention::
When implementing this method, don't check whether the
section really exists; that's already done when this method is
called.
"""
raise NotImplementedError()
def _section_exists(self, section):
"""
Check whether ``section`` is defined in the source.
:param section: The name of the section to check.
:type section: unicode
        :return: Whether the section is defined in the source or not.
:rtype: bool
:raise SourceError: If there was a problem with the source.
"""
raise NotImplementedError()
#}
#{ Exceptions
class AdapterError(Exception):
"""
Base exception for problems in the source adapters.
It's never raised directly.
"""
pass
class SourceError(AdapterError):
"""
Exception for problems with the source itself.
.. attention::
If you are creating a :term:`source adapter`, this is the only
exception you should raise.
"""
pass
class ExistingSectionError(AdapterError):
"""Exception raised when trying to add an existing group."""
pass
class NonExistingSectionError(AdapterError):
"""Exception raised when trying to use a non-existing group."""
pass
class ItemPresentError(AdapterError):
"""
Exception raised when trying to add an item to a group that already
contains it.
"""
pass
class ItemNotPresentError(AdapterError):
"""
Exception raised when trying to remove an item from a group that doesn't
contain it.
"""
pass
#}
# (source file: repoze.what-1.0.9/repoze/what/adapters/__init__.py)
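`BaseSourceAdapter`'s public methods wrap the abstract `_`-prefixed hooks with existence checks and a lazy per-section cache. A self-contained sketch of that caching contract, using a hypothetical read-only in-memory adapter (`FakeGroupAdapter` is made up for illustration and only reproduces the cache logic, not the full base class):

```python
class SourceError(Exception):
    """Stand-in for the adapter exception used above."""

class FakeGroupAdapter:
    """Read-only adapter over a plain dict, with the same lazy cache."""
    def __init__(self, data):
        self._data = data            # the "source"
        self.loaded_sections = {}    # cache, as in BaseSourceAdapter
        self.all_sections_loaded = False

    # --- stand-ins for the abstract methods --------------------------
    def _get_all_sections(self):
        return {k: set(v) for k, v in self._data.items()}

    def _get_section_items(self, section):
        return set(self._data[section])

    def _section_exists(self, section):
        return section in self._data

    # --- public API with caching, mirroring the base class -----------
    def get_all_sections(self):
        if not self.all_sections_loaded:
            self.loaded_sections = self._get_all_sections()
            self.all_sections_loaded = True
        return self.loaded_sections

    def get_section_items(self, section):
        if section not in self.loaded_sections:
            if not self._section_exists(section):
                raise SourceError('no such section: %r' % section)
            self.loaded_sections[section] = self._get_section_items(section)
        return self.loaded_sections[section]

adapter = FakeGroupAdapter({'admins': ['rms'], 'devs': ['linus', 'rms']})
print(sorted(adapter.get_section_items('devs')))   # ['linus', 'rms']
print(sorted(adapter.get_all_sections()))          # ['admins', 'devs']
```

Each section is fetched from the source at most once; the docstring's warning against caching in subclasses exists precisely because this layer already does it.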
import os
import ConfigParser
from repoze.what.adapters import BaseSourceAdapter, SourceError
def U(text, encoding='utf8'):
"""Simple function to convert a string into a unicode object"""
return unicode(text, encoding)
class CommonAdapter(BaseSourceAdapter):
"""Abstract base class for the group and permission adapters"""
def __init__(self, authz_file):
"""Base constructor.
Reads the groups or permission information from the authz_file
Arguments:
authz_file: is a init style file with information about
groups and permissions
"""
super(CommonAdapter, self).__init__(writable=False)
if not os.path.exists(authz_file):
raise SourceError('Unable to find the authorization file "%s"'
% authz_file)
self._data = ConfigParser.ConfigParser()
self._data.read(authz_file)
self._sections = self._load_sections()
def _load_sections(self):
"""Subclasses should implement this"""
raise SourceError("This is implemented in the groups and "
"permissions adapter")
def _find_sections(self, hint):
"""Subclasses should implement this"""
raise SourceError("This is implemented in the groups and "
"permissions adapter")
def _get_all_sections(self):
"""Return all sections in this source adapter"""
return self._sections
def _get_section_items(self, section):
"""Return the items of the given section"""
return self._sections[section]
def _item_is_included(self, section, item):
"""True if the given item is included in the given section"""
return item in self._sections[section]
def _section_exists(self, section):
"""True if the section is included in this source adapter"""
return section in self._sections.keys()
class HgwebdirGroupsAdapter(CommonAdapter):
"""Source adapter for groups"""
def _load_sections(self):
"""Construct the sections and items information of this source.
The ini file with the information should have been read before.
The groups can be found in two places:
- Inside the [groups] section, every option is a group and its
            members are the comma/space separated items in the value.
- Inside the other sections a line like 'user = rw' is handled
by creating a group with just an item. The name of the group
is the same as its only item.
"""
sections = {}
for section in self._data.sections():
if section == 'groups':
for group, items in self._data.items(section):
members = [U(it).strip(u",") for it in items.split()]
sections[U(group)] = set(members)
else:
for name in self._data.options(section):
if not name.startswith(u"@") and name != u'*':
group = U(name)
if group in sections.keys():
raise SourceError('There is already a group named '
'"%s". Maybe you forgot an @'
% group)
sections[group] = set((group, ))
return sections
def _find_sections(self, hint):
"""Return a set of groups (sections) the user belongs to.
Arguments:
hint: a repoze credentials dictionary. The userid is in the
repoze.what.userid key
"""
userid = hint['repoze.what.userid']
answer = set()
for section, items in self._sections.items():
if userid in items:
answer.add(section)
return answer
class HgwebdirPermissionsAdapter(CommonAdapter):
"""Source adapter for permissions"""
def _load_sections(self):
"""Construct the sections and items information of this source.
The ini file with the information should have been read before.
The permissions are built from the sections of the ini file.
The 'groups' section is special and ignored. From all the other
sections, two permissions are created, one for read access and
one for write access. The groups that have these permissions
are built from the options and values of such sections.
"""
sections = {}
for section in self._data.sections():
if section == 'groups':
continue
read_permission = U(section) + u'-read'
write_permission = U(section) + u'-write'
sections[read_permission] = set()
sections[write_permission] = set()
for name, value in self._data.items(section):
if name == u'*':
continue
if name.startswith(u"@"):
group = U(name[1:])
else:
group = U(name)
if 'r' in value:
sections[read_permission].add(group)
if 'w' in value:
sections[write_permission].add(group)
return sections
def _find_sections(self, hint):
"""Return the set of permission that has the group.
Arguments:
hint: the group name
"""
groupid = hint
answer = set()
for section, items in self._sections.items():
if groupid in items:
answer.add(section)
return answer
def get_public_repositories(authz_file):
"""Returns a list of repositories that should not be protected.
These repositories have a special line inside their section:
* = r
that means that everybody (*) can read them.
"""
if not os.path.exists(authz_file):
raise ValueError('Unable to find the authorization file "%s"'
% authz_file)
data = ConfigParser.ConfigParser()
data.read(authz_file)
public_repositories = []
for section in data.sections():
if section == 'groups':
continue
for name, value in data.items(section):
if name == u'*' and value == u'r':
public_repositories.append(U(section))
break
    return public_repositories
# (source file: repoze.what.plugins.hgwebdir-0.1.1/repoze/what/plugins/hgwebdir/adapters.py)
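The parsing rules described in the two `_load_sections` docstrings above can be sketched standalone with Python 3's `configparser`. The authz contents below are invented for illustration; the logic mirrors the group, permission, and public-repository handling of the adapters:

```python
import configparser

AUTHZ = """
[groups]
devs = alice, bob

[myrepo]
@devs = rw
* = r
"""

cfg = configparser.ConfigParser()
cfg.read_string(AUTHZ)

# Groups: each option in [groups] maps a group name to its
# comma/space separated members.
groups = {g: {m.strip(',') for m in members.split()}
          for g, members in cfg.items('groups')}

# Permissions: every other section yields a '<repo>-read' and a
# '<repo>-write' permission; '* = r' marks the repository public.
perms = {}
public = []
for section in cfg.sections():
    if section == 'groups':
        continue
    perms[section + '-read'] = set()
    perms[section + '-write'] = set()
    for name, value in cfg.items(section):
        if name == '*':
            if value == 'r':
                public.append(section)
            continue
        group = name[1:] if name.startswith('@') else name
        if 'r' in value:
            perms[section + '-read'].add(group)
        if 'w' in value:
            perms[section + '-write'].add(group)

print(sorted(groups['devs']))           # ['alice', 'bob']
print(sorted(perms['myrepo-read']))     # ['devs']
print(public)                           # ['myrepo']
```

Note the adapters above use Python 2's `ConfigParser` and wrap strings in `unicode`; this sketch uses the Python 3 module name but the section/option traversal is the same.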
__version__ = '2.1.3'
import struct
class AddressValueError(ValueError):
"""A Value Error related to the address."""
class NetmaskValueError(ValueError):
"""A Value Error related to the netmask."""
def IPAddress(address, version=None):
"""Take an IP string/int and return an object of the correct type.
Args:
address: A string or integer, the IP address. Either IPv4 or
IPv6 addresses may be supplied; integers less than 2**32 will
be considered to be IPv4 by default.
version: An Integer, 4 or 6. If set, don't try to automatically
determine what the IP address type is. important for things
like IPAddress(1), which could be IPv4, '0.0.0.1', or IPv6,
'::1'.
Returns:
An IPv4Address or IPv6Address object.
Raises:
ValueError: if the string passed isn't either a v4 or a v6
address.
"""
if version:
if version == 4:
return IPv4Address(address)
elif version == 6:
return IPv6Address(address)
try:
return IPv4Address(address)
except (AddressValueError, NetmaskValueError):
pass
try:
return IPv6Address(address)
except (AddressValueError, NetmaskValueError):
pass
raise ValueError('%r does not appear to be an IPv4 or IPv6 address' %
address)
def IPNetwork(address, version=None, strict=False):
"""Take an IP string/int and return an object of the correct type.
Args:
address: A string or integer, the IP address. Either IPv4 or
IPv6 addresses may be supplied; integers less than 2**32 will
be considered to be IPv4 by default.
        version: An Integer, 4 or 6. If set, don't try to automatically
          determine what the IP address type is. Important for things
like IPNetwork(1), which could be IPv4, '0.0.0.1/32', or IPv6,
'::1/128'.
Returns:
An IPv4Network or IPv6Network object.
Raises:
ValueError: if the string passed isn't either a v4 or a v6
address. Or if a strict network was requested and a strict
network wasn't given.
"""
if version:
if version == 4:
return IPv4Network(address, strict)
elif version == 6:
return IPv6Network(address, strict)
try:
return IPv4Network(address, strict)
except (AddressValueError, NetmaskValueError):
pass
try:
return IPv6Network(address, strict)
except (AddressValueError, NetmaskValueError):
pass
raise ValueError('%r does not appear to be an IPv4 or IPv6 network' %
address)
def _find_address_range(addresses):
"""Find a sequence of addresses.
Args:
addresses: a list of IPv4 or IPv6 addresses.
Returns:
A tuple containing the first and last IP addresses in the sequence.
"""
first = last = addresses[0]
for ip in addresses[1:]:
if ip._ip == last._ip + 1:
last = ip
else:
break
return (first, last)
def _get_prefix_length(number1, number2, bits):
"""Get the number of leading bits that are same for two numbers.
Args:
number1: an integer.
number2: another integer.
bits: the maximum number of bits to compare.
Returns:
The number of leading bits that are the same for two numbers.
"""
for i in range(bits):
if number1 >> i == number2 >> i:
return bits - i
return 0
def _count_righthand_zero_bits(number, bits):
"""Count the number of zero bits on the right hand side.
Args:
number: an integer.
bits: maximum number of bits to count.
Returns:
The number of zero bits on the right hand side of the number.
"""
if number == 0:
return bits
for i in range(bits):
if (number >> i) % 2:
return i
# All of the checked low-order bits are zero.
return bits
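The trailing-zero count above can also be computed with plain integer arithmetic: `n & -n` isolates the lowest set bit, and its bit length minus one is the number of trailing zeros. A minimal standalone sketch (the helper name `trailing_zero_bits` is mine, not part of this module):

```python
def trailing_zero_bits(number, bits):
    """Count zero bits on the right-hand side of number, capped at bits."""
    if number == 0:
        return bits
    # number & -number isolates the lowest set bit; its bit_length
    # minus one equals the number of trailing zero bits.
    return min((number & -number).bit_length() - 1, bits)

print(trailing_zero_bits(0b101000, 32))  # 3
```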
def summarize_address_range(first, last):
"""Summarize a network range given the first and last IP addresses.
Example:
>>> summarize_address_range(IPv4Address('1.1.1.0'),
IPv4Address('1.1.1.130'))
[IPv4Network('1.1.1.0/25'), IPv4Network('1.1.1.128/31'),
IPv4Network('1.1.1.130/32')]
Args:
first: the first IPv4Address or IPv6Address in the range.
last: the last IPv4Address or IPv6Address in the range.
Returns:
The address range collapsed to a list of IPv4Network's or
IPv6Network's.
Raise:
TypeError:
If the first and last objects are not IP addresses.
If the first and last objects are not the same version.
ValueError:
If the last object is not greater than the first.
If the version is not 4 or 6.
"""
if not (isinstance(first, _BaseIP) and isinstance(last, _BaseIP)):
raise TypeError('first and last must be IP addresses, not networks')
if first.version != last.version:
raise TypeError("%s and %s are not of the same version" % (
str(first), str(last)))
if first > last:
raise ValueError('last IP address must be greater than first')
networks = []
if first.version == 4:
ip = IPv4Network
elif first.version == 6:
ip = IPv6Network
else:
raise ValueError('unknown IP version')
ip_bits = first._max_prefixlen
first_int = first._ip
last_int = last._ip
while first_int <= last_int:
nbits = _count_righthand_zero_bits(first_int, ip_bits)
current = None
while nbits >= 0:
addend = 2**nbits - 1
current = first_int + addend
nbits -= 1
if current <= last_int:
break
prefix = _get_prefix_length(first_int, current, ip_bits)
net = ip('%s/%d' % (str(first), prefix))
networks.append(net)
if current == ip._ALL_ONES:
break
first_int = current + 1
first = IPAddress(first_int, version=first._version)
return networks
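For cross-checking the docstring example above: the Python 3 standard-library `ipaddress` module (this library's descendant) exposes the same operation under the same name, and collapses the range identically.

```python
import ipaddress

# Summarize 1.1.1.0 .. 1.1.1.130 into the minimal list of networks.
nets = list(ipaddress.summarize_address_range(
    ipaddress.ip_address('1.1.1.0'),
    ipaddress.ip_address('1.1.1.130')))
print(nets)
# [IPv4Network('1.1.1.0/25'), IPv4Network('1.1.1.128/31'), IPv4Network('1.1.1.130/32')]
```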
def _collapse_address_list_recursive(addresses):
"""Loops through the addresses, collapsing concurrent netblocks.
Example:
ip1 = IPv4Network('1.1.0.0/24')
ip2 = IPv4Network('1.1.1.0/24')
ip3 = IPv4Network('1.1.2.0/24')
ip4 = IPv4Network('1.1.3.0/24')
ip5 = IPv4Network('1.1.4.0/24')
ip6 = IPv4Network('1.1.0.1/22')
_collapse_address_list_recursive([ip1, ip2, ip3, ip4, ip5, ip6]) ->
[IPv4Network('1.1.0.0/22'), IPv4Network('1.1.4.0/24')]
This shouldn't be called directly; it is called via
collapse_address_list([]).
Args:
addresses: A list of IPv4Network's or IPv6Network's
Returns:
A list of IPv4Network's or IPv6Network's depending on what we were
passed.
"""
ret_array = []
optimized = False
for cur_addr in addresses:
if not ret_array:
ret_array.append(cur_addr)
continue
if cur_addr in ret_array[-1]:
optimized = True
elif cur_addr == ret_array[-1].supernet().subnet()[1]:
ret_array.append(ret_array.pop().supernet())
optimized = True
else:
ret_array.append(cur_addr)
if optimized:
return _collapse_address_list_recursive(ret_array)
return ret_array
def collapse_address_list(addresses):
"""Collapse a list of IP objects.
Example:
collapse_address_list([IPv4('1.1.0.0/24'), IPv4('1.1.1.0/24')]) ->
[IPv4('1.1.0.0/23')]
Args:
addresses: A list of IPv4Network or IPv6Network objects.
Returns:
A list of IPv4Network or IPv6Network objects depending on what we
were passed.
Raises:
TypeError: If passed a list of mixed version objects.
"""
i = 0
addrs = []
ips = []
nets = []
# split IP addresses and networks
for ip in addresses:
if isinstance(ip, _BaseIP):
if ips and ips[-1]._version != ip._version:
raise TypeError("%s and %s are not of the same version" % (
str(ip), str(ips[-1])))
ips.append(ip)
elif ip._prefixlen == ip._max_prefixlen:
if ips and ips[-1]._version != ip._version:
raise TypeError("%s and %s are not of the same version" % (
str(ip), str(ips[-1])))
ips.append(ip.ip)
else:
if nets and nets[-1]._version != ip._version:
raise TypeError("%s and %s are not of the same version" % (
str(ip), str(ips[-1])))
nets.append(ip)
# sort and dedup
ips = sorted(set(ips))
nets = sorted(set(nets))
while i < len(ips):
(first, last) = _find_address_range(ips[i:])
i = ips.index(last) + 1
addrs.extend(summarize_address_range(first, last))
return _collapse_address_list_recursive(sorted(
addrs + nets, key=_BaseNet._get_networks_key))
# backwards compatibility
CollapseAddrList = collapse_address_list
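The stdlib `ipaddress` module offers the equivalent of `collapse_address_list` as `collapse_addresses`; the docstring example above checks out there as well:

```python
import ipaddress

# Two adjacent /24s collapse into a single /23.
collapsed = list(ipaddress.collapse_addresses([
    ipaddress.ip_network('1.1.0.0/24'),
    ipaddress.ip_network('1.1.1.0/24')]))
print(collapsed)  # [IPv4Network('1.1.0.0/23')]
```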
# Test whether this Python implementation supports byte objects that
# are not identical to str ones.
# We need to exclude platforms where bytes == str so that we can
# distinguish between packed representations and strings, for example
# b'12::' (the IPv4 address 49.50.58.58) and '12::' (an IPv6 address).
try:
_compat_has_real_bytes = bytes is not str
except NameError: # <Python2.6
_compat_has_real_bytes = False
def get_mixed_type_key(obj):
"""Return a key suitable for sorting between networks and addresses.
Address and Network objects are not sortable by default; they're
fundamentally different so the expression
IPv4Address('1.1.1.1') <= IPv4Network('1.1.1.1/24')
doesn't make any sense. There are some times however, where you may wish
to have ipaddr sort these for you anyway. If you need to do this, you
can use this function as the key= argument to sorted().
Args:
obj: either a Network or Address object.
Returns:
appropriate key.
"""
if isinstance(obj, _BaseNet):
return obj._get_networks_key()
elif isinstance(obj, _BaseIP):
return obj._get_address_key()
return NotImplemented
class _IPAddrBase(object):
"""The mother class."""
def __index__(self):
return self._ip
def __int__(self):
return self._ip
def __hex__(self):
return hex(self._ip)
@property
def exploded(self):
"""Return the longhand version of the IP address as a string."""
return self._explode_shorthand_ip_string()
@property
def compressed(self):
"""Return the shorthand version of the IP address as a string."""
return str(self)
class _BaseIP(_IPAddrBase):
"""A generic IP object.
This IP class contains the version independent methods which are
used by single IP addresses.
"""
def __init__(self, address):
if '/' in str(address):
raise AddressValueError(address)
def __eq__(self, other):
try:
return (self._ip == other._ip
and self._version == other._version
and isinstance(other, _BaseIP))
except AttributeError:
return NotImplemented
def __ne__(self, other):
eq = self.__eq__(other)
if eq is NotImplemented:
return NotImplemented
return not eq
def __le__(self, other):
gt = self.__gt__(other)
if gt is NotImplemented:
return NotImplemented
return not gt
def __ge__(self, other):
lt = self.__lt__(other)
if lt is NotImplemented:
return NotImplemented
return not lt
def __lt__(self, other):
if self._version != other._version:
raise TypeError('%s and %s are not of the same version' % (
str(self), str(other)))
if not isinstance(other, _BaseIP):
raise TypeError('%s and %s are not of the same type' % (
str(self), str(other)))
if self._ip != other._ip:
return self._ip < other._ip
return False
def __gt__(self, other):
if self._version != other._version:
raise TypeError('%s and %s are not of the same version' % (
str(self), str(other)))
if not isinstance(other, _BaseIP):
raise TypeError('%s and %s are not of the same type' % (
str(self), str(other)))
if self._ip != other._ip:
return self._ip > other._ip
return False
# Shorthand for Integer addition and subtraction. This is not
# meant to ever support addition/subtraction of addresses.
def __add__(self, other):
if not isinstance(other, int):
return NotImplemented
return IPAddress(int(self) + other, version=self._version)
def __sub__(self, other):
if not isinstance(other, int):
return NotImplemented
return IPAddress(int(self) - other, version=self._version)
def __repr__(self):
return '%s(%r)' % (self.__class__.__name__, str(self))
def __str__(self):
return '%s' % self._string_from_ip_int(self._ip)
def __hash__(self):
return hash(hex(self._ip))
def _get_address_key(self):
return (self._version, self)
@property
def version(self):
raise NotImplementedError('BaseIP has no version')
class _BaseNet(_IPAddrBase):
"""A generic IP object.
This IP class contains the version independent methods which are
used by networks.
"""
def __init__(self, address):
self._cache = {}
def __repr__(self):
return '%s(%r)' % (self.__class__.__name__, str(self))
def iterhosts(self):
"""Generate Iterator over usable hosts in a network.
This is like __iter__ except it doesn't return the network
or broadcast addresses.
"""
cur = int(self.network) + 1
bcast = int(self.broadcast) - 1
while cur <= bcast:
yield IPAddress(cur, version=self._version)
cur += 1
def __iter__(self):
cur = int(self.network)
bcast = int(self.broadcast)
while cur <= bcast:
yield IPAddress(cur, version=self._version)
cur += 1
def __getitem__(self, n):
network = int(self.network)
broadcast = int(self.broadcast)
if n >= 0:
if network + n > broadcast:
raise IndexError
return IPAddress(network + n, version=self._version)
else:
n += 1
if broadcast + n < network:
raise IndexError
return IPAddress(broadcast + n, version=self._version)
def __lt__(self, other):
if self._version != other._version:
raise TypeError('%s and %s are not of the same version' % (
str(self), str(other)))
if not isinstance(other, _BaseNet):
raise TypeError('%s and %s are not of the same type' % (
str(self), str(other)))
if self.network != other.network:
return self.network < other.network
if self.netmask != other.netmask:
return self.netmask < other.netmask
return False
def __gt__(self, other):
if self._version != other._version:
raise TypeError('%s and %s are not of the same version' % (
str(self), str(other)))
if not isinstance(other, _BaseNet):
raise TypeError('%s and %s are not of the same type' % (
str(self), str(other)))
if self.network != other.network:
return self.network > other.network
if self.netmask != other.netmask:
return self.netmask > other.netmask
return False
def __le__(self, other):
gt = self.__gt__(other)
if gt is NotImplemented:
return NotImplemented
return not gt
def __ge__(self, other):
lt = self.__lt__(other)
if lt is NotImplemented:
return NotImplemented
return not lt
def __eq__(self, other):
try:
return (self._version == other._version
and self.network == other.network
and int(self.netmask) == int(other.netmask))
except AttributeError:
return NotImplemented
def __ne__(self, other):
eq = self.__eq__(other)
if eq is NotImplemented:
return NotImplemented
return not eq
def __str__(self):
return '%s/%s' % (str(self.ip),
str(self._prefixlen))
def __hash__(self):
return hash(int(self.network) ^ int(self.netmask))
def __contains__(self, other):
# dealing with another network.
if isinstance(other, _BaseNet):
return (self.network <= other.network and
self.broadcast >= other.broadcast)
# dealing with another address
else:
return (int(self.network) <= int(other._ip) <=
int(self.broadcast))
def overlaps(self, other):
"""Tell if self is partly contained in other."""
return self.network in other or self.broadcast in other or (
other.network in self or other.broadcast in self)
@property
def network(self):
x = self._cache.get('network')
if x is None:
x = IPAddress(self._ip & int(self.netmask), version=self._version)
self._cache['network'] = x
return x
@property
def broadcast(self):
x = self._cache.get('broadcast')
if x is None:
x = IPAddress(self._ip | int(self.hostmask), version=self._version)
self._cache['broadcast'] = x
return x
@property
def hostmask(self):
x = self._cache.get('hostmask')
if x is None:
x = IPAddress(int(self.netmask) ^ self._ALL_ONES,
version=self._version)
self._cache['hostmask'] = x
return x
@property
def with_prefixlen(self):
return '%s/%d' % (str(self.ip), self._prefixlen)
@property
def with_netmask(self):
return '%s/%s' % (str(self.ip), str(self.netmask))
@property
def with_hostmask(self):
return '%s/%s' % (str(self.ip), str(self.hostmask))
@property
def numhosts(self):
"""Number of hosts in the current subnet."""
return int(self.broadcast) - int(self.network) + 1
@property
def version(self):
raise NotImplementedError('BaseNet has no version')
@property
def prefixlen(self):
return self._prefixlen
def address_exclude(self, other):
"""Remove an address from a larger block.
For example:
addr1 = IP('10.1.1.0/24')
addr2 = IP('10.1.1.0/26')
addr1.address_exclude(addr2) =
[IP('10.1.1.64/26'), IP('10.1.1.128/25')]
or IPv6:
addr1 = IP('::1/32')
addr2 = IP('::1/128')
addr1.address_exclude(addr2) = [IP('::0/128'),
IP('::2/127'),
IP('::4/126'),
IP('::8/125'),
...
IP('0:0:8000::/33')]
Args:
other: An IP object of the same type.
Returns:
A sorted list of IP objects addresses which is self minus
other.
Raises:
TypeError: If self and other are of differing address
versions, or if other is not a network object.
ValueError: If other is not completely contained by self.
"""
if not self._version == other._version:
raise TypeError("%s and %s are not of the same version" % (
str(self), str(other)))
if not isinstance(other, _BaseNet):
raise TypeError("%s is not a network object" % str(other))
if other not in self:
raise ValueError('%s not contained in %s' % (str(other),
str(self)))
if other == self:
return []
ret_addrs = []
# Make sure we're comparing the network of other.
other = IPNetwork('%s/%s' % (str(other.network), str(other.prefixlen)),
version=other._version)
s1, s2 = self.subnet()
while s1 != other and s2 != other:
if other in s1:
ret_addrs.append(s2)
s1, s2 = s1.subnet()
elif other in s2:
ret_addrs.append(s1)
s1, s2 = s2.subnet()
else:
# If we got here, there's a bug somewhere.
assert False, ('Error performing exclusion: '
's1: %s s2: %s other: %s' %
(str(s1), str(s2), str(other)))
if s1 == other:
ret_addrs.append(s2)
elif s2 == other:
ret_addrs.append(s1)
else:
# If we got here, there's a bug somewhere.
assert False, ('Error performing exclusion: '
's1: %s s2: %s other: %s' %
(str(s1), str(s2), str(other)))
return sorted(ret_addrs, key=_BaseNet._get_networks_key)
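The stdlib `ipaddress` module keeps `address_exclude` under the same name; reproducing the first docstring example above:

```python
import ipaddress

# Remove a /26 from its enclosing /24; the leftover address space
# is returned as the minimal set of networks.
remaining = sorted(
    ipaddress.ip_network('10.1.1.0/24').address_exclude(
        ipaddress.ip_network('10.1.1.0/26')))
print(remaining)  # [IPv4Network('10.1.1.64/26'), IPv4Network('10.1.1.128/25')]
```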
def compare_networks(self, other):
"""Compare two IP objects.
This is only concerned about the comparison of the integer
representation of the network addresses. This means that the
host bits aren't considered at all in this method. If you want
to compare host bits, you can easily enough do a
'HostA._ip < HostB._ip'
Args:
other: An IP object.
Returns:
If the IP versions of self and other are the same, returns:
-1 if self < other:
eg: IPv4('1.1.1.0/24') < IPv4('1.1.2.0/24')
IPv6('1080::200C:417A') < IPv6('1080::200B:417B')
0 if self == other
eg: IPv4('1.1.1.1/24') == IPv4('1.1.1.2/24')
IPv6('1080::200C:417A/96') == IPv6('1080::200C:417B/96')
1 if self > other
eg: IPv4('1.1.1.0/24') > IPv4('1.1.0.0/24')
IPv6('1080::1:200C:417A/112') >
IPv6('1080::0:200C:417A/112')
If the IP versions of self and other are different, returns:
-1 if self._version < other._version
eg: IPv4('10.0.0.1/24') < IPv6('::1/128')
1 if self._version > other._version
eg: IPv6('::1/128') > IPv4('255.255.255.0/24')
"""
if self._version < other._version:
return -1
if self._version > other._version:
return 1
# self._version == other._version below here:
if self.network < other.network:
return -1
if self.network > other.network:
return 1
# self.network == other.network below here:
if self.netmask < other.netmask:
return -1
if self.netmask > other.netmask:
return 1
# self.network == other.network and self.netmask == other.netmask
return 0
def _get_networks_key(self):
"""Network-only key function.
Returns an object that identifies this address' network and
netmask. This function is a suitable "key" argument for sorted()
and list.sort().
"""
return (self._version, self.network, self.netmask)
def _ip_int_from_prefix(self, prefixlen=None):
"""Turn the prefix length into an integer netmask for comparison.
Args:
prefixlen: An integer, the prefix length.
Returns:
An integer, the netmask.
"""
if prefixlen is None:
prefixlen = self._prefixlen
return self._ALL_ONES ^ (self._ALL_ONES >> prefixlen)
def _prefix_from_ip_int(self, ip_int, mask=32):
"""Return prefix length from the decimal netmask.
Args:
ip_int: An integer, the netmask in integer form.
mask: The maximum prefix length in bits. Defaults to 32.
Returns:
An integer, the prefix length.
"""
while mask:
if ip_int & 1 == 1:
break
ip_int >>= 1
mask -= 1
return mask
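The conversion above (integer netmask to prefix length) boils down to stripping trailing zero bits; a standalone sketch of the same loop (the helper name is mine, not part of this module):

```python
def prefix_from_netmask_int(mask_int, max_bits=32):
    """Derive the prefix length from an integer netmask.

    Shifts out trailing zero bits, e.g. 0xFFFFFF00 -> 24.
    """
    bits = max_bits
    while bits and mask_int & 1 == 0:
        mask_int >>= 1
        bits -= 1
    return bits

print(prefix_from_netmask_int(0xFFFFFF00))  # 24
```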
def _ip_string_from_prefix(self, prefixlen=None):
"""Turn a prefix length into a dotted decimal string.
Args:
prefixlen: An integer, the netmask prefix length.
Returns:
A string, the dotted decimal netmask string.
"""
if prefixlen is None:
prefixlen = self._prefixlen
return self._string_from_ip_int(self._ip_int_from_prefix(prefixlen))
def iter_subnets(self, prefixlen_diff=1, new_prefix=None):
"""The subnets which join to make the current subnet.
In the case that self contains only one IP
(self._prefixlen == 32 for IPv4 or self._prefixlen == 128
for IPv6), return a list with just ourself.
Args:
prefixlen_diff: An integer, the amount the prefix length
should be increased by. This should not be set if
new_prefix is also set.
new_prefix: The desired new prefix length. This must be a
larger number (i.e. a smaller network) than the existing
prefix length. This should not be set if prefixlen_diff is
also set.
Returns:
An iterator of IPv(4|6) objects.
Raises:
ValueError: The prefixlen_diff is too small or too large.
OR
prefixlen_diff and new_prefix are both set or new_prefix
is a smaller number than the current prefix (smaller
number means a larger network)
"""
if self._prefixlen == self._max_prefixlen:
yield self
return
if new_prefix is not None:
if new_prefix < self._prefixlen:
raise ValueError('new prefix must be longer')
if prefixlen_diff != 1:
raise ValueError('cannot set prefixlen_diff and new_prefix')
prefixlen_diff = new_prefix - self._prefixlen
if prefixlen_diff < 0:
raise ValueError('prefix length diff must be > 0')
new_prefixlen = self._prefixlen + prefixlen_diff
if not self._is_valid_netmask(str(new_prefixlen)):
raise ValueError(
'prefix length diff %d is invalid for netblock %s' % (
new_prefixlen, str(self)))
first = IPNetwork('%s/%s' % (str(self.network),
str(self._prefixlen + prefixlen_diff)),
version=self._version)
yield first
current = first
while True:
broadcast = current.broadcast
if broadcast == self.broadcast:
return
new_addr = IPAddress(int(broadcast) + 1, version=self._version)
current = IPNetwork('%s/%s' % (str(new_addr), str(new_prefixlen)),
version=self._version)
yield current
def masked(self):
"""Return the network object with the host bits masked out."""
return IPNetwork('%s/%d' % (self.network, self._prefixlen),
version=self._version)
def subnet(self, prefixlen_diff=1, new_prefix=None):
"""Return a list of subnets, rather than an interator."""
return list(self.iter_subnets(prefixlen_diff, new_prefix))
def supernet(self, prefixlen_diff=1, new_prefix=None):
"""The supernet containing the current network.
Args:
prefixlen_diff: An integer, the amount the prefix length of
the network should be decreased by. For example, given a
/24 network and a prefixlen_diff of 3, a supernet with a
/21 netmask is returned.
Returns:
An IPv4 network object.
Raises:
ValueError: If self.prefixlen - prefixlen_diff < 0. I.e., you have a
negative prefix length.
OR
If prefixlen_diff and new_prefix are both set or new_prefix is a
larger number than the current prefix (larger number means a
smaller network)
"""
if self._prefixlen == 0:
return self
if new_prefix is not None:
if new_prefix > self._prefixlen:
raise ValueError('new prefix must be shorter')
if prefixlen_diff != 1:
raise ValueError('cannot set prefixlen_diff and new_prefix')
prefixlen_diff = self._prefixlen - new_prefix
if self.prefixlen - prefixlen_diff < 0:
raise ValueError(
'current prefixlen is %d, cannot have a prefixlen_diff of %d' %
(self.prefixlen, prefixlen_diff))
return IPNetwork('%s/%s' % (str(self.network),
str(self.prefixlen - prefixlen_diff)),
version=self._version)
# backwards compatibility
Subnet = subnet
Supernet = supernet
AddressExclude = address_exclude
CompareNetworks = compare_networks
Contains = __contains__
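The stdlib `ipaddress` module provides the same pair of operations as `subnets()` (an iterator, like `iter_subnets` here) and `supernet()`; splitting a /24 and then widening it by three bits:

```python
import ipaddress

net = ipaddress.ip_network('192.168.0.0/24')
# Default prefixlen_diff=1 splits the network into its two halves.
print(list(net.subnets()))             # the two /25 halves
# prefixlen_diff=3 yields the enclosing /21.
print(net.supernet(prefixlen_diff=3))  # 192.168.0.0/21
```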
class _BaseV4(object):
"""Base IPv4 object.
The following methods are used by IPv4 objects in both single IP
addresses and networks.
"""
# Equivalent to 255.255.255.255 or 32 bits of 1's.
_ALL_ONES = (2**32) - 1
def __init__(self, address):
self._version = 4
self._max_prefixlen = 32
def _explode_shorthand_ip_string(self, ip_str=None):
if not ip_str:
ip_str = str(self)
return ip_str
def _ip_int_from_string(self, ip_str):
"""Turn the given IP string into an integer for comparison.
Args:
ip_str: A string, the IP ip_str.
Returns:
The IP ip_str as an integer.
Raises:
AddressValueError: if the string isn't a valid IP string.
"""
packed_ip = 0
octets = ip_str.split('.')
if len(octets) != 4:
raise AddressValueError(ip_str)
for oc in octets:
try:
packed_ip = (packed_ip << 8) | int(oc)
except ValueError:
raise AddressValueError(ip_str)
return packed_ip
def _string_from_ip_int(self, ip_int):
"""Turns a 32-bit integer into dotted decimal notation.
Args:
ip_int: An integer, the IP address.
Returns:
The IP address as a string in dotted decimal notation.
"""
octets = []
for _ in xrange(4):
octets.insert(0, str(ip_int & 0xFF))
ip_int >>= 8
return '.'.join(octets)
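The int-to-dotted-quad conversion above is self-contained enough to sanity-check outside the class; a standalone sketch of the same loop (helper name is mine):

```python
def v4_int_to_string(ip_int):
    """Render a 32-bit integer as a dotted decimal IPv4 string."""
    octets = []
    for _ in range(4):
        # Peel off the low byte and prepend it.
        octets.insert(0, str(ip_int & 0xFF))
        ip_int >>= 8
    return '.'.join(octets)

print(v4_int_to_string(3232235777))  # 192.168.1.1
```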
def _is_valid_ip(self, address):
"""Validate the dotted decimal notation IP/netmask string.
Args:
address: A string, either representing a quad-dotted ip
or an integer which is a valid IPv4 IP address.
Returns:
A boolean, True if the string is a valid dotted decimal IP
string.
"""
octets = address.split('.')
if len(octets) == 1:
# We have an integer rather than a dotted decimal IP.
try:
return int(address) >= 0 and int(address) <= self._ALL_ONES
except ValueError:
return False
if len(octets) != 4:
return False
for octet in octets:
try:
if not 0 <= int(octet) <= 255:
return False
except ValueError:
return False
return True
@property
def max_prefixlen(self):
return self._max_prefixlen
@property
def packed(self):
"""The binary representation of this address."""
return struct.pack('!I', self._ip)
@property
def version(self):
return self._version
@property
def is_reserved(self):
"""Test if the address is otherwise IETF reserved.
Returns:
A boolean, True if the address is within the
reserved IPv4 Network range.
"""
return self in IPv4Network('240.0.0.0/4')
@property
def is_private(self):
"""Test if this address is allocated for private networks.
Returns:
A boolean, True if the address is reserved per RFC 1918.
"""
return (self in IPv4Network('10.0.0.0/8') or
self in IPv4Network('172.16.0.0/12') or
self in IPv4Network('192.168.0.0/16'))
@property
def is_multicast(self):
"""Test if the address is reserved for multicast use.
Returns:
A boolean, True if the address is multicast.
See RFC 3171 for details.
"""
return self in IPv4Network('224.0.0.0/4')
@property
def is_unspecified(self):
"""Test if the address is unspecified.
Returns:
A boolean, True if this is the unspecified address as defined in
RFC 5735 3.
"""
return self in IPv4Network('0.0.0.0')
@property
def is_loopback(self):
"""Test if the address is a loopback address.
Returns:
A boolean, True if the address is a loopback per RFC 3330.
"""
return self in IPv4Network('127.0.0.0/8')
@property
def is_link_local(self):
"""Test if the address is reserved for link-local.
Returns:
A boolean, True if the address is link-local per RFC 3927.
"""
return self in IPv4Network('169.254.0.0/16')
class IPv4Address(_BaseV4, _BaseIP):
"""Represent and manipulate single IPv4 Addresses."""
def __init__(self, address):
"""
Args:
address: A string or integer representing the IP
'192.168.1.1'
Additionally, an integer can be passed, so
IPv4Address('192.168.1.1') == IPv4Address(3232235777).
or, more generally
IPv4Address(int(IPv4Address('192.168.1.1'))) ==
IPv4Address('192.168.1.1')
Raises:
AddressValueError: If ipaddr isn't a valid IPv4 address.
"""
_BaseIP.__init__(self, address)
_BaseV4.__init__(self, address)
# Efficient constructor from integer.
if isinstance(address, (int, long)):
self._ip = address
if address < 0 or address > self._ALL_ONES:
raise AddressValueError(address)
return
# Constructing from a packed address
if _compat_has_real_bytes:
if isinstance(address, bytes) and len(address) == 4:
self._ip = struct.unpack('!I', address)[0]
return
# Assume input argument to be string or any object representation
# which converts into a formatted IP string.
addr_str = str(address)
if not self._is_valid_ip(addr_str):
raise AddressValueError(addr_str)
self._ip = self._ip_int_from_string(addr_str)
class IPv4Network(_BaseV4, _BaseNet):
"""This class represents and manipulates 32-bit IPv4 networks.
Attributes: [examples for IPv4Network('1.2.3.4/27')]
._ip: 16909060
.ip: IPv4Address('1.2.3.4')
.network: IPv4Address('1.2.3.0')
.hostmask: IPv4Address('0.0.0.31')
.broadcast: IPv4Address('1.2.3.31')
.netmask: IPv4Address('255.255.255.224')
.prefixlen: 27
"""
# the valid octets for host and netmasks. only useful for IPv4.
_valid_mask_octets = set((255, 254, 252, 248, 240, 224, 192, 128, 0))
def __init__(self, address, strict=False):
"""Instantiate a new IPv4 network object.
Args:
address: A string or integer representing the IP [& network].
'192.168.1.1/24'
'192.168.1.1/255.255.255.0'
'192.168.1.1/0.0.0.255'
are all functionally the same in IPv4. Similarly,
'192.168.1.1'
'192.168.1.1/255.255.255.255'
'192.168.1.1/32'
are also functionally equivalent. That is to say, failing to
provide a subnet mask will create an object with a mask of /32.
If the mask (portion after the / in the argument) is given in
dotted quad form, it is treated as a netmask if it starts with a
non-zero field (e.g. /255.0.0.0 == /8) and as a hostmask if it
starts with a zero field (e.g. 0.255.255.255 == /8), with the
single exception of an all-zero mask which is treated as a
netmask == /0. If no mask is given, a default of /32 is used.
Additionally, an integer can be passed, so
IPv4Network('192.168.1.1') == IPv4Network(3232235777).
or, more generally
IPv4Network(int(IPv4Network('192.168.1.1'))) ==
IPv4Network('192.168.1.1')
strict: A boolean. If true, ensure that we have been passed
a true network address, e.g. 192.168.1.0/24, and not an
IP address on a network, e.g. 192.168.1.1/24.
Raises:
AddressValueError: If ipaddr isn't a valid IPv4 address.
NetmaskValueError: If the netmask isn't valid for
an IPv4 address.
ValueError: If strict was True and a network address was not
supplied.
"""
_BaseNet.__init__(self, address)
_BaseV4.__init__(self, address)
# Efficient constructor from integer.
if isinstance(address, (int, long)):
self._ip = address
self.ip = IPv4Address(self._ip)
self._prefixlen = 32
self.netmask = IPv4Address(self._ALL_ONES)
if address < 0 or address > self._ALL_ONES:
raise AddressValueError(address)
return
# Constructing from a packed address
if _compat_has_real_bytes:
if isinstance(address, bytes) and len(address) == 4:
self._ip = struct.unpack('!I', address)[0]
self.ip = IPv4Address(self._ip)
self._prefixlen = 32
self.netmask = IPv4Address(self._ALL_ONES)
return
# Assume input argument to be string or any object representation
# which converts into a formatted IP prefix string.
addr = str(address).split('/')
if len(addr) > 2:
raise AddressValueError(address)
if not self._is_valid_ip(addr[0]):
raise AddressValueError(addr[0])
self._ip = self._ip_int_from_string(addr[0])
self.ip = IPv4Address(self._ip)
if len(addr) == 2:
mask = addr[1].split('.')
if len(mask) == 4:
# We have dotted decimal netmask.
if self._is_valid_netmask(addr[1]):
self.netmask = IPv4Address(self._ip_int_from_string(
addr[1]))
elif self._is_hostmask(addr[1]):
self.netmask = IPv4Address(
self._ip_int_from_string(addr[1]) ^ self._ALL_ONES)
else:
raise NetmaskValueError('%s is not a valid netmask'
% addr[1])
self._prefixlen = self._prefix_from_ip_int(int(self.netmask))
else:
# We have a netmask in prefix length form.
if not self._is_valid_netmask(addr[1]):
raise NetmaskValueError(addr[1])
self._prefixlen = int(addr[1])
self.netmask = IPv4Address(self._ip_int_from_prefix(
self._prefixlen))
else:
self._prefixlen = 32
self.netmask = IPv4Address(self._ip_int_from_prefix(
self._prefixlen))
if strict:
if self.ip != self.network:
raise ValueError('%s has host bits set' %
self.ip)
def _is_hostmask(self, ip_str):
"""Test if the IP string is a hostmask (rather than a netmask).
Args:
ip_str: A string, the potential hostmask.
Returns:
A boolean, True if the IP string is a hostmask.
"""
bits = ip_str.split('.')
try:
parts = [int(x) for x in bits if int(x) in self._valid_mask_octets]
except ValueError:
return False
if len(parts) != len(bits):
return False
if parts[0] < parts[-1]:
return True
return False
def _is_valid_netmask(self, netmask):
"""Verify that the netmask is valid.
Args:
netmask: A string, either a prefix or dotted decimal
netmask.
Returns:
A boolean, True if the prefix represents a valid IPv4
netmask.
"""
mask = netmask.split('.')
if len(mask) == 4:
if [x for x in mask if int(x) not in self._valid_mask_octets]:
return False
if [y for idx, y in enumerate(mask) if idx > 0 and
int(y) > int(mask[idx - 1])]:
return False
return True
try:
netmask = int(netmask)
except ValueError:
return False
return 0 <= netmask <= 32
# backwards compatibility
IsRFC1918 = lambda self: self.is_private
IsMulticast = lambda self: self.is_multicast
IsLoopback = lambda self: self.is_loopback
IsLinkLocal = lambda self: self.is_link_local
class _BaseV6(object):
"""Base IPv6 object.
The following methods are used by IPv6 objects in both single IP
addresses and networks.
"""
_ALL_ONES = (2**128) - 1
def __init__(self, address):
self._version = 6
self._max_prefixlen = 128
def _ip_int_from_string(self, ip_str=None):
"""Turn an IPv6 ip_str into an integer.
Args:
ip_str: A string, the IPv6 ip_str.
Returns:
A long, the IPv6 ip_str.
Raises:
AddressValueError: if ip_str isn't a valid IP Address.
"""
if not ip_str:
ip_str = str(self.ip)
ip_int = 0
fields = self._explode_shorthand_ip_string(ip_str).split(':')
# Do we have an IPv4 mapped (::ffff:a.b.c.d) or compact (::a.b.c.d)
# ip_str?
if fields[-1].count('.') == 3:
ipv4_string = fields.pop()
ipv4_int = IPv4Network(ipv4_string)._ip
octets = []
for _ in xrange(2):
octets.append('%x' % (ipv4_int & 0xFFFF))
ipv4_int >>= 16
fields.extend(reversed(octets))
for field in fields:
try:
ip_int = (ip_int << 16) + int(field or '0', 16)
except ValueError:
raise AddressValueError(ip_str)
return ip_int
def _compress_hextets(self, hextets):
"""Compresses a list of hextets.
Compresses a list of strings, replacing the longest continuous
sequence of "0" in the list with "" and adding empty strings at
the beginning or at the end of the string such that subsequently
calling ":".join(hextets) will produce the compressed version of
the IPv6 address.
Args:
hextets: A list of strings, the hextets to compress.
Returns:
A list of strings.
"""
best_doublecolon_start = -1
best_doublecolon_len = 0
doublecolon_start = -1
doublecolon_len = 0
for index in range(len(hextets)):
if hextets[index] == '0':
doublecolon_len += 1
if doublecolon_start == -1:
# Start of a sequence of zeros.
doublecolon_start = index
if doublecolon_len > best_doublecolon_len:
# This is the longest sequence of zeros so far.
best_doublecolon_len = doublecolon_len
best_doublecolon_start = doublecolon_start
else:
doublecolon_len = 0
doublecolon_start = -1
if best_doublecolon_len > 1:
best_doublecolon_end = (best_doublecolon_start +
best_doublecolon_len)
# For zeros at the end of the address.
if best_doublecolon_end == len(hextets):
hextets += ['']
hextets[best_doublecolon_start:best_doublecolon_end] = ['']
# For zeros at the beginning of the address.
if best_doublecolon_start == 0:
hextets = [''] + hextets
return hextets
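The stdlib `ipaddress` module performs the same longest-zero-run compression when rendering IPv6 addresses; the `compressed`/`exploded` pair mirrors this module's properties:

```python
import ipaddress

addr = ipaddress.ip_address('2001:0db8:0000:0000:0000:0000:0000:0001')
# The longest run of zero hextets collapses to '::'.
print(addr.compressed)  # 2001:db8::1
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
```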
def _string_from_ip_int(self, ip_int=None):
"""Turns a 128-bit integer into hexadecimal notation.
Args:
ip_int: An integer, the IP address.
Returns:
A string, the hexadecimal representation of the address.
Raises:
ValueError: The address is bigger than 128 bits of all ones.
"""
if not ip_int and ip_int != 0:
ip_int = int(self._ip)
if ip_int > self._ALL_ONES:
raise ValueError('IPv6 address is too large')
hex_str = '%032x' % ip_int
hextets = []
for x in range(0, 32, 4):
hextets.append('%x' % int(hex_str[x:x+4], 16))
hextets = self._compress_hextets(hextets)
return ':'.join(hextets)
def _explode_shorthand_ip_string(self, ip_str=None):
"""Expand a shortened IPv6 address.
Args:
ip_str: A string, the IPv6 address.
Returns:
A string, the expanded IPv6 address.
"""
if not ip_str:
ip_str = str(self)
if isinstance(self, _BaseNet):
ip_str = str(self.ip)
if self._is_shorthand_ip(ip_str):
new_ip = []
hextet = ip_str.split('::')
sep = len(hextet[0].split(':')) + len(hextet[1].split(':'))
new_ip = hextet[0].split(':')
for _ in xrange(8 - sep):
new_ip.append('0000')
new_ip += hextet[1].split(':')
# Now need to make sure every hextet is 4 lower case characters.
# If a hextet is < 4 characters, we've got missing leading 0's.
ret_ip = []
for hextet in new_ip:
ret_ip.append(('0' * (4 - len(hextet)) + hextet).lower())
return ':'.join(ret_ip)
# We've already got a longhand ip_str.
return ip_str
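A self-contained sketch of the same expansion (illustrative; unlike the method above it does not handle the IPv4-mapped `::ffff:a.b.c.d` form):

```python
def explode_shorthand(ip_str):
    # Expand '::' into the missing zero hextets, then zero-pad each
    # hextet to 4 lowercase hex digits.
    if '::' in ip_str:
        head, tail = ip_str.split('::')
        head = head.split(':') if head else []
        tail = tail.split(':') if tail else []
        fields = head + ['0000'] * (8 - len(head) - len(tail)) + tail
    else:
        fields = ip_str.split(':')
    return ':'.join(f.lower().zfill(4) for f in fields)
```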
def _is_valid_ip(self, ip_str):
"""Ensure we have a valid IPv6 address.
Probably not as exhaustive as it should be.
Args:
ip_str: A string, the IPv6 address.
Returns:
A boolean, True if this is a valid IPv6 address.
"""
# We need to have at least one ':'.
if ':' not in ip_str:
return False
# We can only have one '::' shortener.
if ip_str.count('::') > 1:
return False
# '::' should be encompassed by start, digits or end.
if ':::' in ip_str:
return False
# A single colon can neither start nor end an address.
if ((ip_str.startswith(':') and not ip_str.startswith('::')) or
(ip_str.endswith(':') and not ip_str.endswith('::'))):
return False
# If we have no concatenation, we need to have 8 fields with 7 ':'.
if '::' not in ip_str and ip_str.count(':') != 7:
# We might have an IPv4 mapped address.
if ip_str.count('.') != 3:
return False
ip_str = self._explode_shorthand_ip_string(ip_str)
# Now that we have that all squared away, let's check that each of the
# hextets are between 0x0 and 0xFFFF.
for hextet in ip_str.split(':'):
if hextet.count('.') == 3:
# If we have an IPv4 mapped address, the IPv4 portion has to
# be at the end of the IPv6 portion.
                if ip_str.split(':')[-1] != hextet:
return False
try:
IPv4Network(hextet)
except AddressValueError:
return False
else:
try:
# a value error here means that we got a bad hextet,
# something like 0xzzzz
if int(hextet, 16) < 0x0 or int(hextet, 16) > 0xFFFF:
return False
except ValueError:
return False
return True
def _is_shorthand_ip(self, ip_str=None):
"""Determine if the address is shortened.
Args:
ip_str: A string, the IPv6 address.
Returns:
A boolean, True if the address is shortened.
"""
if ip_str.count('::') == 1:
return True
return False
@property
def max_prefixlen(self):
return self._max_prefixlen
@property
def packed(self):
"""The binary representation of this address."""
return struct.pack('!QQ', self._ip >> 64, self._ip & (2**64 - 1))
@property
def version(self):
return self._version
@property
def is_multicast(self):
"""Test if the address is reserved for multicast use.
Returns:
A boolean, True if the address is a multicast address.
See RFC 2373 2.7 for details.
"""
return self in IPv6Network('ff00::/8')
@property
def is_reserved(self):
"""Test if the address is otherwise IETF reserved.
Returns:
A boolean, True if the address is within one of the
reserved IPv6 Network ranges.
"""
return (self in IPv6Network('::/8') or
self in IPv6Network('100::/8') or
self in IPv6Network('200::/7') or
self in IPv6Network('400::/6') or
self in IPv6Network('800::/5') or
self in IPv6Network('1000::/4') or
self in IPv6Network('4000::/3') or
self in IPv6Network('6000::/3') or
self in IPv6Network('8000::/3') or
self in IPv6Network('A000::/3') or
self in IPv6Network('C000::/3') or
self in IPv6Network('E000::/4') or
self in IPv6Network('F000::/5') or
self in IPv6Network('F800::/6') or
self in IPv6Network('FE00::/9'))
@property
def is_unspecified(self):
"""Test if the address is unspecified.
Returns:
A boolean, True if this is the unspecified address as defined in
RFC 2373 2.5.2.
"""
return (self == IPv6Network('::') or self == IPv6Address('::'))
@property
def is_loopback(self):
"""Test if the address is a loopback address.
Returns:
A boolean, True if the address is a loopback address as defined in
RFC 2373 2.5.3.
"""
return (self == IPv6Network('::1') or self == IPv6Address('::1'))
@property
def is_link_local(self):
"""Test if the address is reserved for link-local.
Returns:
A boolean, True if the address is reserved per RFC 4291.
"""
return self in IPv6Network('fe80::/10')
@property
def is_site_local(self):
"""Test if the address is reserved for site-local.
Note that the site-local address space has been deprecated by RFC 3879.
Use is_private to test if this address is in the space of unique local
addresses as defined by RFC 4193.
Returns:
A boolean, True if the address is reserved per RFC 3513 2.5.6.
"""
return self in IPv6Network('fec0::/10')
@property
def is_private(self):
"""Test if this address is allocated for private networks.
Returns:
A boolean, True if the address is reserved per RFC 4193.
"""
return self in IPv6Network('fc00::/7')
@property
def ipv4_mapped(self):
"""Return the IPv4 mapped address.
Returns:
If the IPv6 address is a v4 mapped address, return the
IPv4 mapped address. Return None otherwise.
"""
hextets = self._explode_shorthand_ip_string().split(':')
if hextets[-3] != 'ffff':
return None
try:
return IPv4Address(int('%s%s' % (hextets[-2], hextets[-1]), 16))
except AddressValueError:
return None
class IPv6Address(_BaseV6, _BaseIP):
"""Represent and manipulate single IPv6 Addresses.
"""
def __init__(self, address):
"""Instantiate a new IPv6 address object.
Args:
address: A string or integer representing the IP
Additionally, an integer can be passed, so
IPv6Address('2001:4860::') ==
IPv6Address(42541956101370907050197289607612071936L).
or, more generally
IPv6Address(IPv6Address('2001:4860::')._ip) ==
IPv6Address('2001:4860::')
Raises:
AddressValueError: If address isn't a valid IPv6 address.
"""
_BaseIP.__init__(self, address)
_BaseV6.__init__(self, address)
# Efficient constructor from integer.
if isinstance(address, (int, long)):
self._ip = address
if address < 0 or address > self._ALL_ONES:
raise AddressValueError(address)
return
# Constructing from a packed address
if _compat_has_real_bytes:
if isinstance(address, bytes) and len(address) == 16:
tmp = struct.unpack('!QQ', address)
self._ip = (tmp[0] << 64) | tmp[1]
return
# Assume input argument to be string or any object representation
# which converts into a formatted IP string.
addr_str = str(address)
if not addr_str:
raise AddressValueError('')
self._ip = self._ip_int_from_string(addr_str)
class IPv6Network(_BaseV6, _BaseNet):
"""This class represents and manipulates 128-bit IPv6 networks.
Attributes: [examples for IPv6('2001:658:22A:CAFE:200::1/64')]
.ip: IPv6Address('2001:658:22a:cafe:200::1')
.network: IPv6Address('2001:658:22a:cafe::')
.hostmask: IPv6Address('::ffff:ffff:ffff:ffff')
.broadcast: IPv6Address('2001:658:22a:cafe:ffff:ffff:ffff:ffff')
.netmask: IPv6Address('ffff:ffff:ffff:ffff::')
.prefixlen: 64
"""
def __init__(self, address, strict=False):
"""Instantiate a new IPv6 Network object.
Args:
address: A string or integer representing the IPv6 network or the IP
and prefix/netmask.
'2001:4860::/128'
'2001:4860:0000:0000:0000:0000:0000:0000/128'
'2001:4860::'
are all functionally the same in IPv6. That is to say,
failing to provide a subnetmask will create an object with
a mask of /128.
Additionally, an integer can be passed, so
IPv6Network('2001:4860::') ==
IPv6Network(42541956101370907050197289607612071936L).
or, more generally
IPv6Network(IPv6Network('2001:4860::')._ip) ==
IPv6Network('2001:4860::')
strict: A boolean. If true, ensure that we have been passed
            a true network address, e.g. 192.168.1.0/24, and not an
            IP address on a network, e.g. 192.168.1.1/24.
Raises:
AddressValueError: If address isn't a valid IPv6 address.
NetmaskValueError: If the netmask isn't valid for
an IPv6 address.
ValueError: If strict was True and a network address was not
supplied.
"""
_BaseNet.__init__(self, address)
_BaseV6.__init__(self, address)
# Efficient constructor from integer.
if isinstance(address, (int, long)):
self._ip = address
self.ip = IPv6Address(self._ip)
self._prefixlen = 128
self.netmask = IPv6Address(self._ALL_ONES)
if address < 0 or address > self._ALL_ONES:
raise AddressValueError(address)
return
# Constructing from a packed address
if _compat_has_real_bytes:
if isinstance(address, bytes) and len(address) == 16:
tmp = struct.unpack('!QQ', address)
self._ip = (tmp[0] << 64) | tmp[1]
self.ip = IPv6Address(self._ip)
self._prefixlen = 128
self.netmask = IPv6Address(self._ALL_ONES)
return
# Assume input argument to be string or any object representation
# which converts into a formatted IP prefix string.
addr = str(address).split('/')
if len(addr) > 2:
raise AddressValueError(address)
if not self._is_valid_ip(addr[0]):
raise AddressValueError(addr[0])
if len(addr) == 2:
if self._is_valid_netmask(addr[1]):
self._prefixlen = int(addr[1])
else:
raise NetmaskValueError(addr[1])
else:
self._prefixlen = 128
self.netmask = IPv6Address(self._ip_int_from_prefix(self._prefixlen))
self._ip = self._ip_int_from_string(addr[0])
self.ip = IPv6Address(self._ip)
if strict:
if self.ip != self.network:
raise ValueError('%s has host bits set' %
self.ip)
def _is_valid_netmask(self, prefixlen):
"""Verify that the netmask/prefixlen is valid.
Args:
prefixlen: A string, the netmask in prefix length format.
Returns:
A boolean, True if the prefix represents a valid IPv6
netmask.
"""
try:
prefixlen = int(prefixlen)
except ValueError:
return False
return 0 <= prefixlen <= 128
@property
def with_netmask(self):
        return self.with_prefixlen

# end of repoze/what/plugins/ip/ipaddr.py
from repoze.what.adapters import BaseSourceAdapter
__version__ = '0.1.1'
__author__ = 'Ryan Senkbeil <rsenk330@gmail.com>'
__license__ = 'GPL v3'
class MongoBaseSourceAdapter(BaseSourceAdapter):
"""Base source adapter for MongoDB."""
def __init__(self, db):
self._db = db
self._translations = {}
@property
def translations(self):
"""Gets the database translations.
:returns: A dictionary containing the database translations.
"""
return self._translations
@translations.setter
def translations(self, translations):
"""Sets the database translations.
This should be a dictionary and can contain the following keys:
* ``username``: The field containing the user's username. Default: `username`.
* ``usergroups``: The field containing the groups of an individual user. Default: `groups`.
* ``grouppermissions``: The field containing the permissions of an individual group. Default: `permissions`.
        * ``permissionname``: The field containing the permission names. Default: `name`.
* ``groupsname``: The field containing the group names. Default: `name`.
* ``groupdoc``: The document containing the groups. Default: `auth.groups`.
* ``permissiondoc``: The document containing the permissions. Default: `auth.permissions`.
* ``userdoc``: The document containing the users. Default: `auth.users`.
:param dict translations: The database translations dictionary.
"""
self._translations = translations
def _username(self):
"""Gets the User's 'username' field name."""
return self._translations.get('username', 'username')
def _usergroups(self):
"""Gets the User's 'groups' field name."""
return self._translations.get('usergroups', 'groups')
def _grouppermissions(self):
"""Gets the Group's 'permissions' field name."""
return self._translations.get('grouppermissions', 'permissions')
def _permissionname(self):
"""Gets the Permission's 'name' field name."""
return self._translations.get('permissionname', 'name')
def _groupsname(self):
"""Gets the Group's 'name' field name."""
return self._translations.get('groupsname', 'name')
def _groupdoc(self):
"""Gets the Group Document name."""
return self._translations.get('groupdoc', 'auth.groups')
def _permissiondoc(self):
"""Gets the Permission Document name."""
return self._translations.get('permissiondoc', 'auth.permissions')
def _userdoc(self):
"""Gets the User Document name."""
return self._translations.get('userdoc', 'auth.users')
class MongoGroupAdapter(MongoBaseSourceAdapter):
"""Group adapter for MongoDB."""
def _get_section_items(self, section):
"""Gets a list of permissions in the group specified by `section`."""
group = self._db[self._groupdoc()].find_one({self._groupsname(): section})
if group:
return [document[self._permissionname()] for document in group.get(self._grouppermissions(), [])]
else:
return []
def _find_sections(self, hint):
"""Gets a list of groups that the user specified by `hint` belongs to."""
user = self._db[self._userdoc()].find_one({self._username(): hint['repoze.what.userid']})
if user:
return [document[self._groupsname()] for document in user.get(self._usergroups(), [])]
else:
return []
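The adapters above assume a particular document shape. A minimal sketch of the group lookup against plain dictionaries instead of pymongo collections (the field and document names are the defaults listed in the translations docstring):

```python
def groups_for_user(db, userid):
    # db: {'auth.users': [...]}; each user document is assumed to hold a
    # 'groups' list of {'name': ...} sub-documents (the adapter defaults).
    user = next((u for u in db.get('auth.users', [])
                 if u.get('username') == userid), None)
    if user is None:
        return []
    return [g['name'] for g in user.get('groups', [])]
```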
def _get_all_sections(self):
raise NotImplementedError()
def _include_items(self, section, items):
raise NotImplementedError()
def _item_is_included(self, section, item):
raise NotImplementedError()
def _section_exists(self, section):
raise NotImplementedError()
class MongoPermissionAdapter(MongoBaseSourceAdapter):
"""Permission adapter for MongoDB."""
def _find_sections(self, hint):
"""Gets a list of permissions in the group specified by `hint`."""
group = self._db[self._groupdoc()].find_one({self._groupsname(): hint})
if group:
return [document[self._permissionname()] for document in group.get(self._grouppermissions(), [])]
else:
return []
def _get_all_sections(self):
raise NotImplementedError()
def _get_section_items(self, section):
raise NotImplementedError()
def _include_items(self, section, items):
raise NotImplementedError()
def _item_is_included(self, section, item):
raise NotImplementedError()
def _section_exists(self, section):
        raise NotImplementedError()

# end of repoze/what/plugins/mongodb/adapters.py
from dateutil.parser import parse as date_parse
from dateutil.tz import tzutc
from datetime import datetime
import re
VERIFY_KEY = 'SSL_CLIENT_VERIFY'
VALIDITY_START_KEY = 'SSL_CLIENT_V_START'
VALIDITY_END_KEY = 'SSL_CLIENT_V_END'
# OpenSSL-style DNs are separated by '/', but values may themselves contain
# arbitrary escaped characters.
# Thanks to David Esperanza
_DN_SSL_REGEX = re.compile('(/\\s*\\w+=)')
_TZ_UTC = tzutc()
__all__ = ['parse_dn', 'verify_certificate', 'VERIFY_KEY',
'VALIDITY_START_KEY', 'VALIDITY_END_KEY']
def parse_dn(dn):
"""
Parses a OpenSSL-like distinguished name into a dictionary. The keys are
the attribute types and the values are lists (multiple values for that
type).
"Multi-values" are not supported (e.g., O=company+CN=name).
:param dn: The distinguished name.
:raise ValueError: When you input an invalid or empty distinguished name.
"""
parsed = {}
split_string = _DN_SSL_REGEX.split(dn)
if split_string[0] == '':
split_string.pop(0)
for i in range(0, len(split_string), 2):
try:
type_, value = split_string[i][1:-1], split_string[i + 1]
except IndexError:
raise ValueError('Invalid DN')
if len(value) == 0:
raise ValueError('Invalid DN: Invalid value')
if type_ not in parsed:
parsed[type_] = []
parsed[type_].append(value)
if len(parsed) == 0:
raise ValueError('Invalid DN: Empty DN')
return parsed
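A sketch of what `parse_dn` does with the regex above (illustrative; error handling is omitted, and the attribute types shown in the tests are only examples):

```python
import re

_DN_RE = re.compile(r'(/\s*\w+=)')

def parse_dn_sketch(dn):
    # Split on '/<type>=' markers; even slots are the markers (with '/'
    # and '=' trimmed off), odd slots are the values.
    parts = _DN_RE.split(dn)
    if parts and parts[0] == '':
        parts.pop(0)
    parsed = {}
    for i in range(0, len(parts), 2):
        type_ = parts[i].strip('/= ')
        parsed.setdefault(type_, []).append(parts[i + 1])
    return parsed
```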
def verify_certificate(environ, verify_key, validity_start_key,
validity_end_key):
"""
    Checks whether the client certificate is valid. Start and end dates are
    optional, as not all SSL modules provide that information.
:param environ: The WSGI environment.
:param verify_key: The key for the value in the environment where it was
stored if the certificate is valid or not.
:param validity_start_key: The key for the value in the environment with
the encoded datetime that indicates the start of the validity range.
:param validity_end_key: The key for the value in the environment with the
encoded datetime that indicates the end of the validity range.
"""
verified = environ.get(verify_key)
validity_start = environ.get(validity_start_key)
validity_end = environ.get(validity_end_key)
if verified != 'SUCCESS':
return False
if validity_start is None or validity_end is None:
return True
validity_start = date_parse(validity_start)
validity_end = date_parse(validity_end)
if validity_start.tzinfo != _TZ_UTC or validity_end.tzinfo != _TZ_UTC:
# Can't consider other timezones
return False
now = datetime.utcnow().replace(tzinfo=_TZ_UTC)
    return validity_start <= now <= validity_end

# end of repoze/who/plugins/x509/utils.py
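The validity-window logic in `verify_certificate` reduces to the check below (a sketch using the standard library's `timezone.utc` rather than `dateutil`'s `tzutc`):

```python
from datetime import datetime, timezone

def in_validity_window(start, end, now=None):
    # Both bounds must be UTC; other timezones are rejected outright,
    # mirroring the check in verify_certificate above.
    if start.tzinfo != timezone.utc or end.tzinfo != timezone.utc:
        return False
    if now is None:
        now = datetime.now(timezone.utc)
    return start <= now <= end
```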
from zope.interface import implements as zope_implements
from repoze.who.interfaces import IIdentifier
from .utils import *
__all__ = ['X509Identifier']
class X509Identifier(object):
"""
IIdentifier for HTTP requests with client certificates.
"""
zope_implements(IIdentifier)
classifications = { IIdentifier: ['browser'] }
def __init__(self, subject_dn_key, login_field='Email',
multiple_values=False, verify_key=VERIFY_KEY,
start_key=VALIDITY_START_KEY, end_key=VALIDITY_END_KEY,
classifications=None):
"""
:param subject_dn_key: The WSGI environment key for the subject
distinguished name (it also works as the base for the server
variables).
        :param login_field: The field of the distinguished name that will be
            used to recognize the user.
:param multiple_values: Determines if we allow to have multiple values
in the ``login_field``.
:param verify_key: The WSGI environment key where it can check if the
client certificate is valid.
:param start_key: The WSGI environment key with the encoded datetime of
the start of the validity range.
:param end_key: The WSGI environment key with the encoded datetime of
the end of the validity range.
:param classifications: The ``repoze.who`` classifications for this
identifier (used with the classifier).
"""
self.subject_dn_key = subject_dn_key
self.login_field = login_field
self.verify_key = verify_key
self.start_key = start_key
self.end_key = end_key
self.multiple_values = multiple_values
if classifications is not None:
self.classifications[IIdentifier] = classifications
# IIdentifier
def identify(self, environ):
"""
Gets the credentials for this request.
:param environ: The WSGI environment.
"""
subject_dn = environ.get(self.subject_dn_key)
if subject_dn is None or not verify_certificate(
environ,
self.verify_key,
self.start_key,
self.end_key
):
return None
creds = {'subject': subject_dn }
# First let's try with Apache-like var name, if None then parse the DN
key = self.subject_dn_key + '_' + self.login_field
login = environ.get(key)
if login is None:
try:
login = parse_dn(subject_dn)[self.login_field]
            except (ValueError, KeyError):  # bad DN or missing field
login = None
else:
values = []
try:
n = 0
while True:
values.append(environ[key + '_' + str(n)])
n += 1
except KeyError:
pass
if n == 0:
login = [login]
else:
login = values
if login is None:
return None
if not self.multiple_values and len(login) > 1:
return None
elif not self.multiple_values:
creds['login'] = login[0]
else:
creds['login'] = login
return creds
# IIdentifier
def forget(self, environ, identity):
"""
Not used. We can't forget because it is client certificated based.
"""
# We can't forget
return None
# IIdentifier
def remember(self, environ, identity):
"""
Not used. The browser always remembers the client credentials.
"""
# We always remember as it is provided by the server
        return None

# end of repoze/who/plugins/x509/__init__.py
import hashlib
import http.cookies
import time as time_mod
import urllib.parse
from repoze.who._helpers import encodestring
DEFAULT_DIGEST = hashlib.md5
def _exclude_separator(separator, value, fieldname):
if isinstance(value, bytes):
separator = separator.encode("ascii")
if separator in value:
raise ValueError(
"{} may not contain '{}'".format(fieldname, separator)
)
class AuthTicket(object):
"""
This class represents an authentication token. You must pass in
the shared secret, the userid, and the IP address. Optionally you
can include tokens (a list of strings, representing role names),
'user_data', which is arbitrary data available for your own use in
later scripts. Lastly, you can override the timestamp, cookie name,
whether to secure the cookie and the digest algorithm (for details
look at ``AuthTKTMiddleware``).
Once you provide all the arguments, use .cookie_value() to
generate the appropriate authentication ticket. .cookie()
generates a Cookie object, the str() of which is the complete
cookie header to be sent.
CGI usage::
        token = auth_tkt.AuthTicket('sharedsecret', 'username',
os.environ['REMOTE_ADDR'], tokens=['admin'])
print('Status: 200 OK')
print('Content-type: text/html')
print(token.cookie())
print("")
... redirect HTML ...
Webware usage::
        token = auth_tkt.AuthTicket('sharedsecret', 'username',
self.request().environ()['REMOTE_ADDR'], tokens=['admin'])
self.response().setCookie('auth_tkt', token.cookie_value())
Be careful not to do an HTTP redirect after login; use meta
refresh or Javascript -- some browsers have bugs where cookies
aren't saved when set on a redirect.
"""
def __init__(self, secret, userid, ip, tokens=(), user_data='',
time=None, cookie_name='auth_tkt',
secure=False, digest_algo=DEFAULT_DIGEST):
self.secret = secret
_exclude_separator('!', userid, "'userid'")
self.userid = userid
self.ip = ip
for token in tokens:
_exclude_separator(',', token, "'token' values")
_exclude_separator('!', token, "'token' values")
self.tokens = ','.join(tokens)
_exclude_separator('!', user_data, "'user_data'")
self.user_data = user_data
if time is None:
self.time = time_mod.time()
else:
self.time = time
self.cookie_name = cookie_name
self.secure = secure
if isinstance(digest_algo, str):
# correct specification of digest from hashlib or fail
self.digest_algo = getattr(hashlib, digest_algo)
else:
self.digest_algo = digest_algo
def digest(self):
return calculate_digest(
self.ip, self.time, self.secret, self.userid, self.tokens,
self.user_data, self.digest_algo)
def cookie_value(self):
v = '%s%08x%s!' % (self.digest(), int(self.time),
urllib.parse.quote(self.userid))
if self.tokens:
v += self.tokens + '!'
v += self.user_data
return v
def cookie(self):
c = http.cookies.SimpleCookie()
c_val = encodestring(self.cookie_value())
c_val = c_val.strip().replace('\n', '')
c[self.cookie_name] = c_val
c[self.cookie_name]['path'] = '/'
if self.secure:
c[self.cookie_name]['secure'] = 'true'
return c
class BadTicket(Exception):
"""
Exception raised when a ticket can't be parsed. If we get
far enough to determine what the expected digest should have
been, expected is set. This should not be shown by default,
but can be useful for debugging.
"""
def __init__(self, msg, expected=None):
self.expected = expected
Exception.__init__(self, msg)
def parse_ticket(secret, ticket, ip, digest_algo=DEFAULT_DIGEST):
"""
Parse the ticket, returning (timestamp, userid, tokens, user_data).
If the ticket cannot be parsed, ``BadTicket`` will be raised with
an explanation.
"""
if isinstance(digest_algo, str):
# correct specification of digest from hashlib or fail
digest_algo = getattr(hashlib, digest_algo)
digest_hexa_size = digest_algo().digest_size * 2
ticket = ticket.strip('"')
digest = ticket[:digest_hexa_size]
try:
timestamp = int(ticket[digest_hexa_size:digest_hexa_size + 8], 16)
except ValueError as e:
raise BadTicket('Timestamp is not a hex integer: %s' % e)
try:
userid, data = ticket[digest_hexa_size + 8:].split('!', 1)
except ValueError:
raise BadTicket('userid is not followed by !')
userid = urllib.parse.unquote(userid)
if '!' in data:
tokens, user_data = data.split('!', 1)
else:
# @@: Is this the right order?
tokens = ''
user_data = data
expected = calculate_digest(ip, timestamp, secret,
userid, tokens, user_data,
digest_algo)
if expected != digest:
raise BadTicket('Digest signature is not correct',
expected=(expected, digest))
tokens = tokens.split(',')
return (timestamp, userid, tokens, user_data)
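The cookie layout that `parse_ticket` decodes — hex digest, eight hex digits of timestamp, URL-quoted userid, then optional comma-separated tokens and free-form user data, each terminated by `'!'` — can be split without signature verification (illustrative; `digest_size=32` assumes an md5 digest, and the userid is left quoted):

```python
def split_ticket(ticket, digest_size=32):
    # Layout-only split; no digest check and no URL-unquoting.
    digest = ticket[:digest_size]
    timestamp = int(ticket[digest_size:digest_size + 8], 16)
    userid, rest = ticket[digest_size + 8:].split('!', 1)
    if '!' in rest:
        tokens, user_data = rest.split('!', 1)
    else:
        tokens, user_data = '', rest
    return digest, timestamp, userid, tokens.split(','), user_data
```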
def calculate_digest(ip, timestamp, secret, userid, tokens, user_data,
digest_algo):
secret = maybe_encode(secret)
userid = maybe_encode(userid)
tokens = maybe_encode(tokens)
user_data = maybe_encode(user_data)
digest0 = digest_algo(
encode_ip_timestamp(ip, timestamp) + secret + userid + b'\0'
+ tokens + b'\0' + user_data).hexdigest()
digest = digest_algo(maybe_encode(digest0) + secret).hexdigest()
return digest
if type(chr(1)) == type(b''): #pragma NO COVER Python < 3.0
def ints2bytes(ints):
return b''.join(map(chr, ints))
else: #pragma NO COVER Python >= 3.0
def ints2bytes(ints):
return bytes(ints)
def encode_ip_timestamp(ip, timestamp):
ip_chars = ints2bytes(map(int, ip.split('.')))
t = int(timestamp)
ts = ((t & 0xff000000) >> 24,
(t & 0xff0000) >> 16,
(t & 0xff00) >> 8,
t & 0xff)
ts_chars = ints2bytes(ts)
return ip_chars + ts_chars
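`encode_ip_timestamp` packs the dotted-quad IP and the low 32 bits of the timestamp into 8 big-endian bytes; an equivalent sketch using `struct`:

```python
import struct

def encode_ip_timestamp_sketch(ip, timestamp):
    a, b, c, d = (int(part) for part in ip.split('.'))
    # '!4BI' = four unsigned bytes (the IP octets) followed by a
    # big-endian unsigned 32-bit timestamp.
    return struct.pack('!4BI', a, b, c, d, int(timestamp) & 0xFFFFFFFF)
```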
def maybe_encode(s, encoding='utf8'):
if not isinstance(s, type(b'')):
s = s.encode(encoding)
return s
# Original Paste AuthTktMiddleware stripped: we don't have a use for it.

# end of repoze/who/_auth_tkt.py
from zope.interface import Interface
class IAPIFactory(Interface):
def __call__(environ):
""" environ -> IRepozeWhoAPI
"""
class IAPI(Interface):
""" Facade for stateful invocation of underlying plugins.
"""
def authenticate():
""" -> {identity}
o Return an authenticated identity mapping, extracted from the
request environment.
o If no identity can be authenticated, return None.
o Identity will include at least a 'repoze.who.userid' key,
as well as any keys added by metadata plugins.
"""
def challenge(status='403 Forbidden', app_headers=()):
""" -> wsgi application
o Return a WSGI application which represents a "challenge"
(request for credentials) in response to the current request.
"""
def remember(identity=None):
""" -> [headers]
O Return a sequence of response headers which suffice to remember
the given identity.
o If 'identity' is not passed, use the identity in the environment.
"""
def forget(identity=None):
""" -> [headers]
O Return a sequence of response headers which suffice to destroy
any credentials used to establish an identity.
o If 'identity' is not passed, use the identity in the environment.
"""
def login(credentials, identifier_name=None):
""" -> (identity, headers)
o This is an API for browser-based application login forms.
o If 'identifier_name' is passed, use it to look up the identifier;
          otherwise, use the first configured identifier.
o Attempt to authenticate 'credentials' as though the identifier
had extracted them.
        o On success, 'identity' will be an authenticated mapping, and 'headers'
will be "remember" headers.
o On failure, 'identity' will be None, and response_headers will be
"forget" headers.
"""
def logout(identifier_name=None):
""" -> (headers)
o This is an API for browser-based application logout.
o If 'identifier_name' is passed, use it to look up the identifier;
          otherwise, use the first configured identifier.
o Returned headers will be "forget" headers.
"""
class IPlugin(Interface):
pass
class IRequestClassifier(IPlugin):
""" On ingress: classify a request.
"""
def __call__(environ):
""" environ -> request classifier string
This interface is responsible for returning a string
value representing a request classification.
o 'environ' is the WSGI environment.
"""
class IChallengeDecider(IPlugin):
""" On egress: decide whether a challenge needs to be presented
to the user.
"""
def __call__(environ, status, headers):
""" args -> True | False
o 'environ' is the WSGI environment.
o 'status' is the HTTP status as returned by the downstream
WSGI application.
o 'headers' are the headers returned by the downstream WSGI
application.
This interface is responsible for returning True if
a challenge needs to be presented to the user, False otherwise.
"""
class IIdentifier(IPlugin):
"""
On ingress: Extract credentials from the WSGI environment and
turn them into an identity.
On egress (remember): Conditionally set information in the response headers
allowing the remote system to remember this identity.
On egress (forget): Conditionally set information in the response
headers allowing the remote system to forget this identity (during
a challenge).
"""
def identify(environ):
""" On ingress:
environ -> { k1 : v1
, ...
, kN : vN
} | None
o 'environ' is the WSGI environment.
o If credentials are found, the returned identity mapping will
contain an arbitrary set of key/value pairs. If the
identity is based on a login and password, the environment
is recommended to contain at least 'login' and 'password'
keys as this provides compatibility between the plugin and
existing authenticator plugins. If the identity can be
'preauthenticated' (e.g. if the userid is embedded in the
identity, such as when we're using ticket-based
authentication), the plugin should set the userid in the
special 'repoze.who.userid' key; no authenticators will be
          asked to authenticate the identity thereafter.
o Return None to indicate that the plugin found no appropriate
credentials.
        o Only IIdentifier plugins which match one of the current
request's classifications will be asked to perform
identification.
o An identifier plugin is permitted to add a key to the
environment named 'repoze.who.application', which should be
an arbitrary WSGI application. If an identifier plugin does
so, this application is used instead of the downstream
application set up within the middleware. This feature is
useful for identifier plugins which need to perform
redirection to obtain credentials. If two identifier
plugins add a 'repoze.who.application' WSGI application to
          the environment, the last one consulted will "win".
"""
def remember(environ, identity):
""" On egress (no challenge required):
args -> [ (header-name, header-value), ...] | None
Return a list of headers suitable for allowing the requesting
system to remember the identification information (e.g. a
Set-Cookie header). Return None if no headers need to be set.
These headers will be appended to any headers returned by the
downstream application.
"""
def forget(environ, identity):
""" On egress (challenge required):
args -> [ (header-name, header-value), ...] | None
Return a list of headers suitable for allowing the requesting
system to forget the identification information (e.g. a
Set-Cookie header with an expires date in the past). Return
None if no headers need to be set. These headers will be
included in the response provided by the challenge app.
"""
class IAuthenticator(IPlugin):
""" On ingress: validate the identity and return a user id or None.
"""
def authenticate(environ, identity):
""" identity -> 'userid' | None
o 'environ' is the WSGI environment.
o 'identity' will be a dictionary (with arbitrary keys and
values).
o The IAuthenticator should return a single user id (optimally
a string) if the identity can be authenticated. If the
          identity cannot be authenticated, the IAuthenticator should
return None.
Each instance of a registered IAuthenticator plugin that
matches the request classifier will be called N times during a
single request, where N is the number of identities found by
any IIdentifierPlugin instances.
An authenticator must not raise an exception if it is provided
an identity dictionary that it does not understand (e.g. if it
presumes that 'login' and 'password' are keys in the
dictionary, it should check for the existence of these keys
before attempting to do anything; if they don't exist, it
should return None).
An authenticator is permitted to add extra keys to the 'identity'
dictionary (e.g., to save metadata from a database query, rather
than requiring a separate query from an IMetadataProvider plugin).
"""
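To make the contract above concrete, here is a minimal stand-alone sketch of an authenticator in plain Python 3, with the zope.interface declaration omitted. The `USERS` store and `DummyAuthenticator` name are invented for illustration and are not part of repoze.who:

```python
# Hypothetical in-memory credential store, for illustration only.
USERS = {"alice": "wonderland"}

class DummyAuthenticator(object):
    """A minimal IAuthenticator-style plugin (interface declaration omitted)."""

    def authenticate(self, environ, identity):
        # Per the contract: tolerate identity dicts we do not understand,
        # returning None instead of raising.
        if "login" not in identity or "password" not in identity:
            return None
        login = identity["login"]
        if USERS.get(login) == identity["password"]:
            return login  # the authenticated user id
        return None

plugin = DummyAuthenticator()
```

Note that returning None for an unrecognized identity dict (rather than raising) is what lets several authenticators coexist in one middleware stack.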
class IChallenger(IPlugin):
""" On egress: Conditionally initiate a challenge to the user to
provide credentials.
Only challenge plugins which match one of the current
response's classifications will be asked to perform a
challenge.
"""
def challenge(environ, status, app_headers, forget_headers):
""" args -> WSGI application or None
o 'environ' is the WSGI environment.
o 'status' is the status written into start_response by the
downstream application.
o 'app_headers' is the headers list written into start_response by the
downstream application.
o 'forget_headers' is a list of headers which must be passed
back in the response in order to perform credentials reset
(logout). These come from the 'forget' method of
IIdentifier plugin used to do the request's identification.
Examine the values passed in and return a WSGI application
(a callable which accepts environ and start_response as its
two positional arguments, a la PEP 333) which causes a
challenge to be performed. Return None to forego performing a
challenge.
"""
class IMetadataProvider(IPlugin):
"""On ingress: When an identity is authenticated, metadata
providers may scribble on the identity dictionary arbitrarily.
Return values from metadata providers are ignored.
"""
def add_metadata(environ, identity):
"""
Add metadata to the identity (which is a dictionary). One
value is always guaranteed to be in the dictionary when
add_metadata is called: 'repoze.who.userid', representing the
user id of the identity. Availability and composition of
other keys will depend on the identifier plugin which created
the identity.
"""
# --- end of file: repoze/who/interfaces.py (repoze.who 3.0.0b1) ---
import re
import base64
import wsgiref.util
from urlparse import urlparse
from hashlib import md5
# Regular expression matching a single param in the HTTP_AUTHORIZATION header.
# This is basically <name>=<value> where <value> can be an unquoted token,
# an empty quoted string, or a quoted string where the ending quote is *not*
# preceded by a backslash.
_AUTH_PARAM_RE = r'([a-zA-Z0-9_\-]+)=(([a-zA-Z0-9_\-]+)|("")|(".*[^\\]"))'
_AUTH_PARAM_RE = re.compile(r"^\s*" + _AUTH_PARAM_RE + r"\s*$")
# Regular expression matching an unescaped quote character.
_UNESC_QUOTE_RE = r'(^")|([^\\]")'
_UNESC_QUOTE_RE = re.compile(_UNESC_QUOTE_RE)
# Regular expression matching a backslash-escaped character.
_ESCAPED_CHAR = re.compile(r"\\.")
def parse_auth_header(value):
"""Parse an authorization header string into an identity dict.
This function can be used to parse the value from an Authorization
header into a dict of its constituent parameters. The auth scheme
name will be included under the key "scheme", and any other auth
creds will appear as keys in the dictionary.
For example, given the following auth header value:
'Digest realm="Sync", username=user1, response="123456"'
This function will return the following dict:
{"scheme": "Digest", "realm": "Sync",
"username": "user1", "response": "123456"}
"""
scheme, kvpairs_str = value.split(None, 1)
# Split the parameters string into individual key=value pairs.
# In the simple case we can just split by commas to get each pair.
# Unfortunately this will break if one of the values contains a comma.
# So if we find a component that isn't a well-formed key=value pair,
# then we stitch bits back onto the end of it until it is.
kvpairs = []
if kvpairs_str:
for kvpair in kvpairs_str.split(","):
if not kvpairs or _AUTH_PARAM_RE.match(kvpairs[-1]):
kvpairs.append(kvpair)
else:
kvpairs[-1] = kvpairs[-1] + "," + kvpair
if not _AUTH_PARAM_RE.match(kvpairs[-1]):
raise ValueError('Malformed auth parameters')
# Now we can just split by the equal-sign to get each key and value.
creds = {"scheme": scheme}
for kvpair in kvpairs:
(key, value) = kvpair.strip().split("=", 1)
# For quoted strings, remove quotes and backslash-escapes.
if value.startswith('"'):
value = value[1:-1]
if _UNESC_QUOTE_RE.search(value):
raise ValueError("Unescaped quote in quoted-string")
value = _ESCAPED_CHAR.sub(lambda m: m.group(0)[1], value)
creds[key] = value
return creds
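The comma-stitching logic above is subtle: a quoted value may itself contain a comma, so fragments are glued back together until each pair is well-formed. This condensed Python 3 re-implementation (a sketch, not the module itself, which targets Python 2 and also unescapes backslash sequences) shows it in action:

```python
import re

# Same pattern as the module: <name>=<token | "" | quoted-string>.
_AUTH_PARAM_RE = re.compile(
    r'^\s*([a-zA-Z0-9_\-]+)=(([a-zA-Z0-9_\-]+)|("")|(".*[^\\]"))\s*$')

def parse_auth_header(value):
    scheme, kvpairs_str = value.split(None, 1)
    kvpairs = []
    for kvpair in kvpairs_str.split(","):
        # Stitch fragments back together until the last pair is well-formed.
        if not kvpairs or _AUTH_PARAM_RE.match(kvpairs[-1]):
            kvpairs.append(kvpair)
        else:
            kvpairs[-1] += "," + kvpair
    creds = {"scheme": scheme}
    for kvpair in kvpairs:
        key, val = kvpair.strip().split("=", 1)
        if val.startswith('"'):
            val = val[1:-1]  # strip surrounding quotes (escape handling elided)
        creds[key] = val
    return creds

# The realm value contains a comma, so a naive split would break it apart.
creds = parse_auth_header('Digest realm="a,b", username=user1, response="123456"')
```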
def extract_digest_credentials(environ):
"""Extract digest credentials from the given request environment.
This function extracts the HTTP-Digest-Auth credentials from the given
request environment, performs some sanity checks, and returns them as
a dict. If the credentials are missing or invalid, None is returned.
"""
# Grab the auth credentials, if any.
authz = environ.get("HTTP_AUTHORIZATION")
if authz is None:
return None
# Parse out the dict of credentials.
try:
creds = parse_auth_header(authz)
except ValueError:
return None
if creds["scheme"].lower() != "digest":
return None
# Check that there's nothing broken or missing.
if not validate_digest_parameters(creds):
return None
# Check that the reported uri matches the request URI
if not validate_digest_uri(creds, environ):
return None
# Include extra information from the request itself.
creds["request-method"] = environ["REQUEST_METHOD"]
if creds.get("qop") == "auth-int":
creds["content-md5"] = environ["HTTP_CONTENT_MD5"]
return creds
def validate_digest_parameters(creds):
"""Validate that credentials contain valid digest-auth parameters.
This function provides a basic sanity-check on the given digest-auth
credentials. It checks that they're well-formed and are not missing
any parameters, but doesn't actually provide any authentication.
Returns True if the parameters are valid, False if not.
"""
# Check that we have all the basic information.
for key in ("username", "realm", "nonce", "uri", "response"):
if key not in creds:
return False
# Check for extra information required when "qop" is present.
if "qop" in creds:
for key in ("cnonce", "nc"):
if key not in creds:
return False
if creds["qop"] not in ("auth", "auth-int"):
return False
# RFC-2617 says the nonce-count must be an 8-char-long hex number.
# We enforce the length limit strictly since flooding the server with
# many large nonce-counts could cause a DOS via memory exhaustion.
if len(creds["nc"]) > 8:
return False
try:
int(creds["nc"], 16)
except ValueError:
return False
# Check that the algorithm, if present, is explicitly set to MD5.
if "algorithm" in creds and creds["algorithm"].lower() != "md5":
return False
# Looks good!
return True
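These checks can be transcribed directly into Python 3 for a quick self-test; the credential dicts below are invented for illustration:

```python
def validate_digest_parameters(creds):
    # Mandatory digest-auth fields.
    for key in ("username", "realm", "nonce", "uri", "response"):
        if key not in creds:
            return False
    if "qop" in creds:
        for key in ("cnonce", "nc"):
            if key not in creds:
                return False
        if creds["qop"] not in ("auth", "auth-int"):
            return False
        # Nonce-count: at most 8 characters, and must parse as hex.
        if len(creds["nc"]) > 8:
            return False
        try:
            int(creds["nc"], 16)
        except ValueError:
            return False
    if "algorithm" in creds and creds["algorithm"].lower() != "md5":
        return False
    return True

good = {"username": "u", "realm": "r", "nonce": "n", "uri": "/",
        "response": "x", "qop": "auth", "cnonce": "c", "nc": "00000001"}
oversized = dict(good, nc="000000001")  # 9 chars: rejected per the DOS note
```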
def validate_digest_uri(creds, environ, msie_hack=True):
"""Validate that the digest URI matches the request environment.
This is a helper function to check that digest-auth is being applied
to the correct URI. It matches the given request environment against
the URI specified in the digest auth credentials, returning True if
they are equivalent and False otherwise.
Older versions of MSIE are known to handle certain URIs incorrectly,
and this function includes a hack to work around this problem. To
disable it and slightly increase security, pass msie_hack=False.
"""
uri = creds["uri"]
req_uri = wsgiref.util.request_uri(environ)
if uri != req_uri:
p_req_uri = urlparse(req_uri)
if not p_req_uri.query:
if uri != p_req_uri.path:
return False
else:
if uri != "%s?%s" % (p_req_uri.path, p_req_uri.query):
# MSIE < 7 doesn't include the GET vars in the signed URI.
# Let them in, but don't give other user-agents a free ride.
if not msie_hack:
return False
if "MSIE" not in environ.get("HTTP_USER_AGENT", ""):
return False
if uri != p_req_uri.path:
return False
return True
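The same matching rules can be sketched in Python 3 with the request URI passed in directly, rather than rebuilt from a WSGI environ as the module does (a simplification for illustration):

```python
from urllib.parse import urlparse

def validate_digest_uri(uri, req_uri, user_agent="", msie_hack=True):
    # 'req_uri' is passed in directly; the module derives it via wsgiref.
    if uri == req_uri:
        return True
    p = urlparse(req_uri)
    if not p.query:
        return uri == p.path
    if uri == "%s?%s" % (p.path, p.query):
        return True
    # MSIE < 7 signs the path without the query string.
    return msie_hack and "MSIE" in user_agent and uri == p.path
```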
def calculate_pwdhash(username, password, realm):
"""Calculate the password hash used for digest auth.
This function takes the username, password and realm and calculates
the password hash (aka "HA1") used in the digest-auth protocol.
It assumes that the hash algorithm is MD5.
"""
data = "%s:%s:%s" % (username, realm, password)
return md5(data).hexdigest()
def calculate_reqhash(creds):
"""Calculate the request hash used for digest auth.
This function takes the digest auth credentials and calculates the
request hash (aka "HA2") used in the digest-auth protocol. It assumes
that the hash algorithm is MD5.
"""
method = creds["request-method"]
uri = creds["uri"]
qop = creds.get("qop")
# For qop="auth" or unspecified, we just hash the method and uri.
if qop in (None, "auth"):
data = "%s:%s" % (method, uri)
# For qop="auth-int" we also include the md5 of the entity body.
# We assume that a Content-MD5 header has been sent and is being
# checked by some other layer in the stack.
elif qop == "auth-int":
content_md5 = creds["content-md5"]
content_md5 = base64.b64decode(content_md5)
data = "%s:%s:%s" % (method, uri, content_md5)
# No other qop values are recognised.
else:
raise ValueError("unrecognised qop value: %r" % (qop,))
return md5(data).hexdigest()
def calculate_digest_response(creds, pwdhash=None, password=None):
"""Calculate the expected response to a digest challenge.
Given the digest challenge credentials and the user's password or
password hash, this function calculates the expected digest response
according to RFC-2617. It assumes that the hash algorithm is MD5.
"""
username = creds["username"]
realm = creds["realm"]
if pwdhash is None:
if password is None:
raise ValueError("must provide either 'pwdhash' or 'password'")
pwdhash = calculate_pwdhash(username, password, realm)
reqhash = calculate_reqhash(creds)
qop = creds.get("qop")
if qop is None:
data = "%s:%s:%s" % (pwdhash, creds["nonce"], reqhash)
else:
data = ":".join([pwdhash, creds["nonce"], creds["nc"],
creds["cnonce"], qop, reqhash])
return md5(data).hexdigest()
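The full HA1/HA2/response pipeline above condenses into one self-contained Python 3 sketch; byte-encoding is added because hashlib.md5 requires bytes on Python 3, and all credential values below are invented:

```python
from hashlib import md5

def h(data):
    return md5(data.encode("utf-8")).hexdigest()

def digest_response(creds, password):
    # HA1: hash of username:realm:password.
    ha1 = h("%s:%s:%s" % (creds["username"], creds["realm"], password))
    # HA2: hash of method:uri (qop="auth" case).
    ha2 = h("%s:%s" % (creds["request-method"], creds["uri"]))
    if creds.get("qop") is None:
        return h("%s:%s:%s" % (ha1, creds["nonce"], ha2))
    return h(":".join([ha1, creds["nonce"], creds["nc"],
                       creds["cnonce"], creds["qop"], ha2]))

creds = {"username": "user1", "realm": "Sync", "nonce": "abc", "uri": "/",
         "qop": "auth", "nc": "00000001", "cnonce": "xyz",
         "request-method": "GET"}
resp = digest_response(creds, "secret")
```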
def check_digest_response(creds, pwdhash=None, password=None):
"""Check if the given digest response is valid.
This function checks whether a dict of digest response credentials
has been correctly authenticated using the specified password or
password hash.
"""
expected = calculate_digest_response(creds, pwdhash=pwdhash, password=password)
# Use a timing-invariant comparison to prevent guessing the correct
# digest one character at a time. Ideally we would reject repeated
# attempts to use the same nonce, but that may not be possible using
# e.g. time-based nonces. This is a nice extra safeguard.
return not strings_differ(expected, creds["response"])
def strings_differ(string1, string2):
"""Check whether two strings differ while avoiding timing attacks.
This function returns True if the given strings differ and False
if they are equal. It's careful not to leak information about *where*
they differ as a result of its running time, which can be very important
to avoid certain timing-related crypto attacks:
http://seb.dbzteam.org/crypto/python-oauth-timing-hmac.pdf
"""
if len(string1) != len(string2):
return True
invalid_bits = 0
for a, b in zip(string1, string2):
invalid_bits += a != b
return invalid_bits != 0
# --- end of file: repoze/who/plugins/digestauth/utils.py (repoze.who.plugins.digestauth 0.1.1) ---
__ver_major__ = 0
__ver_minor__ = 1
__ver_patch__ = 1
__ver_sub__ = ""
__ver_tuple__ = (__ver_major__, __ver_minor__, __ver_patch__, __ver_sub__)
__version__ = "%d.%d.%d%s" % __ver_tuple__
from zope.interface import implements
from repoze.who.interfaces import IIdentifier, IChallenger, IAuthenticator
from repoze.who.utils import resolveDotted
from repoze.who.plugins.digestauth.noncemanager import SignedNonceManager
from repoze.who.plugins.digestauth.utils import (extract_digest_credentials,
calculate_pwdhash,
check_digest_response)
# WSGI environ key used to indicate a stale nonce.
_ENVKEY_STALE_NONCE = "repoze.who.plugins.digestauth.stale_nonce"
class DigestAuthPlugin(object):
"""A repoze.who plugin for authentication via HTTP-Digest-Auth.
This plugin provides a repoze.who IIdentifier/IAuthenticator/IChallenger
implementing HTTP's Digest Access Authentication protocol:
http://tools.ietf.org/html/rfc2617
When used as an IIdentifier, it will extract digest-auth credentials
from the HTTP Authorization header, check that they are well-formed
and fresh, and return them for checking by an IAuthenticator.
When used as an IAuthenticator, it will validate digest-auth credentials
using a callback function to obtain the user's password or password hash.
When used as an IChallenger, it will issue a HTTP WWW-Authenticate
header with a fresh digest-auth challenge for each challenge issued.
This plugin implements fairly complete support for the protocol as defined
in RFC-2617. Specifically:
* both qop="auth" and qop="auth-int" modes
* compatibility mode for legacy clients
* client nonce-count checking
* next-nonce generation via the Authentication-Info header
The following optional parts of the specification are not supported:
* MD5-sess, or any hash algorithm other than MD5
* mutual authentication via the Authentication-Info header
Also, for qop="auth-int" mode, this plugin assumes that the request
contains a Content-MD5 header and that this header is validated by some
other component of the system (as it would be very rude for an auth
plugin to consume the request body to calculate this header itself).
To implement nonce generation, storage and expiration, this plugin
uses a helper object called a "nonce manager". This allows the details
of nonce management to be modified to meet the security needs of your
deployment. The default implementation (SignedNonceManager) should be
suitable for most purposes.
"""
implements(IIdentifier, IChallenger, IAuthenticator)
def __init__(self, realm, nonce_manager=None, domain=None, qop=None,
get_password=None, get_pwdhash=None):
if nonce_manager is None:
nonce_manager = SignedNonceManager()
if qop is None:
qop = "auth"
self.realm = realm
self.nonce_manager = nonce_manager
self.domain = domain
self.qop = qop
self.get_password = get_password
self.get_pwdhash = get_pwdhash
def identify(self, environ):
"""Extract HTTP-Digest-Auth credentials from the request.
This method extracts the digest-auth credentials from the request
and checks that the provided nonce and other metadata is valid.
If the nonce is found to be invalid (e.g. it is being re-used)
then None is returned.
If the credentials are fresh then the returned identity is a dict
containing all the digest-auth credentials necessary to validate the
signature, e.g.:
{'username': 'user',
'nonce': 'fc19cc22d1b5f84d',
'realm': 'Sync',
'algorithm': 'MD5',
'qop': 'auth',
'cnonce': 'd61391b0baeb5131',
'nc': '00000001',
'uri': '/some-protected-uri',
'request-method': 'GET',
'response': '75a8f0d4627eef8c73c3ac64a4b2acca'}
It is the responsibility of an IAuthenticator plugin to check that
the "response" value is a correct digest calculated according to the
provided credentials.
"""
# Grab the credentials out of the environment.
identity = extract_digest_credentials(environ)
if identity is None:
return None
# Check that they're for the expected realm.
if identity["realm"] != self.realm:
return None
# Check that the provided nonce is valid.
# If this looks like a stale request, mark it in the environment
# so we can include that information in the challenge.
nonce = identity["nonce"]
if not self.nonce_manager.is_valid_nonce(nonce, environ):
environ[_ENVKEY_STALE_NONCE] = True
return None
# Check that the nonce-count is strictly increasing.
# We store them as integers since that takes less memory than strings.
nc_old = self.nonce_manager.get_nonce_count(nonce)
if nc_old is not None:
nc_new = identity.get("nc", None)
if nc_new is None or int(nc_new, 16) <= nc_old:
environ[_ENVKEY_STALE_NONCE] = True
return None
# Looks good!
return identity
def remember(self, environ, identity):
"""Remember the authenticated identity.
This method records an updated nonce-count for the given identity.
By only updating the nonce-count if the request is successfully
authenticated, we reduce the risk of a DOS via memory exhaustion.
This method can be used to pre-emptively send an updated nonce to
the client as part of a successful response.
"""
nonce = identity.get("nonce", None)
if nonce is None:
return None
# Update the nonce-count if given.
nc_new = identity.get("nc", None)
if nc_new is not None:
self.nonce_manager.record_nonce_count(nonce, int(nc_new, 16))
# Send an updated nonce if required.
next_nonce = self.nonce_manager.get_next_nonce(nonce, environ)
if next_nonce is None:
return None
next_nonce = next_nonce.replace('"', '\\"')
value = 'nextnonce="%s"' % (next_nonce,)
return [("Authentication-Info", value)]
def forget(self, environ, identity):
"""Forget the authenticated identity.
For digest auth this is equivalent to sending a new challenge header,
which should cause the user-agent to re-prompt for credentials.
"""
return self._get_challenge_headers(environ, check_stale=False)
def authenticate(self, environ, identity):
"""Authenticate the provided identity.
If one of the "get_password" or "get_pwdhash" callbacks were provided
then this class is capable of authenticating the identity for itself.
It will calculate the expected digest response and compare it to that
provided by the client. The client is authenticated only if it has
provided the correct response.
"""
# Grab the username.
# If there isn't one, we can't use this identity.
username = identity.get("username")
if username is None:
return None
# Grab the realm.
# If there isn't one or it doesn't match, we can't use this identity.
realm = identity.get("realm")
if realm is None or realm != self.realm:
return None
# Obtain the pwdhash via one of the callbacks.
if self.get_pwdhash is not None:
pwdhash = self.get_pwdhash(username, realm)
elif self.get_password is not None:
password = self.get_password(username)
pwdhash = calculate_pwdhash(username, password, realm)
else:
return None
# Validate the digest response.
if not check_digest_response(identity, pwdhash=pwdhash):
return None
# Looks good!
return username
def challenge(self, environ, status, app_headers, forget_headers):
"""Challenge for digest-auth credentials.
For digest-auth the challenge is a "401 Unauthorized" response with
a fresh nonce in the WWW-Authenticate header.
"""
headers = self._get_challenge_headers(environ)
headers.extend(app_headers)
headers.extend(forget_headers)
if not status.startswith("401 "):
status = "401 Unauthorized"
def challenge_app(environ, start_response):
start_response(status, headers)
return ["Unauthorized"]
return challenge_app
def _get_challenge_headers(self, environ, check_stale=True):
"""Get headers necessary for a fresh digest-auth challenge.
This method generates a new digest-auth challenge for the given
request environ, including a fresh nonce. If the environment
is marked as having a stale nonce then this is indicated in the
challenge.
"""
params = {}
params["realm"] = self.realm
params["qop"] = self.qop
params["nonce"] = self.nonce_manager.generate_nonce(environ)
if self.domain is not None:
params["domain"] = self.domain
# Escape any special characters in those values, so we can send
# them as quoted-strings. The extra values added below are under
# our control so we know they don't contain quotes.
for key, value in params.iteritems():
params[key] = value.replace('"', '\\"')
# Mark the nonce as stale if told so by the environment.
# NOTE: The RFC says the server "should only set stale to TRUE if
# it receives a request for which the nonce is invalid but with a
# valid digest for that nonce". But we can't necessarily check the
# password at this stage, and it's only a "should", so don't bother.
if check_stale and environ.get(_ENVKEY_STALE_NONCE):
params["stale"] = "TRUE"
params["algorithm"] = "MD5"
# Construct the final header as quoted-string k/v pairs.
value = ", ".join('%s="%s"' % itm for itm in params.iteritems())
value = "Digest " + value
return [("WWW-Authenticate", value)]
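The header-assembly step at the end can be tried in isolation. This Python 3 snippet mirrors the quote-escaping and quoted-string formatting; the parameter values are invented:

```python
# Example challenge parameters; values are invented.
params = {"realm": "Sync", "qop": "auth",
          "nonce": 'abc"123', "algorithm": "MD5"}

# Escape embedded quotes, then emit k="v" pairs as the plugin does.
escaped = {k: v.replace('"', '\\"') for k, v in params.items()}
value = "Digest " + ", ".join('%s="%s"' % itm for itm in escaped.items())
header = ("WWW-Authenticate", value)
```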
def make_plugin(realm='', nonce_manager=None, domain=None, qop=None,
get_password=None, get_pwdhash=None):
"""Make a DigestAuthPlugin using values from a .ini config file.
This is a helper function for loading a DigestAuthPlugin via the
repoze.who .ini config file system. It converts its arguments from
strings to the appropriate type then passes them on to the plugin.
"""
if isinstance(nonce_manager, basestring):
nonce_manager = resolveDotted(nonce_manager)
if callable(nonce_manager):
nonce_manager = nonce_manager()
if isinstance(get_password, basestring):
get_password = resolveDotted(get_password)
if get_password is not None:
assert callable(get_password)
if isinstance(get_pwdhash, basestring):
get_pwdhash = resolveDotted(get_pwdhash)
if get_pwdhash is not None:
assert callable(get_pwdhash)
plugin = DigestAuthPlugin(realm, nonce_manager, domain, qop,
get_password, get_pwdhash)
return plugin
# --- end of file: repoze/who/plugins/digestauth/__init__.py (repoze.who.plugins.digestauth 0.1.1) ---
__ver_major__ = 0
__ver_minor__ = 2
__ver_patch__ = 0
__ver_sub__ = ""
__ver_tuple__ = (__ver_major__, __ver_minor__, __ver_patch__, __ver_sub__)
__version__ = "%d.%d.%d%s" % __ver_tuple__
import functools
from zope.interface import implements
from webob import Request, Response
from repoze.who.interfaces import IIdentifier, IAuthenticator, IChallenger
from repoze.who.utils import resolveDotted
import tokenlib
import hawkauthlib
import hawkauthlib.utils
class HawkAuthPlugin(object):
"""Plugin to implement Hawk HTTP Access Auth in repoze.who.
This class provides an IIdentifier, IChallenger and IAuthenticator
implementation for repoze.who. Authentication is based on signed
requests using the Hawk Access Authentication protocol with pre-shared
credentials.
The plugin can be customized with the following arguments:
* master_secret: a secret known only by the server, used for signing
Hawk id tokens in the default implementation.
* nonce_cache: an object implementing the same interface as
hawkauthlib.NonceCache.
* decode_hawk_id: a callable taking a Request object and Hawk id, and
returning the Hawk secret key and user data dict.
* encode_hawk_id: a callable taking a Request object and userid, and
returning the Hawk token id and secret key.
"""
implements(IIdentifier, IChallenger, IAuthenticator)
# The default value of master_secret is None, which will cause tokenlib
# to generate a fresh secret at application startup.
master_secret = None
def __init__(self, master_secret=None, nonce_cache=None,
decode_hawk_id=None, encode_hawk_id=None):
if master_secret is not None:
self.master_secret = master_secret
if nonce_cache is not None:
self.nonce_cache = nonce_cache
else:
self.nonce_cache = hawkauthlib.NonceCache()
if decode_hawk_id is not None:
self.decode_hawk_id = decode_hawk_id
if encode_hawk_id is not None:
self.encode_hawk_id = encode_hawk_id
def identify(self, environ):
"""Extract the authentication info from the request.
We parse the Authorization header to get the Hawk auth parameters.
If they seem sensible, we cache them in the identity to speed up
signature checking in the authenticate() method.
Note that this method does *not* validate the Hawk signature.
"""
request = Request(environ)
# Parse the Authorization header, to be cached for future use.
params = hawkauthlib.utils.parse_authz_header(request, None)
if params is None:
return None
# Extract the Hawk id.
id = hawkauthlib.get_id(request, params=params)
if id is None:
return None
# Parse the Hawk id into its data and secret key.
try:
key, data = self.decode_hawk_id(request, id)
except ValueError:
msg = "invalid Hawk id: %s" % (id,)
return self._respond_unauthorized(request, msg)
# Return all that data so we can use it during authentication.
return {
"hawkauth.id": id,
"hawkauth.key": key,
"hawkauth.data": data,
"hawkauth.params": params,
}
def authenticate(self, environ, identity):
"""Authenticate the extracted identity.
The identity must be a set of Hawk auth credentials extracted from
the request. This method checks the Hawk HMAC signature, and if valid
extracts the user metadata from the Hawk id.
"""
request = Request(environ)
# Check that these are Hawk auth credentials.
# They may not be if we're using multiple auth methods.
id = identity.get("hawkauth.id")
key = identity.get("hawkauth.key")
data = identity.get("hawkauth.data")
params = identity.get("hawkauth.params")
if id is None or params is None or data is None or key is None:
return None
# Check the Hawk signature.
if not self._check_signature(request, key, params=params):
msg = "invalid Hawk signature"
return self._respond_unauthorized(request, msg)
# Find something we can use as repoze.who.userid.
if "repoze.who.userid" not in data:
for key in ("username", "userid", "uid", "email"):
if key in data:
data["repoze.who.userid"] = data[key]
break
else:
msg = "Hawk id contains no userid"
return self._respond_unauthorized(request, msg)
# Update the identity with the data from the MAC id.
identity.update(data)
return identity["repoze.who.userid"]
def challenge(self, environ, status, app_headers=(), forget_headers=()):
"""Challenge the user for credentials.
This simply sends a 401 response using the WWW-Authenticate field
as constructed by forget().
"""
resp = Response()
resp.status = 401
resp.headers = self.forget(environ, {})
for headers in (app_headers, forget_headers):
for name, value in headers:
resp.headers[name] = value
resp.content_type = "text/plain"
resp.body = "Unauthorized"
return resp
def remember(self, environ, identity):
"""Remember the user's identity.
This is a no-op for this plugin; the client is supposed to remember
its Hawk credentials and use them for all requests.
"""
return []
def forget(self, environ, identity):
"""Forget the user's identity.
This simply issues a new WWW-Authenticate challenge, which should
cause the client to forget any previously-provisioned credentials.
"""
return [("WWW-Authenticate", "Hawk")]
def decode_hawk_id(self, request, id):
"""Decode Hawk id into secret key and data dict.
This method decodes the given Hawk id to give the corresponding Hawk
secret key and dict of user data. By default it uses the tokenlib
library, but plugin instances may override this method with another
callable from the config file.
If the Hawk id is invalid then ValueError will be raised.
"""
secret = tokenlib.get_token_secret(id, secret=self.master_secret)
data = tokenlib.parse_token(id, secret=self.master_secret)
return secret, data
def encode_hawk_id(self, request, data):
"""Encode data dict into Hawk id token and secret key.
This method is essentially the reverse of decode_hawk_id. Given a
dict of identity data, it encodes it into a unique Hawk id token and
corresponding secret key. By default it uses the tokenlib library,
but plugin instances may override this method with another callable from
the config file.
This method is not needed when consuming auth tokens, but is very
handy when building them for testing purposes.
"""
id = tokenlib.make_token(data, secret=self.master_secret)
secret = tokenlib.get_token_secret(id, secret=self.master_secret)
return id, secret
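tokenlib does the real encoding work here. As a stdlib-only illustration of the encode/decode contract the plugin relies on, here is a toy codec that packs the data as base64(JSON) and derives a per-token secret via HMAC; this is emphatically not tokenlib's actual scheme, and the master secret is made up:

```python
import base64
import hashlib
import hmac
import json

MASTER_SECRET = b"not-a-real-secret"  # invented for this demo

def encode_hawk_id(data):
    # Pack the user data as base64(JSON); derive the token secret via HMAC.
    token = base64.urlsafe_b64encode(
        json.dumps(data).encode("utf-8")).decode("ascii")
    secret = hmac.new(MASTER_SECRET, token.encode("ascii"),
                      hashlib.sha256).hexdigest()
    return token, secret

def decode_hawk_id(token):
    # Reverse of encode_hawk_id: recover the data, re-derive the secret.
    data = json.loads(base64.urlsafe_b64decode(token.encode("ascii")))
    secret = hmac.new(MASTER_SECRET, token.encode("ascii"),
                      hashlib.sha256).hexdigest()
    return secret, data

token, secret = encode_hawk_id({"userid": "alice"})
secret2, data = decode_hawk_id(token)
```

The key property, shared with the real implementation, is that the secret is derivable from the token plus the master secret, so the server needs no per-token storage.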
def _check_signature(self, request, secret, params=None):
"""Check the request signature, using our local nonce cache."""
return hawkauthlib.check_signature(request, secret, params=params,
nonces=self.nonce_cache)
def _respond_unauthorized(self, request, message="Unauthorized"):
"""Generate a "401 Unauthorized" error response."""
resp = Response()
resp.status = 401
resp.headers = self.forget(request.environ, {})
resp.content_type = "text/plain"
resp.body = message
request.environ["repoze.who.application"] = resp
return None
def make_plugin(**kwds):
"""Make a HawkAuthPlugin using values from a .ini config file.
This is a helper function for loading a HawkAuthPlugin via the
repoze.who .ini config file system. It converts its arguments from
strings to the appropriate type then passes them on to the plugin.
"""
master_secret = kwds.pop("master_secret", None)
nonce_cache = _load_object_from_kwds("nonce_cache", kwds)
decode_hawk_id = _load_function_from_kwds("decode_hawk_id", kwds)
encode_hawk_id = _load_function_from_kwds("encode_hawk_id", kwds)
for unknown_kwd in kwds:
raise TypeError("unknown keyword argument: %s" % unknown_kwd)
plugin = HawkAuthPlugin(master_secret, nonce_cache,
decode_hawk_id, encode_hawk_id)
return plugin
def _load_function_from_kwds(name, kwds):
"""Load a plugin argument as a function created from the given kwds.
This function is a helper to load and possibly curry a callable argument
to the plugin. It grabs the value from the dotted python name found in
kwds[name] and checks that it is a callable. It looks for arguments of
the form kwds[name_*] and curries them into the function as additional
keyword argument before returning.
"""
# See if we actually have the named object.
dotted_name = kwds.pop(name, None)
if dotted_name is None:
return None
func = resolveDotted(dotted_name)
# Check that it's a callable.
if not callable(func):
raise ValueError("Argument %r must be callable" % (name,))
# Curry in any keyword arguments.
func_kwds = {}
prefix = name + "_"
for key in kwds.keys():
if key.startswith(prefix):
func_kwds[key[len(prefix):]] = kwds.pop(key)
# Return the original function if not currying anything.
# This is both more efficient and better for unit testing.
if func_kwds:
func = functools.partial(func, **func_kwds)
return func
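The currying behaviour can be demonstrated with a simplified Python 3 version that takes the function directly instead of resolving a dotted name; the `get_password` callback and its `dbfile` argument are hypothetical:

```python
import functools

def load_function_from_kwds(name, kwds, func):
    # Simplified: 'func' is passed in; the plugin resolves a dotted name.
    func_kwds = {}
    prefix = name + "_"
    for key in list(kwds):
        if key.startswith(prefix):
            func_kwds[key[len(prefix):]] = kwds.pop(key)
    if func_kwds:
        func = functools.partial(func, **func_kwds)
    return func

def get_password(username, dbfile="users.db"):
    # Hypothetical callback; just echoes its arguments here.
    return "%s@%s" % (username, dbfile)

kwds = {"get_password_dbfile": "/tmp/test.db"}
f = load_function_from_kwds("get_password", kwds, get_password)
```

Any `name_*` entries are consumed from `kwds` and curried into the callable, which is how extra settings reach a callback from the .ini file.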
def _load_object_from_kwds(name, kwds):
"""Load a plugin argument as an object created from the given kwds.
This function is a helper to load and possibly instantiate an argument
to the plugin. It grabs the value from the dotted python name found in
kwds[name]. If this is a callable, it looks for arguments of the form
kwds[name_*] and calls it with them to instantiate an object.
"""
# See if we actually have the named object.
dotted_name = kwds.pop(name, None)
if dotted_name is None:
return None
obj = resolveDotted(dotted_name)
# Extract any arguments for the callable.
obj_kwds = {}
prefix = name + "_"
for key in kwds.keys():
if key.startswith(prefix):
obj_kwds[key[len(prefix):]] = kwds.pop(key)
# Call it if callable.
if callable(obj):
obj = obj(**obj_kwds)
elif obj_kwds:
raise ValueError("arguments provided for non-callable %r" % (name,))
return obj
# --- end of file: repoze/who/plugins/hawkauth/__init__.py (repoze.who.plugins.hawkauth 0.2.0) ---
__ver_major__ = 0
__ver_minor__ = 1
__ver_patch__ = 0
__ver_sub__ = ""
__ver_tuple__ = (__ver_major__, __ver_minor__, __ver_patch__, __ver_sub__)
__version__ = "%d.%d.%d%s" % __ver_tuple__
import functools
from zope.interface import implements
from webob import Request, Response
from repoze.who.interfaces import IIdentifier, IAuthenticator, IChallenger
from repoze.who.utils import resolveDotted
import tokenlib
import macauthlib
import macauthlib.utils
class MACAuthPlugin(object):
"""Plugin to implement MAC Access Auth in repoze.who.
This class provides an IIdentifier, IChallenger and IAuthenticator
implementation for repoze.who. Authentication is based on signed
requests using the MAC Access Authentication standard with pre-shared
MAC credentials.
The plugin can be customized with the following arguments:
* decode_mac_id: a callable taking a Request object and MAC id, and
returning the MAC secret key and user data dict.
* nonce_cache: an object implementing the same interface as
macauthlib.NonceCache.
"""
implements(IIdentifier, IChallenger, IAuthenticator)
def __init__(self, decode_mac_id=None, nonce_cache=None):
if decode_mac_id is not None:
self.decode_mac_id = decode_mac_id
if nonce_cache is not None:
self.nonce_cache = nonce_cache
else:
self.nonce_cache = macauthlib.NonceCache()
def identify(self, environ):
"""Extract the authentication info from the request.
We parse the Authorization header to get the MAC auth parameters.
If they seem sensible, we cache them in the identity to speed up
signature checking in the authenticate() method.
Note that this method does *not* validate the MAC signature.
"""
request = Request(environ)
# Parse the Authorization header, to be cached for future use.
params = macauthlib.utils.parse_authz_header(request, None)
if params is None:
return None
# Extract the MAC id.
id = macauthlib.get_id(request, params=params)
if id is None:
return None
# Parse the MAC id into its data and MAC key.
try:
key, data = self.decode_mac_id(request, id)
except ValueError:
msg = "invalid MAC id: %s" % (id,)
return self._respond_unauthorized(request, msg)
        # Return all that data so we can use it during authentication.
return {
"macauth.id": id,
"macauth.key": key,
"macauth.data": data,
"macauth.params": params,
}
def authenticate(self, environ, identity):
"""Authenticate the extracted identity.
The identity must be a set of MAC auth credentials extracted from
the request. This method checks the MAC signature, and if valid
extracts the user metadata from the MAC id.
"""
request = Request(environ)
# Check that these are MAC auth credentials.
# They may not be if we're using multiple auth methods.
id = identity.get("macauth.id")
key = identity.get("macauth.key")
data = identity.get("macauth.data")
params = identity.get("macauth.params")
if id is None or params is None or data is None or key is None:
return None
# Check the MAC signature.
if not self._check_signature(request, key, params=params):
msg = "invalid MAC signature"
return self._respond_unauthorized(request, msg)
# Find something we can use as repoze.who.userid.
if "repoze.who.userid" not in data:
            for name in ("username", "userid", "uid", "email"):
                if name in data:
                    data["repoze.who.userid"] = data[name]
break
else:
msg = "MAC id contains no userid"
return self._respond_unauthorized(request, msg)
# Update the identity with the data from the MAC id.
identity.update(data)
return identity["repoze.who.userid"]
def challenge(self, environ, status, app_headers=(), forget_headers=()):
"""Challenge the user for credentials.
This simply sends a 401 response using the WWW-Authenticate field
as constructed by forget().
"""
resp = Response()
resp.status = 401
resp.headers = self.forget(environ, {})
for headers in (app_headers, forget_headers):
for name, value in headers:
resp.headers[name] = value
resp.content_type = "text/plain"
resp.body = "Unauthorized"
return resp
def remember(self, environ, identity):
"""Remember the user's identity.
This is a no-op for this plugin; the client is supposed to remember
its MAC credentials and use them for all requests.
"""
return []
def forget(self, environ, identity):
"""Forget the user's identity.
This simply issues a new WWW-Authenticate challenge, which should
cause the client to forget any previously-provisioned credentials.
"""
return [("WWW-Authenticate", "MAC")]
def decode_mac_id(self, request, id):
"""Decode MAC id into MAC key and data dict.
This method decodes the given MAC id to give the corresponding MAC
secret key and dict of user data. By default it uses the tokenlib
library, but plugin instances may override this method with another
callable from the config file.
If the MAC id is invalid then ValueError will be raised.
"""
secret = tokenlib.get_token_secret(id)
data = tokenlib.parse_token(id)
return secret, data
def _check_signature(self, request, secret, params=None):
"""Check the request signature, using our local nonce cache."""
return macauthlib.check_signature(request, secret, params=params,
nonces=self.nonce_cache)
def _respond_unauthorized(self, request, message="Unauthorized"):
"""Generate a "401 Unauthorized" error response."""
resp = Response()
resp.status = 401
resp.headers = self.forget(request.environ, {})
resp.content_type = "text/plain"
resp.body = message
request.environ["repoze.who.application"] = resp
return None
def make_plugin(**kwds):
"""Make a MACAuthPlugin using values from a .ini config file.
This is a helper function for loading a MACAuthPlugin via the
repoze.who .ini config file system. It converts its arguments from
strings to the appropriate type then passes them on to the plugin.
"""
decode_mac_id = _load_function_from_kwds("decode_mac_id", kwds)
nonce_cache = _load_object_from_kwds("nonce_cache", kwds)
for unknown_kwd in kwds:
raise TypeError("unknown keyword argument: %s" % unknown_kwd)
plugin = MACAuthPlugin(decode_mac_id, nonce_cache)
return plugin
def _load_function_from_kwds(name, kwds):
"""Load a plugin argument as a function created from the given kwds.
This function is a helper to load and possibly curry a callable argument
to the plugin. It grabs the value from the dotted python name found in
kwds[name] and checks that it is a callable. It looks for arguments of
the form kwds[name_*] and curries them into the function as additional
keyword argument before returning.
"""
# See if we actually have the named object.
dotted_name = kwds.pop(name, None)
if dotted_name is None:
return None
func = resolveDotted(dotted_name)
# Check that it's a callable.
if not callable(func):
raise ValueError("Argument %r must be callable" % (name,))
# Curry in any keyword arguments.
func_kwds = {}
prefix = name + "_"
for key in kwds.keys():
if key.startswith(prefix):
func_kwds[key[len(prefix):]] = kwds.pop(key)
# Return the original function if not currying anything.
    # This is both more efficient and better for unit testing.
if func_kwds:
func = functools.partial(func, **func_kwds)
return func
def _load_object_from_kwds(name, kwds):
"""Load a plugin argument as an object created from the given kwds.
    This function is a helper to load and possibly instantiate an argument
to the plugin. It grabs the value from the dotted python name found in
kwds[name]. If this is a callable, it looks for arguments of the form
    kwds[name_*] and calls it with them to instantiate an object.
"""
# See if we actually have the named object.
dotted_name = kwds.pop(name, None)
if dotted_name is None:
return None
obj = resolveDotted(dotted_name)
# Extract any arguments for the callable.
obj_kwds = {}
prefix = name + "_"
for key in kwds.keys():
if key.startswith(prefix):
obj_kwds[key[len(prefix):]] = kwds.pop(key)
# Call it if callable.
if callable(obj):
obj = obj(**obj_kwds)
elif obj_kwds:
raise ValueError("arguments provided for non-callable %r" % (name,))
    return obj
# source: repoze.who.plugins.macauth-0.1.0/repoze/who/plugins/macauth/__init__.py
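The plugin's identify() method above parses an ``Authorization: MAC ...`` header. As a rough illustration of the client side, the following standalone sketch (hypothetical helper and placeholder values, not the real macauthlib API) serializes MAC parameters into such a header:

```python
def format_mac_authz_header(params):
    """Serialize a dict of MAC parameters into an Authorization header value.

    Real implementations must also backslash-escape embedded quotes in
    values; these placeholder values never contain any.
    """
    # Comma-separated key="value" pairs after the "MAC" scheme name.
    kvpairs = ", ".join('%s="%s"' % (k, v) for k, v in sorted(params.items()))
    return "MAC " + kvpairs

header = format_mac_authz_header({
    "id": "token-id",           # placeholder MAC id
    "ts": "1344545593",         # placeholder timestamp
    "nonce": "dj83hs9s",        # placeholder nonce
    "mac": "bhCQXTVyfj5cmA==",  # placeholder signature
})
```

A header built this way is what `parse_authz_header` (in the companion utils module) splits back into the params dict the plugin caches in the identity.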
__ver_major__ = 0
__ver_minor__ = 1
__ver_patch__ = 1
__ver_sub__ = ""
__version__ = "%d.%d.%d%s" % (__ver_major__,__ver_minor__,__ver_patch__,__ver_sub__)
import os
import json
import hmac
import hashlib
import pylibmc
from zope.interface import implements
from repoze.who.api import get_api
from repoze.who.interfaces import IAuthenticator, IMetadataProvider
class MemcachedPlugin(object):
"""Cache IAuthenticator/IMetadataProvider plugin results using memcached.
This is a repoze.who IAuthenticator/IMetadataProvider plugin that can
cache the results of *other* plugins using memcached. It's useful for
reducing load on e.g. a backend LDAP auth system.
To use it, give it the name of an authenticator and/or metadata provider
whose results it should wrap::
[plugin:ldap]
use = my.ldap.authenticator
[plugin:cached_ldap]
use = repoze.who.plugins.memcached
authenticator_name = ldap
[authenticators]
plugins = cached_ldap ldap;unused
(The "ldap;unused" bit ensures that the wrapped ldap plugin still gets
loaded, but is not used for matching any requests. Yeah, it's yuck.)
To prevent a compromise of the cache from revealing auth credentials, this
plugin calculates a HMAC hash of the items in the incoming identity and
uses that as the cache key. This makes it possible to check the cache for
a match to an incoming identity, while preventing the cache keys from being
reversed back into a valid identity.
Items added to the identity by the wrapped plugin will be stored in the
    cached value and will *not* be encrypted or obfuscated in any way.
The following configuration options are available:
* memcached_urls: A list of URLs for the underlying memcached store.
* authenticator_name: The name of an IAuthenticator plugin to wrap.
* mdprovider_name: The name of an IMetadataProvider plugin to wrap.
* key_items: A list of names from the identity dict that should be
hashed to produce the cache key. These items should
                 uniquely and validly identify a user. By default it
will use all keys in the identity in sorted order.
* value_items: A list of names from the identity dict that should be
stored in the cache. These would typically be items
of metadata such as the user's email address. By
default this will include all items that the wrapped
plugin adds to the identity.
    * secret: A string used when calculating the HMAC of the cache keys.
All servers accessing a shared cache should use the same
secret so they produce the same set of cache keys.
* ttl: The time for which cache entries should persist, in seconds.
"""
implements(IAuthenticator, IMetadataProvider)
def __init__(self, memcached_urls, authenticator_name=None,
mdprovider_name=None, key_items=None, value_items=None,
secret=None, ttl=None):
if authenticator_name is None and mdprovider_name is None:
msg = "You must specify authenticator_name and/or "\
"mdprovider_name (otherwise this plugin won't "\
"actually do anything...)"
raise ValueError(msg)
if secret is None:
secret = os.urandom(16)
if ttl is None:
ttl = 60
self.memcached_urls = memcached_urls
self.authenticator_name = authenticator_name
self.mdprovider_name = mdprovider_name
self.key_items = key_items
self.value_items = value_items
self.secret = secret
self.ttl = ttl
self._client = pylibmc.Client(memcached_urls)
def authenticate(self, environ, identity):
"""Authenticate the identity from the given request environment.
This method checks whether an entry matching the identity is currently
stored in the cache. If so, it loads data from the stored entry and
        returns successfully.
"""
# Only do anything if we're wrapping an authenticator.
if self.authenticator_name is None:
return None
api = get_api(environ)
if api is None:
return None
wrapped_authenticator = api.name_registry[self.authenticator_name]
# Check if we've got cached data already.
data = self._get_cached(environ, identity, "authenticate")
if data is not None:
identity.update(data)
return identity.get("repoze.who.userid")
# Not cached, check with the wrapped authenticator.
value_items = self.value_items
if value_items is None:
old_keys = set(identity.iterkeys())
userid = wrapped_authenticator.authenticate(environ, identity)
if userid is None:
return None
# If that was successful, cache it along with any added data.
# Make sure to always cache repoze.who.userid.
if value_items is None:
value_items = [k for k in identity.iterkeys() if k not in old_keys]
identity.setdefault("repoze.who.userid", userid)
value_items = ["repoze.who.userid"] + list(value_items)
self._set_cached(environ, identity, value_items, "authenticate")
return userid
def add_metadata(self, environ, identity):
"""Add metadata to the given identity dict."""
# Only do anything if we're wrapping an mdprovider.
if self.mdprovider_name is None:
return None
api = get_api(environ)
if api is None:
return None
wrapped_mdprovider = api.name_registry[self.mdprovider_name]
# Check if we've got cached data already.
data = self._get_cached(environ, identity, "add_metadata")
if data is not None:
identity.update(data)
return None
# Not cached, check with the wrapped mdprovider.
value_items = self.value_items
if value_items is None:
old_keys = set(identity.iterkeys())
wrapped_mdprovider.add_metadata(environ, identity)
# Cache any data that was added.
if value_items is None:
value_items = [k for k in identity.iterkeys() if k not in old_keys]
self._set_cached(environ, identity, value_items, "add_metadata")
def _get_cached(self, environ, identity, method_name):
"""Get the cached data for the given identity, if any.
If the given identity has previously been seen and stored in the
cache, this method returns the dict of cached data. If the identity
is not cached then it returns None.
"""
key = self._get_cache_key(environ, identity, method_name)
# Grab it from the cache as a JSON string.
try:
value = self._client.get(key)
except pylibmc.Error:
return None
# Parse it into a dict of data.
try:
value = json.loads(value)
except ValueError:
return None
return value
def _set_cached(self, environ, identity, value_items, method_name):
"""Set the cached data for the given identity.
        This method extracts the named value_items from the identity and
stores them in the cache for future reference. It ignores errors
from writing to the cache.
"""
key = self._get_cache_key(environ, identity, method_name)
data = {}
for name in value_items:
try:
data[name] = identity[name]
except KeyError:
pass
try:
self._client.set(key, json.dumps(data), time=self.ttl)
except pylibmc.Error:
pass
def _get_cache_key(self, environ, identity, method_name):
"""Get the key under which to cache the given identity.
The cache key is a hmac over all the items named in self.key_items
if specified, or over all the values in sorted key order. Using a
hmac ensures that each identity has a unique key while not leaking
credential information into memcached.
A single instance of this plugin might cache results for both an
authenticator and an mdprovider. To prevent conflicts the name of
the calling method is also included in the hash.
"""
# We cache the key when first calculated, so that we always use the
# value as it was first seen by the plugin. This prevents cache
# misses when some other plugin scribbles on the identity.
envkey = "repoze.who.plugins.memcached.cache-key-" + method_name
key = environ.get(envkey)
if key is not None:
return key
# If no list of key items is specified, use all keys in sorted order.
key_items = self.key_items
if key_items is None:
key_items = identity.keys()
key_items.sort()
# Hash in the method name and the value of all key items.
hasher = hmac.new(self.secret, method_name, hashlib.sha1)
hasher.update("\x00")
for name in key_items:
hasher.update(str(identity.get(name, "")))
hasher.update("\x00")
key = hasher.hexdigest()
# Cache it for future reference.
environ[envkey] = key
return key
def make_plugin(memcached_urls, authenticator_name=None, mdprovider_name=None,
key_items=None, value_items=None, secret=None, ttl=None):
"""CachedAuthPlugin helper for loading from ini files."""
memcached_urls = memcached_urls.split()
if not memcached_urls:
raise ValueError("You must specify at least one memcached URL")
if key_items is not None:
key_items = key_items.split()
if value_items is not None:
value_items = value_items.split()
if ttl is not None:
ttl = int(ttl)
plugin = MemcachedPlugin(memcached_urls, authenticator_name,
                             mdprovider_name, key_items, value_items,
secret, ttl)
    return plugin
# source: repoze.who.plugins.memcached-0.1.1/repoze/who/plugins/memcached/__init__.py
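The cache-key scheme that `_get_cache_key` implements can be sketched in isolation. This is a Python 3 re-statement of the same idea (assumed names, not the plugin's API): HMAC over the method name plus selected identity values, so the cache can be probed for a match without ever storing raw credentials as keys.

```python
import hashlib
import hmac

def cache_key(secret, method_name, identity, key_items=None):
    """Derive an opaque, non-reversible cache key from an identity dict."""
    if key_items is None:
        # No explicit key list: use all identity keys in sorted order.
        key_items = sorted(identity)
    hasher = hmac.new(secret, method_name.encode("utf-8"), hashlib.sha1)
    hasher.update(b"\x00")
    for name in key_items:
        # NUL separators prevent ambiguity between adjacent values.
        hasher.update(str(identity.get(name, "")).encode("utf-8"))
        hasher.update(b"\x00")
    return hasher.hexdigest()

secret = b"shared-secret"          # all servers sharing a cache use the same one
ident = {"login": "user1", "password": "hunter2"}
k1 = cache_key(secret, "authenticate", ident)
k2 = cache_key(secret, "authenticate", dict(ident))
```

Equal identities hash to the same key, while including the method name keeps authenticator and mdprovider entries from colliding in a shared cache.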
from zope.interface import implements
from repoze.who.interfaces import IMetadataProvider
from repoze.who.plugins.metadata_cache.base import (MetadataCachePluginBase,
IMetadataCache)
class MetadataCachePluginIdentityMemory(MetadataCachePluginBase):
""" A very basic metadata cache plugin that uses a basic dict as cache.
This is using a (very basic) Python dictionary in memory to
store metadata, so it won't persist beyond a single Python instance
and a single process.
This plugin pulls attributes to cache from the ``repoze.who`` identity
at some point (eg during initial authentication) and then replays
them into the identity for subsequent requests.
"""
implements(IMetadataCache)
def __init__(self, name='attributes', cache=None):
"""Initialise the metadata cache plugin.
Arguments/Keywords
See :meth:`MetadataCachePluginBase.__init__` for basic configuration
in addition to the following:
name
The identifier used to load attributes from within the
            ``repoze.who`` identity, and the identifier used to replay
cached attributes. This ensures that the downstream application
            will always receive attributes in the same fashion.
Default: ``'attributes'``
cache
A dict-like structure that will be used to store metadata.
Replace this with something dict-like if you'd like to use
another type for storage.
Default: ``{}`` (empty dict)
"""
super(MetadataCachePluginIdentityMemory, self).__init__(name=name)
self.cache = cache or {}
def get_attributes(self, environ, identity):
return identity.get(self.name)
def store(self, key, value):
self.cache[key] = value
def fetch(self, key):
return self.cache.get(key, {})
def make_plugin(name='attributes', cache=None):
""" Create and configure a :class:`MetadataCacheIdentityMemoryPlugin`.
See :class:`MetadataCacheIdentityMemoryPlugin` for argument and keyword
information.
"""
    return MetadataCachePluginIdentityMemory(name=name, cache=cache)
# source: repoze.who.plugins.metadata_cache-0.1/src/repoze/who/plugins/metadata_cache/memory.py
import logging
from repoze.who.interfaces import IMetadataProvider
from zope.interface import implements, Interface
log = logging.getLogger(__name__)
class IMetadataCache(Interface):
""" Interface for metadata caches.
"""
def get_attributes(self, environ, identity):
""" Return a value representing metadata able to be stored in the cache.
This method can use either the ``environ``, ``identity`` or both
to build a set of suitable attributes.
"""
def store(self, key, value):
""" Store the given ``value`` into a cache using the given ``key``.
"""
def fetch(self, key):
""" Fetch the given value associated with the ``key`` from the cache.
"""
class MetadataCachePluginBase(object):
""" ``repoze.who`` plugin to cache identity metadata for later use.
This plugin stores some data in the identity - referenced by
    :attr:`MetadataCachePluginBase.name` - into a cache for recall on
subsequent requests. Useful in the situation where another plugin
has obtained a user's metadata during its non-``add_metadata`` process
and needs to store it somewhere for later recall.
A prime example of this is CAS authentication and attribute
release. The CAS plugin from ``repoze.who.plugins.cas`` is a
``repoze.who`` ``identifier`` and user metadata attributes are
released from the CAS server during this identification process.
So, when the CAS service ticket is validated against the server,
metdata is returned part of the same response. As there is no
way to directly save this information during the identification
process, that plugin simply returns the metadata as part of the
identity. Then, this plugin can capture the metadata out of the
identity and store it for later use.
This class is designed to be extended or modified by any future
``cache`` types. Create your own class as a proxy if you'd like to easily
use some other storage method.
Note: This is a work in progress.
"""
implements(IMetadataProvider)
def __init__(self, name='attributes'):
"""Initialise the metadata cache plugin.
Arguments/Keywords
name
Identifier within the ``repoze.who`` identity to locate
incoming data to cache. The same identifier will be where
cached metadata is restored to on later requests.
Default: ``'attributes'``
"""
self.name = name
# IMetadataProvider
def add_metadata(self, environ, identity):
""" Add user metadata into the identity, or cache if present.
"""
userid = identity.get('repoze.who.userid')
if userid:
attributes = self.get_attributes(environ, identity)
if attributes:
log.debug("Storing incoming attributes to cache.")
self.store(userid, attributes)
else:
log.debug("Fetching attributes from cache into identity.")
            identity[self.name] = self.fetch(userid)
# source: repoze.who.plugins.metadata_cache-0.1/src/repoze/who/plugins/metadata_cache/base.py
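The store-then-replay flow of `add_metadata` can be exercised without any repoze.who or zope machinery. This minimal, dependency-free sketch (the `DictCache` class is hypothetical, standing in for a concrete subclass such as `MetadataCachePluginIdentityMemory`) mirrors the logic above:

```python
class DictCache:
    """A stand-in metadata cache: plain dict storage, no zope interfaces."""

    def __init__(self, name="attributes"):
        self.name = name
        self.cache = {}

    def get_attributes(self, environ, identity):
        return identity.get(self.name)

    def store(self, key, value):
        self.cache[key] = value

    def fetch(self, key):
        return self.cache.get(key, {})

    def add_metadata(self, environ, identity):
        userid = identity.get("repoze.who.userid")
        if userid:
            attributes = self.get_attributes(environ, identity)
            if attributes:
                # First request: attributes arrived in the identity; cache them.
                self.store(userid, attributes)
            else:
                # Later requests: replay the cached attributes into the identity.
                identity[self.name] = self.fetch(userid)

plugin = DictCache()
# Initial authentication releases attributes into the identity.
plugin.add_metadata({}, {"repoze.who.userid": "u1",
                         "attributes": {"mail": "u1@example.com"}})
# A subsequent request carries no attributes, so they come from the cache.
identity = {"repoze.who.userid": "u1"}
plugin.add_metadata({}, identity)
```

After the second call, `identity["attributes"]` again contains the metadata captured during the first request.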
import os
import re
import time
import hmac
from hashlib import sha1
from base64 import b64encode
# Regular expression matching a single param in the HTTP_AUTHORIZATION header.
# This is basically <name>=<value> where <value> can be an unquoted token,
# an empty quoted string, or a quoted string where the ending quote is *not*
# preceded by a backslash.
_AUTH_PARAM_RE = r'([a-zA-Z0-9_\-]+)=(([a-zA-Z0-9_\-]+)|("")|(".*[^\\]"))'
_AUTH_PARAM_RE = re.compile(r"^\s*" + _AUTH_PARAM_RE + r"\s*$")
# Regular expression matching an unescaped quote character.
_UNESC_QUOTE_RE = r'(^")|([^\\]")'
_UNESC_QUOTE_RE = re.compile(_UNESC_QUOTE_RE)
# Regular expression matching a backslash-escaped character.
_ESCAPED_CHAR = re.compile(r"\\.")
def parse_authz_header(request, *default):
"""Parse the authorization header into an identity dict.
This function can be used to extract the Authorization header from a
request and parse it into a dict of its constituent parameters. The
auth scheme name will be included under the key "scheme", and any other
auth params will appear as keys in the dictionary.
For example, given the following auth header value:
        'Digest realm="Sync" username=user1 response="123456"'
This function will return the following dict:
{"scheme": "Digest", realm: "Sync",
"username": "user1", "response": "123456"}
"""
# This outer try-except catches ValueError and
# turns it into return-default if necessary.
try:
# Grab the auth header from the request, if any.
authz = request.environ.get("HTTP_AUTHORIZATION")
if authz is None:
raise ValueError("Missing auth parameters")
scheme, kvpairs_str = authz.split(None, 1)
# Split the parameters string into individual key=value pairs.
# In the simple case we can just split by commas to get each pair.
# Unfortunately this will break if one of the values contains a comma.
# So if we find a component that isn't a well-formed key=value pair,
# then we stitch bits back onto the end of it until it is.
kvpairs = []
if kvpairs_str:
for kvpair in kvpairs_str.split(","):
if not kvpairs or _AUTH_PARAM_RE.match(kvpairs[-1]):
kvpairs.append(kvpair)
else:
kvpairs[-1] = kvpairs[-1] + "," + kvpair
if not _AUTH_PARAM_RE.match(kvpairs[-1]):
raise ValueError('Malformed auth parameters')
# Now we can just split by the equal-sign to get each key and value.
params = {"scheme": scheme}
for kvpair in kvpairs:
(key, value) = kvpair.strip().split("=", 1)
# For quoted strings, remove quotes and backslash-escapes.
if value.startswith('"'):
value = value[1:-1]
if _UNESC_QUOTE_RE.search(value):
raise ValueError("Unescaped quote in quoted-string")
value = _ESCAPED_CHAR.sub(lambda m: m.group(0)[1], value)
params[key] = value
return params
except ValueError:
if default:
return default[0]
raise
def strings_differ(string1, string2):
"""Check whether two strings differ while avoiding timing attacks.
This function returns True if the given strings differ and False
if they are equal. It's careful not to leak information about *where*
they differ as a result of its running time, which can be very important
to avoid certain timing-related crypto attacks:
http://seb.dbzteam.org/crypto/python-oauth-timing-hmac.pdf
"""
if len(string1) != len(string2):
return True
invalid_bits = 0
for a, b in zip(string1, string2):
invalid_bits += a != b
return invalid_bits != 0
def sign_request(request, token, secret):
"""Sign the given request using MAC access authentication.
This function implements the client-side request signing algorithm as
expected by the server, i.e. MAC access authentication as defined by
RFC-TODO.
It's not used by the repoze.who plugin itself, but is handy for testing
purposes and possibly for python client libraries.
"""
if isinstance(token, unicode):
token = token.encode("ascii")
if isinstance(secret, unicode):
secret = secret.encode("ascii")
# Use MAC parameters from the request if present.
# Otherwise generate some fresh ones.
params = parse_authz_header(request, {})
if params and params.pop("scheme") != "MAC":
params.clear()
params["id"] = token
if "ts" not in params:
params["ts"] = str(int(time.time()))
if "nonce" not in params:
params["nonce"] = os.urandom(5).encode("hex")
# Calculate the signature and add it to the parameters.
params["mac"] = get_mac_signature(request, secret, params)
# Serialize the parameters back into the authz header.
# WebOb has logic to do this that's not perfect, but good enough for us.
request.authorization = ("MAC", params)
def get_mac_signature(request, secret, params=None):
"""Get the MAC signature for the given data, using the given secret."""
if params is None:
params = parse_authz_header(request, {})
sigstr = get_normalized_request_string(request, params)
return b64encode(hmac.new(secret, sigstr, sha1).digest())
def get_normalized_request_string(request, params=None):
"""Get the string to be signed for MAC access authentication.
This method takes a WebOb Request object and returns the data that
should be signed for MAC access authentication of that request, a.k.a
the "normalized request string" as defined in section 3.2.1 of RFC-TODO.
If the "params" parameter is not None, it is assumed to be a pre-parsed
dict of MAC parameters as one might find in the Authorization header. If
it is missing or None then the Authorization header from the request will
be parsed to determine the necessary parameters.
"""
if params is None:
params = parse_authz_header(request, {})
bits = []
bits.append(params["ts"])
bits.append(params["nonce"])
bits.append(request.method.upper())
bits.append(request.path_qs)
try:
host, port = request.host.rsplit(":", 1)
except ValueError:
host = request.host
if request.scheme == "http":
port = "80"
elif request.scheme == "https":
port = "443"
else:
msg = "Unknown scheme %r has no default port" % (request.scheme,)
raise ValueError(msg)
bits.append(host.lower())
bits.append(port)
bits.append(params.get("ext", ""))
bits.append("") # to get the trailing newline
return "\n".join(bits)
def check_mac_signature(request, secret, params=None):
"""Check that the request is correctly signed with the given secret."""
if params is None:
params = parse_authz_header(request, {})
# Any KeyError here indicates a missing parameter,
# which implies an invalid signature.
try:
expected_sig = get_mac_signature(request, secret, params)
return not strings_differ(params["mac"], expected_sig)
except KeyError:
        return False
# source: repoze.who.plugins.vepauth-0.3.0/repoze/who/plugins/vepauth/utils.py
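The hand-rolled `strings_differ` above predates a stdlib equivalent: on modern Pythons (2.7.7+/3.3+), `hmac.compare_digest` performs the same timing-safe comparison. A sketch of the equivalent helper (inputs must be both str or both bytes):

```python
import hmac

def strings_differ(string1, string2):
    """Timing-safe inequality check, same intent as the manual loop above."""
    return not hmac.compare_digest(string1, string2)
```

Using the stdlib primitive avoids subtle mistakes (and interpreter-dependent timing behavior) in hand-written constant-time loops.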
import os
import time
import math
import json
import hmac
import hashlib
from base64 import urlsafe_b64encode as b64encode
from base64 import urlsafe_b64decode as b64decode
from webob.exc import HTTPNotFound
from repoze.who.plugins.vepauth.utils import strings_differ
class TokenManager(object):
"""Interface definition for management of MAC tokens.
This class defines the necessary methods for managing tokens as part
of MAC request signing:
* make_token: create a new (token, secret) pair
* parse_token: extract (data, secret) from a given token
Token management is split out into a separate class to make it easy
to adjust the various time-vs-memory-security tradeoffs involved -
for example, you might provide a custom TokenManager that stores its
state in memcache so it can be shared by several servers.
"""
    def make_token(self, request, data):
"""Generate a new token value.
This method generates a new token associated with the given VEP data,
along with a secret key used for signing requests. These will both
be unique and non-forgable and contain only characters from the
urlsafe base64 alphabet.
The method also returns a third value which is additional data to give
to the client.
If the asserted data does not correspond to a valid user then this
method will return (None, None, None).
"""
raise NotImplementedError # pragma: no cover
def parse_token(self, token):
"""Get the data and secret associated with the given token.
If the given token is valid then this method returns its user data
dict and the associated secret key. If the token is invalid (e.g.
it is expired) then this method raises ValueError.
"""
raise NotImplementedError # pragma: no cover
class SignedTokenManager(object):
"""Class managing signed MAC auth tokens.
This class provides a TokenManager implementation based on signed
timestamped tokens. It should provide a good balance between speed,
memory-usage and security for most applications.
The token contains an embedded (unencrypted!) userid and timestamp.
The secret key is derived from the token using HKDF.
The following options customize the use of this class:
* secret: string key used for signing the token;
if not specified then a random bytestring is used.
* timeout: the time after which a token will expire.
* hashmod: the hashing module to use for various HMAC operations;
if not specified then hashlib.sha1 will be used.
* applications: If the request contains a matchdict with "application"
in it, it should be one of the ones provided by this
option;
if not specified then an empty list will be used (and
all the applications will be considered valid)
"""
def __init__(self, secret=None, timeout=None, hashmod=None,
applications=None):
# Default hashmod is SHA1
if hashmod is None:
hashmod = hashlib.sha1
elif isinstance(hashmod, basestring):
hashmod = getattr(hashlib, hashmod)
digest_size = hashmod().digest_size
# Default secret is a random bytestring.
if secret is None:
secret = os.urandom(digest_size)
# Default timeout is five minutes.
if timeout is None:
timeout = 5 * 60
# Default list of applications is empty
if applications is None:
applications = ()
self.secret = secret
self._sig_secret = HKDF(self.secret, salt=None, info="SIGNING",
size=digest_size)
self.timeout = timeout
self.hashmod = hashmod
self.hashmod_digest_size = digest_size
self.applications = applications
def make_token(self, request, data):
"""Generate a new token for the given userid.
In this implementation the token is a JSON dump of the given data,
including an expiry time and salt. It has a HMAC signature appended
and is b64-encoded for transmission.
"""
self._validate_request(request, data)
data = data.copy()
data["salt"] = os.urandom(3).encode("hex")
data["expires"] = time.time() + self.timeout
payload = json.dumps(data)
sig = self._get_signature(payload)
assert len(sig) == self.hashmod_digest_size
token = b64encode(payload + sig)
return token, self._get_secret(token, data), None
def parse_token(self, token):
"""Extract the data and secret key from the token, if valid.
        In this implementation the token is valid if it has a valid signature
and if the embedded expiry time has not passed.
"""
# Parse the payload and signature from the token.
try:
decoded_token = b64decode(token)
except TypeError, e:
raise ValueError(str(e))
payload = decoded_token[:-self.hashmod_digest_size]
sig = decoded_token[-self.hashmod_digest_size:]
# Carefully check the signature.
# This is a deliberately slow string-compare to avoid timing attacks.
# Read the docstring of strings_differ for more details.
expected_sig = self._get_signature(payload)
if strings_differ(sig, expected_sig):
raise ValueError("token has invalid signature")
# Only decode *after* we've confirmed the signature.
data = json.loads(payload)
# Check whether it has expired.
if data["expires"] <= time.time():
raise ValueError("token has expired")
# Find something we can use as repoze.who.userid.
if "repoze.who.userid" not in data:
for key in ("username", "userid", "uid", "email"):
if key in data:
data["repoze.who.userid"] = data[key]
break
else:
raise ValueError("token contains no userid")
# Re-generate the secret key and return.
return data, self._get_secret(token, data)
def _validate_request(self, request, data):
"""Checks that the request is valid.
If the matchdict contains "application", checks that application is one
of the defined ones in self.applications.
        This method can be overridden by subclasses, for example
        to look up the list of possible application choices in a database.
        It is up to this method to raise the appropriate HTTP exceptions (e.g.
        a 404 if it is found that the requested url does not exist).
"""
if ('application' in request.matchdict and self.applications and
request.matchdict['application'] not in self.applications):
raise HTTPNotFound("The '%s' application does not exist" %
request.matchdict['application'])
def _get_secret(self, token, data):
"""Get the secret key associated with the given token.
In this implementation we generate the secret key using HKDF-Expand
with the token as the "info" parameter. This avoids having to keep
any extra state in memory while being sufficiently unguessable.
"""
size = self.hashmod_digest_size
salt = data["salt"].encode("ascii")
secret = HKDF(self.secret, salt=salt, info=token, size=size)
return b64encode(secret)
def _get_signature(self, value):
"""Calculate the HMAC signature for the given value."""
return hmac.new(self._sig_secret, value, self.hashmod).digest()
def HKDF_extract(salt, IKM, hashmod=hashlib.sha1):
"""HKDF-Extract; see RFC-5869 for the details."""
if salt is None:
salt = "\x00" * hashmod().digest_size
return hmac.new(salt, IKM, hashmod).digest()
def HKDF_expand(PRK, info, L, hashmod=hashlib.sha1):
"""HKDF-Expand; see RFC-5869 for the details."""
digest_size = hashmod().digest_size
N = int(math.ceil(L * 1.0 / digest_size))
assert N <= 255
T = ""
output = []
for i in xrange(1, N + 1):
data = T + info + chr(i)
T = hmac.new(PRK, data, hashmod).digest()
output.append(T)
return "".join(output)[:L]
def HKDF(secret, salt, info, size, hashmod=hashlib.sha1):
"""HKDF-extract-and-expand as a single function."""
PRK = HKDF_extract(salt, secret, hashmod)
return HKDF_expand(PRK, info, size, hashmod)
| /repoze.who.plugins.vepauth-0.3.0.tar.gz/repoze.who.plugins.vepauth-0.3.0/repoze/who/plugins/vepauth/tokenmanager.py | 0.708616 | 0.443841 | tokenmanager.py | pypi |
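The HKDF helpers in tokenmanager.py above are Python 2 code (`xrange`, byte strings as `str`). A hypothetical Python 3 port of the same RFC 5869 extract-and-expand logic, checked against the RFC's SHA-256 test case 1, might look like:

```python
import hashlib
import hmac

def hkdf_extract(salt, ikm, hashmod=hashlib.sha256):
    # RFC 5869: PRK = HMAC-Hash(salt, IKM); a missing salt defaults to zeros.
    if salt is None:
        salt = b"\x00" * hashmod().digest_size
    return hmac.new(salt, ikm, hashmod).digest()

def hkdf_expand(prk, info, length, hashmod=hashlib.sha256):
    # RFC 5869: T(i) = HMAC-Hash(PRK, T(i-1) | info | i), concatenated, then
    # truncated to the requested length.
    digest_size = hashmod().digest_size
    n = -(-length // digest_size)  # ceiling division
    assert n <= 255
    t = b""
    output = b""
    for i in range(1, n + 1):
        t = hmac.new(prk, t + info + bytes([i]), hashmod).digest()
        output += t
    return output[:length]

# RFC 5869, Appendix A, test case 1 (SHA-256).
ikm = b"\x0b" * 22
salt = bytes.fromhex("000102030405060708090a0b0c")
info = bytes.fromhex("f0f1f2f3f4f5f6f7f8f9")
prk = hkdf_extract(salt, ikm)
okm = hkdf_expand(prk, info, 42)
```

Note the original defaults to SHA-1; this sketch uses SHA-256 so it can be checked against a published vector.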
from zope.interface import Interface
from zope.interface import Attribute
class IWorkflowFactory(Interface):
def __call__(self, context, machine):
""" Return an object which implements IWorkflow """
class IWorkflow(Interface):
def add_state(name, callback=None, **kw):
""" Add a new state. ``callback`` is a callback that will be
called when a piece of content enters this state."""
def add_transition(name, from_state, to_state, callback=None, **kw):
""" Add a new transition. ``callback`` is the callback that
will be called when this transition is made (before it enters
the next state)."""
def check():
""" Check the consistency of the workflow state machine. Raise
an error if it's inconsistent."""
def state_of(content):
""" Return the current state of the content object ``content``
or None if the content object has not yet participated in this
workflow. """
def has_state(content):
""" Return true if the content has any state, false if not. """
def state_info(content, request, context=None, from_state=None):
""" Return a sequence of state info dictionaries """
def initialize(content, request=None):
""" Initialize the content object to the initial state of this
workflow. Return a tuple of (state, msg) """
def reset(content, request=None):
""" Reset the object by calling the callback of it's current
state and setting its state attr. If ``content`` has no
current state, it will be initialized into the initial state
for this workflow (see ``initialize``). Return a tuple of
(state, msg)"""
def transition(content, request, transition_name, context=None, guards=()):
""" Execute a transition using a transition name.
"""
def transition_to_state(content, request, to_state, context=None,
guards=(), skip_same=True):
""" Execute a transition to another state using a state name
(``to_state``). If ``skip_same`` is True, and the
``to_state`` is the same as the content state, do nothing."""
def get_transitions(content, request, context=None, from_state=None):
""" Return a sequence of transition dictionaries """
class IWorkflowList(Interface):
""" Marker interface used internally by get_workflow and the ZCML
machinery. An item registered as an IWorkflowList utility in
the component registry is a dictionary that contains lists of
workflow info dictionaries keyed by content type. """
class IDefaultWorkflow(Interface):
""" Marker interface used internally for workflows that aren't
associated with a particular content type"""
class IStateMachine(Interface):
# NB: this is a backwards compatibility interface only! See the
# comment at the top of statemachine.py for more info.
def add(state, transition_id, newstate, transition_fn, **kw):
"""Add a transition to the FSM."""
def execute(content, transition_id):
"""Perform a transition and execute an action."""
def state_of(content):
""" Return the current state of the given object """
def transitions(content, from_state=None):
""" Return the available transition ids for the given object
(from_state defaults to the object's current state)"""
def transition_info(content, from_state=None):
""" Return sequence of dictionaries representing the
transition information for content (from_state defaults to the
object's current state). Each dictionary has the keys
``transition_id``, ``from_state``, ``to_state`` as well as any
keywords passed in to the ``add`` method for this transition."""
def before_transition(state, newstate, transition_id, content, **kw):
"""
Hook method to be overridden by subclasses (or injected
directly onto an instance) to allow for before transition
actions (such as firing an event).
Raise an exception here to abort the transition.
"""
def after_transition(state, newstate, transition_id, content, **kw):
"""
Hook method to be overridden by subclasses (or injected
directly onto an instance) to allow for after transition
actions (such as firing an event).
"""
class ICallbackInfo(Interface):
""" Interface used internally to represent 'callback info' objects
(the 2nd argument passed to callbacks) """
transition = Attribute('A dictionary representing the transition underway')
workflow = Attribute('The workflow object that invoked the callback')
request = Attribute('The request object. May be None.')
| /repoze.workflow-1.1-py3-none-any.whl/repoze/workflow/interfaces.py | 0.891031 | 0.453806 | interfaces.py | pypi |
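The IWorkflow contract above can be exercised with a toy, zope-free sketch. `MiniWorkflow` here is hypothetical and heavily simplified (no callbacks info, no request, no guards); the real implementation lives elsewhere in repoze.workflow:

```python
class MiniWorkflow:
    """Toy sketch of the IWorkflow contract (state lives on the content)."""

    def __init__(self, state_attr, initial_state):
        self.state_attr = state_attr
        self.initial_state = initial_state
        self._states = {}
        self._transitions = {}

    def add_state(self, name, callback=None, **kw):
        self._states[name] = (callback, kw)

    def add_transition(self, name, from_state, to_state, callback=None, **kw):
        self._transitions[name] = (from_state, to_state, callback, kw)

    def state_of(self, content):
        # None if the content has not yet participated in this workflow.
        return getattr(content, self.state_attr, None)

    def initialize(self, content):
        setattr(content, self.state_attr, self.initial_state)
        return self.initial_state, None

    def transition(self, content, transition_name):
        from_state, to_state, callback, kw = self._transitions[transition_name]
        if self.state_of(content) != from_state:
            raise ValueError("content is not in state %r" % from_state)
        if callback is not None:
            callback(content, None)
        setattr(content, self.state_attr, to_state)

class Doc:
    pass

wf = MiniWorkflow("state", "draft")
wf.add_state("draft")
wf.add_state("published")
wf.add_transition("publish", "draft", "published")
doc = Doc()
wf.initialize(doc)
wf.transition(doc, "publish")
```

The point of the sketch is the shape of the API: states and transitions are registered by name, and the workflow reads and writes a state attribute on the content object it is handed.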
from repoze.workflow.interfaces import IStateMachine
from zope.interface import implementer
_marker = ()
class StateMachineError(Exception):
""" Invalid input to finite state machine"""
@implementer(IStateMachine)
class StateMachine(object):
""" Finite state machine featuring transition actions.
The class stores a dictionary of (from_state, transition_id) keys, and
(to_state, transition_fn) values.
When a (state, transition_id) search is performed:
* an exact match is checked first,
* (state, None) is checked next.
The transition function must be of the following form:
* function(current_state, new_state, transition_id, context, **kw)
It is recommended that all transition functions be module level
callables to facilitate issues related to StateMachine
persistence.
"""
def __init__(self, state_attr, states=None, initial_state=None):
"""
o state_attr - attribute name where a given object's current
state will be stored (object is responsible for
persisting)
o states - state dictionary
o initial_state - initial state for any object using this
state machine
"""
if states is None:
states = {}
self.states = states
self.state_attr = state_attr
self.initial_state = initial_state
def add(self, state, transition_id, newstate, transition_fn, **kw):
self.states[(state, transition_id)] = (newstate, transition_fn, kw)
def execute(self, context, transition_id):
state = getattr(context, self.state_attr, _marker)
if state is _marker:
state = self.initial_state
si = (state, transition_id)
sn = (state, None)
newstate = None
# exact state match?
if si in self.states:
newstate, transition_fn, kw = self.states[si]
# no exact match, how about a None (catch-all) match?
elif sn in self.states:
newstate, transition_fn, kw = self.states[sn]
if newstate is None:
raise StateMachineError(
'No transition from %r using transition %r'
% (state, transition_id))
self.before_transition(state, newstate, transition_id, context, **kw)
transition_fn(state, newstate, transition_id, context, **kw)
self.after_transition(state, newstate, transition_id, context, **kw)
setattr(context, self.state_attr, newstate)
def state_of(self, context):
state = getattr(context, self.state_attr, self.initial_state)
return state
def transitions(self, context, from_state=None):
if from_state is None:
from_state = self.state_of(context)
transitions = [t_id for (state, t_id) in self.states.keys()
if state == from_state]
return transitions
def transition_info(self, context, from_state=None):
if from_state is None:
from_state = self.state_of(context)
L = []
for (state,t_id), (newstate,transition_fn,kw) in self.states.items():
if state == from_state:
newkw = {}
newkw.update(kw)
newkw['transition_id'] = t_id
newkw['from_state'] = state
newkw['to_state'] = newstate
L.append(newkw)
return L
def before_transition(self, state, newstate, transition_id, context, **kw):
pass
def after_transition(self, state, newstate, transition_id, context, **kw):
pass
| /repoze.workflow-1.1-py3-none-any.whl/repoze/workflow/statemachine.py | 0.774796 | 0.58053 | statemachine.py | pypi |
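The docstring of `StateMachine` describes a two-step lookup: an exact `(state, transition_id)` match is checked first, then `(state, None)` as a catch-all. A minimal stand-in for just that lookup (dropping the transition functions and hooks) might be:

```python
# Simplified mirror of StateMachine.execute's lookup order: an exact
# (state, transition_id) key wins; a (state, None) entry catches any
# other transition id out of that state.
states = {}

def add(state, transition_id, newstate):
    states[(state, transition_id)] = newstate

def lookup(state, transition_id):
    for key in ((state, transition_id), (state, None)):
        if key in states:
            return states[key]
    raise KeyError("No transition from %r using %r" % (state, transition_id))

add("pending", "approve", "published")
add("pending", None, "rejected")    # catch-all: any other transition rejects
```

With this table, `lookup("pending", "approve")` takes the exact match while any other transition id from "pending" falls through to the catch-all.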
from zope.interface import Interface
from zope.interface import implementedBy
from zope.interface import providedBy
from zope.schema import TextLine
from zope.component import adaptedBy
from zope.component import getSiteManager
from zope.configuration.fields import GlobalInterface
from zope.configuration.fields import GlobalObject
from zope.configuration.fields import Tokens
from ._compat import BLANK
from ._compat import text_ as _u
def handler(methodName, *args, **kwargs):
method = getattr(getSiteManager(), methodName)
method(*args, **kwargs)
def adapter(_context, factory, provides=None, for_=None, name=''):
if for_ is None:
if len(factory) == 1:
for_ = adaptedBy(factory[0])
if for_ is None:
raise TypeError("No for attribute was provided and can't "
"determine what the factory adapts.")
for_ = tuple(for_)
if provides is None:
if len(factory) == 1:
p = list(implementedBy(factory[0]))
if len(p) == 1:
provides = p[0]
if provides is None:
raise TypeError("Missing 'provides' attribute")
# Generate a single factory from multiple factories:
factories = factory
if len(factories) == 1:
factory = factories[0]
elif len(factories) < 1:
raise ValueError("No factory specified")
elif len(factories) > 1 and len(for_) != 1:
raise ValueError("Can't use multiple factories and multiple for")
else:
factory = _rolledUpFactory(factories)
_context.action(
discriminator = ('adapter', for_, provides, name),
callable = handler,
args = ('registerAdapter',
factory, for_, provides, name, _context.info),
)
class IAdapterDirective(Interface):
"""
Register an adapter
"""
factory = Tokens(
title=_u("Adapter factory/factories"),
description=(_u("A list of factories (usually just one) that create"
" the adapter instance.")),
required=True,
value_type=GlobalObject()
)
provides = GlobalInterface(
title=_u("Interface the component provides"),
description=(_u("This attribute specifies the interface the adapter"
" instance must provide.")),
required=False,
)
for_ = Tokens(
title=_u("Specifications to be adapted"),
description=_u("This should be a list of interfaces or classes"),
required=False,
value_type=GlobalObject(
missing_value=object(),
),
)
name = TextLine(
title=_u("Name"),
description=(_u("Adapters can have names.\n\n"
"This attribute allows you to specify the name for"
" this adapter.")),
required=False,
)
_handler = handler
def subscriber(_context, for_=None, factory=None, handler=None, provides=None):
if factory is None:
if handler is None:
raise TypeError("No factory or handler provided")
if provides is not None:
raise TypeError("Cannot use handler with provides")
factory = handler
else:
if handler is not None:
raise TypeError("Cannot use handler with factory")
if provides is None:
raise TypeError(
"You must specify a provided interface when registering "
"a factory")
if for_ is None:
for_ = adaptedBy(factory)
if for_ is None:
raise TypeError("No for attribute was provided and can't "
"determine what the factory (or handler) adapts.")
for_ = tuple(for_)
if handler is not None:
_context.action(
discriminator = None,
callable = _handler,
args = ('registerHandler',
handler, for_, BLANK, _context.info),
)
else:
_context.action(
discriminator = None,
callable = _handler,
args = ('registerSubscriptionAdapter',
factory, for_, provides, BLANK, _context.info),
)
class ISubscriberDirective(Interface):
"""
Register a subscriber
"""
factory = GlobalObject(
title=_u("Subscriber factory"),
description=_u("A factory used to create the subscriber instance."),
required=False,
)
handler = GlobalObject(
title=_u("Handler"),
description=_u("A callable object that handles events."),
required=False,
)
provides = GlobalInterface(
title=_u("Interface the component provides"),
description=(_u("This attribute specifies the interface the adapter"
" instance must provide.")),
required=False,
)
for_ = Tokens(
title=_u("Interfaces or classes that this subscriber depends on"),
description=_u("This should be a list of interfaces or classes"),
required=False,
value_type=GlobalObject(
missing_value = object(),
),
)
def utility(_context, provides=None, component=None, factory=None, name=''):
if factory and component:
raise TypeError("Can't specify factory and component.")
if provides is None:
if factory:
provides = list(implementedBy(factory))
else:
provides = list(providedBy(component))
if len(provides) == 1:
provides = provides[0]
else:
raise TypeError("Missing 'provides' attribute")
if factory:
kw = dict(factory=factory)
else:
# older zope.component registries don't accept factory as a kwarg,
# so if we don't need it, we don't pass it
kw = {}
_context.action(
discriminator = ('utility', provides, name),
callable = handler,
args = ('registerUtility', component, provides, name),
kw = kw,
)
class IUtilityDirective(Interface):
"""Register a utility."""
component = GlobalObject(
title=_u("Component to use"),
description=(_u("Python name of the implementation object. This"
" must identify an object in a module using the"
" full dotted name. If specified, the"
" ``factory`` field must be left blank.")),
required=False,
)
factory = GlobalObject(
title=_u("Factory"),
description=(_u("Python name of a factory which can create the"
" implementation object. This must identify an"
" object in a module using the full dotted name."
" If specified, the ``component`` field must"
" be left blank.")),
required=False,
)
provides = GlobalInterface(
title=_u("Provided interface"),
description=_u("Interface provided by the utility."),
required=False,
)
name = TextLine(
title=_u("Name"),
description=(_u("Name of the registration. This is used by"
" application code when locating a utility.")),
required=False,
)
def _rolledUpFactory(factories):
def factory(ob):
for f in factories:
ob = f(ob)
return ob
# Store the original factory for documentation
factory.factory = factories[0]
return factory
| /repoze.zcml-1.1-py3-none-any.whl/repoze/zcml/__init__.py | 0.670177 | 0.20268 | __init__.py | pypi |
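`_rolledUpFactory` above collapses a list of one-argument factories into a single factory by threading the object through each in turn. The same composition pattern, standalone:

```python
# Compose one-argument factories left to right, as _rolledUpFactory does.
def rolled_up_factory(factories):
    def factory(ob):
        for f in factories:
            ob = f(ob)
        return ob
    factory.factory = factories[0]  # expose the first factory, e.g. for docs
    return factory

def double(x):
    return x * 2

def increment(x):
    return x + 1

composed = rolled_up_factory([double, increment])
```

Here `composed(3)` first doubles to 6, then increments to 7, matching the order the factories were listed in.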