import dynamic from 'next/dynamic';
const DynamicComponentWithCustomLoading = dynamic(() => import('../components/hello'), {
    loadableGenerated: {
        webpack: () => [
            require.resolveWeak("../components/hello")
        ]
    },
    loading: () => <p>...</p>
});
const DynamicClientOnlyComponent = dynamic(() => import('../components/hello'), {
    loadableGenerated: {
        webpack: () => [
            require.resolveWeak("../components/hello")
        ]
    },
    ssr: false
});
const DynamicClientOnlyComponentWithSuspense = dynamic(() => import('../components/hello'), {
    loadableGenerated: {
        webpack: () => [
            require.resolveWeak("../components/hello")
        ]
    },
    ssr: false,
    suspense: true
});
Design
======
This document describes how libnetwork has been designed to achieve its goal.
Requirements for individual releases can be found on the [Project Page](https://github.com/docker/libnetwork/wiki).
Many of the design decisions are inspired by lessons learned from the Docker networking design as of Docker v1.6.
Please refer to the [Docker v1.6 Design](legacy.md) document for more information on networking design as of Docker v1.6.
## Goal
The libnetwork project follows the Docker and Linux philosophy of developing small, highly modular and composable tools that work well independently.
Libnetwork aims to satisfy that composability need for networking in containers.
## The Container Network Model
Libnetwork implements the Container Network Model (CNM), which formalizes the steps required to provide networking for containers while providing an abstraction that can be used to support multiple network drivers. The CNM is built on three main components (shown below).

**Sandbox**
A Sandbox contains the configuration of a container's network stack.
This includes management of the container's interfaces, routing table and DNS settings.
An implementation of a Sandbox could be a Linux Network Namespace, a FreeBSD Jail or other similar concept.
A Sandbox may contain *many* endpoints from *multiple* networks.
**Endpoint**
An Endpoint joins a Sandbox to a Network.
An implementation of an Endpoint could be a `veth` pair, an Open vSwitch internal port or similar.
An Endpoint can belong to only one Network and, once connected, to only one Sandbox.
**Network**
A Network is a group of Endpoints that are able to communicate with each other directly.
An implementation of a Network could be a Linux bridge, a VLAN, etc.
Networks consist of *many* endpoints.
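The cardinality rules above (a Sandbox may hold endpoints from multiple networks; an Endpoint belongs to exactly one Network and at most one Sandbox) can be sketched with plain classes. This is an illustration only, not libnetwork's actual Go types:

```python
class Network:
    def __init__(self, name):
        self.name = name
        self.endpoints = []  # a Network consists of many Endpoints


class Sandbox:
    def __init__(self):
        self.endpoints = []  # may hold endpoints from multiple networks


class Endpoint:
    def __init__(self, network):
        self.network = network   # exactly one Network
        self.sandbox = None      # at most one Sandbox, once connected
        network.endpoints.append(self)

    def join(self, sandbox):
        if self.sandbox is not None:
            raise ValueError("an Endpoint can belong to only one Sandbox")
        self.sandbox = sandbox
        sandbox.endpoints.append(self)
```

The `join` check encodes the "only one Sandbox, if connected" constraint from the text.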
## CNM Objects
**NetworkController**
The `NetworkController` object provides the entry point into libnetwork, exposing simple APIs for users (such as Docker Engine) to allocate and manage Networks. libnetwork supports multiple active drivers (both built-in and remote). `NetworkController` allows the user to bind a particular driver to a given network.
**Driver**
`Driver` is not a user-visible object; drivers provide the actual network implementation. `NetworkController` provides an API to configure a driver with driver-specific options/labels that are transparent to libnetwork but can be handled by the drivers directly. Drivers can be both built-in (such as Bridge, Host, None and Overlay) and remote (from plugin providers), satisfying various use cases and deployment scenarios. At this point, the Driver owns a network and is responsible for managing it (including IPAM, etc.). This can be improved in the future by having multiple drivers participate in handling various network management functionalities.
**Network**
The `Network` object is an implementation of the `CNM : Network` defined above. `NetworkController` provides APIs to create and manage `Network` objects. Whenever a `Network` is created or updated, the corresponding `Driver` is notified of the event. LibNetwork treats `Network` objects at an abstract level to provide connectivity between a group of endpoints that belong to the same network, and isolation from the rest. The `Driver` performs the actual work of providing the required connectivity and isolation. The connectivity can be within the same host or across multiple hosts; hence a `Network` has a global scope within a cluster.
**Endpoint**
`Endpoint` represents a Service Endpoint. It provides connectivity between the services exposed by a container in a network and the services provided by other containers in that network. The `Network` object provides APIs to create and manage endpoints. An endpoint can be attached to only one network. `Endpoint` creation calls are made to the corresponding `Driver`, which is responsible for allocating resources for the corresponding `Sandbox`. Since an `Endpoint` represents a Service and not necessarily a particular container, `Endpoint` has a global scope within a cluster.
**Sandbox**
The `Sandbox` object represents a container's network configuration, such as IP address, MAC address, routes and DNS entries. A `Sandbox` object is created when the user requests to create an endpoint on a network. The `Driver` that handles the `Network` is responsible for allocating the required network resources (such as the IP address) and passing the information, called `SandboxInfo`, back to libnetwork. libnetwork makes use of OS-specific constructs (for example, netns on Linux) to populate the network configuration into the container represented by the `Sandbox`. A `Sandbox` can have multiple endpoints attached to different networks. Since a `Sandbox` is associated with a particular container on a given host, it has a local scope representing the host that the container belongs to.
**CNM Attributes**
***Options***
`Options` provides a generic and flexible mechanism to pass `Driver` specific configuration options from the user to the `Driver` directly. `Options` are just key-value pairs of data with `key` represented by a string and `value` represented by a generic object (such as a Go `interface{}`). Libnetwork will operate on the `Options` ONLY if the `key` matches any of the well-known `Labels` defined in the `net-labels` package. `Options` also encompasses `Labels` as explained below. `Options` are generally NOT end-user visible (in UI), while `Labels` are.
***Labels***
`Labels` are very similar to `Options`; in fact, they are just a subset of `Options`. `Labels` are typically end-user visible and are represented in the UI explicitly using the `--labels` option. They are passed from the UI to the `Driver` so that the `Driver` can make use of them to perform driver-specific operations (such as choosing the subnet from which to allocate IP addresses in a Network).
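As a rough illustration of the Options/Labels split, here is a sketch in Python. The keys, values, and prefix below are invented for this example; libnetwork's actual well-known labels are defined in its `net-labels` package:

```python
# Hypothetical option keys -- invented for illustration only.
options = {
    "mtu": 1450,                                # driver-specific, opaque to libnetwork
    "com.example.label.subnet": "10.0.0.0/24",  # a Label: user-visible key/value
}


def labels_of(options, label_prefix="com.example.label."):
    """Return the subset of options that are Labels (made-up prefix rule)."""
    return {k: v for k, v in options.items() if k.startswith(label_prefix)}
```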
## CNM Lifecycle
Consumers of the CNM, like Docker, interact through the CNM objects and their APIs to network the containers that they manage.
1. `Drivers` register with `NetworkController`. Built-in drivers register inside libnetwork, while remote drivers register with libnetwork via the plugin mechanism (*plugin mechanism is a WIP*). Each `Driver` handles a particular `networkType`.
2. A `NetworkController` object is created using the `libnetwork.New()` API to manage the allocation of Networks and, optionally, configure a `Driver` with driver-specific `Options`.
3. A `Network` is created using the controller's `NewNetwork()` API by providing a `name` and a `networkType`. The `networkType` parameter helps to choose a corresponding `Driver` and binds the created `Network` to that `Driver`. From this point, any operation on the `Network` is handled by that `Driver`.
4. The `controller.NewNetwork()` API also takes an optional `options` parameter, which carries driver-specific options and `Labels` that the drivers can make use of for their own purposes.
5. `network.CreateEndpoint()` can be called to create a new Endpoint in a given network. This API also accepts an optional `options` parameter which drivers can make use of. These `options` carry both well-known labels and driver-specific labels. Drivers are in turn called with `driver.CreateEndpoint`, and they can choose to reserve IPv4/IPv6 addresses when an `Endpoint` is created in a `Network`. The `Driver` assigns these addresses using the `InterfaceInfo` interface defined in the `driverapi`. The IPv4/IPv6 addresses, along with the ports the endpoint exposes, are needed to complete the endpoint-as-a-service definition, since a service endpoint is essentially a network address and the port number that the application container listens on.
6. `endpoint.Join()` can be used to attach a container to an `Endpoint`. The Join operation creates a `Sandbox` for that container if one doesn't already exist. Drivers can make use of the Sandbox key to identify multiple endpoints attached to the same container. This API also accepts an optional `options` parameter which drivers can make use of.
* Though it is not a direct design issue of LibNetwork, users like `Docker` are strongly encouraged to call `endpoint.Join()` during the container's `Start()` lifecycle, which is invoked *before* the container is made operational. As part of the Docker integration, this will be taken care of.
* A frequently asked question about the `endpoint.Join()` API is why we need one API to create an Endpoint and another to join it.
- The answer is that an Endpoint represents a Service which may or may not be backed by a container. When an Endpoint is created, its resources are reserved so that any container can attach to the endpoint later and get consistent networking behaviour.
7. `endpoint.Leave()` can be invoked when a container is stopped. The `Driver` can clean up the state that it allocated during the `Join()` call. LibNetwork deletes the `Sandbox` when the last referencing endpoint leaves the network. But LibNetwork keeps hold of the IP addresses as long as the endpoint is still present; they are reused when the container (or any container) joins again. This ensures that the container's resources are reused when containers are stopped and started again.
8. `endpoint.Delete()` is used to delete an endpoint from a network. This results in deleting an endpoint and cleaning up the cached `sandbox.Info`.
9. `network.Delete()` is used to delete a network. LibNetwork will not allow the delete to proceed if there are any existing endpoints attached to the Network.
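The numbered lifecycle above can be condensed into a small executable sketch. The stub classes below are stand-ins invented for illustration; only the flow and the commented step numbers mirror this document, nothing here is libnetwork's real implementation:

```python
class StubDriver:
    """Records the driver callbacks it receives, in order."""
    def __init__(self):
        self.events = []
    def create_network(self, name):
        self.events.append(("create_network", name))
    def create_endpoint(self, name):
        self.events.append(("create_endpoint", name))
    def join(self, name):
        self.events.append(("join", name))
    def leave(self, name):
        self.events.append(("leave", name))


class Controller:                               # step 2: libnetwork.New()
    def __init__(self):
        self.drivers = {}
    def register(self, network_type, driver):   # step 1: driver registration
        self.drivers[network_type] = driver
    def new_network(self, network_type, name):  # steps 3-4: NewNetwork()
        driver = self.drivers[network_type]
        driver.create_network(name)             # the bound Driver is notified
        return Net(name, driver)


class Net:
    def __init__(self, name, driver):
        self.name, self.driver = name, driver
    def create_endpoint(self, ep_name):         # step 5: CreateEndpoint()
        self.driver.create_endpoint(ep_name)
        return Ep(ep_name, self.driver)


class Ep:
    def __init__(self, name, driver):
        self.name, self.driver = name, driver
    def join(self, container_id):               # step 6: attach a container
        self.driver.join(self.name)
    def leave(self):                            # step 7: container stopped
        self.driver.leave(self.name)
```

Running the steps in order shows the driver receiving one callback per lifecycle event, in the same order as the list above.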
## Implementation Details
### Networks & Endpoints
LibNetwork's Network and Endpoint APIs are primarily for managing the corresponding objects and book-keeping them to provide the level of abstraction required by the CNM. The actual implementation is delegated to the drivers, which realize the functionality as promised in the CNM. For more information on these details, please see [the drivers section](#drivers).
### Sandbox
Libnetwork provides a framework to implement a Sandbox on multiple operating systems. Currently, a Sandbox is implemented for Linux using `namespace_linux.go` and `configure_linux.go` in the `sandbox` package.
This creates a Network Namespace for each sandbox which is uniquely identified by a path on the host filesystem.
Netlink calls are used to move interfaces from the global namespace to the Sandbox namespace.
Netlink is also used to manage the routing table in the namespace.
## Drivers
### API
Drivers are essentially an extension of libnetwork and provide the actual implementation of all of the LibNetwork APIs defined above. Hence there is a 1-1 correspondence for all the `Network` and `Endpoint` APIs, which includes:
* `driver.Config`
* `driver.CreateNetwork`
* `driver.DeleteNetwork`
* `driver.CreateEndpoint`
* `driver.DeleteEndpoint`
* `driver.Join`
* `driver.Leave`
These driver-facing APIs make use of unique identifiers (`networkid`, `endpointid`, ...) instead of names (as seen in the user-facing APIs).
The APIs are still a work in progress and may change based on driver requirements, especially when it comes to multi-host networking.
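The 1-1 correspondence can be expressed as an abstract interface. The sketch below is a Python stand-in for the Go `driverapi`; the method names come from the list above, but the signatures are simplified and partly invented:

```python
import abc


class Driver(abc.ABC):
    """Stand-in for the driver-facing API; note it takes IDs, not names."""
    @abc.abstractmethod
    def config(self, options): ...
    @abc.abstractmethod
    def create_network(self, networkid, options): ...
    @abc.abstractmethod
    def delete_network(self, networkid): ...
    @abc.abstractmethod
    def create_endpoint(self, networkid, endpointid): ...
    @abc.abstractmethod
    def delete_endpoint(self, networkid, endpointid): ...
    @abc.abstractmethod
    def join(self, networkid, endpointid): ...
    @abc.abstractmethod
    def leave(self, networkid, endpointid): ...
```

A concrete driver (built-in or remote) must supply all seven entry points before it can be used.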
### Driver semantics
* `Driver.CreateEndpoint`
This method is passed an interface `EndpointInfo`, with methods `Interface` and `AddInterface`.
If the value returned by `Interface` is non-nil, the driver is expected to make use of the interface information therein (e.g., treating the address or addresses as statically supplied), and must return an error if it cannot. If the value is `nil`, the driver should allocate exactly one _fresh_ interface and use `AddInterface` to record it, or return an error if it cannot.
It is forbidden to use `AddInterface` if `Interface` is non-nil.
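The `Interface`/`AddInterface` contract can be sketched as follows. This is a Python illustration with an invented `EndpointInfo` stub and a made-up address; the real interface lives in the Go `driverapi`:

```python
class EndpointInfo:
    """Invented stub mirroring the contract described above."""
    def __init__(self, iface=None):
        self._iface = iface   # statically supplied interface, if any
        self.added = []
    def interface(self):
        return self._iface
    def add_interface(self, iface):
        if self._iface is not None:
            raise RuntimeError("AddInterface is forbidden when Interface is non-nil")
        self.added.append(iface)


def driver_create_endpoint(info):
    supplied = info.interface()
    if supplied is not None:
        # Use the statically supplied address(es); a real driver must
        # return an error if it cannot honour them.
        return supplied
    fresh = {"addr": "10.0.0.9/24"}   # allocate exactly one fresh interface
    info.add_interface(fresh)
    return fresh
```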
## Implementations
Libnetwork includes the following driver packages:
- null
- bridge
- overlay
- remote
### Null
The null driver is a `noop` implementation of the driver API, used only in cases where no networking is desired. It provides backward compatibility with Docker's `--net=none` option.
### Bridge
The `bridge` driver provides a Linux-specific bridging implementation based on the Linux Bridge.
For more details, please [see the Bridge Driver documentation](bridge.md).
### Overlay
The `overlay` driver implements networking that can span multiple hosts using overlay network encapsulations such as VXLAN.
For more details on its design, please see the [Overlay Driver Design](overlay.md).
### Remote
The `remote` package does not provide a driver, but provides a means of supporting drivers over a remote transport.
This allows a driver to be written in a language of your choice.
For further details, please see the [Remote Driver Design](remote.md).
"""Machinery for interspersing lines of text with linked and colored regions
The typical entrypoints are es_lines() and html_line().
Within this file, "tag" means a tuple of (file-wide offset, is_start, payload).
"""
import cgi
from itertools import chain
try:
from itertools import compress
except ImportError:
from itertools import izip
def compress(data, selectors):
return (d for d, s in izip(data, selectors) if s)
import json
from warnings import warn
from jinja2 import Markup
from dxr.plugins import all_plugins
from dxr.utils import without_ending
class Line(object):
"""Representation of a line's beginning and ending as the contents of a tag
Exists to motivate the balancing machinery to close all the tags at the end
of every line (and reopen any afterward that span lines).
"""
sort_order = 0 # Sort Lines outermost.
def __repr__(self):
return 'Line()'
LINE = Line()
class RefClassIdTagger(type):
"""Metaclass which automatically generates an ``id`` attr on the class as
a serializable class identifier.
Having a dedicated identifier allows Ref subclasses to move or change name
without breaking index compatibility.
Expects a ``_plugin`` attr to use as a prefix.
"""
def __new__(metaclass, name, bases, dict):
dict['id'] = without_ending('Ref', name)
return type.__new__(metaclass, name, bases, dict)
class Ref(object):
"""Abstract superclass for a cross-reference attached to a run of text
Carries enough data to construct a context menu, highlight instances of
the same symbol, and show something informative on hover.
"""
sort_order = 1
__slots__ = ['menu_data', 'hover', 'qualname_hash']
__metaclass__ = RefClassIdTagger
def __init__(self, tree, menu_data, hover=None, qualname=None, qualname_hash=None):
"""
:arg menu_data: Arbitrary JSON-serializable data from which we can
construct a context menu
:arg hover: The contents of the <a> tag's title attribute. (The first
one wins.)
:arg qualname: A hashable unique identifier for the symbol surrounded
by this ref, for highlighting
:arg qualname_hash: The hashed version of ``qualname``, which you can
pass instead of ``qualname`` if you have access to the
already-hashed version
"""
self.tree = tree
self.menu_data = menu_data
self.hover = hover
self.qualname_hash = hash(qualname) if qualname else qualname_hash
def es(self):
"""Return a serialization of myself to store in elasticsearch."""
ret = {'plugin': self.plugin,
'id': self.id,
# Smash the data into a string, because it will have a
# different schema from subclass to subclass, and ES will freak
# out:
'menu_data': json.dumps(self.menu_data)}
if self.hover:
ret['hover'] = self.hover
if self.qualname_hash is not None: # could be 0
ret['qualname_hash'] = self.qualname_hash
return ret
@staticmethod
def es_to_triple(es_data, tree):
"""Convert ES-dwelling ref representation to a (start, end,
:class:`~dxr.lines.Ref` subclass) triple.
Return a subclass of Ref, chosen according to the ES data. Into its
attributes "menu_data", "hover" and "qualname_hash", copy the ES
properties of the same names, JSON-decoding "menu_data" first.
:arg es_data: An item from the array under the 'refs' key of an ES LINE
document
:arg tree: The :class:`~dxr.config.TreeConfig` representing the tree
from which the ``es_data`` was pulled
"""
def ref_class(plugin, id):
"""Return the subclass of Ref identified by a combination of
plugin and class ID."""
plugins = all_plugins()
try:
return plugins[plugin].refs[id]
except KeyError:
warn('Ref subclass from plugin %s with ID %s was referenced '
'in the index but not found in the current '
'implementation. Ignored.' % (plugin, id))
payload = es_data['payload']
cls = ref_class(payload['plugin'], payload['id'])
return (es_data['start'],
es_data['end'],
cls(tree,
json.loads(payload['menu_data']),
hover=payload.get('hover'),
qualname_hash=payload.get('qualname_hash')))
def menu_items(self):
"""Return an iterable of menu items to be attached to a ref.
Return an iterable of dicts of this form::
{
html: the HTML to be used as the menu item itself
href: the URL to visit when the menu item is chosen
title: the tooltip text given on hovering over the menu item
icon: the icon to show next to the menu item: the name of a PNG
from the ``icons`` folder, without the .png extension
}
Typically, this pulls data out of ``self.menu_data``.
"""
raise NotImplementedError
def opener(self):
"""Emit the opening anchor tag for a cross reference.
Menu item text, links, and metadata are JSON-encoded and dumped into a
data attr on the tag. JS finds them there and creates a menu on click.
"""
if self.hover:
title = ' title="' + cgi.escape(self.hover, True) + '"'
else:
title = ''
if self.qualname_hash is not None:
cls = ' class="tok%i"' % self.qualname_hash
else:
cls = ''
menu_items = list(self.menu_items())
return u'<a data-menu="%s"%s%s>' % (
cgi.escape(json.dumps(menu_items), True),
title,
cls)
def closer(self):
return u'</a>'
class Region(object):
"""A <span> tag with a CSS class, wrapped around a run of text"""
sort_order = 2 # Sort Regions innermost, as it doesn't matter if we split
# them.
__slots__ = ['css_class']
def __init__(self, css_class):
self.css_class = css_class
def es(self):
return self.css_class
@classmethod
def es_to_triple(cls, es_region):
"""Convert ES-dwelling region representation to a (start, end,
:class:`~dxr.lines.Region`) triple."""
return es_region['start'], es_region['end'], cls(es_region['payload'])
def opener(self):
return u'<span class="%s">' % cgi.escape(self.css_class, True)
def closer(self):
return u'</span>'
def __repr__(self):
"""Return a nice representation for debugging."""
return 'Region("%s")' % self.css_class
def balanced_tags(tags):
"""Come up with a balanced series of tags which express the semantics of
the given sorted interleaved ones.
Return an iterable of (point, is_start, Region/Reg/Line) without any
(pointless) zero-width tag spans. The output isn't necessarily optimal, but
it's fast and not embarrassingly wasteful of space.
"""
return without_empty_tags(balanced_tags_with_empties(tags))
def without_empty_tags(tags):
"""Filter zero-width tagged spans out of a sorted, balanced tag stream.
Maintain tag order. Line break tags are considered self-closing.
"""
buffer = [] # tags
depth = 0
for tag in tags:
point, is_start, payload = tag
if is_start:
buffer.append(tag)
depth += 1
else:
top_point, _, top_payload = buffer[-1]
if top_payload is payload and top_point == point:
                # It's a closer, it matches the last thing in buffer, and
                # together they form a zero-width span. Cancel the last
                # thing in buffer.
buffer.pop()
else:
# It's an end tag that actually encloses some stuff.
buffer.append(tag)
depth -= 1
# If we have a balanced set of non-zero-width tags, emit them:
if not depth:
for b in buffer:
yield b
del buffer[:]
def balanced_tags_with_empties(tags):
"""Come up with a balanced series of tags which express the semantics of
the given sorted interleaved ones.
Return an iterable of (point, is_start, Region/Reg/Line), possibly
including some zero-width tag spans. Each line is enclosed within Line tags.
:arg tags: An iterable of (offset, is_start, payload) tuples, with one
closer for each opener but possibly interleaved. There is one tag for
each line break, with a payload of LINE and an is_start of False. Tags
are ordered with closers first, then line breaks, then openers.
"""
def close(to=None):
"""Return an iterable of closers for open tags up to (but not
including) the one with the payload ``to``."""
# Loop until empty (if we're not going "to" anything in particular) or
# until the corresponding opener is at the top of the stack. We check
# that "to is None" just to surface any stack-tracking bugs that would
# otherwise cause opens to empty too soon.
while opens if to is None else opens[-1] is not to:
intermediate_payload = opens.pop()
yield point, False, intermediate_payload
closes.append(intermediate_payload)
def reopen():
"""Yield open tags for all temporarily closed ones."""
while closes:
intermediate_payload = closes.pop()
yield point, True, intermediate_payload
opens.append(intermediate_payload)
opens = [] # payloads of tags which are currently open
closes = [] # payloads of tags which we've had to temporarily close so we could close an overlapping tag
point = 0
yield 0, True, LINE
for point, is_start, payload in tags:
if is_start:
yield point, is_start, payload
opens.append(payload)
elif payload is LINE:
# Close all open tags before a line break (since each line is
# wrapped in its own <code> tag pair), and reopen them afterward.
for t in close(): # I really miss "yield from".
yield t
# Since preserving self-closing linebreaks would throw off
# without_empty_tags(), we convert to explicit closers here. We
# surround each line with them because empty balanced ones would
# get filtered out.
yield point, False, LINE
yield point, True, LINE
for t in reopen():
yield t
else:
# Temporarily close whatever's been opened between the start tag of
# the thing we're trying to close and here:
for t in close(to=payload):
yield t
# Close the current tag:
yield point, False, payload
opens.pop()
# Reopen the temporarily closed ones:
for t in reopen():
yield t
yield point, False, LINE
def tag_boundaries(tags):
"""Return a sequence of (offset, is_start, Region/Ref/Line) tuples.
Basically, split the atomic tags that come out of plugins into separate
start and end points, which can then be thrown together in a bag and sorted
as the first step in the tag-balancing process.
Like in Python slice notation, the offset of a tag refers to the index of
the source code char it comes before.
:arg tags: An iterable of (start, end, Ref) and (start, end, Region) tuples
"""
for start, end, data in tags:
# Filter out zero-length spans which don't do any good and
# which can cause starts to sort after ends, crashing the tag
# balancer. Incidentally filter out spans where start tags come
# after end tags, though that should never happen.
#
# Also filter out None starts and ends. I don't know where they
# come from. That shouldn't happen and should be fixed in the
# plugins.
if (start is not None and start != -1 and
end is not None and end != -1 and
start < end):
yield start, True, data
yield end, False, data
def line_boundaries(lines):
"""Return a tag for the end of each line in a string.
:arg lines: iterable of the contents of lines in a file, including any
trailing newline character
Endpoints and start points are coincident: right after a (universal)
newline.
"""
up_to = 0
for line in lines:
up_to += len(line)
yield up_to, False, LINE
def non_overlapping_refs(tags):
"""Yield a False for each Ref in ``tags`` that overlaps a subsequent one,
a True for the rest.
Assumes the incoming tags, while not necessarily well balanced, have the
start tag come before the end tag, if both are present. (Lines are weird.)
"""
blacklist = set()
open_ref = None
for point, is_start, payload in tags:
if isinstance(payload, Ref):
if payload in blacklist: # It's the evil close tag of a misnested tag.
blacklist.remove(payload)
yield False
elif open_ref is None: # and is_start: (should always be true if input is sane)
assert is_start
open_ref = payload
yield True
elif open_ref is payload: # it's the closer
open_ref = None
yield True
else: # It's an evil open tag of a misnested tag.
warn('htmlifier plugins requested overlapping <a> tags. Fix the plugins.')
blacklist.add(payload)
yield False
else:
yield True
def remove_overlapping_refs(tags):
"""For any series of <a> tags that overlap each other, filter out all but
the first.
There's no decent way to represent that sort of thing in the UI, so we
don't support it.
:arg tags: A list of (point, is_start, payload) tuples, sorted by point.
The tags do not need to be properly balanced.
"""
    # Reuse the list so we don't use any more memory. Start i at -1 so the
    # final del also handles the case where nothing survives the filter.
    i = -1
    for i, tag in enumerate(compress(tags, non_overlapping_refs(tags))):
        tags[i] = tag
    del tags[i + 1:]
def nesting_order((point, is_start, payload)):
"""Return a sorting key that places coincident Line boundaries outermost,
then Ref boundaries, and finally Region boundaries.
The Line bit saves some empty-tag elimination. The Ref bit saves splitting
an <a> tag (and the attendant weird UI) for the following case::
Ref ____________ # The Ref should go on the outside.
Region _____
Other scenarios::
Reg _______________ # Would be nice if Reg ended before Ref
Ref ________________ # started. We'll see about this later.
Reg _____________________ # Works either way
Ref _______
Reg _____________________
Ref _______ # This should be fine.
Reg _____________ # This should be fine as well.
Ref ____________
Reg _____
Ref _____ # This is fine either way.
Also, endpoints sort before coincident start points to save work for the
tag balancer.
"""
return point, is_start, (payload.sort_order if is_start else
-payload.sort_order)
def finished_tags(lines, refs, regions):
"""Return an ordered iterable of properly nested tags which fully describe
the refs and regions and their places in a file's text.
:arg lines: iterable of lines of text of the file to htmlify.
Benchmarking reveals that this function is O(number of tags) in practice,
on inputs on the order of thousands of lines. On my laptop, it takes .02s
for a 3000-line file with some pygmentize regions and some python refs.
"""
# Plugins return unicode offsets, not byte ones.
# Get start and endpoints of intervals:
tags = list(tag_boundaries(chain(refs, regions)))
tags.extend(line_boundaries(lines))
# Sorting is actually not a significant use of time in an actual indexing
# run.
tags.sort(key=nesting_order) # balanced_tags undoes this, but we tolerate
# that in html_lines().
remove_overlapping_refs(tags)
return balanced_tags(tags)
def tags_per_line(flat_tags):
"""Split tags on LINE tags, yielding the tags of one line at a time
(no LINE tags are yielded)
:arg flat_tags: An iterable of ordered, non-overlapping, non-empty tag
boundaries with Line endpoints at (and outermost at) the index of the
end of each line.
"""
tags = []
for tag in flat_tags:
point, is_start, payload = tag
if payload is LINE:
if not is_start:
yield tags
tags = []
else:
tags.append(tag)
def es_lines(tags):
"""Yield lists of dicts, one per source code line, that can be indexed
into the ``refs`` or ``regions`` field of the ``line`` doctype in
elasticsearch, depending on the payload type.
:arg tags: An iterable of ordered, non-overlapping, non-empty tag
boundaries with Line endpoints at (and outermost at) the index of the
end of each line.
"""
for line in tags_per_line(tags):
payloads = {}
for pos, is_start, payload in line:
if is_start:
payloads[payload] = {'start': pos}
else:
payloads[payload]['end'] = pos
# Index objects are refs or regions. Regions' payloads are just
# strings; refs' payloads are objects. See mappings in plugins/core.py
yield [{'payload': payload.es(),
'start': pos['start'],
'end': pos['end']}
for payload, pos in payloads.iteritems()]
# tags always ends with a LINE closer, so we don't need any additional
# yield here to catch remnants.
def html_line(text, tags, bof_offset):
"""Return a line of Markup, interleaved with the refs and regions that
decorate it.
:arg tags: An ordered iterable of tags from output of finished_tags
representing regions and refs
:arg text: The unicode text to decorate
:arg bof_offset: The byte position of the start of the line from the
beginning of the file.
"""
def segments(text, tags, bof_offset):
up_to = 0
for pos, is_start, payload in tags:
# Convert from file-based position to line-based position.
pos -= bof_offset
yield cgi.escape(text[up_to:pos])
up_to = pos
if not is_start: # It's a closer. Most common.
yield payload.closer()
else:
yield payload.opener()
yield cgi.escape(text[up_to:])
    return Markup(u''.join(segments(text, tags, bof_offset)))
# -*- encoding: utf-8 -*-
##############################################################################
#
# @author - Fekete Mihai <feketemihai@gmail.com>
# Copyright (C) 2011 TOTAL PC SYSTEMS (http://www.erpsystems.ro).
# Copyright (C) 2009 (<http://www.filsystem.ro>)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import res_partner
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef _ERASURE_CODE_H_
#define _ERASURE_CODE_H_
#include <stddef.h>
/**
* Interface to functions supporting erasure code encode and decode.
*
* This file defines the interface to optimized functions used in erasure
* codes. Encode and decode of erasures in GF(2^8) are made by calculating the
* dot product of the symbols (bytes in GF(2^8)) across a set of buffers and a
* set of coefficients. Values for the coefficients are determined by the type
* of erasure code. Using a general dot product means that any sequence of
* coefficients may be used including erasure codes based on random
* coefficients.
* Multiple versions of dot product are supplied to calculate 1-6 output
* vectors in one pass.
* Base GF multiply and divide functions can be sped up by defining
* GF_LARGE_TABLES at the expense of memory size.
*
*/
/**
* Initialize tables for fast Erasure Code encode and decode.
*
* Generates the expanded tables needed for fast encode or decode for erasure
 * codes on blocks of data. 32 bytes are generated for each input coefficient.
*
* @param k The number of vector sources or rows in the generator matrix
* for coding.
* @param rows The number of output vectors to concurrently encode/decode.
* @param a Pointer to sets of arrays of input coefficients used to encode
* or decode data.
* @param gftbls Pointer to start of space for concatenated output tables
* generated from input coefficients. Must be of size 32*k*rows.
* @returns none
*/
void h_ec_init_tables(int k, int rows, unsigned char* a, unsigned char* gftbls);
/**
* Generate or decode erasure codes on blocks of data, runs appropriate version.
*
* Given a list of source data blocks, generate one or multiple blocks of
* encoded data as specified by a matrix of GF(2^8) coefficients. When given a
* suitable set of coefficients, this function will perform the fast generation
* or decoding of Reed-Solomon type erasure codes.
*
* This function determines what instruction sets are enabled and
* selects the appropriate version at runtime.
*
* @param len Length of each block of data (vector) of source or dest data.
* @param k The number of vector sources or rows in the generator matrix
* for coding.
* @param rows The number of output vectors to concurrently encode/decode.
* @param gftbls Pointer to array of input tables generated from coding
* coefficients in ec_init_tables(). Must be of size 32*k*rows
* @param data Array of pointers to source input buffers.
* @param coding Array of pointers to coded output buffers.
* @returns none
*/
void h_ec_encode_data(int len, int k, int rows, unsigned char *gftbls,
unsigned char **data, unsigned char **coding);
/**
* @brief Generate update for encode or decode of erasure codes from single
* source, runs appropriate version.
*
* Given one source data block, update one or multiple blocks of encoded data as
* specified by a matrix of GF(2^8) coefficients. When given a suitable set of
* coefficients, this function will perform the fast generation or decoding of
* Reed-Solomon type erasure codes from one input source at a time.
*
* This function determines what instruction sets are enabled and selects the
* appropriate version at runtime.
*
* @param len Length of each block of data (vector) of source or dest data.
* @param k The number of vector sources or rows in the generator matrix
* for coding.
* @param rows The number of output vectors to concurrently encode/decode.
* @param vec_i The vector index corresponding to the single input source.
* @param gftbls Pointer to array of input tables generated from coding
* coefficients in ec_init_tables(). Must be of size 32*k*rows
* @param data Pointer to single input source used to update output parity.
* @param coding Array of pointers to coded output buffers.
* @returns none
*/
void h_ec_encode_data_update(int len, int k, int rows, int vec_i,
unsigned char *gftbls, unsigned char *data, unsigned char **coding);
#endif //_ERASURE_CODE_H_ | c | github | https://github.com/apache/hadoop | hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/erasure_code.h |
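The doc comments above describe encoding as a set of GF(2^8) dot products between source buffers and coefficient rows. Below is a minimal pure-Python sketch of that idea only — not the C API. `gf_mult` and `ec_encode` are illustrative names, the reduction polynomial 0x11d is the common Reed-Solomon choice and an assumption here, and the real library precomputes 32-byte tables per coefficient instead of multiplying on the fly.

```python
# Conceptual sketch of the GF(2^8) dot product the header describes.

def gf_mult(a, b, poly=0x11d):
    """Carry-less multiply of two GF(2^8) elements, reduced mod poly."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a          # add (XOR) a into the product
        b >>= 1
        a <<= 1
        if a & 0x100:            # keep a inside GF(2^8)
            a ^= poly
    return result

def ec_encode(coeffs, data):
    """Encode parity buffers as GF(2^8) dot products of k source buffers.

    coeffs -- rows x k matrix of coefficients (untabulated, unlike gftbls)
    data   -- list of k equal-length byte strings (the source vectors)
    """
    length = len(data[0])
    coding = []
    for row in coeffs:
        out = bytearray(length)
        for c, src in zip(row, data):
            for i in range(length):
                out[i] ^= gf_mult(c, src[i])
        coding.append(bytes(out))
    return coding
```

With all coefficients equal to 1 this degenerates to plain XOR parity — the simplest erasure code the general dot-product formulation covers.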
pr: 140637
summary: CPS handles datastreams
area: Search
type: enhancement
issues: [] | unknown | github | https://github.com/elastic/elasticsearch | docs/changelog/140637.yaml |
# encoding: utf-8
"""A fancy version of Python's builtin :func:`dir` function.
"""
# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
import inspect
from .py3compat import string_types
def safe_hasattr(obj, attr):
"""In recent versions of Python, hasattr() only catches AttributeError.
This catches all errors.
"""
try:
getattr(obj, attr)
return True
except:
return False
def dir2(obj):
"""dir2(obj) -> list of strings
Extended version of the Python builtin dir(), which does a few extra
checks.
This version is guaranteed to return only a list of true strings, whereas
dir() returns anything that objects inject into themselves, even if they
are later not really valid for attribute access (many extension libraries
have such bugs).
"""
# Start building the attribute list via dir(), and then complete it
# with a few extra special-purpose calls.
try:
words = set(dir(obj))
except Exception:
# TypeError: dir(obj) does not return a list
words = set()
# filter out non-string attributes which may be stuffed by dir() calls
# and poor coding in third-party modules
words = [w for w in words if isinstance(w, string_types)]
return sorted(words)
def get_real_method(obj, name):
"""Like getattr, but with a few extra sanity checks:
- If obj is a class, ignore its methods
- Check if obj is a proxy that claims to have all attributes
- Catch attribute access failing with any exception
- Check that the attribute is a callable object
Returns the method or None.
"""
if inspect.isclass(obj):
return None
try:
canary = getattr(obj, '_ipython_canary_method_should_not_exist_', None)
except Exception:
return None
if canary is not None:
# It claimed to have an attribute it should never have
return None
try:
m = getattr(obj, name, None)
except Exception:
return None
if callable(m):
return m
return None | unknown | codeparrot/codeparrot-clean | ||
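The helpers above exist because, as the docstring notes, recent Pythons' `hasattr()` only swallows `AttributeError`. A standalone sketch of `safe_hasattr` — `Touchy` is a made-up class whose property getter raises, so plain `hasattr()` would let the `RuntimeError` propagate, while `safe_hasattr()` treats any failure as "attribute absent":

```python
# Standalone sketch of safe_hasattr() behavior from the module above.

class Touchy(object):
    @property
    def boom(self):
        # Attribute access with side effects, as some proxy objects have.
        raise RuntimeError("accessing this attribute raises")

def safe_hasattr(obj, attr):
    try:
        getattr(obj, attr)
        return True
    except Exception:
        return False
```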
__author__ = 'tylin'
__version__ = '1.0.1'
# Interface for accessing the Microsoft COCO dataset.
# Microsoft COCO is a large image dataset designed for object detection,
# segmentation, and caption generation. pycocotools is a Python API that
# assists in loading, parsing and visualizing the annotations in COCO.
# Please visit http://mscoco.org/ for more information on COCO, including
# for the data, paper, and tutorials. The exact format of the annotations
# is also described on the COCO website. For example usage of the pycocotools
# please see pycocotools_demo.ipynb. In addition to this API, please download both
# the COCO images and annotations in order to run the demo.
# An alternative to using the API is to load the annotations directly
# into a Python dictionary
# Using the API provides additional utility functions. Note that this API
# supports both *instance* and *caption* annotations. In the case of
# captions not all functions are defined (e.g. categories are undefined).
# The following API functions are defined:
# COCO - COCO api class that loads COCO annotation file and prepares data structures.
# decodeMask - Decode binary mask M encoded via run-length encoding.
# encodeMask - Encode binary mask M using run-length encoding.
# getAnnIds - Get ann ids that satisfy given filter conditions.
# getCatIds - Get cat ids that satisfy given filter conditions.
# getImgIds - Get img ids that satisfy given filter conditions.
# loadAnns - Load anns with the specified ids.
# loadCats - Load cats with the specified ids.
# loadImgs - Load imgs with the specified ids.
# segToMask - Convert polygon segmentation to binary mask.
# showAnns - Display the specified annotations.
# loadRes - Load algorithm results and create API for accessing them.
# download - Download COCO images from mscoco.org server.
# Throughout the API "ann"=annotation, "cat"=category, and "img"=image.
# Help on each function can be accessed by: "help COCO>function".
# See also COCO>decodeMask,
# COCO>encodeMask, COCO>getAnnIds, COCO>getCatIds,
# COCO>getImgIds, COCO>loadAnns, COCO>loadCats,
# COCO>loadImgs, COCO>segToMask, COCO>showAnns
# Microsoft COCO Toolbox. version 2.0
# Data, paper, and tutorials available at: http://mscoco.org/
# Code written by Piotr Dollar and Tsung-Yi Lin, 2014.
# Licensed under the Simplified BSD License [see bsd.txt]
import json
import datetime
import time
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Polygon
import numpy as np
from skimage.draw import polygon
import urllib
import copy
import itertools
import mask
import os
class COCO:
def __init__(self, annotation_file=None):
"""
Constructor of Microsoft COCO helper class for reading and visualizing annotations.
:param annotation_file (str): location of annotation file
:param image_folder (str): location to the folder that hosts images.
:return:
"""
# load dataset
self.dataset = {}
self.anns = []
self.imgToAnns = {}
self.catToImgs = {}
self.imgs = {}
self.cats = {}
if not annotation_file == None:
print 'loading annotations into memory...'
tic = time.time()
dataset = json.load(open(annotation_file, 'r'))
print 'Done (t=%0.2fs)'%(time.time()- tic)
self.dataset = dataset
self.createIndex()
def createIndex(self):
# create index
print 'creating index...'
anns = {}
imgToAnns = {}
catToImgs = {}
cats = {}
imgs = {}
if 'annotations' in self.dataset:
imgToAnns = {ann['image_id']: [] for ann in self.dataset['annotations']}
anns = {ann['id']: [] for ann in self.dataset['annotations']}
for ann in self.dataset['annotations']:
imgToAnns[ann['image_id']] += [ann]
anns[ann['id']] = ann
if 'images' in self.dataset:
imgs = {im['id']: {} for im in self.dataset['images']}
for img in self.dataset['images']:
imgs[img['id']] = img
if 'categories' in self.dataset:
cats = {cat['id']: [] for cat in self.dataset['categories']}
for cat in self.dataset['categories']:
cats[cat['id']] = cat
if 'annotations' in self.dataset and 'categories' in self.dataset:
catToImgs = {cat['id']: [] for cat in self.dataset['categories']}
for ann in self.dataset['annotations']:
catToImgs[ann['category_id']] += [ann['image_id']]
print 'index created!'
# create class members
self.anns = anns
self.imgToAnns = imgToAnns
self.catToImgs = catToImgs
self.imgs = imgs
self.cats = cats
def info(self):
"""
Print information about the annotation file.
:return:
"""
for key, value in self.dataset['info'].items():
print '%s: %s'%(key, value)
def getAnnIds(self, imgIds=[], catIds=[], areaRng=[], iscrowd=None):
"""
Get ann ids that satisfy given filter conditions. default skips that filter
:param imgIds (int array) : get anns for given imgs
catIds (int array) : get anns for given cats
areaRng (float array) : get anns for given area range (e.g. [0 inf])
iscrowd (boolean) : get anns for given crowd label (False or True)
:return: ids (int array) : integer array of ann ids
"""
imgIds = imgIds if type(imgIds) == list else [imgIds]
catIds = catIds if type(catIds) == list else [catIds]
if len(imgIds) == len(catIds) == len(areaRng) == 0:
anns = self.dataset['annotations']
else:
if not len(imgIds) == 0:
# this can be changed by defaultdict
lists = [self.imgToAnns[imgId] for imgId in imgIds if imgId in self.imgToAnns]
anns = list(itertools.chain.from_iterable(lists))
else:
anns = self.dataset['annotations']
anns = anns if len(catIds) == 0 else [ann for ann in anns if ann['category_id'] in catIds]
anns = anns if len(areaRng) == 0 else [ann for ann in anns if ann['area'] > areaRng[0] and ann['area'] < areaRng[1]]
if not iscrowd == None:
ids = [ann['id'] for ann in anns if ann['iscrowd'] == iscrowd]
else:
ids = [ann['id'] for ann in anns]
return ids
def getCatIds(self, catNms=[], supNms=[], catIds=[]):
"""
        Get cat ids that satisfy given filter conditions. default skips that filter.
:param catNms (str array) : get cats for given cat names
:param supNms (str array) : get cats for given supercategory names
:param catIds (int array) : get cats for given cat ids
:return: ids (int array) : integer array of cat ids
"""
catNms = catNms if type(catNms) == list else [catNms]
supNms = supNms if type(supNms) == list else [supNms]
catIds = catIds if type(catIds) == list else [catIds]
if len(catNms) == len(supNms) == len(catIds) == 0:
cats = self.dataset['categories']
else:
cats = self.dataset['categories']
cats = cats if len(catNms) == 0 else [cat for cat in cats if cat['name'] in catNms]
cats = cats if len(supNms) == 0 else [cat for cat in cats if cat['supercategory'] in supNms]
cats = cats if len(catIds) == 0 else [cat for cat in cats if cat['id'] in catIds]
ids = [cat['id'] for cat in cats]
return ids
def getImgIds(self, imgIds=[], catIds=[]):
'''
Get img ids that satisfy given filter conditions.
:param imgIds (int array) : get imgs for given ids
:param catIds (int array) : get imgs with all given cats
:return: ids (int array) : integer array of img ids
'''
imgIds = imgIds if type(imgIds) == list else [imgIds]
catIds = catIds if type(catIds) == list else [catIds]
if len(imgIds) == len(catIds) == 0:
ids = self.imgs.keys()
else:
ids = set(imgIds)
for i, catId in enumerate(catIds):
if i == 0 and len(ids) == 0:
ids = set(self.catToImgs[catId])
else:
ids &= set(self.catToImgs[catId])
return list(ids)
def loadAnns(self, ids=[]):
"""
Load anns with the specified ids.
:param ids (int array) : integer ids specifying anns
:return: anns (object array) : loaded ann objects
"""
if type(ids) == list:
return [self.anns[id] for id in ids]
elif type(ids) == int:
return [self.anns[ids]]
def loadCats(self, ids=[]):
"""
Load cats with the specified ids.
:param ids (int array) : integer ids specifying cats
:return: cats (object array) : loaded cat objects
"""
if type(ids) == list:
return [self.cats[id] for id in ids]
elif type(ids) == int:
return [self.cats[ids]]
def loadImgs(self, ids=[]):
"""
        Load imgs with the specified ids.
:param ids (int array) : integer ids specifying img
:return: imgs (object array) : loaded img objects
"""
if type(ids) == list:
return [self.imgs[id] for id in ids]
elif type(ids) == int:
return [self.imgs[ids]]
def showAnns(self, anns):
"""
Display the specified annotations.
:param anns (array of object): annotations to display
:return: None
"""
if len(anns) == 0:
return 0
if 'segmentation' in anns[0]:
datasetType = 'instances'
elif 'caption' in anns[0]:
datasetType = 'captions'
if datasetType == 'instances':
ax = plt.gca()
polygons = []
color = []
for ann in anns:
c = np.random.random((1, 3)).tolist()[0]
if type(ann['segmentation']) == list:
# polygon
for seg in ann['segmentation']:
poly = np.array(seg).reshape((len(seg)/2, 2))
polygons.append(Polygon(poly, True,alpha=0.4))
color.append(c)
else:
# mask
t = self.imgs[ann['image_id']]
if type(ann['segmentation']['counts']) == list:
rle = mask.frPyObjects([ann['segmentation']], t['height'], t['width'])
else:
rle = [ann['segmentation']]
m = mask.decode(rle)
img = np.ones( (m.shape[0], m.shape[1], 3) )
if ann['iscrowd'] == 1:
color_mask = np.array([2.0,166.0,101.0])/255
if ann['iscrowd'] == 0:
color_mask = np.random.random((1, 3)).tolist()[0]
for i in range(3):
img[:,:,i] = color_mask[i]
ax.imshow(np.dstack( (img, m*0.5) ))
p = PatchCollection(polygons, facecolors=color, edgecolors=(0,0,0,1), linewidths=3, alpha=0.4)
ax.add_collection(p)
elif datasetType == 'captions':
for ann in anns:
print ann['caption']
def loadRes(self, resFile):
"""
Load result file and return a result api object.
:param resFile (str) : file name of result file
:return: res (obj) : result api object
"""
res = COCO()
res.dataset['images'] = [img for img in self.dataset['images']]
# res.dataset['info'] = copy.deepcopy(self.dataset['info'])
# res.dataset['licenses'] = copy.deepcopy(self.dataset['licenses'])
print 'Loading and preparing results... '
tic = time.time()
anns = json.load(open(resFile))
        assert type(anns) == list, 'results is not an array of objects'
annsImgIds = [ann['image_id'] for ann in anns]
assert set(annsImgIds) == (set(annsImgIds) & set(self.getImgIds())), \
'Results do not correspond to current coco set'
if 'caption' in anns[0]:
imgIds = set([img['id'] for img in res.dataset['images']]) & set([ann['image_id'] for ann in anns])
res.dataset['images'] = [img for img in res.dataset['images'] if img['id'] in imgIds]
for id, ann in enumerate(anns):
ann['id'] = id+1
elif 'bbox' in anns[0] and not anns[0]['bbox'] == []:
res.dataset['categories'] = copy.deepcopy(self.dataset['categories'])
for id, ann in enumerate(anns):
bb = ann['bbox']
x1, x2, y1, y2 = [bb[0], bb[0]+bb[2], bb[1], bb[1]+bb[3]]
if not 'segmentation' in ann:
ann['segmentation'] = [[x1, y1, x1, y2, x2, y2, x2, y1]]
ann['area'] = bb[2]*bb[3]
ann['id'] = id+1
ann['iscrowd'] = 0
elif 'segmentation' in anns[0]:
res.dataset['categories'] = copy.deepcopy(self.dataset['categories'])
for id, ann in enumerate(anns):
# now only support compressed RLE format as segmentation results
ann['area'] = mask.area([ann['segmentation']])[0]
if not 'bbox' in ann:
ann['bbox'] = mask.toBbox([ann['segmentation']])[0]
ann['id'] = id+1
ann['iscrowd'] = 0
print 'DONE (t=%0.2fs)'%(time.time()- tic)
res.dataset['annotations'] = anns
res.createIndex()
return res
def download( self, tarDir = None, imgIds = [] ):
'''
Download COCO images from mscoco.org server.
:param tarDir (str): COCO results directory name
imgIds (list): images to be downloaded
:return:
'''
if tarDir is None:
print 'Please specify target directory'
return -1
if len(imgIds) == 0:
imgs = self.imgs.values()
else:
imgs = self.loadImgs(imgIds)
N = len(imgs)
if not os.path.exists(tarDir):
os.makedirs(tarDir)
for i, img in enumerate(imgs):
tic = time.time()
fname = os.path.join(tarDir, img['file_name'])
if not os.path.exists(fname):
urllib.urlretrieve(img['coco_url'], fname)
print 'downloaded %d/%d images (t=%.1fs)'%(i, N, time.time()- tic)
@staticmethod
def decodeMask(R):
"""
Decode binary mask M encoded via run-length encoding.
:param R (object RLE) : run-length encoding of binary mask
:return: M (bool 2D array) : decoded binary mask
"""
N = len(R['counts'])
M = np.zeros( (R['size'][0]*R['size'][1], ))
n = 0
val = 1
for pos in range(N):
val = not val
for c in range(R['counts'][pos]):
M[n] = val
n += 1
return M.reshape((R['size']), order='F')
@staticmethod
def encodeMask(M):
"""
Encode binary mask M using run-length encoding.
:param M (bool 2D array) : binary mask to encode
:return: R (object RLE) : run-length encoding of binary mask
"""
[h, w] = M.shape
M = M.flatten(order='F')
N = len(M)
counts_list = []
pos = 0
# counts
counts_list.append(1)
diffs = np.logical_xor(M[0:N-1], M[1:N])
for diff in diffs:
if diff:
pos +=1
counts_list.append(1)
else:
counts_list[pos] += 1
        # if the mask starts with 1, prepend a zero-length run of 0s
if M[0] == 1:
counts_list = [0] + counts_list
return {'size': [h, w],
'counts': counts_list ,
}
@staticmethod
def segToMask( S, h, w ):
"""
Convert polygon segmentation to binary mask.
:param S (float array) : polygon segmentation mask
:param h (int) : target mask height
:param w (int) : target mask width
:return: M (bool 2D array) : binary mask
"""
        M = np.zeros((h,w), dtype=bool)
for s in S:
N = len(s)
rr, cc = polygon(np.array(s[1:N:2]).clip(max=h-1), \
np.array(s[0:N:2]).clip(max=w-1)) # (y, x)
M[rr, cc] = 1
return M
def annToRLE(self, ann):
"""
        Convert annotation which can be polygons or uncompressed RLE to RLE.
        :return: rle (object) : run-length encoding of the annotation
"""
t = self.imgs[ann['image_id']]
h, w = t['height'], t['width']
segm = ann['segmentation']
if type(segm) == list:
# polygon -- a single object might consist of multiple parts
# we merge all parts into one mask rle code
rles = mask.frPyObjects(segm, h, w)
rle = mask.merge(rles)
elif type(segm['counts']) == list:
# uncompressed RLE
rle = mask.frPyObjects(segm, h, w)
else:
# rle
rle = ann['segmentation']
return rle
def annToMask(self, ann):
"""
Convert annotation which can be polygons, uncompressed RLE, or RLE to binary mask.
:return: binary mask (numpy 2D array)
"""
rle = self.annToRLE(ann)
m = mask.decode(rle)
return m | unknown | codeparrot/codeparrot-clean | ||
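`encodeMask`/`decodeMask` above use counts of alternating 0/1 runs — always starting with a (possibly zero-length) run of 0s — over the mask flattened in column-major order. A dependency-free sketch of the same scheme on a flat bit list; `rle_encode`/`rle_decode` are illustrative names, not the pycocotools API:

```python
# Minimal run-length coding over a flat 0/1 list, mirroring the
# alternating-counts convention used by encodeMask/decodeMask.

def rle_encode(bits):
    """Encode a flat list of 0/1 values as alternating run counts."""
    counts, val, run = [], 0, 0
    for b in bits:
        if b == val:
            run += 1
        else:
            counts.append(run)   # close the current run
            val, run = b, 1
    counts.append(run)
    return counts

def rle_decode(counts):
    """Invert rle_encode: expand alternating run counts to a bit list."""
    bits, val = [], 0
    for c in counts:
        bits.extend([val] * c)
        val = 1 - val
    return bits
```

Because runs start at 0, a mask whose first bit is 1 gets a leading count of 0 — the same rule the `encodeMask` code applies explicitly.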
"""
=============================================
Integration and ODEs (:mod:`scipy.integrate`)
=============================================
.. currentmodule:: scipy.integrate
Integrating functions, given function object
============================================
.. autosummary::
:toctree: generated/
quad -- General purpose integration
dblquad -- General purpose double integration
tplquad -- General purpose triple integration
nquad -- General purpose n-dimensional integration
fixed_quad -- Integrate func(x) using Gaussian quadrature of order n
quadrature -- Integrate with given tolerance using Gaussian quadrature
romberg -- Integrate func using Romberg integration
quad_explain -- Print information for use of quad
newton_cotes -- Weights and error coefficient for Newton-Cotes integration
IntegrationWarning -- Warning on issues during integration
Integrating functions, given fixed samples
==========================================
.. autosummary::
:toctree: generated/
trapz -- Use trapezoidal rule to compute integral.
cumtrapz -- Use trapezoidal rule to cumulatively compute integral.
simps -- Use Simpson's rule to compute integral from samples.
romb -- Use Romberg Integration to compute integral from
-- (2**k + 1) evenly-spaced samples.
.. seealso::
:mod:`scipy.special` for orthogonal polynomials (special) for Gaussian
quadrature roots and weights for other weighting factors and regions.
Integrators of ODE systems
==========================
.. autosummary::
:toctree: generated/
odeint -- General integration of ordinary differential equations.
ode -- Integrate ODE using VODE and ZVODE routines.
complex_ode -- Convert a complex-valued ODE to real-valued and integrate.
solve_bvp -- Solve a boundary value problem for a system of ODEs.
"""
from __future__ import division, print_function, absolute_import
from .quadrature import *
from .odepack import *
from .quadpack import *
from ._ode import *
from ._bvp import solve_bvp
__all__ = [s for s in dir() if not s.startswith('_')]
from numpy.testing import Tester
test = Tester().test | unknown | codeparrot/codeparrot-clean | ||
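Of the sample-based rules listed above, `trapz` is the simplest: it sums the trapezoid areas under a piecewise-linear interpolation of the samples. A pure-Python stand-in (shown without scipy/numpy; same `y, x` argument order as `trapz`):

```python
# Pure-Python sketch of the trapezoidal rule that trapz implements.

def trapezoid(y, x):
    """Integrate sampled values y over sample points x."""
    total = 0.0
    for i in range(1, len(x)):
        # area of the trapezoid between consecutive samples
        total += (x[i] - x[i - 1]) * (y[i] + y[i - 1]) / 2.0
    return total
```

Sampling y = x at [0, 1, 2] gives exactly 2.0; for curved integrands the error shrinks quadratically as the sample spacing decreases.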
from __future__ import unicode_literals
from calaccess_raw import fields
from django.utils.encoding import python_2_unicode_compatible
from .base import CalAccessBaseModel
@python_2_unicode_compatible
class CvrSoCd(CalAccessBaseModel):
"""
Cover page for a statement of organization creation or termination
form filed by a slate-mailer organization or recipient committee.
"""
UNIQUE_KEY = (
"FILING_ID",
"AMEND_ID",
"LINE_ITEM",
"REC_TYPE",
"FORM_TYPE",
)
acct_opendt = fields.DateTimeField(
db_column="ACCT_OPENDT",
null=True,
help_text='This field is undocumented',
)
ACTIVITY_LEVEL_CHOICES = (
("CI", "City"),
("CO", "County"),
("ST", "State"),
("", "Unknown"),
)
actvty_lvl = fields.CharField(
max_length=2,
db_column="ACTVTY_LVL",
blank=True,
choices=ACTIVITY_LEVEL_CHOICES,
verbose_name="Activity level",
help_text="Organization's level of activity"
)
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
        help_text="Amendment identification number. A value of 0 denotes the \
original filing; values 1 to 999 denote amendments.",
verbose_name="amendment ID"
)
bank_adr1 = fields.CharField(
max_length=55,
db_column="BANK_ADR1",
blank=True,
help_text='This field is undocumented',
)
bank_adr2 = fields.CharField(
max_length=55,
db_column="BANK_ADR2",
blank=True,
help_text='This field is undocumented',
)
bank_city = fields.CharField(
max_length=30,
db_column="BANK_CITY",
blank=True,
help_text='This field is undocumented',
)
bank_nam = fields.CharField(
max_length=200,
db_column="BANK_NAM",
blank=True,
help_text='This field is undocumented',
)
bank_phon = fields.CharField(
max_length=20,
db_column="BANK_PHON",
blank=True,
help_text='This field is undocumented',
)
bank_st = fields.CharField(
max_length=2,
db_column="BANK_ST",
blank=True,
help_text='This field is undocumented',
)
bank_zip4 = fields.CharField(
max_length=10,
db_column="BANK_ZIP4",
blank=True,
help_text='This field is undocumented',
)
brdbase_cb = fields.CharField(
max_length=1,
db_column="BRDBASE_CB",
blank=True,
help_text='This field is undocumented',
)
city = fields.CharField(
max_length=30,
db_column="CITY",
blank=True,
help_text='This field is undocumented',
)
cmte_email = fields.CharField(
max_length=60,
db_column="CMTE_EMAIL",
blank=True,
help_text='This field is undocumented',
)
cmte_fax = fields.CharField(
max_length=20,
db_column="CMTE_FAX",
blank=True,
help_text='This field is undocumented',
)
com82013id = fields.CharField(
max_length=9,
db_column="COM82013ID",
blank=True,
help_text='This field is undocumented',
)
com82013nm = fields.CharField(
max_length=200,
db_column="COM82013NM",
blank=True,
help_text='This field is undocumented',
)
com82013yn = fields.CharField(
max_length=1,
db_column="COM82013YN",
blank=True,
help_text='This field is undocumented',
)
control_cb = fields.CharField(
max_length=1,
db_column="CONTROL_CB",
blank=True,
help_text='This field is undocumented',
)
county_act = fields.CharField(
max_length=20,
db_column="COUNTY_ACT",
blank=True,
help_text='This field is undocumented',
)
county_res = fields.CharField(
max_length=20,
db_column="COUNTY_RES",
blank=True,
help_text='This field is undocumented',
)
ENTITY_CODE_CHOICES = (
# Defined here:
# http://www.documentcloud.org/documents/1308003-cal-access-cal-\
# format.html#document/p9
('', 'Unknown'),
('BMC', 'Ballot measure committee'),
('CAO', 'Candidate/officeholder'),
('COM', 'Committee'),
('CTL', 'Controlled committee'),
('RCP', 'Recipient committee'),
('SMO', 'Slate-mailer organization'),
)
entity_cd = fields.CharField(
max_length=3,
db_column="ENTITY_CD",
blank=True,
choices=ENTITY_CODE_CHOICES,
verbose_name="Entity code"
)
filer_id = fields.CharField(
verbose_name='filer ID',
db_column='FILER_ID',
max_length=9,
blank=True,
db_index=True,
help_text="Filer's unique identification number",
)
filer_namf = fields.CharField(
max_length=45,
db_column="FILER_NAMF",
blank=True,
verbose_name="Filer first name"
)
filer_naml = fields.CharField(
max_length=200,
db_column="FILER_NAML",
blank=True,
verbose_name="Filer last name"
)
filer_nams = fields.CharField(
max_length=10,
db_column="FILER_NAMS",
blank=True,
verbose_name="Filer name suffix"
)
filer_namt = fields.CharField(
max_length=10,
db_column="FILER_NAMT",
blank=True,
verbose_name="Filer name title"
)
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
        help_text="Unique filing identification number"
)
FORM_TYPE_CHOICES = (
('F400', 'Form 400 (Statement of organization, \
slate mailer organization)'),
('F402', 'Form 402 (Statement of termination, \
slate mailer organization'),
('F410', 'Form 410 (Statement of organization, recipient committee)'),
)
form_type = fields.CharField(
max_length=4,
db_column="FORM_TYPE",
choices=FORM_TYPE_CHOICES,
help_text='Name of the source filing form or schedule'
)
genpurp_cb = fields.CharField(
max_length=1,
db_column="GENPURP_CB",
blank=True,
help_text='This field is undocumented',
)
gpc_descr = fields.CharField(
max_length=300,
db_column="GPC_DESCR",
blank=True,
help_text='This field is undocumented',
)
mail_city = fields.CharField(
max_length=30,
db_column="MAIL_CITY",
blank=True,
help_text='This field is undocumented',
)
mail_st = fields.CharField(
max_length=2,
db_column="MAIL_ST",
blank=True,
help_text='This field is undocumented',
)
mail_zip4 = fields.CharField(
max_length=10,
db_column="MAIL_ZIP4",
blank=True,
help_text='This field is undocumented',
)
phone = fields.CharField(
max_length=20,
db_column="PHONE",
blank=True,
help_text='This field is undocumented',
)
primfc_cb = fields.CharField(
max_length=1,
db_column="PRIMFC_CB",
blank=True,
help_text='This field is undocumented',
)
qualfy_dt = fields.DateTimeField(
db_column="QUALFY_DT",
null=True,
verbose_name="Date qualified",
help_text="Date qualified as an organization"
)
qual_cb = fields.CharField(
max_length=1,
db_column="QUAL_CB",
blank=True,
help_text='This field is undocumented',
)
REC_TYPE_CHOICES = (
("CVR", "CVR"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
report_num = fields.CharField(
max_length=3,
db_column="REPORT_NUM",
blank=True,
help_text='This field is undocumented',
)
rpt_date = fields.DateTimeField(
db_column="RPT_DATE",
null=True,
help_text='This field is undocumented',
)
smcont_qualdt = fields.DateTimeField(
db_column="SMCONT_QUALDT",
null=True,
help_text='This field is undocumented',
)
sponsor_cb = fields.CharField(
max_length=1,
db_column="SPONSOR_CB",
blank=True,
help_text='This field is undocumented',
)
st = fields.CharField(
max_length=2,
db_column="ST",
blank=True,
help_text='This field is undocumented',
)
surplusdsp = fields.CharField(
max_length=90,
db_column="SURPLUSDSP",
blank=True,
help_text='This field is undocumented',
)
term_date = fields.DateTimeField(
db_column="TERM_DATE",
null=True,
help_text='This field is undocumented',
)
tres_city = fields.CharField(
max_length=30,
db_column="TRES_CITY",
blank=True,
verbose_name="Treasurer's city"
)
tres_namf = fields.CharField(
max_length=45,
db_column="TRES_NAMF",
blank=True,
verbose_name="Treasurer's first name"
)
tres_naml = fields.CharField(
max_length=200,
db_column="TRES_NAML",
blank=True,
verbose_name="Treasurer's last name"
)
tres_nams = fields.CharField(
max_length=10,
db_column="TRES_NAMS",
blank=True,
verbose_name="Treasurer's name suffix"
)
tres_namt = fields.CharField(
max_length=10,
db_column="TRES_NAMT",
blank=True,
verbose_name="Treasurer's name title"
)
tres_phon = fields.CharField(
max_length=20,
db_column="TRES_PHON",
blank=True,
verbose_name="Treasurer's phone number"
)
tres_st = fields.CharField(
max_length=2,
db_column="TRES_ST",
blank=True,
        verbose_name="Treasurer's state",
)
tres_zip4 = fields.CharField(
max_length=10,
db_column="TRES_ZIP4",
blank=True,
help_text="Treasurer's ZIP Code"
)
zip4 = fields.CharField(
max_length=10,
db_column="ZIP4",
blank=True,
help_text='This field is undocumented',
)
class Meta:
app_label = 'calaccess_raw'
db_table = "CVR_SO_CD"
verbose_name = 'CVR_SO_CD'
verbose_name_plural = 'CVR_SO_CD'
def __str__(self):
return str(self.filing_id)
@python_2_unicode_compatible
class Cvr2SoCd(CalAccessBaseModel):
"""
Additional names and committees information included on the second page
of a statement of organization creation form filed
by a slate-mailer organization or recipient committee.
"""
UNIQUE_KEY = (
"FILING_ID",
"AMEND_ID",
"LINE_ITEM",
"REC_TYPE",
"FORM_TYPE"
)
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
        help_text="Unique filing identification number"
)
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
        help_text="Amendment identification number. A value of 0 denotes the \
original filing; values 1 to 999 denote amendments.",
verbose_name="amendment ID"
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Line item number of this record",
db_index=True,
)
REC_TYPE_CHOICES = (
("CVR2", "CVR2"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
FORM_TYPE_CHOICES = (
('F400', 'Form 400 (Statement of organization, \
slate mailer organization)'),
('F410', 'Form 410 (Statement of organization, recipient committee)'),
)
form_type = fields.CharField(
choices=FORM_TYPE_CHOICES,
db_column='FORM_TYPE',
max_length=4,
help_text='Name of the source filing form or schedule'
)
tran_id = fields.CharField(
verbose_name='transaction ID',
max_length=20,
db_column='TRAN_ID',
blank=True,
help_text='Permanent value unique to this item',
)
ENTITY_CODE_CHOICES = (
# Defined here:
# http://www.documentcloud.org/documents/1308003-cal-access-cal-\
# format.html#document/p9
('', 'Unknown'),
('ATH', 'Authorizing individual'),
('ATR', 'Assistant treasurer'),
('BMN', 'BMN (Unknown)'),
('BNM', 'Ballot measure\'s name/title'),
('CAO', 'Candidate/officeholder'),
('COM', 'Committee'),
('CTL', 'Controlled committee'),
('OFF', 'Officer'),
('POF', 'Principal officer'),
('PRO', 'Proponent'),
('SPO', 'Sponsor'),
)
entity_cd = fields.CharField(
db_column='ENTITY_CD',
max_length=3,
blank=True,
verbose_name='entity code',
choices=ENTITY_CODE_CHOICES,
)
enty_naml = fields.CharField(
db_column='ENTY_NAML',
max_length=194,
blank=True,
help_text="Entity's business name or last name if the entity is an \
individual"
)
enty_namf = fields.CharField(
db_column='ENTY_NAMF',
max_length=34,
blank=True,
help_text="Entity's first name if the entity is an individual"
)
enty_namt = fields.CharField(
db_column='ENTY_NAMT',
max_length=9,
blank=True,
help_text="Entity's name prefix or title if the entity is an \
individual"
)
enty_nams = fields.CharField(
db_column='ENTY_NAMS',
max_length=10,
blank=True,
help_text="Entity's name suffix if the entity is an individual"
)
item_cd = fields.CharField(
db_column='ITEM_CD',
max_length=4,
blank=True,
help_text="Section of the Statement of Organization this \
itemization relates to. See CAL document for the definition \
of legal values for this column."
)
mail_city = fields.CharField(
db_column='MAIL_CITY',
max_length=25,
blank=True,
help_text="City portion of the entity's mailing address"
)
mail_st = fields.CharField(
db_column='MAIL_ST',
max_length=4,
blank=True,
help_text="State portion of the entity's mailing address"
)
mail_zip4 = fields.CharField(
db_column='MAIL_ZIP4',
max_length=10,
blank=True,
help_text="Zipcode portion of the entity's mailing address"
)
day_phone = fields.CharField(
db_column='DAY_PHONE',
max_length=20,
blank=True,
help_text="Entity's daytime phone number"
)
fax_phone = fields.CharField(
db_column='FAX_PHONE',
max_length=20,
blank=True,
help_text="Entity's fax number"
)
email_adr = fields.CharField(
db_column='EMAIL_ADR',
max_length=40,
blank=True,
help_text="Email address. Not contained in current forms."
)
cmte_id = fields.IntegerField(
db_column='CMTE_ID',
blank=True,
null=True,
verbose_name="Committee ID",
help_text="Entity's identification number"
)
ind_group = fields.CharField(
db_column='IND_GROUP',
max_length=87,
blank=True,
help_text="Industry group/affiliation description"
)
office_cd = fields.CharField(
db_column='OFFICE_CD',
max_length=4,
blank=True,
help_text="Code that identifies the office being sought. See \
CAL document for a list of valid codes."
)
offic_dscr = fields.CharField(
db_column='OFFIC_DSCR',
max_length=40,
blank=True,
help_text="Office sought description used if the office sought code \
(OFFICE_CD) equals other (OTH)."
)
juris_cd = fields.CharField(
db_column='JURIS_CD',
max_length=4,
blank=True,
help_text="Office jurisdiction code. See CAL document for a \
list of legal values."
)
juris_dscr = fields.CharField(
db_column='JURIS_DSCR',
max_length=40,
blank=True,
help_text="Office jurisdiction description provided if the \
jurisdiction code (JURIS_CD) equals other (OTH)."
)
dist_no = fields.CharField(
db_column='DIST_NO',
max_length=4,
blank=True,
help_text="Office district number for Senate, Assembly, and Board \
of Equalization districts."
)
off_s_h_cd = fields.CharField(
db_column='OFF_S_H_CD',
max_length=4,
blank=True,
help_text="Office sought/held code. Legal values are 'S' for sought \
and 'H' for held."
)
non_pty_cb = fields.CharField(
db_column='NON_PTY_CB',
max_length=4,
blank=True,
help_text="Non-partisan check-box. Legal values are 'X' and null."
)
party_name = fields.CharField(
db_column='PARTY_NAME',
max_length=63,
blank=True,
help_text="Name of party (if partisan)"
)
bal_num = fields.CharField(
db_column='BAL_NUM',
max_length=7,
blank=True,
help_text="Ballot measure number or letter"
)
bal_juris = fields.CharField(
db_column='BAL_JURIS',
max_length=40,
blank=True,
help_text="Jurisdiction of ballot measure"
)
sup_opp_cd = fields.CharField(
db_column='SUP_OPP_CD',
max_length=4,
blank=True,
help_text="Support/oppose code (S/O). Legal values are 'S' for \
support and 'O' for oppose."
)
year_elect = fields.CharField(
db_column='YEAR_ELECT',
max_length=4,
blank=True,
help_text="Year of election"
)
pof_title = fields.CharField(
db_column='POF_TITLE',
max_length=44,
blank=True,
help_text="Position/title of the principal officer"
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'CVR2_SO_CD'
verbose_name = 'CVR2_SO_CD'
verbose_name_plural = 'CVR2_SO_CD'
def __str__(self):
return str(self.filing_id)
@python_2_unicode_compatible
class CvrCampaignDisclosureCd(CalAccessBaseModel):
"""
Cover page information from campaign disclosure forms
"""
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 are amendments.",
verbose_name="amendment ID"
)
amendexp_1 = fields.CharField(
max_length=100,
db_column='AMENDEXP_1',
blank=True,
help_text='Amendment explanation line 1'
)
amendexp_2 = fields.CharField(
max_length=100,
db_column='AMENDEXP_2',
blank=True,
help_text="Amendment explanation line 2"
)
amendexp_3 = fields.CharField(
max_length=100,
db_column='AMENDEXP_3',
blank=True,
help_text="Amendment explanation line 3"
)
assoc_cb = fields.CharField(
max_length=4,
db_column='ASSOC_CB',
blank=True,
help_text="Association Interests info included check-box. Legal \
values are 'X' and null."
)
assoc_int = fields.CharField(
max_length=90,
db_column='ASSOC_INT',
blank=True,
help_text="Description of association interests"
)
bal_id = fields.CharField(
max_length=9,
db_column='BAL_ID',
blank=True,
help_text="This field is undocumented"
)
bal_juris = fields.CharField(
max_length=40,
db_column='BAL_JURIS',
blank=True,
help_text="Ballot measure jurisdiction"
)
bal_name = fields.CharField(
max_length=200,
db_column='BAL_NAME',
blank=True,
help_text="Ballot measure name"
)
bal_num = fields.CharField(
max_length=4,
db_column='BAL_NUM',
blank=True,
help_text="Ballot measure number or letter"
)
brdbase_yn = fields.CharField(
max_length=1,
db_column='BRDBASE_YN',
blank=True,
help_text="Broad Base Committee (yes/no) check box. Legal \
values are 'Y' or 'N'."
)
# bus_adr1 = fields.CharField(
# max_length=55, db_column='BUS_ADR1', blank=True
# )
# bus_adr2 = fields.CharField(
# max_length=55, db_column='BUS_ADR2', blank=True
# )
bus_city = fields.CharField(
max_length=30,
db_column='BUS_CITY',
blank=True,
help_text="Employer/business address city"
)
bus_inter = fields.CharField(
max_length=40,
db_column='BUS_INTER',
blank=True,
help_text="Employer/business interest description"
)
bus_name = fields.CharField(
max_length=200,
db_column='BUS_NAME',
blank=True,
help_text="Name of employer/business. Applies to the form 461."
)
bus_st = fields.CharField(
max_length=2,
db_column='BUS_ST',
blank=True,
help_text="Employer/business address state"
)
bus_zip4 = fields.CharField(
max_length=10,
db_column='BUS_ZIP4',
blank=True,
help_text="Employer/business address ZIP Code"
)
busact_cb = fields.CharField(
max_length=10,
db_column='BUSACT_CB',
blank=True,
help_text="Business activity info included check-box. Valid values \
are 'X' and null"
)
busactvity = fields.CharField(
max_length=90,
db_column='BUSACTVITY',
blank=True,
help_text="Business activity description"
)
# cand_adr1 = fields.CharField(
# max_length=55, db_column='CAND_ADR1', blank=True
# )
# cand_adr2 = fields.CharField(
# max_length=55, db_column='CAND_ADR2', blank=True
# )
cand_city = fields.CharField(
max_length=30,
db_column='CAND_CITY',
blank=True,
help_text='Candidate/officeholder city'
)
cand_email = fields.CharField(
max_length=60,
db_column='CAND_EMAIL',
blank=True,
help_text='Candidate/officeholder email. This field \
is not contained on the forms.'
)
cand_fax = fields.CharField(
max_length=20,
db_column='CAND_FAX',
blank=True,
help_text='Candidate/officeholder fax. This field \
is not contained on the forms.'
)
cand_id = fields.CharField(
max_length=9,
db_column='CAND_ID',
blank=True,
help_text="This field is not documented"
)
cand_namf = fields.CharField(
max_length=45,
db_column='CAND_NAMF',
blank=True,
help_text='Candidate/officeholder first name'
)
cand_naml = fields.CharField(
max_length=200,
db_column='CAND_NAML',
blank=True,
help_text="Candidate/officeholder's last name. Applies to forms \
460, 465, and 496."
)
cand_nams = fields.CharField(
max_length=10,
db_column='CAND_NAMS',
blank=True,
help_text="Candidate/officeholder's name suffix"
)
cand_namt = fields.CharField(
max_length=10,
db_column='CAND_NAMT',
blank=True,
help_text="Candidate/officeholder's prefix or title"
)
cand_phon = fields.CharField(
max_length=20,
db_column='CAND_PHON',
blank=True,
help_text='Candidate/officeholder phone'
)
cand_st = fields.CharField(
max_length=4,
db_column='CAND_ST',
blank=True,
help_text="Candidate/officeholder's state"
)
cand_zip4 = fields.CharField(
max_length=10,
db_column='CAND_ZIP4',
blank=True,
help_text="Candidate/officeholder's ZIP Code"
)
cmtte_id = fields.CharField(
max_length=9,
db_column='CMTTE_ID',
blank=True,
verbose_name="Committee ID",
help_text="Committee ID (Filer_id) of the recipient committee whose \
campaign statement is attached. This field applies to the form 401."
)
cmtte_type = fields.CharField(
max_length=1,
db_column='CMTTE_TYPE',
blank=True,
verbose_name="Committee type",
help_text="Type of Recipient Committee. Applies to the 450/460."
)
control_yn = fields.IntegerField(
null=True,
db_column='CONTROL_YN',
blank=True,
help_text="Controlled Committee (yes/no) check box. Legal values \
are 'Y' or 'N'."
)
dist_no = fields.CharField(
max_length=4,
db_column='DIST_NO',
blank=True,
help_text="District number for the office being sought. Populated \
for Senate, Assembly, or Board of Equalization races."
)
elect_date = fields.DateTimeField(
null=True,
db_column='ELECT_DATE',
blank=True,
help_text="Date of the General Election"
)
emplbus_cb = fields.CharField(
max_length=4,
db_column='EMPLBUS_CB',
blank=True,
help_text="Employer/Business Info included check-box. Legal \
values are 'X' or null. Applies to the Form 461."
)
employer = fields.CharField(
max_length=200,
db_column='EMPLOYER',
blank=True,
help_text="Employer. This field is most likely unused."
)
ENTITY_CODE_CHOICES = (
# Defined here:
# http://www.documentcloud.org/documents/1308003-cal-access-cal-\
# format.html#document/p9
('', 'Unknown'),
('BMC', 'Ballot measure committee'),
('CAO', 'Candidate/officeholder'),
('COM', 'Committee'),
('CTL', 'Controlled committee'),
('IND', 'Person (Spending > $5,000)'),
('MDI', 'Major donor/independent expenditure'),
('OTH', 'Other'),
('PTY', 'Political party'),
('RCP', 'Recipient committee'),
('SCC', 'Small contributor committee'),
('SMO', 'Slate mailer organization'),
)
entity_cd = fields.CharField(
max_length=4,
db_column='ENTITY_CD',
blank=True,
choices=ENTITY_CODE_CHOICES,
verbose_name='entity code'
)
file_email = fields.CharField(
max_length=60,
db_column='FILE_EMAIL',
blank=True,
help_text="Filer's email address"
)
# filer_adr1 = fields.CharField(
# max_length=55, db_column='FILER_ADR1', blank=True
# )
# filer_adr2 = fields.CharField(
# max_length=55, db_column='FILER_ADR2', blank=True
# )
filer_city = fields.CharField(
max_length=30,
db_column='FILER_CITY',
blank=True,
help_text="Filer's city"
)
filer_fax = fields.CharField(
max_length=20,
db_column='FILER_FAX',
blank=True,
help_text="Filer's fax"
)
filer_id = fields.CharField(
verbose_name='filer ID',
db_column='FILER_ID',
max_length=15,
blank=True,
db_index=True,
help_text="Filer's unique identification number",
)
filer_namf = fields.CharField(
max_length=45,
db_column='FILER_NAMF',
blank=True,
help_text="Filer's first name, if an individual"
)
filer_naml = fields.CharField(
max_length=200,
db_column='FILER_NAML',
help_text="The committee's or organization's name or if an \
individual the filer's last name."
)
filer_nams = fields.CharField(
max_length=10,
db_column='FILER_NAMS',
blank=True,
help_text="Filer's suffix, if an individual"
)
filer_namt = fields.CharField(
max_length=10,
db_column='FILER_NAMT',
blank=True,
help_text="Filer's title or prefix, if an individual"
)
filer_phon = fields.CharField(
max_length=20,
db_column='FILER_PHON',
blank=True,
help_text="Filer phone number"
)
filer_st = fields.CharField(
max_length=4,
db_column='FILER_ST',
blank=True,
help_text="Filer state"
)
filer_zip4 = fields.CharField(
max_length=10,
db_column='FILER_ZIP4',
blank=True,
help_text="Filer ZIP Code"
)
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identification number"
)
FORM_TYPE_CHOICES = (
('F511', 'Form 511 (Paid spokesman report)'),
('F900', 'Form 900 (Public employee\'s retirement board, \
candidate campaign statement)'),
('F425', 'Form 425 (Semi-annual statement of no activity, \
non-controlled recipient committee)'),
('F450', 'Form 450 (Recipient committee campaign statement, \
short form)'),
('F401', 'Form 401 (Slate mailer organization campaign statement)'),
('F498', 'Form 498 (Late payment report, slate mailer organizations)'),
('F465', 'Form 465 (Supplemental independent expenditure report)'),
('F496', 'Form 496 (Late independent expenditure report)'),
('F461', 'Form 461 (Independent expenditure committee \
and major donor committee campaign statement)'),
('F460', 'Form 460 (Recipient committee campaign statement)'),
('F497', 'Form 497 (Late contribution report)')
)
form_type = fields.CharField(
choices=FORM_TYPE_CHOICES,
max_length=4,
db_column='FORM_TYPE',
help_text='Name of the source filing form or schedule'
)
from_date = fields.DateTimeField(
null=True,
db_column='FROM_DATE',
blank=True,
help_text="Reporting period from date"
)
juris_cd = fields.CharField(
max_length=3,
db_column='JURIS_CD',
blank=True,
help_text="Office jurisdiction code"
)
juris_dscr = fields.CharField(
max_length=40,
db_column='JURIS_DSCR',
blank=True,
help_text="Office Jurisdiction description if the field JURIS_CD is \
set to city (CIT), county (CTY), local (LOC), or other \
(OTH)."
)
late_rptno = fields.CharField(
max_length=30,
db_column='LATE_RPTNO',
blank=True,
help_text="Identifying Report Number used to distinguish multiple \
reports filed during the same filing period. For example, \
this field allows for multiple form 497s to be filed on the \
same day."
)
# mail_adr1 = fields.CharField(
# max_length=55, db_column='MAIL_ADR1', blank=True
# )
# mail_adr2 = fields.CharField(
# max_length=55, db_column='MAIL_ADR2', blank=True
# )
mail_city = fields.CharField(
max_length=30,
db_column='MAIL_CITY',
blank=True,
help_text="Filer mailing address city"
)
mail_st = fields.CharField(
max_length=4,
db_column='MAIL_ST',
blank=True,
help_text="Filer mailing address state"
)
mail_zip4 = fields.CharField(
max_length=10,
db_column='MAIL_ZIP4',
blank=True,
help_text="Filer mailing address ZIP Code"
)
occupation = fields.CharField(
max_length=60,
db_column='OCCUPATION',
blank=True,
help_text="Occupation. This field is most likely unused."
)
off_s_h_cd = fields.CharField(
max_length=1,
db_column='OFF_S_H_CD',
blank=True,
help_text='Office Sought/Held Code. Legal values are "S" for \
sought and "H" for held.'
)
offic_dscr = fields.CharField(
max_length=40,
db_column='OFFIC_DSCR',
blank=True,
help_text="Office sought description if the field OFFICE_CD is set \
to other (OTH)"
)
office_cd = fields.CharField(
max_length=3,
db_column='OFFICE_CD',
blank=True,
verbose_name="Office code",
help_text="Code that identifies the office being sought"
)
other_cb = fields.CharField(
max_length=1,
db_column='OTHER_CB',
blank=True,
help_text="Other entity interests info included check-box. Legal \
values are 'X' and null."
)
other_int = fields.CharField(
max_length=90,
db_column='OTHER_INT',
blank=True,
help_text="Other entity interests description"
)
primfrm_yn = fields.CharField(
max_length=1,
db_column='PRIMFRM_YN',
blank=True,
help_text="Primarily Formed Committee (yes/no) checkbox. Legal \
values are 'Y' or 'N'."
)
REC_TYPE_CHOICES = (
("CVR", "Cover"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
report_num = fields.CharField(
max_length=3,
db_column='REPORT_NUM',
help_text="Amendment number, as reported by the filer. \
Report Number 000 represents an original filing. 001-999 are amendments."
)
reportname = fields.CharField(
max_length=3,
db_column='REPORTNAME',
blank=True,
help_text="Attached campaign disclosure statement type. Legal \
values are 450, 460, and 461."
)
rpt_att_cb = fields.CharField(
max_length=4,
db_column='RPT_ATT_CB',
blank=True,
help_text="Committee Report Attached check-box. Legal values \
are 'X' or null. This field applies to the form 401."
)
rpt_date = fields.DateTimeField(
db_column='RPT_DATE',
null=True,
help_text="Date this report was filed, according to the filer"
)
rptfromdt = fields.DateTimeField(
null=True,
db_column='RPTFROMDT',
blank=True,
help_text="Attached campaign disclosure statement - Period from \
date."
)
rptthrudt = fields.DateTimeField(
null=True,
db_column='RPTTHRUDT',
blank=True,
help_text="Attached campaign disclosure statement - Period \
through date."
)
selfemp_cb = fields.CharField(
max_length=1,
db_column='SELFEMP_CB',
blank=True,
help_text="Self employed check-box"
)
sponsor_yn = fields.IntegerField(
null=True,
db_column='SPONSOR_YN',
blank=True,
help_text="Sponsored Committee (yes/no) checkbox. Legal values \
are 'Y' or 'N'."
)
stmt_type = fields.CharField(
max_length=2,
db_column='STMT_TYPE',
blank=True,
help_text='Type of statement'
)
sup_opp_cd = fields.CharField(
max_length=1,
db_column='SUP_OPP_CD',
blank=True,
help_text='Support/oppose code. Legal values are "S" for support \
or "O" for oppose.'
)
thru_date = fields.DateTimeField(
null=True,
db_column='THRU_DATE',
blank=True,
help_text='Reporting period through date'
)
# tres_adr1 = fields.CharField(
# max_length=55, db_column='TRES_ADR1', blank=True
# )
# tres_adr2 = fields.CharField(
# max_length=55, db_column='TRES_ADR2', blank=True
# )
tres_city = fields.CharField(
max_length=30,
db_column='TRES_CITY',
blank=True,
help_text="City portion of the treasurer or responsible \
officer's street address."
)
tres_email = fields.CharField(
max_length=60,
db_column='TRES_EMAIL',
blank=True,
help_text="Treasurer or responsible officer's email"
)
tres_fax = fields.CharField(
max_length=20,
db_column='TRES_FAX',
blank=True,
help_text="Treasurer or responsible officer's fax number"
)
tres_namf = fields.CharField(
max_length=45,
db_column='TRES_NAMF',
blank=True,
help_text="Treasurer or responsible officer's first name"
)
tres_naml = fields.CharField(
max_length=200,
db_column='TRES_NAML',
blank=True,
help_text="Treasurer or responsible officer's last name"
)
tres_nams = fields.CharField(
max_length=10,
db_column='TRES_NAMS',
blank=True,
help_text="Treasurer or responsible officer's suffix"
)
tres_namt = fields.CharField(
max_length=10,
db_column='TRES_NAMT',
blank=True,
help_text="Treasurer or responsible officer's prefix or title"
)
tres_phon = fields.CharField(
max_length=20,
db_column='TRES_PHON',
blank=True,
help_text="Treasurer or responsible officer's phone number"
)
tres_st = fields.CharField(
max_length=2,
db_column='TRES_ST',
blank=True,
help_text="Treasurer or responsible officer's state"
)
tres_zip4 = fields.CharField(
max_length=10,
db_column='TRES_ZIP4',
blank=True,
help_text="Treasurer or responsible officer's ZIP Code"
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'CVR_CAMPAIGN_DISCLOSURE_CD'
verbose_name = 'CVR_CAMPAIGN_DISCLOSURE_CD'
verbose_name_plural = 'CVR_CAMPAIGN_DISCLOSURE_CD'
def __str__(self):
return str(self.filing_id)
@python_2_unicode_compatible
class Cvr2CampaignDisclosureCd(CalAccessBaseModel):
"""
Record used to carry additional names for the campaign
disclosure forms below.
"""
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 are amendments.",
verbose_name="amendment ID"
)
bal_juris = fields.CharField(
max_length=40,
db_column='BAL_JURIS',
blank=True,
help_text="Ballot measure jurisdiction"
)
bal_name = fields.CharField(
max_length=200,
db_column='BAL_NAME',
blank=True,
help_text="Ballot measure name"
)
bal_num = fields.CharField(
max_length=7,
db_column='BAL_NUM',
blank=True,
help_text="Ballot measure number or letter"
)
cmte_id = fields.CharField(
max_length=9,
db_column='CMTE_ID',
blank=True,
help_text="Committee identification number, when the entity \
is a committee"
)
control_yn = fields.IntegerField(
null=True,
db_column='CONTROL_YN',
blank=True,
help_text='Controlled Committee (yes/no) checkbox. Legal values \
are "Y" or "N".'
)
dist_no = fields.CharField(
max_length=3,
db_column='DIST_NO',
blank=True,
help_text="District number for the office being sought. Populated \
for Senate, Assembly, or Board of Equalization races."
)
ENTITY_CODE_CHOICES = (
# Defined here:
# http://www.documentcloud.org/documents/1308003-cal-access-cal-\
# format.html#document/p9
('', 'Unknown'),
('ATR', 'Assistant treasurer'),
('BNM', 'Ballot measure\'s name/title'),
('CAO', 'Candidate/officeholder'),
('CTL', 'Controlled committee'),
('COM', 'Committee'),
('FIL', 'Candidate filing/ballot fees'),
('OFF', 'Officer (Responsible)'),
('PEX', 'PEX (Unknown)'),
('POF', 'Principal officer'),
('PRO', 'Proponent'),
('RCP', 'Recipient committee'),
('RDP', 'RDP (Unknown)'),
)
entity_cd = fields.CharField(
max_length=3,
db_column='ENTITY_CD',
blank=True,
verbose_name='entity code',
choices=ENTITY_CODE_CHOICES,
)
# enty_adr1 = fields.CharField(
# max_length=55, db_column='ENTY_ADR1', blank=True
# )
# enty_adr2 = fields.CharField(
# max_length=55, db_column='ENTY_ADR2', blank=True
# )
enty_city = fields.CharField(
max_length=30,
db_column='ENTY_CITY',
blank=True,
help_text="Entity city"
)
enty_email = fields.CharField(
max_length=60,
db_column='ENTY_EMAIL',
blank=True,
help_text="Entity email address"
)
enty_fax = fields.CharField(
max_length=20,
db_column='ENTY_FAX',
blank=True,
help_text="Entity fax number"
)
enty_namf = fields.CharField(
max_length=45,
db_column='ENTY_NAMF',
blank=True,
help_text="Entity first name, if an individual"
)
enty_naml = fields.CharField(
max_length=200,
db_column='ENTY_NAML',
blank=True,
help_text="Entity name, or last name if an individual"
)
enty_nams = fields.CharField(
max_length=10,
db_column='ENTY_NAMS',
blank=True,
help_text="Entity suffix, if an individual"
)
enty_namt = fields.CharField(
max_length=10,
db_column='ENTY_NAMT',
blank=True,
help_text="Entity prefix or title, if an individual"
)
enty_phon = fields.CharField(
max_length=20,
db_column='ENTY_PHON',
blank=True,
help_text="Entity phone number"
)
enty_st = fields.CharField(
max_length=2,
db_column='ENTY_ST',
blank=True,
help_text="Entity state"
)
enty_zip4 = fields.CharField(
max_length=10,
db_column='ENTY_ZIP4',
blank=True,
help_text="Entity ZIP code"
)
f460_part = fields.CharField(
max_length=2,
db_column='F460_PART',
blank=True,
help_text="Part of 460 cover page coded on this CVR2 record. Legal \
values are 3, 4a, 4b, 5a, 5b, or 6."
)
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identification number"
)
FORM_TYPE_CHOICES = (
('F425', 'Form 425 (Semi-annual statement of no activity, \
non-controlled committees)'),
('F450', 'Form 450 (Recipient committee campaign statement, \
short form)'),
('F460', 'Form 460 (Recipient committee campaign statement)'),
('F465', 'Form 465 (Supplemental independent expenditure report)'),
)
form_type = fields.CharField(
choices=FORM_TYPE_CHOICES,
max_length=4,
db_column='FORM_TYPE',
help_text='Name of the source filing form or schedule'
)
juris_cd = fields.CharField(
max_length=3,
db_column='JURIS_CD',
blank=True,
help_text="Office jurisdiction code"
)
juris_dscr = fields.CharField(
max_length=40,
db_column='JURIS_DSCR',
blank=True,
help_text="Office jurisdiction description"
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Line item number of this record",
db_index=True,
)
# mail_adr1 = fields.CharField(
# max_length=55, db_column='MAIL_ADR1', blank=True
# )
# mail_adr2 = fields.CharField(
# max_length=55, db_column='MAIL_ADR2', blank=True
# )
mail_city = fields.CharField(
max_length=30,
db_column='MAIL_CITY',
blank=True,
help_text="Filer's mailing city"
)
mail_st = fields.CharField(
max_length=2,
db_column='MAIL_ST',
blank=True,
help_text="Filer's mailing state"
)
mail_zip4 = fields.CharField(
max_length=10,
db_column='MAIL_ZIP4',
blank=True,
help_text="Filer's mailing ZIP Code"
)
off_s_h_cd = fields.CharField(
max_length=1,
db_column='OFF_S_H_CD',
blank=True,
help_text='Office sought/held code. Indicates if the candidate is an \
incumbent. Legal values are "S" for sought and "H" for held.'
)
offic_dscr = fields.CharField(
max_length=40,
db_column='OFFIC_DSCR',
blank=True,
help_text="Office sought description"
)
office_cd = fields.CharField(
max_length=3,
db_column='OFFICE_CD',
blank=True,
verbose_name="Office code",
help_text="Code that identifies the office being sought"
)
REC_TYPE_CHOICES = (
("CVR2", "Cover, Page 2"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
sup_opp_cd = fields.CharField(
max_length=1,
db_column='SUP_OPP_CD',
blank=True,
help_text='Support/Oppose (S/O) code for the ballot measure. \
Legal values are "S" for support or "O" for oppose.'
)
title = fields.CharField(
max_length=90,
db_column='TITLE',
blank=True,
help_text="Official title of filing officer. Applies to the form 465."
)
tran_id = fields.CharField(
verbose_name='transaction ID',
max_length=20,
db_column='TRAN_ID',
blank=True,
help_text='Permanent value unique to this item',
)
tres_namf = fields.CharField(
max_length=45,
db_column='TRES_NAMF',
blank=True,
help_text="Treasurer or responsible officer's first name"
)
tres_naml = fields.CharField(
max_length=200,
db_column='TRES_NAML',
blank=True,
help_text="Treasurer or responsible officer's last name"
)
tres_nams = fields.CharField(
max_length=10,
db_column='TRES_NAMS',
blank=True,
help_text="Treasurer or responsible officer's suffix"
)
tres_namt = fields.CharField(
max_length=10,
db_column='TRES_NAMT',
blank=True,
help_text="Treasurer or responsible officer's prefix or title"
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'CVR2_CAMPAIGN_DISCLOSURE_CD'
verbose_name = 'CVR2_CAMPAIGN_DISCLOSURE_CD'
verbose_name_plural = 'CVR2_CAMPAIGN_DISCLOSURE_CD'
def __str__(self):
return str(self.filing_id)
@python_2_unicode_compatible
class RcptCd(CalAccessBaseModel):
"""
Receipts schedules for the following forms.
Form 460 (Recipient Committee Campaign Statement)
Schedules A, C, I, and A-1.
Form 401 (Slate Mailer Organization Campaign Statement) Schedule A.
"""
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 are amendments.",
verbose_name="amendment ID"
)
amount = fields.DecimalField(
decimal_places=2,
max_digits=14,
db_column='AMOUNT',
help_text="Amount Received (Monetary, In-kind, Promise)"
)
bakref_tid = fields.CharField(
max_length=20,
db_column='BAKREF_TID',
blank=True,
help_text="Back Reference to a transaction identifier of a parent \
record"
)
bal_juris = fields.CharField(
max_length=40,
db_column='BAL_JURIS',
blank=True,
help_text="Jurisdiction of ballot measure. Used on the Form 401 \
Schedule A"
)
bal_name = fields.CharField(
max_length=200,
db_column='BAL_NAME',
blank=True,
help_text="Ballot measure name. Used on the Form 401 Schedule A"
)
bal_num = fields.CharField(
max_length=7,
db_column='BAL_NUM',
blank=True,
help_text="Ballot measure number or letter. Used on the Form 401 \
Schedule A"
)
cand_namf = fields.CharField(
max_length=45,
db_column='CAND_NAMF',
blank=True,
help_text="Candidate/officeholder's first name. Used on the Form \
401 Schedule A"
)
cand_naml = fields.CharField(
max_length=200,
db_column='CAND_NAML',
blank=True,
help_text="Candidate/officeholder's last name. Used on the Form \
401 Schedule A"
)
cand_nams = fields.CharField(
max_length=10,
db_column='CAND_NAMS',
blank=True,
help_text="Candidate/officeholder's name suffix. Used on the Form \
401 Schedule A"
)
cand_namt = fields.CharField(
max_length=10,
db_column='CAND_NAMT',
blank=True,
help_text="Candidate/officeholder's name prefix or title. Used on \
the Form 401 Schedule A"
)
cmte_id = fields.CharField(
max_length=9,
db_column='CMTE_ID',
blank=True,
help_text="Committee Identification number"
)
# ctrib_adr1 = fields.CharField(
# max_length=55,
# db_column='CTRIB_ADR1',
# blank=True,
# default="",
# help_text="First line of the contributor's street address"
# )
# ctrib_adr2 = fields.CharField(
# max_length=55,
# db_column='CTRIB_ADR2',
# blank=True,
# help_text="Second line of the contributor's street address"
# )
ctrib_city = fields.CharField(
max_length=30,
db_column='CTRIB_CITY',
blank=True,
help_text="Contributor's City"
)
ctrib_dscr = fields.CharField(
max_length=90,
db_column='CTRIB_DSCR',
blank=True,
help_text="Description of goods/services received"
)
ctrib_emp = fields.CharField(
max_length=200,
db_column='CTRIB_EMP',
blank=True,
help_text="Employer"
)
ctrib_namf = fields.CharField(
max_length=45,
db_column='CTRIB_NAMF',
blank=True,
help_text="Contributor's First Name"
)
ctrib_naml = fields.CharField(
max_length=200,
db_column='CTRIB_NAML',
help_text="Contributor's last name or business name"
)
ctrib_nams = fields.CharField(
max_length=10,
db_column='CTRIB_NAMS',
blank=True,
help_text="Contributor's Suffix"
)
ctrib_namt = fields.CharField(
max_length=10,
db_column='CTRIB_NAMT',
blank=True,
help_text="Contributor's Prefix or Title"
)
ctrib_occ = fields.CharField(
max_length=60,
db_column='CTRIB_OCC',
blank=True,
help_text="Occupation"
)
ctrib_self = fields.CharField(
max_length=1,
db_column='CTRIB_SELF',
blank=True,
help_text="Self Employed Check-box"
)
ctrib_st = fields.CharField(
max_length=2,
db_column='CTRIB_ST',
blank=True,
help_text="Contributor's State"
)
ctrib_zip4 = fields.CharField(
max_length=10,
db_column='CTRIB_ZIP4',
blank=True,
help_text="Contributor's ZIP+4"
)
cum_oth = fields.DecimalField(
decimal_places=2,
null=True,
max_digits=14,
db_column='CUM_OTH',
blank=True,
help_text="Cumulative Other (Sched A, A-1)"
)
cum_ytd = fields.DecimalField(
decimal_places=2,
null=True,
max_digits=14,
db_column='CUM_YTD',
blank=True,
help_text="Cumulative year to date amount (Form 460 Schedule A \
and Form 401 Schedule A, A-1)"
)
date_thru = fields.DateField(
null=True,
db_column='DATE_THRU',
blank=True,
help_text="End of date range for items received"
)
dist_no = fields.CharField(
max_length=3,
db_column='DIST_NO',
blank=True,
help_text="Office District Number (used on F401A)"
)
ENTITY_CODE_CHOICES = (
# Defined here:
# http://www.documentcloud.org/documents/1308003-cal-access-cal-\
# format.html#document/p9
("", "None"),
("0", "0 (Unknown)"),
("BNM", "Ballot measure\'s name/title"),
("COM", "Committee"),
("IND", "Individual"),
("OFF", "Officer (Responsible)"),
("OTH", "Other"),
("PTY", "Political party"),
("RCP", "Recipient committee"),
("SCC", "Small contributor committee"),
)
entity_cd = fields.CharField(
max_length=3,
db_column='ENTITY_CD',
blank=True,
help_text="Entity code. Values: [COM|RCP|IND|OTH]",
choices=ENTITY_CODE_CHOICES
)
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identification number"
)
FORM_TYPE_CHOICES = (
('F900', 'Form 900 (Public employee\'s retirement board, \
candidate campaign statement): Schedule A'),
('A-1', 'Form 460: Schedule A-1, contributions transferred \
to special election committees'),
('E530', 'Form E530 (Issue advocacy receipts)'),
('F496P3', 'Form 496 (Late independent expenditure): \
Part 3, contributions > $100 received'),
('F401A', 'Form 401 (Slate mailer organization): Schedule A, \
payments received'),
('I', 'Form 460 (Recipient committee campaign statement): \
Schedule I, miscellaneous increases to cash'),
('C', 'Form 460 (Recipient committee campaign statement): \
Schedule C, non-monetary contributions received'),
('A', 'Form 460 (Recipient committee campaign statement): \
Schedule A, monetary contributions received')
)
form_type = fields.CharField(
choices=FORM_TYPE_CHOICES,
max_length=9,
db_column='FORM_TYPE',
help_text='Name of the source filing form or schedule'
)
int_rate = fields.CharField(
max_length=9,
db_column='INT_RATE',
blank=True,
help_text="This field is undocumented"
)
# intr_adr1 = fields.CharField(
# max_length=55,
# db_column='INTR_ADR1',
# blank=True,
# help_text="First line of the intermediary's street address."
# )
# intr_adr2 = fields.CharField(
# max_length=55,
# db_column='INTR_ADR2',
# blank=True,
# help_text="Second line of the Intermediary's street address."
# )
intr_city = fields.CharField(
max_length=30,
db_column='INTR_CITY',
blank=True,
help_text="Intermediary's City"
)
intr_cmteid = fields.CharField(
max_length=9,
db_column='INTR_CMTEID',
blank=True,
help_text="This field is undocumented"
)
intr_emp = fields.CharField(
max_length=200,
db_column='INTR_EMP',
blank=True,
help_text="Intermediary's Employer"
)
intr_namf = fields.CharField(
max_length=45,
db_column='INTR_NAMF',
blank=True,
help_text="Intermediary's First Name"
)
intr_naml = fields.CharField(
max_length=200,
db_column='INTR_NAML',
blank=True,
help_text="Intermediary's Last Name"
)
intr_nams = fields.CharField(
max_length=10,
db_column='INTR_NAMS',
blank=True,
help_text="Intermediary's Suffix"
)
intr_namt = fields.CharField(
max_length=10,
db_column='INTR_NAMT',
blank=True,
help_text="Intermediary's Prefix or Title"
)
intr_occ = fields.CharField(
max_length=60,
db_column='INTR_OCC',
blank=True,
help_text="Intermediary's Occupation"
)
intr_self = fields.CharField(
max_length=1,
db_column='INTR_SELF',
blank=True,
help_text="Intermediary's self employed check box"
)
intr_st = fields.CharField(
max_length=2,
db_column='INTR_ST',
blank=True,
help_text="Intermediary's state"
)
intr_zip4 = fields.CharField(
max_length=10,
db_column='INTR_ZIP4',
blank=True,
help_text="Intermediary's zip code"
)
juris_cd = fields.CharField(
max_length=3,
db_column='JURIS_CD',
blank=True,
help_text="Office jurisdiction code. See the CAL document for the \
list of legal values. Used on Form 401 Schedule A"
)
juris_dscr = fields.CharField(
max_length=40,
db_column='JURIS_DSCR',
blank=True,
help_text="Office Jurisdiction Description (used on F401A)"
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Line item number of this record",
db_index=True,
)
memo_code = fields.CharField(
max_length=1,
db_column='MEMO_CODE',
blank=True,
help_text="Memo amount flag (Date/Amount are informational only)"
)
memo_refno = fields.CharField(
max_length=20,
db_column='MEMO_REFNO',
blank=True,
help_text="Reference to text contained in a TEXT record"
)
off_s_h_cd = fields.CharField(
max_length=1,
db_column='OFF_S_H_CD',
blank=True,
help_text="Office Sought/Held Code. Used on the Form 401 \
Schedule A. Legal values are 'S' for sought and 'H' for \
held"
)
offic_dscr = fields.CharField(
max_length=40,
db_column='OFFIC_DSCR',
blank=True,
help_text="Office Sought Description (used on F401A)"
)
office_cd = fields.CharField(
max_length=3,
db_column='OFFICE_CD',
blank=True,
help_text="Code that identifies the office being sought. See the \
CAL document for a list of valid codes. Used on the \
Form 401 Schedule A)"
)
rcpt_date = fields.DateField(
db_column='RCPT_DATE',
null=True,
help_text="Date item received"
)
REC_TYPE_CHOICES = (
("E530", "E530"),
("RCPT", "RCPT"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
sup_opp_cd = fields.CharField(
max_length=1,
db_column='SUP_OPP_CD',
blank=True,
help_text="Support/oppose code. Legal values are 'S' for support \
            or 'O' for oppose. Used on Form 401 Schedule A."
)
tran_id = fields.CharField(
verbose_name='transaction ID',
max_length=20,
db_column='TRAN_ID',
blank=True,
help_text='Permanent value unique to this item',
)
tran_type = fields.CharField(
max_length=1,
db_column='TRAN_TYPE',
blank=True,
help_text="Transaction Type: Values T- third party | F Forgiven \
loan | R Returned (Negative amount)"
)
# tres_adr1 = fields.CharField(
# max_length=55,
# db_column='TRES_ADR1',
# blank=True,
# help_text="First line of the treasurer or responsible officer's \
# street address"
# )
# tres_adr2 = fields.CharField(
# max_length=55,
# db_column='TRES_ADR2',
# blank=True,
# help_text="Second line of the treasurer or responsible officer's \
# street address"
# )
tres_city = fields.CharField(
max_length=30,
db_column='TRES_CITY',
blank=True,
help_text="City portion of the treasurer or responsible officer's \
street address"
)
tres_namf = fields.CharField(
max_length=45,
db_column='TRES_NAMF',
blank=True,
help_text="Treasurer or responsible officer's first name"
)
tres_naml = fields.CharField(
max_length=200,
db_column='TRES_NAML',
blank=True,
help_text="Treasurer or responsible officer's last name"
)
tres_nams = fields.CharField(
max_length=10,
db_column='TRES_NAMS',
blank=True,
help_text="Treasurer or responsible officer's suffix"
)
tres_namt = fields.CharField(
max_length=10,
db_column='TRES_NAMT',
blank=True,
help_text="Treasurer or responsible officer's prefix or title"
)
tres_st = fields.CharField(
max_length=2,
db_column='TRES_ST',
blank=True,
help_text="State portion of the treasurer or responsible officer's \
address"
)
tres_zip4 = fields.CharField(
null=True,
max_length=10,
blank=True,
db_column='TRES_ZIP4',
help_text="Zip code portion of the treasurer or responsible officer's \
address"
)
xref_match = fields.CharField(
max_length=1,
db_column='XREF_MATCH',
blank=True,
help_text="Related item on other schedule has same transaction \
identifier. 'X' indicates this condition is true"
)
xref_schnm = fields.CharField(
max_length=2,
db_column='XREF_SCHNM',
blank=True,
help_text="Related record is included on Sched 'B2' or 'F'"
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'RCPT_CD'
verbose_name = 'RCPT_CD'
verbose_name_plural = 'RCPT_CD'
def __str__(self):
return str(self.filing_id)
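The `*_CHOICES` tuples above follow Django's `choices` convention: each pair maps a raw CAL-ACCESS code to a human-readable label, and Django's generated `get_FOO_display()` method resolves the stored code at render time. A minimal standalone sketch of that lookup (the shortened tuple below is illustrative, not the full set of codes):

```python
# Django-style choices: (stored_value, human_label) pairs, abridged from
# ENTITY_CODE_CHOICES above for illustration.
ENTITY_CODES = (
    ("", "None"),
    ("COM", "Committee"),
    ("IND", "Individual"),
    ("RCP", "Recipient committee"),
)

def display(code, choices=ENTITY_CODES):
    """Mimic Django's get_FOO_display(): label if known, else the raw code."""
    return dict(choices).get(code, code)

print(display("RCP"))  # -> Recipient committee
print(display("XYZ"))  # unmapped codes fall through unchanged
```

Unmapped codes falling through unchanged mirrors Django's behavior when a stored value is missing from `choices`, which matters for this dataset because CAL-ACCESS contains undocumented codes.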
@python_2_unicode_compatible
class Cvr3VerificationInfoCd(CalAccessBaseModel):
"""
Cover page verification information from campaign disclosure forms
"""
UNIQUE_KEY = (
"FILING_ID",
"AMEND_ID",
"LINE_ITEM",
"REC_TYPE",
"FORM_TYPE"
)
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identificiation number"
)
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 amendments.",
verbose_name="amendment ID"
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Line item number of this record",
db_index=True,
)
REC_TYPE_CHOICES = (
("CVR3", "CVR3"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
FORM_TYPE_CHOICES = (
('F400', 'Form 400 (Statement of organization, \
slate mailer organization)'),
('F401', 'Form 401 (Slate mailer organization campaign statement)'),
('F402', 'Form 402 (Statement of termination, \
            slate mailer organization)'),
('F410', 'Form 410 (Statement of organization, recipient committee)'),
('F425', 'Form 425 (Semi-annual statement of no activity, \
non-controlled committees)'),
('F450', 'Form 450 (Recipient committee campaign statement, \
short form)'),
('F460', 'Form 460 (Recipient committee campaign statement)'),
('F461', 'Form 461 (Independent expenditure and major donor \
committee campaign statement)'),
('F465', 'Form 465 (Supplemental independent expenditure report)'),
('F511', 'Form 511 (Paid spokesman report)'),
('F900', 'Form 900 (Public employee\'s retirement board, \
candidate campaign statement)'),
)
form_type = fields.CharField(
db_column='FORM_TYPE',
max_length=4,
help_text='Name of the source filing form or schedule',
db_index=True,
choices=FORM_TYPE_CHOICES,
)
tran_id = fields.CharField(
verbose_name='transaction ID',
max_length=20,
db_column='TRAN_ID',
blank=True,
help_text='Permanent value unique to this item',
)
ENTITY_CODE_CHOICES = (
        # Defined here:
        # http://www.documentcloud.org/documents/1308003-cal-access-cal-format.html#document/p9
('', 'Unknown'),
('0', '0 (Unknown)'),
('ATR', 'Assistant treasurer'),
('BBB', 'BBB (Unknown)'),
('COA', 'COA (Unknown)'),
('CAO', 'Candidate/officeholder'),
('CON', 'State controller'),
('MAI', 'MAI (Unknown)'),
('MDI', 'Major donor/independent expenditure'),
('OFF', 'Officer (Responsible)'),
('POF', 'Principal officer'),
('PRO', 'Proponent'),
('RCP', 'Recipient committee'),
('SPO', 'Sponsor'),
('TRE', 'Treasurer'),
)
entity_cd = fields.CharField(
db_column='ENTITY_CD',
max_length=3,
blank=True,
verbose_name='entity code',
choices=ENTITY_CODE_CHOICES,
)
sig_date = fields.DateField(
verbose_name='signed date',
db_column='SIG_DATE',
blank=True,
null=True,
help_text='date when signed',
)
sig_loc = fields.CharField(
verbose_name='signed location',
db_column='SIG_LOC',
max_length=39,
blank=True,
help_text='city and state where signed',
)
sig_naml = fields.CharField(
verbose_name='last name',
db_column='SIG_NAML',
max_length=56,
blank=True,
help_text='last name of the signer',
)
sig_namf = fields.CharField(
verbose_name='first name',
db_column='SIG_NAMF',
max_length=45,
blank=True,
help_text='first name of the signer',
)
sig_namt = fields.CharField(
verbose_name='title',
db_column='SIG_NAMT',
max_length=10,
blank=True,
help_text='title of the signer',
)
sig_nams = fields.CharField(
verbose_name='suffix',
db_column='SIG_NAMS',
max_length=8,
blank=True,
help_text='suffix of the signer',
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'CVR3_VERIFICATION_INFO_CD'
verbose_name = 'CVR3_VERIFICATION_INFO_CD'
verbose_name_plural = 'CVR3_VERIFICATION_INFO_CD'
def __str__(self):
return str(self.filing_id)
@python_2_unicode_compatible
class LoanCd(CalAccessBaseModel):
"""
Loans received and made
"""
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 amendments.",
verbose_name="amendment ID"
)
bakref_tid = fields.CharField(
max_length=20,
db_column='BAKREF_TID',
blank=True,
help_text="Back Reference to transaction identifier of parent record"
)
cmte_id = fields.CharField(
max_length=9,
db_column='CMTE_ID',
blank=True,
verbose_name="Committee ID",
help_text="Committee identification number"
)
ENTITY_CODE_CHOICES = (
        # Defined here:
        # http://www.documentcloud.org/documents/1308003-cal-access-cal-format.html#document/p9
('', 'Unknown'),
('COM', "Committee"),
("IND", "Person (spending > $5,000)"),
("OTH", "Other"),
("PTY", "Political party"),
('RCP', 'Recipient committee'),
('SCC', 'Small contributor committee'),
)
entity_cd = fields.CharField(
max_length=3,
db_column='ENTITY_CD',
blank=True,
verbose_name="entity code",
choices=ENTITY_CODE_CHOICES,
)
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identificiation number"
)
FORM_TYPE_CHOICES = (
('B1', 'Form 460 (Recipient committee campaign statement): \
Schedule B1'),
('B2', 'Form 460 (Recipient committee campaign statement): \
Schedule B2'),
('B3', 'Form 460 (Recipient committee campaign statement): \
Schedule B3'),
('H', 'Form 460 (Recipient committee campaign statement): \
Schedule H'),
('H1', 'Form 460 (Recipient committee campaign statement): \
Schedule H1'),
('H2', 'Form 460 (Recipient committee campaign statement): \
Schedule H2'),
('H3', 'Form 460 (Recipient committee campaign statement): \
Schedule H3'),
)
form_type = fields.CharField(
max_length=2,
db_column='FORM_TYPE',
choices=FORM_TYPE_CHOICES,
help_text='Name of the source filing form or schedule'
)
# intr_adr1 = fields.CharField(
# max_length=55, db_column='INTR_ADR1', blank=True
# )
# intr_adr2 = fields.CharField(
# max_length=55, db_column='INTR_ADR2', blank=True
# )
intr_city = fields.CharField(
max_length=30,
db_column='INTR_CITY',
blank=True,
help_text="Intermediary's city"
)
intr_namf = fields.CharField(
max_length=45,
db_column='INTR_NAMF',
blank=True,
help_text="Intermediary's first name"
)
intr_naml = fields.CharField(
max_length=200,
db_column='INTR_NAML',
blank=True,
help_text="Intermediary's last name"
)
intr_nams = fields.CharField(
max_length=10,
db_column='INTR_NAMS',
blank=True,
help_text="Intermediary's suffix"
)
intr_namt = fields.CharField(
max_length=10,
db_column='INTR_NAMT',
blank=True,
help_text="Intermediary's title or prefix"
)
intr_st = fields.CharField(
max_length=2,
db_column='INTR_ST',
blank=True,
help_text="Intermediary's state"
)
intr_zip4 = fields.CharField(
max_length=10,
db_column='INTR_ZIP4',
blank=True,
help_text="Intermediary's ZIP Code"
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Line item number of this record",
db_index=True,
)
lndr_namf = fields.CharField(
max_length=45,
db_column='LNDR_NAMF',
blank=True,
help_text="Lender's first name"
)
lndr_naml = fields.CharField(
max_length=200,
db_column='LNDR_NAML',
help_text="Lender's last name or business name"
)
lndr_nams = fields.CharField(
max_length=10,
db_column='LNDR_NAMS',
blank=True,
help_text="Lender's suffix"
)
lndr_namt = fields.CharField(
max_length=10,
db_column='LNDR_NAMT',
blank=True,
help_text="Lender's title or prefix"
)
# loan_adr1 = fields.CharField(
# max_length=55, db_column='LOAN_ADR1', blank=True
# )
# loan_adr2 = fields.CharField(
# max_length=55, db_column='LOAN_ADR2', blank=True
# )
loan_amt1 = fields.DecimalField(
decimal_places=2,
null=True,
max_digits=14,
db_column='LOAN_AMT1',
blank=True,
help_text="Repaid or forgiven amount; Original loan amount. The \
content of this column varies based on the \
schedule/part that the record applies to. See the CAL \
document for a description of the value of this field."
)
loan_amt2 = fields.DecimalField(
decimal_places=2,
null=True,
max_digits=14,
db_column='LOAN_AMT2',
blank=True,
help_text="Outstanding Principal; unpaid balance. The content of \
this column varies based on the schedule/part that the \
record applies to. See the CAL document for a \
description of the value of this field."
)
loan_amt3 = fields.DecimalField(
decimal_places=2,
null=True,
max_digits=14,
db_column='LOAN_AMT3',
blank=True,
help_text="Interest Paid; Unpaid interest; Interest received. The \
content of this column varies based on the \
schedule/part that the record applies to. See the CAL \
document for a description of the value of this field."
)
loan_amt4 = fields.DecimalField(
decimal_places=2,
null=True,
max_digits=14,
db_column='LOAN_AMT4',
blank=True,
help_text="Cumulative Amount/Other. The content of this column \
varies based on the schedule/part that the record \
applies to. See the CAL document for a description of the \
value of this field."
)
loan_amt5 = fields.DecimalField(
decimal_places=2,
null=True,
max_digits=14,
db_column='LOAN_AMT5',
blank=True,
help_text="This field is undocumented"
)
loan_amt6 = fields.DecimalField(
decimal_places=2,
null=True,
max_digits=14,
db_column='LOAN_AMT6',
blank=True,
help_text="This field is undocumented"
)
loan_amt7 = fields.DecimalField(
decimal_places=2,
null=True,
max_digits=14,
db_column='LOAN_AMT7',
blank=True,
help_text="This field is undocumented"
)
loan_amt8 = fields.DecimalField(
decimal_places=2,
null=True,
max_digits=14,
db_column='LOAN_AMT8',
blank=True,
help_text="This field is undocumented"
)
loan_city = fields.CharField(
max_length=30,
db_column='LOAN_CITY',
blank=True,
help_text="Lender's city"
)
loan_date1 = fields.DateField(
db_column='LOAN_DATE1',
null=True,
help_text="Date the loan was made or recieved. The content of this \
column varies based on the schedule/part that the \
record applies to. See the CAL document for a description of the value."
)
loan_date2 = fields.DateField(
null=True,
db_column='LOAN_DATE2',
blank=True,
help_text="Date repaid/forgiven; date loan due. The content of this \
column varies based on the schedule/part that the \
record applies to. See the CAL document for a \
description of the value of this field."
)
loan_emp = fields.CharField(
max_length=200,
db_column='LOAN_EMP',
blank=True,
help_text="Loan employer. Applies to the Form 460 Schedule B \
Part 1."
)
loan_occ = fields.CharField(
max_length=60,
db_column='LOAN_OCC',
blank=True,
help_text="Loan occupation. Applies to the Form 460 Schedule B \
Part 1."
)
loan_rate = fields.CharField(
max_length=30,
db_column='LOAN_RATE',
blank=True,
help_text="Interest Rate. The content of this column varies based \
on the schedule/part that the record applies to. See the \
CAL document for a description of the value of this field."
)
loan_self = fields.CharField(
max_length=1,
db_column='LOAN_SELF',
blank=True,
help_text="Self-employed checkbox"
)
loan_st = fields.CharField(
max_length=2,
db_column='LOAN_ST',
blank=True,
help_text="Lender's state"
)
loan_type = fields.CharField(
max_length=3,
db_column='LOAN_TYPE',
blank=True,
help_text="Type of loan"
)
loan_zip4 = fields.CharField(
max_length=10,
db_column='LOAN_ZIP4',
blank=True,
help_text="Lender's ZIP Code"
)
memo_code = fields.CharField(
max_length=1,
db_column='MEMO_CODE',
blank=True,
help_text="Memo amount flag"
)
memo_refno = fields.CharField(
max_length=20,
db_column='MEMO_REFNO',
blank=True,
help_text="Reference to text contained in a TEXT record"
)
REC_TYPE_CHOICES = (
("LOAN", "LOAN"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
tran_id = fields.CharField(
verbose_name='transaction ID',
max_length=20,
db_column='TRAN_ID',
blank=True,
help_text='Permanent value unique to this item',
)
# tres_adr1 = fields.CharField(
# max_length=55, db_column='TRES_ADR1', blank=True
# )
# tres_adr2 = fields.CharField(
# max_length=55, db_column='TRES_ADR2', blank=True
# )
tres_city = fields.CharField(
max_length=30,
db_column='TRES_CITY',
blank=True,
help_text="Treasurer or responsible officer's city"
)
tres_namf = fields.CharField(
max_length=45,
db_column='TRES_NAMF',
blank=True,
help_text="Treasurer or responsible officer's first name"
)
tres_naml = fields.CharField(
max_length=200,
db_column='TRES_NAML',
blank=True,
help_text="Treasurer or responsible officer's last name"
)
tres_nams = fields.CharField(
max_length=10,
db_column='TRES_NAMS',
blank=True,
help_text="Treasurer or responsible officer's suffix"
)
tres_namt = fields.CharField(
max_length=10,
db_column='TRES_NAMT',
blank=True,
help_text="Treasurer or responsible officer's title or prefix"
)
tres_st = fields.CharField(
max_length=2,
db_column='TRES_ST',
blank=True,
help_text="Treasurer or responsible officer's street address"
)
tres_zip4 = fields.CharField(
max_length=10,
db_column='TRES_ZIP4',
blank=True,
help_text="Treasurer or responsible officer's ZIP Code"
)
xref_match = fields.CharField(
max_length=1,
db_column='XREF_MATCH',
blank=True,
help_text='Related item on other schedule has same transaction \
identifier. "X" indicates this condition is true.'
)
xref_schnm = fields.CharField(
max_length=2,
db_column='XREF_SCHNM',
blank=True,
help_text="Related record is included on Form 460 Schedule 'A' or 'E'"
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'LOAN_CD'
verbose_name = 'LOAN_CD'
verbose_name_plural = 'LOAN_CD'
def __str__(self):
return str(self.filing_id)
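As the `amend_id` help text notes, AMEND_ID 0 is the original filing and 1 to 999 are successive amendments, so the current version of any filing is the row with the highest AMEND_ID for that FILING_ID. A standalone sketch of that reduction over hypothetical rows (field names mirror the model; the data is made up):

```python
# Hypothetical LOAN_CD-style rows: AMEND_ID 0 is the original filing,
# higher AMEND_IDs supersede it.
rows = [
    {"filing_id": 123, "amend_id": 0, "loan_amt1": "1000.00"},
    {"filing_id": 123, "amend_id": 1, "loan_amt1": "1500.00"},
    {"filing_id": 456, "amend_id": 0, "loan_amt1": "250.00"},
]

def latest_amendments(rows):
    """Keep only the highest amend_id per filing_id (the current version)."""
    latest = {}
    for row in rows:
        key = row["filing_id"]
        if key not in latest or row["amend_id"] > latest[key]["amend_id"]:
            latest[key] = row
    return latest

current = latest_amendments(rows)
print(current[123]["loan_amt1"])  # -> 1500.00, the amended figure
```

The same pattern applies to any table in this module keyed on (FILING_ID, AMEND_ID).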
@python_2_unicode_compatible
class S401Cd(CalAccessBaseModel):
"""
This table contains Form 401 (Slate Mailer Organization) payment and other
disclosure schedule (F401B, F401B-1, F401C, F401D) information.
"""
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identificiation number"
)
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 amendments.",
verbose_name="amendment ID"
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Line item number of this record",
db_index=True,
)
REC_TYPE_CHOICES = (
("S401", "S401"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
FORM_TYPE_CHOICES = (
('F401B', 'Form 401 (Slate mailer organization campaign statement): \
Schedule B, payments made'),
('F401B-1', 'Form 401 (Slate mailer organization campaign statement): \
Schedule B-1, payments made by agent or independent contractor'),
('F401C', 'Form 401 (Slate mailer organization campaign statement): \
Schedule C, persons receiving $1,000 or more'),
('F401D', 'Form 401 (Slate mailer organization campaign statement): \
Schedule D, candidates or measures supported or opposed with < $100 payment'),
)
form_type = fields.CharField(
max_length=7,
db_column='FORM_TYPE',
blank=True,
choices=FORM_TYPE_CHOICES,
help_text='Name of the source filing form or schedule'
)
tran_id = fields.CharField(
verbose_name='transaction ID',
max_length=20,
db_column='TRAN_ID',
blank=True,
help_text='Permanent value unique to this item',
)
agent_naml = fields.CharField(
max_length=200,
db_column='AGENT_NAML',
blank=True,
help_text="Agent or independent contractor's last name"
)
agent_namf = fields.CharField(
max_length=45,
db_column='AGENT_NAMF',
blank=True,
help_text="Agent or independent contractor's first name"
)
agent_namt = fields.CharField(
max_length=200,
db_column='AGENT_NAMT',
blank=True,
help_text="Agent or independent contractor's title or prefix"
)
agent_nams = fields.CharField(
max_length=10,
db_column='AGENT_NAMS',
blank=True,
help_text="Agent or independent contractor's suffix"
)
payee_naml = fields.CharField(
max_length=200,
db_column='PAYEE_NAML',
blank=True,
help_text="Payee's business name or last name if the payee is an \
individual"
)
payee_namf = fields.CharField(
max_length=45,
db_column='PAYEE_NAMF',
blank=True,
help_text="Payee's first name if the payee is an individual"
)
payee_namt = fields.CharField(
max_length=10,
db_column='PAYEE_NAMT',
blank=True,
help_text="Payee's title or prefix if the payee is an individual"
)
payee_nams = fields.CharField(
max_length=10,
db_column='PAYEE_NAMS',
blank=True,
help_text="Payee's suffix if the payee is an individual"
)
payee_city = fields.CharField(
max_length=30,
db_column='PAYEE_CITY',
blank=True,
help_text="Payee's city address"
)
payee_st = fields.CharField(
max_length=2,
db_column='PAYEE_ST',
blank=True,
help_text="Payee state address"
)
payee_zip4 = fields.CharField(
max_length=10,
db_column='PAYEE_ZIP4',
blank=True,
help_text="Payee ZIP Code"
)
amount = fields.DecimalField(
max_digits=16,
decimal_places=2,
db_column='AMOUNT',
help_text="Amount (Sched F401B, 401B-1, 401C)"
)
aggregate = fields.DecimalField(
max_digits=16,
decimal_places=2,
db_column='AGGREGATE',
help_text="Aggregate year-to-date amount (Sched 401C)"
)
expn_dscr = fields.CharField(
max_length=90,
db_column='EXPN_DSCR',
blank=True,
help_text="Purpose of expense and/or description/explanation"
)
cand_naml = fields.CharField(
max_length=200,
db_column='CAND_NAML',
blank=True,
help_text="Candidate/officeholder last name"
)
cand_namf = fields.CharField(
max_length=45,
db_column='CAND_NAMF',
blank=True,
help_text="Candidate/officeholder first name"
)
cand_namt = fields.CharField(
max_length=10,
db_column='CAND_NAMT',
blank=True,
help_text="Candidate/officeholder title or prefix"
)
cand_nams = fields.CharField(
max_length=10,
db_column='CAND_NAMS',
blank=True,
help_text="Candidate/officeholder suffix"
)
office_cd = fields.CharField(
max_length=3,
db_column='OFFICE_CD',
blank=True,
verbose_name="Office code",
help_text="Code that identifies the office being sought"
)
offic_dscr = fields.CharField(
max_length=40,
db_column='OFFIC_DSCR',
blank=True,
help_text="Office sought description"
)
juris_cd = fields.CharField(
max_length=3,
db_column='JURIS_CD',
blank=True,
help_text="Office jurisdiction code"
)
juris_dscr = fields.CharField(
max_length=40,
db_column='JURIS_DSCR',
blank=True,
help_text="Office jurisdiction description"
)
dist_no = fields.CharField(
max_length=3,
db_column='DIST_NO',
blank=True,
help_text="District number for the office being sought. Populated \
for Senate, Assembly, or Board of Equalization races."
)
off_s_h_cd = fields.CharField(
max_length=1,
db_column='OFF_S_H_CD',
blank=True,
help_text="Office sought/held code"
)
bal_name = fields.CharField(
max_length=200,
db_column='BAL_NAME',
blank=True,
help_text="Ballot measure name"
)
bal_num = fields.CharField(
max_length=7,
db_column='BAL_NUM',
blank=True,
help_text="Ballot measure number or letter"
)
bal_juris = fields.CharField(
max_length=40,
db_column='BAL_JURIS',
blank=True,
help_text="Ballot measure jurisdiction"
)
sup_opp_cd = fields.CharField(
max_length=1,
db_column='SUP_OPP_CD',
blank=True,
help_text='Support/oppose code. Legal values are "S" for support \
or "O" for oppose. Used on Form 401.'
)
memo_code = fields.CharField(
max_length=1,
db_column='MEMO_CODE',
blank=True,
help_text="Memo amount flag"
)
memo_refno = fields.CharField(
max_length=20,
db_column='MEMO_REFNO',
blank=True,
help_text="Reference to text contained in the TEXT record"
)
bakref_tid = fields.CharField(
max_length=20,
db_column='BAKREF_TID',
blank=True,
help_text="Back reference to transaction identifier of parent record"
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'S401_CD'
verbose_name = 'S401_CD'
verbose_name_plural = 'S401_CD'
def __str__(self):
return str(self.filing_id)
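Several schedules here use `bakref_tid` as a back reference to the `tran_id` of a parent record, forming a parent/child link between rows (for example, memo entries attached to a payment). A standalone sketch of grouping children under their parents (the transaction IDs below are invented for illustration):

```python
# Hypothetical records illustrating TRAN_ID / BAKREF_TID linking:
# a non-empty bakref_tid names the tran_id of a parent record.
records = [
    {"tran_id": "A1", "bakref_tid": ""},
    {"tran_id": "A1-M1", "bakref_tid": "A1"},  # child/memo of A1
    {"tran_id": "B2", "bakref_tid": ""},
]

def children_by_parent(records):
    """Group child tran_ids under the parent tran_id they reference."""
    tree = {}
    for rec in records:
        parent = rec["bakref_tid"]
        if parent:
            tree.setdefault(parent, []).append(rec["tran_id"])
    return tree

print(children_by_parent(records))  # -> {'A1': ['A1-M1']}
```

Note that TRAN_ID is only "permanent value unique to this item" within a filing, so real joins should also match on FILING_ID and AMEND_ID.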
@python_2_unicode_compatible
class ExpnCd(CalAccessBaseModel):
"""
Campaign expenditures from a variety of forms
"""
agent_namf = fields.CharField(
max_length=45,
db_column='AGENT_NAMF',
blank=True,
help_text="Agent of Ind. Contractor's First name"
)
agent_naml = fields.CharField(
max_length=200,
db_column='AGENT_NAML',
blank=True,
help_text="Agent of Ind. Contractor's Last name (Sched G)"
)
agent_nams = fields.CharField(
max_length=10,
db_column='AGENT_NAMS',
blank=True,
help_text="Agent of Ind. Contractor's Suffix"
)
agent_namt = fields.CharField(
max_length=10,
db_column='AGENT_NAMT',
blank=True,
help_text="Agent of Ind. Contractor's Prefix or Title"
)
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 amendments.",
verbose_name="amendment ID"
)
amount = fields.DecimalField(
decimal_places=2,
max_digits=14,
db_column='AMOUNT',
help_text="Amount of Payment"
)
bakref_tid = fields.CharField(
max_length=20,
db_column='BAKREF_TID',
blank=True,
help_text="Back Reference to a Tran_ID of a 'parent' record"
)
bal_juris = fields.CharField(
max_length=40,
db_column='BAL_JURIS',
blank=True,
help_text="Jurisdiction"
)
bal_name = fields.CharField(
max_length=200,
db_column='BAL_NAME',
blank=True,
help_text="Ballot Measure Name"
)
bal_num = fields.CharField(
max_length=7,
db_column='BAL_NUM',
blank=True,
help_text="Ballot Number or Letter"
)
cand_namf = fields.CharField(
max_length=45,
db_column='CAND_NAMF',
blank=True,
help_text="Candidate's First name"
)
cand_naml = fields.CharField(
max_length=200,
db_column='CAND_NAML',
blank=True,
help_text="Candidate's Last name"
)
cand_nams = fields.CharField(
max_length=10,
db_column='CAND_NAMS',
blank=True,
help_text="Candidate's Suffix"
)
cand_namt = fields.CharField(
max_length=10,
db_column='CAND_NAMT',
blank=True,
help_text="Candidate's Prefix or Title"
)
cmte_id = fields.CharField(
max_length=9,
db_column='CMTE_ID',
blank=True,
help_text="Committee ID (If [COM|RCP] & no ID#, Treas info Req.)"
)
cum_oth = fields.DecimalField(
decimal_places=2,
null=True,
max_digits=14,
db_column='CUM_OTH',
blank=True,
help_text="Cumulative / 'Other' (No Cumulative on Sched E & G)"
)
cum_ytd = fields.DecimalField(
decimal_places=2,
null=True,
max_digits=14,
db_column='CUM_YTD',
blank=True,
help_text="Cumulative / Year-to-date amount \
(No Cumulative on Sched E & G)"
)
dist_no = fields.CharField(
max_length=3,
db_column='DIST_NO',
blank=True,
help_text="Office District Number (Req. if Juris_Cd=[SEN|ASM|BOE]"
)
ENTITY_CODE_CHOICES = (
        # Defined here:
        # http://www.documentcloud.org/documents/1308003-cal-access-cal-format.html#document/p9
('', 'Unknown'),
('0', '0 (Unknown)'),
('COM', 'Committee'),
('RCP', 'Recipient Committee'),
('IND', 'Person (spending > $5,000)'),
('OTH', 'Other'),
('PTY', 'Political party'),
('SCC', 'Small contributor committee'),
('BNM', 'Ballot measure\'s name/title'),
('CAO', 'Candidate/officeholder'),
('OFF', 'Officer'),
('PTH', 'PTH (Unknown)'),
('RFD', 'RFD (Unknown)'),
('MBR', 'MBR (Unknown)'),
)
entity_cd = fields.CharField(
choices=ENTITY_CODE_CHOICES,
max_length=3,
db_column='ENTITY_CD',
blank=True,
verbose_name='entity code',
)
expn_chkno = fields.CharField(
max_length=20,
db_column='EXPN_CHKNO',
blank=True,
help_text="Check Number (Optional)"
)
expn_code = fields.CharField(
max_length=3,
db_column='EXPN_CODE',
blank=True,
help_text="Expense Code - Values: (Refer to list in Overview) \
            Note: CTB & IND need explanation & listing on Sched D. TRC & TRS require \
explanation."
)
expn_date = fields.DateField(
null=True,
db_column='EXPN_DATE',
blank=True,
help_text="Date of Expenditure (Note: Date not on Sched E & G)"
)
expn_dscr = fields.CharField(
max_length=400,
db_column='EXPN_DSCR',
blank=True,
help_text="Purpose of Expense and/or Description/explanation"
)
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identificiation number"
)
FORM_TYPE_CHOICES = (
('D', 'Form 460 (Recipient committee campaign statement): \
Schedule D, summary of expenditure supporting/opposing other candidates, \
measures and committees'),
('E', 'Form 460 (Recipient committee campaign statement): \
Schedule E, payments made'),
('G', 'Form 460 (Recipient committee campaign statement): \
Schedule G, payments made by agent of independent contractor'),
('F450P5', 'Form 450 (Recipient Committee Campaign Statement \
Short Form): Part 5, payments made'),
('F461P5', 'Form 461 (Independent expenditure and major donor \
committee campaign statement): Part 5, contributions and expenditures made'),
('F465P3', 'Form 465 (Supplemental independent expenditure \
report): Part 3, independent expenditures made'),
('F900', 'Form 900 (Public Employee\'s Retirement Board Candidate \
Campaign Statement), Schedule B, expenditures made'),
)
form_type = fields.CharField(
choices=FORM_TYPE_CHOICES,
max_length=6,
db_column='FORM_TYPE',
help_text='Name of the source filing form or schedule'
)
g_from_e_f = fields.CharField(
max_length=1,
db_column='G_FROM_E_F',
blank=True,
help_text="Back Reference from Sched G to Sched 'E' or 'F'?"
)
juris_cd = fields.CharField(
max_length=3,
db_column='JURIS_CD',
blank=True,
help_text="Office Jurisdiction Code Values: STW=Statewide; \
SEN=Senate District; ASM=Assembly District; \
BOE=Board of Equalization District; \
CIT=City; CTY=County; LOC=Local; OTH=Other"
)
juris_dscr = fields.CharField(
max_length=40,
db_column='JURIS_DSCR',
blank=True,
help_text="Office Jurisdiction Description \
            (Req. if Juris_Cd=[CIT|CTY|LOC|OTH])"
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Line item number of this record",
db_index=True,
)
memo_code = fields.CharField(
max_length=1,
db_column='MEMO_CODE',
blank=True,
help_text="Memo Amount? (Date/Amount are informational only)"
)
memo_refno = fields.CharField(
max_length=20,
db_column='MEMO_REFNO',
blank=True,
help_text="Reference to text contained in a TEXT record."
)
OFF_S_H_CD_CHOICES = (
('H', 'Office Held'),
('S', 'Office Sought'),
('A', 'A - Unknown'),
('8', '8 - Unknown'),
('O', 'O - Unknown'),
)
off_s_h_cd = fields.CharField(
choices=OFF_S_H_CD_CHOICES,
max_length=1,
db_column='OFF_S_H_CD',
blank=True,
help_text="Office Sought/Held Code: H=Held; S=Sought"
)
offic_dscr = fields.CharField(
max_length=40,
db_column='OFFIC_DSCR',
blank=True,
help_text="Office Sought Description (Req. if Office_Cd=OTH)"
)
office_cd = fields.CharField(
max_length=3,
db_column='OFFICE_CD',
blank=True,
help_text="Office Sought (See table of code in Overview)"
)
# payee_adr1 = fields.CharField(
# max_length=55,
# db_column='PAYEE_ADR1',
# blank=True,
# help_text="Address of Payee"
# )
# payee_adr2 = fields.CharField(
# max_length=55,
# db_column='PAYEE_ADR2',
# blank=True,
# help_text="Optional 2nd line of Address"
# )
payee_city = fields.CharField(
max_length=30,
db_column='PAYEE_CITY',
blank=True,
help_text="Payee City"
)
payee_namf = fields.CharField(
max_length=45,
db_column='PAYEE_NAMF',
blank=True,
help_text="Payee's First name"
)
payee_naml = fields.CharField(
max_length=200,
db_column='PAYEE_NAML',
blank=True,
help_text="Payee's Last name"
)
payee_nams = fields.CharField(
max_length=10,
db_column='PAYEE_NAMS',
blank=True,
help_text="Payee's Suffix"
)
payee_namt = fields.CharField(
max_length=10,
db_column='PAYEE_NAMT',
blank=True,
help_text="Payee's Prefix or Title"
)
payee_st = fields.CharField(
max_length=2,
db_column='PAYEE_ST',
blank=True,
help_text="State code"
)
payee_zip4 = fields.CharField(
max_length=10,
db_column='PAYEE_ZIP4',
blank=True,
help_text="Zip+4"
)
REC_TYPE_CHOICES = (
("EXPN", "EXPN"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
sup_opp_cd = fields.CharField(
max_length=1,
db_column='SUP_OPP_CD',
blank=True,
help_text="Support/Oppose? Values: S; O (F450, F461)"
)
tran_id = fields.CharField(
verbose_name='transaction ID',
max_length=20,
db_column='TRAN_ID',
blank=True,
help_text='Permanent value unique to this item',
)
# tres_adr1 = fields.CharField(
# max_length=55,
# db_column='TRES_ADR1',
# blank=True,
# help_text="Treasurer Street 1(Req if [COM|RCP] & no ID#)"
# )
# tres_adr2 = fields.CharField(
# max_length=55,
# db_column='TRES_ADR2',
# blank=True,
# help_text="Treasurer Street 2"
# )
tres_city = fields.CharField(
max_length=30,
db_column='TRES_CITY',
blank=True,
help_text="Treasurer City"
)
tres_namf = fields.CharField(
max_length=45,
db_column='TRES_NAMF',
blank=True,
help_text="Treasurer's First name (Req if [COM|RCP] & no ID#)"
)
tres_naml = fields.CharField(
max_length=200,
db_column='TRES_NAML',
blank=True,
help_text="Treasurer's Last name (Req if [COM|RCP] & no ID#)"
)
tres_nams = fields.CharField(
max_length=10,
db_column='TRES_NAMS',
blank=True,
help_text="Treasurer's Suffix"
)
tres_namt = fields.CharField(
max_length=10,
db_column='TRES_NAMT',
blank=True,
help_text="Treasurer's Prefix or Title"
)
tres_st = fields.CharField(
max_length=2,
db_column='TRES_ST',
blank=True,
help_text="Treasurer State"
)
tres_zip4 = fields.CharField(
max_length=10,
db_column='TRES_ZIP4',
blank=True,
help_text="Treasurer ZIP+4"
)
xref_match = fields.CharField(
max_length=1,
db_column='XREF_MATCH',
blank=True,
help_text="X = Related item on other Sched has same Tran_ID"
)
xref_schnm = fields.CharField(
max_length=2,
db_column='XREF_SCHNM',
blank=True,
help_text="Related item is included on Sched 'C' or 'H2'"
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'EXPN_CD'
verbose_name = 'EXPN_CD'
verbose_name_plural = 'EXPN_CD'
def __str__(self):
return str(self.filing_id)
@python_2_unicode_compatible
class F495P2Cd(CalAccessBaseModel):
"""
F495 Supplemental Preelection Campaign Statement
It is an attachment to the forms below:
F450 Recipient Committee Campaign Statement Short Form
F460 Recipient Committee Campaign Statement
Form 495 is for use by a recipient committee that
makes contributions totaling $10,000 or more in
connection with an election for which the committee
is not required to file regular preelection reports.
Form 495 is filed as an attachment to a campaign
disclosure statement (Form 450 or 460). On the
Form 450 or 460, the committee will report all
contributions received and expenditures made since
its last report.
"""
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identification number"
)
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 amendments.",
verbose_name="amendment ID"
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Line item number of this record",
db_index=True,
)
REC_TYPE_CHOICES = (
('F495', 'F495'),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
FORM_TYPE_CHOICES = (
('F450', 'Form 450 (Recipient committee campaign statement, \
short form)'),
('F460', 'Form 460 (Recipient committee campaign statement)'),
)
form_type = fields.CharField(
db_column='FORM_TYPE',
max_length=4,
choices=FORM_TYPE_CHOICES,
help_text='Name of the source filing form or schedule'
)
elect_date = fields.DateField(
db_column='ELECT_DATE',
blank=True,
null=True,
help_text="Date of the General Election. This date will be the same \
as on the filing's cover (CVR) record."
)
electjuris = fields.CharField(
db_column='ELECTJURIS',
max_length=40,
help_text="Jurisdiction of the election"
)
contribamt = fields.FloatField(
db_column='CONTRIBAMT',
help_text="Contribution amount (For the period of 6 months prior to \
17 days before the election)"
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'F495P2_CD'
verbose_name = 'F495P2_CD'
verbose_name_plural = 'F495P2_CD'
def __str__(self):
return str(self.filing_id)
@python_2_unicode_compatible
class DebtCd(CalAccessBaseModel):
"""
Form 460 (Recipient Committee Campaign Statement)
Schedule (F) Accrued Expenses (Unpaid Bills) records
"""
UNIQUE_KEY = (
"FILING_ID",
"AMEND_ID",
"LINE_ITEM",
"REC_TYPE",
"FORM_TYPE"
)
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 amendments.",
verbose_name="amendment ID"
)
amt_incur = fields.DecimalField(
decimal_places=2,
max_digits=14,
db_column='AMT_INCUR',
help_text='Amount incurred this period',
)
amt_paid = fields.DecimalField(
decimal_places=2,
max_digits=14,
db_column='AMT_PAID',
help_text='Amount paid this period.'
)
bakref_tid = fields.CharField(
max_length=20,
db_column='BAKREF_TID',
blank=True,
help_text='Back reference to a transaction identifier \
of a parent record.'
)
beg_bal = fields.DecimalField(
decimal_places=2,
max_digits=14,
db_column='BEG_BAL',
help_text='Outstanding balance at beginning of period',
)
cmte_id = fields.CharField(
max_length=9,
db_column='CMTE_ID',
blank=True,
help_text='Committee identification number',
)
end_bal = fields.DecimalField(
decimal_places=2,
max_digits=14,
db_column='END_BAL',
help_text='Outstanding balance at close of this period',
)
ENTITY_CODE_CHOICES = (
# Defined here:
# http://www.documentcloud.org/documents/1308003-cal-access-cal-format.html#document/p9
('', 'Unknown'),
('BNM', 'Ballot measure\'s name/title'),
('COM', 'Committee'),
('IND', 'Person (spending > $5,000)'),
('OTH', 'Other'),
('PTY', 'Political party'),
('RCP', 'Recipient Committee'),
('SCC', 'Small contributor committee'),
)
entity_cd = fields.CharField(
max_length=3,
db_column='ENTITY_CD',
blank=True,
verbose_name='entity code',
choices=ENTITY_CODE_CHOICES,
help_text='Entity code of the payee',
)
expn_code = fields.CharField(
max_length=3,
db_column='EXPN_CODE',
blank=True,
help_text='Expense code',
)
expn_dscr = fields.CharField(
max_length=400,
db_column='EXPN_DSCR',
blank=True,
help_text='Purpose of expense and/or description/explanation',
)
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identification number of the parent filing",
)
FORM_TYPE_CHOICES = (
('F', 'Form 460 (Recipient committee campaign statement): \
Schedule F, accrued expenses (unpaid bills)'),
)
form_type = fields.CharField(
max_length=1,
db_column='FORM_TYPE',
choices=FORM_TYPE_CHOICES,
help_text='Schedule Name/ID: (F - Sched F / Accrued Expenses)'
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Record line item number",
db_index=True,
)
memo_code = fields.CharField(
max_length=1, db_column='MEMO_CODE', blank=True,
help_text='Memo amount flag',
)
memo_refno = fields.CharField(
max_length=20,
db_column='MEMO_REFNO',
blank=True,
help_text='Reference to text contained in a TEXT record.'
)
# payee_adr1 = fields.CharField(
# max_length=55, db_column='PAYEE_ADR1', blank=True
# )
# payee_adr2 = fields.CharField(
# max_length=55, db_column='PAYEE_ADR2', blank=True
# )
payee_city = fields.CharField(
max_length=30,
db_column='PAYEE_CITY',
blank=True,
help_text='First line of the payee\'s street address',
)
payee_namf = fields.CharField(
max_length=45,
db_column='PAYEE_NAMF',
blank=True,
help_text='Payee\'s first name if the payee is an individual',
)
payee_naml = fields.CharField(
max_length=200,
db_column='PAYEE_NAML',
help_text="Payee's business name or last name if the payee is an \
individual."
)
payee_nams = fields.CharField(
max_length=10,
db_column='PAYEE_NAMS',
blank=True,
help_text='Payee\'s name suffix if the payee is an individual',
)
payee_namt = fields.CharField(
max_length=100,
db_column='PAYEE_NAMT',
blank=True,
help_text='Payee\'s prefix or title if the payee is an individual',
)
payee_st = fields.CharField(
max_length=2,
db_column='PAYEE_ST',
blank=True,
help_text='Payee\'s state',
)
payee_zip4 = fields.CharField(
max_length=10,
db_column='PAYEE_ZIP4',
blank=True,
help_text='Payee\'s ZIP Code',
)
REC_TYPE_CHOICES = (
("DEBT", "DEBT"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
help_text='Record type value: DEBT',
)
tran_id = fields.CharField(
verbose_name='transaction ID',
max_length=20,
db_column='TRAN_ID',
blank=True,
help_text='Transaction identifier - permanent value unique to \
this item',
)
# tres_adr1 = fields.CharField(
# max_length=55, db_column='TRES_ADR1', blank=True
# )
# tres_adr2 = fields.CharField(
# max_length=55, db_column='TRES_ADR2', blank=True
# )
tres_city = fields.CharField(
max_length=30,
db_column='TRES_CITY',
blank=True,
help_text='City portion of the treasurer or responsible \
officer\'s street address',
)
tres_namf = fields.CharField(
max_length=45,
db_column='TRES_NAMF',
blank=True,
help_text='Treasurer or responsible officer\'s first name'
)
tres_naml = fields.CharField(
max_length=200,
db_column='TRES_NAML',
blank=True,
help_text='Treasurer or responsible officer\'s last name'
)
tres_nams = fields.CharField(
max_length=10,
db_column='TRES_NAMS',
blank=True,
help_text='Treasurer or responsible officer\'s suffix',
)
tres_namt = fields.CharField(
max_length=100,
db_column='TRES_NAMT',
blank=True,
help_text='Treasurer or responsible officer\'s prefix or title',
)
tres_st = fields.CharField(
max_length=2,
db_column='TRES_ST',
blank=True,
help_text='State portion of the treasurer or responsible \
officer\'s address',
)
tres_zip4 = fields.CharField(
max_length=10,
db_column='TRES_ZIP4',
blank=True,
help_text='ZIP Code portion of the treasurer or responsible \
officer\'s address',
)
xref_match = fields.CharField(
max_length=1,
db_column='XREF_MATCH',
blank=True,
help_text='Related item on other schedule has same \
transaction identifier. "X" indicates this condition is true.'
)
xref_schnm = fields.CharField(
max_length=2, db_column='XREF_SCHNM', blank=True,
help_text='Related record is included on Schedule C.'
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'DEBT_CD'
verbose_name = 'DEBT_CD'
verbose_name_plural = 'DEBT_CD'
def __str__(self):
return str(self.filing_id)
@python_2_unicode_compatible
class S496Cd(CalAccessBaseModel):
"""
Form 496 Late Independent Expenditures
"""
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identification number"
)
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 amendments.",
verbose_name="amendment ID"
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Line item number of this record",
db_index=True,
)
REC_TYPE_CHOICES = (
('S496', 'S496'),
)
rec_type = fields.CharField(
verbose_name='record type',
max_length=4,
db_column='REC_TYPE',
db_index=True,
choices=REC_TYPE_CHOICES,
)
FORM_TYPE_CHOICES = (
('F496', 'F496 (Late independent expenditure report)'),
)
form_type = fields.CharField(
max_length=4, db_column='FORM_TYPE', blank=True,
choices=FORM_TYPE_CHOICES,
help_text='Name of the source filing form or schedule'
)
tran_id = fields.CharField(
verbose_name='transaction ID',
max_length=20,
db_column='TRAN_ID',
blank=True,
help_text='Permanent value unique to this item',
)
amount = fields.DecimalField(
max_digits=16,
decimal_places=2,
db_column='AMOUNT',
help_text="Expenditure amount"
)
exp_date = fields.DateField(
db_column='EXP_DATE',
null=True,
help_text="Expenditure dates"
)
expn_dscr = fields.CharField(
max_length=90,
db_column='EXPN_DSCR',
blank=True,
help_text="Purpose of expense and/or description/explanation"
)
memo_code = fields.CharField(
max_length=1,
db_column='MEMO_CODE',
blank=True,
help_text="Memo amount flag"
)
memo_refno = fields.CharField(
max_length=20,
db_column='MEMO_REFNO',
blank=True,
help_text="Reference to text contained in a TEXT record"
)
date_thru = fields.DateField(
db_column='DATE_THRU',
null=True,
help_text="End of date range for items paid"
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'S496_CD'
verbose_name = 'S496_CD'
verbose_name_plural = 'S496_CD'
def __str__(self):
return "{} Filing {}, Amendment {}".format(
self.form_type,
self.filing_id,
self.amend_id
)
@python_2_unicode_compatible
class SpltCd(CalAccessBaseModel):
"""
Split records
"""
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 amendments.",
verbose_name="amendment ID"
)
elec_amount = fields.DecimalField(
max_digits=16,
decimal_places=2,
db_column='ELEC_AMOUNT',
help_text="This field is undocumented"
)
elec_code = fields.CharField(
max_length=2,
db_column='ELEC_CODE',
blank=True,
help_text='This field is undocumented',
)
elec_date = fields.DateField(
db_column='ELEC_DATE',
null=True,
help_text="This field is undocumented"
)
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identification number"
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Line item number of this record",
db_index=True,
)
PFORM_TYPE_CHOICES = (
('A', ''),
('B1', ''),
('B2', ''),
('C', ''),
('D', ''),
('F450P5', ''),
('H', ''),
)
pform_type = fields.CharField(
max_length=7,
db_column='PFORM_TYPE',
db_index=True,
choices=PFORM_TYPE_CHOICES,
help_text='This field is undocumented',
)
ptran_id = fields.CharField(
verbose_name='transaction ID',
max_length=32,
db_column='PTRAN_ID',
blank=True,
help_text='Permanent value unique to this item',
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'SPLT_CD'
verbose_name = 'SPLT_CD'
verbose_name_plural = 'SPLT_CD'
def __str__(self):
return str(self.filing_id)
@python_2_unicode_compatible
class S497Cd(CalAccessBaseModel):
"""
Form 497: Late Contributions Received/Made
"""
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identification number"
)
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 amendments.",
verbose_name="amendment ID"
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Line item number of this record",
db_index=True,
)
REC_TYPE_CHOICES = (
("S497", "S497"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
FORM_TYPE_CHOICES = (
('F497P1', 'Form 497 (Late contribution report): \
Part 1, late contributions received'),
('F497P2', 'Form 497 (Late contribution report): \
Part 2, late contributions made')
)
form_type = fields.CharField(
max_length=6,
db_column='FORM_TYPE',
choices=FORM_TYPE_CHOICES,
help_text='Name of the source filing form or schedule'
)
tran_id = fields.CharField(
verbose_name='transaction ID',
max_length=20,
db_column='TRAN_ID',
blank=True,
help_text='Permanent value unique to this item',
)
ENTITY_CODE_CHOICES = (
# Defined here:
# http://www.documentcloud.org/documents/1308003-cal-access-cal-format.html#document/p9
('', 'Unknown'),
('0', '0 (Unknown)'),
('BNM', 'Ballot measure\'s name/title'),
('CAO', 'Candidate/officeholder'),
('CTL', 'Controlled committee'),
('COM', 'Committee'),
('IND', 'Person (spending > $5,000)'),
('OFF', 'Officer'),
('OTH', 'Other'),
('PTY', 'Political party'),
('RCP', 'Recipient Committee'),
('SCC', 'Small contributor committee'),
)
entity_cd = fields.CharField(
max_length=3,
db_column='ENTITY_CD',
blank=True,
verbose_name='entity code',
choices=ENTITY_CODE_CHOICES,
)
enty_naml = fields.CharField(
max_length=200,
db_column='ENTY_NAML',
blank=True,
help_text="Entity's last name or business name"
)
enty_namf = fields.CharField(
max_length=45,
db_column='ENTY_NAMF',
blank=True,
help_text="Entity's first name"
)
enty_namt = fields.CharField(
max_length=10,
db_column='ENTY_NAMT',
blank=True,
help_text="Entity's title or prefix"
)
enty_nams = fields.CharField(
max_length=10,
db_column='ENTY_NAMS',
blank=True,
help_text="Entity's suffix"
)
enty_city = fields.CharField(
max_length=30,
db_column='ENTY_CITY',
blank=True,
help_text="Filing committee's city address"
)
enty_st = fields.CharField(
max_length=2,
db_column='ENTY_ST',
blank=True,
help_text="Filing committee's state address"
)
enty_zip4 = fields.CharField(
max_length=10,
db_column='ENTY_ZIP4',
blank=True,
help_text="Filing committee's ZIP Code"
)
ctrib_emp = fields.CharField(
max_length=200,
db_column='CTRIB_EMP',
blank=True,
help_text="Employer"
)
ctrib_occ = fields.CharField(
max_length=60,
db_column='CTRIB_OCC',
blank=True,
help_text="Occupation"
)
ctrib_self = fields.CharField(
max_length=1,
db_column='CTRIB_SELF',
blank=True,
help_text='Self employed checkbox. "X" indicates the contributor is \
self-employed.'
)
elec_date = fields.DateField(
db_column='ELEC_DATE',
null=True,
help_text="Date of election"
)
ctrib_date = fields.DateField(
db_column='CTRIB_DATE',
null=True,
help_text="Date item received/made"
)
date_thru = fields.DateField(
db_column='DATE_THRU',
null=True,
help_text="End of date range for items received"
)
amount = fields.DecimalField(
max_digits=16,
decimal_places=2,
db_column='AMOUNT',
help_text="Amount received/made"
)
cmte_id = fields.CharField(
max_length=9,
db_column='CMTE_ID',
blank=True,
verbose_name="Committee ID",
help_text="Committee identification number"
)
cand_naml = fields.CharField(
max_length=200,
db_column='CAND_NAML',
blank=True,
help_text="Candidate/officeholder's last name"
)
cand_namf = fields.CharField(
max_length=45,
db_column='CAND_NAMF',
blank=True,
help_text="Candidate/officeholder's first name"
)
cand_namt = fields.CharField(
max_length=10,
db_column='CAND_NAMT',
blank=True,
help_text="Candidate/officeholder's title or prefix"
)
cand_nams = fields.CharField(
max_length=10,
db_column='CAND_NAMS',
blank=True,
help_text="Candidate/officeholder's suffix"
)
office_cd = fields.CharField(
max_length=3,
db_column='OFFICE_CD',
blank=True,
verbose_name="Office code",
help_text="Office sought code"
)
offic_dscr = fields.CharField(
max_length=40,
db_column='OFFIC_DSCR',
blank=True,
help_text="Office sought description"
)
juris_cd = fields.CharField(
max_length=3,
db_column='JURIS_CD',
blank=True,
verbose_name="Jurisdiction code"
)
juris_dscr = fields.CharField(
max_length=40,
db_column='JURIS_DSCR',
blank=True,
help_text="Office jurisdiction description"
)
dist_no = fields.CharField(
max_length=3,
db_column='DIST_NO',
blank=True,
help_text="District number for the office being sought. Populated \
for Senate, Assembly, or Board of Equalization races."
)
off_s_h_cd = fields.CharField(
max_length=1,
db_column='OFF_S_H_CD',
blank=True,
help_text='Office Sought/Held Code. Legal values are "S" for \
sought and "H" for held.'
)
bal_name = fields.CharField(
max_length=200,
db_column='BAL_NAME',
blank=True,
help_text="Ballot measure name"
)
bal_num = fields.CharField(
max_length=7,
db_column='BAL_NUM',
blank=True,
help_text="Ballot measure number"
)
bal_juris = fields.CharField(
max_length=40,
db_column='BAL_JURIS',
blank=True,
help_text="Ballot measure jurisdiction"
)
memo_code = fields.CharField(
max_length=1,
db_column='MEMO_CODE',
blank=True,
help_text="Memo amount flag"
)
memo_refno = fields.CharField(
max_length=20,
db_column='MEMO_REFNO',
blank=True,
help_text="Reference to text contained in TEXT code"
)
bal_id = fields.CharField(
max_length=9,
db_column='BAL_ID',
blank=True,
help_text="This field is undocumented"
)
cand_id = fields.CharField(
max_length=9,
db_column='CAND_ID',
blank=True,
help_text="This field is undocumented"
)
sup_off_cd = fields.CharField(
max_length=1,
db_column='SUP_OFF_CD',
blank=True,
help_text="This field is undocumented"
)
sup_opp_cd = fields.CharField(
max_length=1,
db_column='SUP_OPP_CD',
blank=True,
help_text="This field is undocumented"
)
def __str__(self):
return "{} Filing {}, Amendment {}".format(
self.get_form_type_display(),
self.filing_id,
self.amend_id
)
class Meta:
app_label = 'calaccess_raw'
db_table = 'S497_CD'
verbose_name = 'S497_CD'
verbose_name_plural = 'S497_CD'
@python_2_unicode_compatible
class F501502Cd(CalAccessBaseModel):
"""
Form 501: Candidate Intention Statement
Form 502: Campaign Bank Account Statement
"""
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identification number"
)
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 amendments.",
verbose_name="amendment ID"
)
REC_TYPE_CHOICES = (
("CVR", "CVR"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
FORM_TYPE_CHOICES = (
('F501', 'Form 501 (Candidate intention statement)'),
('F502', 'Form 502 (Campaign bank account statement)')
)
form_type = fields.CharField(
db_column='FORM_TYPE',
max_length=4,
choices=FORM_TYPE_CHOICES,
help_text='Name of the source filing form or schedule'
)
filer_id = fields.CharField(
verbose_name='filer ID',
db_column='FILER_ID',
max_length=9,
blank=True,
db_index=True,
help_text="Filer's unique identification number",
)
committee_id = fields.CharField(
db_column='COMMITTEE_ID',
max_length=8,
blank=True,
verbose_name="Committee ID",
help_text='Committee identification number'
)
entity_cd = fields.CharField(
db_column='ENTITY_CD',
blank=True,
max_length=3,
help_text='Entity code'
)
report_num = fields.IntegerField(
db_column='REPORT_NUM',
blank=True,
null=True,
help_text='Report Number; 000 Original; 001-999 Amended'
)
rpt_date = fields.DateTimeField(
db_column='RPT_DATE',
blank=True,
null=True,
help_text='Date this report was filed'
)
stmt_type = fields.IntegerField(
db_column='STMT_TYPE',
help_text="Type of statement"
)
from_date = fields.CharField(
db_column='FROM_DATE',
max_length=32,
blank=True,
help_text='Reporting period from date'
)
thru_date = fields.CharField(
db_column='THRU_DATE',
max_length=32,
blank=True,
help_text="Reporting period through date"
)
elect_date = fields.CharField(
db_column='ELECT_DATE',
max_length=32,
blank=True,
help_text='Date of election'
)
cand_naml = fields.CharField(
db_column='CAND_NAML',
max_length=81,
blank=True,
help_text="Candidate/officeholder last name"
)
cand_namf = fields.CharField(
db_column='CAND_NAMF',
max_length=25,
blank=True,
help_text="Candidate/officeholder first name"
)
can_namm = fields.CharField(
db_column='CAN_NAMM',
max_length=10,
blank=True,
help_text='Candidate/officeholder middle name'
)
cand_namt = fields.CharField(
db_column='CAND_NAMT',
max_length=7,
blank=True,
help_text="Candidate/officeholder title or prefix"
)
cand_nams = fields.CharField(
db_column='CAND_NAMS',
max_length=7,
blank=True,
help_text="Candidate/officeholder suffix"
)
moniker_pos = fields.CharField(
db_column='MONIKER_POS',
max_length=32,
blank=True,
help_text="Location of the candidate/officeholder's moniker"
)
moniker = fields.CharField(
db_column='MONIKER',
max_length=4,
blank=True,
help_text="Candidate/officeholder's moniker"
)
cand_city = fields.CharField(
db_column='CAND_CITY',
max_length=22,
blank=True,
help_text="Candidate/officeholder city"
)
cand_st = fields.CharField(
db_column='CAND_ST',
max_length=4,
blank=True,
help_text='Candidate/officeholder state'
)
cand_zip4 = fields.CharField(
db_column='CAND_ZIP4',
max_length=10,
blank=True,
help_text='Candidate/officeholder zip +4'
)
cand_phon = fields.CharField(
db_column='CAND_PHON',
max_length=14,
blank=True,
help_text='Candidate/officeholder phone number'
)
cand_fax = fields.CharField(
db_column='CAND_FAX',
max_length=14,
blank=True,
help_text="Candidate/officeholder fax"
)
cand_email = fields.CharField(
db_column='CAND_EMAIL',
max_length=37,
blank=True,
help_text='Candidate/officeholder email address'
)
fin_naml = fields.CharField(
db_column='FIN_NAML',
max_length=53,
blank=True,
help_text="Financial institution's business name"
)
fin_namf = fields.CharField(
db_column='FIN_NAMF',
max_length=32,
blank=True,
help_text="Unused. Financial institution's first name."
)
fin_namt = fields.CharField(
db_column='FIN_NAMT',
max_length=32,
blank=True,
help_text="Unused. Financial institution's title."
)
fin_nams = fields.CharField(
db_column='FIN_NAMS',
max_length=32,
blank=True,
help_text="Unused. Financial institution's suffix."
)
fin_city = fields.CharField(
db_column='FIN_CITY',
max_length=20,
blank=True,
help_text="Financial institution's city."
)
fin_st = fields.CharField(
db_column='FIN_ST',
max_length=4,
blank=True,
help_text="Financial institution's state."
)
fin_zip4 = fields.CharField(
db_column='FIN_ZIP4',
max_length=9,
blank=True,
help_text="Financial institution's zip code."
)
fin_phon = fields.CharField(
db_column='FIN_PHON',
max_length=14,
blank=True,
help_text="Financial institution's phone number."
)
fin_fax = fields.CharField(
db_column='FIN_FAX',
max_length=10,
blank=True,
help_text="Financial institution's FAX Number."
)
fin_email = fields.CharField(
db_column='FIN_EMAIL',
max_length=15,
blank=True,
help_text="Financial institution's e-mail address."
)
office_cd = fields.IntegerField(
db_column='OFFICE_CD',
help_text="Office sought code"
)
offic_dscr = fields.CharField(
db_column='OFFIC_DSCR',
max_length=50,
blank=True,
help_text="Office sought description"
)
agency_nam = fields.CharField(
db_column='AGENCY_NAM',
max_length=63,
blank=True,
help_text="Agency name"
)
juris_cd = fields.IntegerField(
db_column='JURIS_CD',
blank=True,
null=True,
help_text='Office jurisdiction code'
)
juris_dscr = fields.CharField(
db_column='JURIS_DSCR',
max_length=14,
blank=True,
help_text='Office jurisdiction description'
)
dist_no = fields.CharField(
db_column='DIST_NO',
max_length=4,
blank=True,
help_text='District number for the office being sought. \
Populated for Senate, Assembly, or Board of Equalization races.'
)
party = fields.CharField(
db_column='PARTY',
max_length=20,
blank=True,
help_text="Political party"
)
yr_of_elec = fields.IntegerField(
db_column='YR_OF_ELEC',
blank=True,
null=True,
help_text='Year of election'
)
elec_type = fields.IntegerField(
db_column='ELEC_TYPE',
blank=True,
null=True,
verbose_name="Election type"
)
execute_dt = fields.DateTimeField(
db_column='EXECUTE_DT',
blank=True,
null=True,
help_text='Execution date'
)
can_sig = fields.CharField(
db_column='CAN_SIG',
max_length=13,
blank=True,
help_text='Candidate signature'
)
account_no = fields.CharField(
db_column='ACCOUNT_NO',
max_length=22,
blank=True,
help_text='Account number'
)
acct_op_dt = fields.DateField(
db_column='ACCT_OP_DT',
blank=True,
null=True,
help_text='Account open date'
)
party_cd = fields.IntegerField(
db_column='PARTY_CD',
blank=True,
null=True,
help_text="Party code"
)
district_cd = fields.IntegerField(
db_column='DISTRICT_CD',
blank=True,
null=True,
help_text='District number for the office being sought. \
Populated for Senate, Assembly, or Board of Equalization races.'
)
accept_limit_yn = fields.IntegerField(
db_column='ACCEPT_LIMIT_YN',
blank=True,
null=True,
help_text='This field is undocumented'
)
did_exceed_dt = fields.DateField(
db_column='DID_EXCEED_DT',
blank=True,
null=True,
help_text='This field is undocumented'
)
cntrb_prsnl_fnds_dt = fields.DateField(
db_column='CNTRB_PRSNL_FNDS_DT',
blank=True,
null=True,
help_text="This field is undocumented"
)
def __str__(self):
return str(self.filing_id)
class Meta:
app_label = 'calaccess_raw'
db_table = 'F501_502_CD'
verbose_name = 'F501_502_CD'
verbose_name_plural = 'F501_502_CD'
@python_2_unicode_compatible
class S498Cd(CalAccessBaseModel):
"""
Form 498: Slate Mailer Late Payment Report
"""
UNIQUE_KEY = (
"FILING_ID",
"AMEND_ID",
"LINE_ITEM",
"REC_TYPE",
"FORM_TYPE",
)
filing_id = fields.IntegerField(
db_column='FILING_ID',
db_index=True,
verbose_name='filing ID',
help_text="Unique filing identification number"
)
amend_id = fields.IntegerField(
db_column='AMEND_ID',
db_index=True,
help_text="Amendment identification number. A number of 0 is the \
original filing and 1 to 999 amendments.",
verbose_name="amendment ID"
)
line_item = fields.IntegerField(
db_column='LINE_ITEM',
help_text="Line item number of this record",
db_index=True,
)
REC_TYPE_CHOICES = (
("S498", "S498"),
)
rec_type = fields.CharField(
verbose_name='record type',
db_column='REC_TYPE',
max_length=4,
db_index=True,
choices=REC_TYPE_CHOICES,
)
FORM_TYPE_CHOICES = (
('F498-A', 'Form 498 (Slate mailer late payment report): \
Part A: late payments attributed to'),
('F498-R', 'Form 498 (Slate mailer late payment report): \
Part R: late payments received from')
)
form_type = fields.CharField(
max_length=9,
db_column='FORM_TYPE',
blank=True,
choices=FORM_TYPE_CHOICES,
help_text='Name of the source filing form or schedule'
)
tran_id = fields.CharField(
verbose_name='transaction ID',
max_length=20,
db_column='TRAN_ID',
blank=True,
help_text='Permanent value unique to this item',
)
ENTITY_CODE_CHOICES = (
# Defined here:
# http://www.documentcloud.org/documents/1308003-cal-access-cal-format.html#document/p9
('', 'Unknown'),
('CAO', 'Candidate/officeholder'),
('COM', 'Committee'),
('IND', 'Person (spending > $5,000)'),
('OTH', 'Other'),
('RCP', 'Recipient Committee'),
)
entity_cd = fields.CharField(
max_length=3,
db_column='ENTITY_CD',
blank=True,
verbose_name='entity code',
choices=ENTITY_CODE_CHOICES,
)
cmte_id = fields.CharField(
max_length=9,
db_column='CMTE_ID',
blank=True,
verbose_name="Committee ID",
help_text="Committee identification number"
)
payor_naml = fields.CharField(
max_length=200,
db_column='PAYOR_NAML',
blank=True,
help_text="Payor's last name or business name"
)
payor_namf = fields.CharField(
max_length=45,
db_column='PAYOR_NAMF',
blank=True,
help_text="Payor's first name."
)
payor_namt = fields.CharField(
max_length=10,
db_column='PAYOR_NAMT',
blank=True,
help_text="Payor's prefix or title."
)
payor_nams = fields.CharField(
max_length=10,
db_column='PAYOR_NAMS',
blank=True,
help_text="Payor's suffix."
)
payor_city = fields.CharField(
max_length=30,
db_column='PAYOR_CITY',
blank=True,
help_text="Payor's city."
)
payor_st = fields.CharField(
max_length=2,
db_column='PAYOR_ST',
blank=True,
help_text="Payor's state."
)
payor_zip4 = fields.CharField(
max_length=10,
db_column='PAYOR_ZIP4',
blank=True,
help_text="Payor's zip code"
)
date_rcvd = fields.DateField(
db_column='DATE_RCVD',
null=True,
help_text="Date received"
)
amt_rcvd = fields.DecimalField(
max_digits=16,
decimal_places=2,
db_column='AMT_RCVD',
help_text="Amount received"
)
cand_naml = fields.CharField(
max_length=200,
db_column='CAND_NAML',
blank=True,
help_text="Candidate/officeholder last name"
)
cand_namf = fields.CharField(
max_length=45,
db_column='CAND_NAMF',
blank=True,
        help_text="Candidate/officeholder first name"
)
cand_namt = fields.CharField(
max_length=10,
db_column='CAND_NAMT',
blank=True,
        help_text="Candidate/officeholder title or prefix"
)
cand_nams = fields.CharField(
max_length=10,
db_column='CAND_NAMS',
blank=True,
        help_text="Candidate/officeholder suffix"
)
office_cd = fields.CharField(
max_length=3,
db_column='OFFICE_CD',
blank=True,
verbose_name='Office code',
help_text="Code that identifies the office being sought"
)
offic_dscr = fields.CharField(
max_length=40,
db_column='OFFIC_DSCR',
blank=True,
help_text="Description of office sought"
)
juris_cd = fields.CharField(
max_length=3,
db_column='JURIS_CD',
blank=True,
help_text="Office jurisdiction code"
)
juris_dscr = fields.CharField(
max_length=40,
db_column='JURIS_DSCR',
blank=True,
help_text="Office jurisdiction description"
)
dist_no = fields.CharField(
max_length=3,
db_column='DIST_NO',
blank=True,
help_text="District number for the office being sought. \
Populated for Senate, Assembly, or Board of Equalization races."
)
off_s_h_cd = fields.CharField(
max_length=1,
db_column='OFF_S_H_CD',
blank=True,
help_text='Office Sought/Held Code. Legal values are "S" for \
sought and "H" for held'
)
bal_name = fields.CharField(
max_length=200,
db_column='BAL_NAME',
blank=True,
help_text="Ballot measure name"
)
bal_num = fields.CharField(
max_length=7,
db_column='BAL_NUM',
blank=True,
help_text="Ballot measure number or letter."
)
bal_juris = fields.CharField(
max_length=40,
db_column='BAL_JURIS',
blank=True,
help_text="Jurisdiction of ballot measure"
)
sup_opp_cd = fields.CharField(
max_length=1,
db_column='SUP_OPP_CD',
blank=True,
help_text='Support/oppose code. Legal values are "S" for support \
or "O" for oppose.'
)
amt_attrib = fields.DecimalField(
max_digits=16,
decimal_places=2,
db_column='AMT_ATTRIB',
help_text="Amount attributed (only if Form_type = 'F498-A')"
)
memo_code = fields.CharField(
max_length=1,
db_column='MEMO_CODE',
blank=True,
        help_text="Memo amount flag"
)
memo_refno = fields.CharField(
max_length=20,
db_column='MEMO_REFNO',
blank=True,
help_text='Reference text contained in TEXT record'
)
employer = fields.CharField(
max_length=200,
db_column='EMPLOYER',
blank=True,
help_text="This field is undocumented"
)
occupation = fields.CharField(
max_length=60,
db_column='OCCUPATION',
blank=True,
help_text='This field is undocumented'
)
selfemp_cb = fields.CharField(
max_length=1,
db_column='SELFEMP_CB',
blank=True,
help_text='Self-employed checkbox'
)
def __str__(self):
return str(self.filing_id)
class Meta:
app_label = 'calaccess_raw'
db_table = 'S498_CD'
verbose_name = 'S498_CD'
verbose_name_plural = 'S498_CD' | unknown | codeparrot/codeparrot-clean | ||
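# ---------------------------------------------------------------------------
# Illustrative sketch only (not part of the model definitions above): Django
# resolves a stored choice code to its human-readable label via the generated
# get_<field>_display() method, which is equivalent to a plain dict lookup
# over the field's choices tuple. A minimal, self-contained demonstration
# using two of the entity codes defined above:
_EXAMPLE_ENTITY_CHOICES = (
    ('COM', 'Committee'),
    ('RCP', 'Recipient Committee'),
)
_example_entity_labels = dict(_EXAMPLE_ENTITY_CHOICES)
# _example_entity_labels['RCP'] -> 'Recipient Committee'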
- Feature Name: DateStyle/IntervalStyle Enabled by Default
- Status: in-progress
- Start Date: 2021-11-12
- Authors: Ebony Brown
- RFC PR: [#75084](https://github.com/cockroachdb/cockroach/pull/75084)
- Cockroach Issue: [#69352](https://github.com/cockroachdb/cockroach/issues/69352)
# Summary
This document describes the process of enabling the DateStyle/IntervalStyle
options by default. Today, date-to-string and interval-to-string casts are not
restricted while the DateStyle/IntervalStyle session variables are disabled,
which allows formatting inconsistencies to creep in once the variables are
enabled.
This is an example of how having IntervalStyle set can lead to a corrupted
computed column:
```sql
CREATE TABLE t (
it interval,
computed string AS ((it + interval '2 minutes')::string) STORED
);
INSERT INTO t VALUES ('12:34:56.123456');
SELECT * FROM t;
it | computed
--------------------+---------------------
PT12H34M56.123456S | PT12H36M56.123456S
SET intervalstyle_enabled = true;
SET intervalstyle = 'sql_standard';
INSERT INTO t VALUES ('12:34:56.1234');
SELECT * FROM t;
it | computed
-----------------+---------------------
12:34:56.123456 | PT12H36M56.123456S
12:34:56.1234 | 12:36:56.1234
```
This will be corrected by removing the session variables as well as rewriting
all instances of the violating cast in computed columns, index expressions and
partial indexes.
# Background
In v21.2 an [experimental feature](https://github.com/cockroachdb/cockroach/pull/67000)
was added to CockroachDB which allowed DateStyle and IntervalStyle to take on multiple
values. The change was experimental because it affects the volatility of casts
to and from the Date, Time, and Interval types. The casts were changed from
Immutable to Stable, meaning their results can change across SQL statements,
and so they should not be supported in computed columns.
Currently, users can cast date and interval types to strings within a computed
column, enable the DateStyle feature and change formats. This leads to
formatting inconsistencies since there is no way to reformat these strings.
# Design
## Migration
In v22.1, we will start a long-running migration that queries for all
TableDescriptors on the cluster. We will iterate through the descriptors,
looking at each column. For every computed column found, we will check the
computed expression's volatility. If we find an expression that isn't
immutable, we can assume it contains the violating cast and rewrite the
expression. We will walk through the expression and, using a visitor, type
assert the cast expressions into function expressions. We found that the
formatting issue can also affect indexes on expressions and partial indexes.
Indexes on expressions are accounted for while iterating through computed
columns; partial indexes, however, are not, so all partial indexes will be
iterated over after the computed columns. The rewriting will use the builtins
`parse_interval` and `to_char_with_style`, created during the original
Date/IntervalStyle implementation. Other date-to-string casts appear to be
blocked already, so we can focus on the few instances that are not. After the
casts are rewritten, the descriptors are batched together and saved.
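
As a hedged sketch of the rewrite (the exact output expression here is an
assumption; only the builtin names `to_char_with_style` and `parse_interval`
come from the original implementation referenced above), the computed column
from the earlier example might change from depending on the session style to
pinning a style explicitly:

```sql
-- Before: the cast's result depends on the session IntervalStyle (stable).
computed STRING AS ((it + '2 minutes'::INTERVAL)::STRING) STORED
-- After (sketch): the style is named explicitly, restoring immutability.
computed STRING AS (to_char_with_style(it + '2 minutes'::INTERVAL, 'postgres')) STORED
```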
# Alternatives Considered
We also considered a simpler approach, which involved keeping the session
variables for one more release. If v21.1 is active, the DateStyle/IntervalStyle
enabled variables would be ignored, since the options would be enabled by
default. Otherwise, we'd add a migration to the registry which checks the
virtual table and, if it is populated, blocks the upgrade. We'd issue an error
specifying that the violating casts are no longer supported. Customers could
then use the virtual table to determine what needs to change before the
migration can be finalized.
For CC, the SRE team would have to be notified as soon as possible about this
change. They’d then have to facilitate reaching out to the customers about
altering their data if they want to finalize the upgrade. We can then allow
customers to keep their date-string cast, if they enable DateStyle/IntervalStyle
we can warn them about the cast issue and leave the decision to them.
We also considered using a virtual table with a precondition when rewriting the
violating casts. The virtual table would contain all computed columns that
include a date/interval-type to string-type cast. It would be populated by
iterating through the table descriptors of all public tables and creating a
row for every computed column containing a cast with stable volatility. This
would require a full KV scan, which would be expensive. We found this step
wouldn't be necessary, since we can iterate through the descriptors during the
long-running migration instead.
################################################
### Battleships coded by TeCoEd ################
################################################
'''Text to Speech from http://www.fromtexttospeech.com/'''
import random
import time
import pygame
from pygame.locals import *
from sense_hat import SenseHat
pygame.init()
pygame.mixer.init()
pygame.display.set_mode((400, 400))
sense = SenseHat()
#sense.low_light = True '''enable to turn down brightness'''
global x #led position
global y
global score
x = 0
y = 0
score = 0
#################################################
######## Main Game Function #####################
#################################################
def main():
global x
global y
global score
'''Creates a hidden list of ships, ammo and sea'''
Sea2 = []
'''colours for ships and water'''
ship = [160, 200, 140]
water = [0, 0, 255]
hit = [255, 0, 0]
ammo = [255, 255, 0]
'''Creates a random mix of ships and water in a new hidden list'''
    for i in Sea:
        item = random.randrange(1, 6)
        if item == 1:
            Sea2.append(ship)
        elif item == 5:
            Sea2.append(ammo)
        else:
            Sea2.append(water)
'''Counts how many ships there are in the sea'''
'''Looks through Sea2 list and counts number of ships'''
number_of_ships = 0
for entry in Sea2:
if entry == [160, 200, 140]:
number_of_ships = number_of_ships + 1
print("Number of Ships to find", number_of_ships)
sense.show_message(str(number_of_ships), text_colour=[255, 0, 0])
'''Sets the number of torpedos to match the ships'''
torpedos = number_of_ships + 5
#######################################################
############ Code to control joystick #################
### controls the movement of the targetting system ####
#######################################################
while torpedos > 0:
for event in pygame.event.get():
if event.type == KEYDOWN:
sense.set_pixel(x, y, 0, 0, 0) # Black 0,0,0 means OFF
if event.key == K_DOWN and y < 7:
y = y + 1
elif event.key == K_UP and y > 0:
y = y - 1
elif event.key == K_RIGHT and x < 7:
x = x + 1
elif event.key == K_LEFT and x > 0:
x = x - 1
elif event.key == K_RETURN:
print (x)
print (y)
torpedos = torpedos - 1
print ("You have", torpedos, "torpedos")
                    '''convert the (x, y) grid coordinates into a 1D list index'''
                    your_position = (y*8)+x
'''check what you hit and update the list'''
if Sea2[your_position] == ship:
print ("HIT")
Sea[your_position] = hit
sense.set_pixels(Sea)
### sound ###
pygame.mixer.music.load("sounds/impact.mp3")
pygame.mixer.music.play()
time.sleep(4)
number_of_ships = number_of_ships - 1
print ("There are", number_of_ships, "left to find")
pygame.mixer.music.load("sounds/shipsleft.mp3")
pygame.mixer.music.play()
sense.show_message(str(number_of_ships), text_colour=[255, 0, 0])
score = score + 1
elif Sea2[your_position] == water:
print ("MISS")
Sea[your_position] = water
sense.set_pixels(Sea)
### sound ###
pygame.mixer.music.load("sounds/miss.mp3")
pygame.mixer.music.play()
time.sleep(1)
elif Sea2[your_position] == ammo:
Sea[your_position] = ammo
sense.set_pixels(Sea)
torpedos = torpedos + 1
### ammo up sound ###
pygame.mixer.music.load("sounds/ammo.mp3")
pygame.mixer.music.play()
time.sleep(3)
sense.set_pixel(x, y, 0, 255, 0) #colour of pixel for location target
pygame.mixer.music.load("sounds/gameover.mp3")
pygame.mixer.music.play()
print ("TEST GAME OVER")
sense.show_message("Game Over", text_colour=[0, 255, 255], scroll_speed=0.07) #change to an animation
pygame.mixer.music.load("sounds/gameover_spoken.mp3")
pygame.mixer.music.play()
time.sleep(2)
#### add a score
pygame.mixer.music.load("sounds/yourscoreis.mp3")
pygame.mixer.music.play()
time.sleep(1)
sense.show_message(str(score), text_colour=[0, 200, 255])
print ("There are", number_of_ships, "ships left")
if number_of_ships == 0:
print ("WELL DONE, TOP JOB") #ADD SOME ANIMATION
sense.show_message("TOP JOB", text_colour=[0, 255, 255])
pygame.mixer.music.load("sounds/winner.mp3") ## add a sea sound
pygame.mixer.music.play()
time.sleep(3)
else:
print ("better luck next time") #ADD SOME ANIMATION
pygame.mixer.music.load("sounds/nexttime.mp3") ## add a sea sound
pygame.mixer.music.play()
### end of game ###
#####################################################
########### Intro to Game ##########################
#####################################################
### intro music ###
pygame.mixer.music.load("sounds/intro.mp3") ## add a sea sound
pygame.mixer.music.play()
import intro_ship
##### Welcome Message ###
sense.show_message("BATTLE SHIPS!", text_colour=[150, 150, 190], back_colour=[0, 0, 100])
play_game = True
### add main def and play again###
while play_game == True:
play_again = 1
print ("Welcome")
time.sleep(1)
S = [49, 49, 49] ### change to 000
'''Creates a grid of 64 blue pixels, the sea'''
    Sea = [
        S, S, S, S, S, S, S, S,
        S, S, S, S, S, S, S, S,
        S, S, S, S, S, S, S, S,
        S, S, S, S, S, S, S, S,
        S, S, S, S, S, S, S, S,
        S, S, S, S, S, S, S, S,
        S, S, S, S, S, S, S, S,
        S, S, S, S, S, S, S, S
    ]
sense.set_pixels(Sea)
time.sleep(1)
pygame.mixer.music.stop()
pygame.mixer.music.load("sounds/getready.mp3") ## add a sea sound
pygame.mixer.music.play()
for num in reversed(range(1,4)):
sense.show_letter(str(num))
time.sleep(1)
pygame.mixer.music.load("sounds/go.mp3")
pygame.mixer.music.play()
### Begin the main game ###
main()
### Play again? ###
print ("Would you like to play again? ")
sense.load_image("choice.png")
pygame.mixer.music.load("sounds/playagain.mp3")
pygame.mixer.music.play()
while play_again == 1:
for event in pygame.event.get():
if event.type == KEYDOWN:
if event.key == K_DOWN:
play_again = 0
play_game = False
pygame.mixer.music.load("sounds/soon.mp3")
pygame.mixer.music.play()
print ("Bye Bye")
import end_animation
break
elif event.key == K_UP:
play_again = 0 | unknown | codeparrot/codeparrot-clean | ||
/***************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
* are also available at https://curl.se/docs/copyright.html.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the COPYING file.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
* SPDX-License-Identifier: curl
*
***************************************************************************/
/* <DESC>
* Create a new IMAP folder
* </DESC>
*/
#include <stdio.h>
#include <curl/curl.h>
/* This is a simple example showing how to create a new mailbox folder using
* libcurl's IMAP capabilities.
*
* Note that this example requires libcurl 7.30.0 or above.
*/
int main(void)
{
CURL *curl;
CURLcode result = curl_global_init(CURL_GLOBAL_ALL);
if(result != CURLE_OK)
return (int)result;
curl = curl_easy_init();
if(curl) {
/* Set username and password */
curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");
/* This is just the server URL */
curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com");
/* Set the CREATE command specifying the new folder name */
curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "CREATE FOLDER");
/* Perform the custom request */
result = curl_easy_perform(curl);
/* Check for errors */
if(result != CURLE_OK)
fprintf(stderr, "curl_easy_perform() failed: %s\n",
curl_easy_strerror(result));
/* Always cleanup */
curl_easy_cleanup(curl);
}
curl_global_cleanup();
return (int)result;
} | c | github | https://github.com/curl/curl | docs/examples/imap-create.c |
# -*- coding: utf-8 -*-
import copy
from django.test.client import RequestFactory
from elasticsearch_dsl import Search
from mock import Mock, patch
from rest_framework import serializers
from olympia import amo
from olympia.amo.tests import TestCase, create_switch
from olympia.constants.categories import CATEGORIES
from olympia.search.filters import (
ReviewedContentFilter, SearchParameterFilter, SearchQueryFilter,
SortingFilter)
class FilterTestsBase(TestCase):
# Base TestCase class - Does not need to inherit from ESTestCase as the
# queries will never actually be executed.
def setUp(self):
super(FilterTestsBase, self).setUp()
self.req = RequestFactory().get('/')
self.view_class = Mock()
def _filter(self, req=None, data=None):
req = req or RequestFactory().get('/', data=data or {})
queryset = Search()
for filter_class in self.filter_classes:
queryset = filter_class().filter_queryset(req, queryset,
self.view_class)
return queryset.to_dict()
class TestQueryFilter(FilterTestsBase):
filter_classes = [SearchQueryFilter]
def _test_q(self):
qs = self._filter(data={'q': 'tea pot'})
# Spot check a few queries.
should = qs['query']['function_score']['query']['bool']['should']
expected = {
'match_phrase': {
'name': {
'query': 'tea pot', 'boost': 4, 'slop': 1
}
}
}
assert expected in should
expected = {
'prefix': {'name': {'boost': 1.5, 'value': 'tea pot'}}
}
assert expected in should
expected = {
'match': {
'name_l10n_english': {
'query': 'tea pot', 'boost': 2.5,
'analyzer': 'english',
'operator': 'and'
}
}
}
assert expected in should
expected = {
'match_phrase': {
'description_l10n_english': {
'query': 'tea pot',
'boost': 0.6,
'analyzer': 'english',
}
}
}
assert expected in should
functions = qs['query']['function_score']['functions']
assert functions[0] == {'field_value_factor': {'field': 'boost'}}
return qs
def test_q(self):
qs = self._test_q()
functions = qs['query']['function_score']['functions']
assert len(functions) == 1
def test_q_too_long(self):
with self.assertRaises(serializers.ValidationError):
self._filter(data={'q': 'a' * 101})
def test_fuzzy_single_word(self):
qs = self._filter(data={'q': 'blah'})
should = qs['query']['function_score']['query']['bool']['should']
expected = {
'match': {
'name': {
'boost': 2, 'prefix_length': 4, 'query': 'blah',
'fuzziness': 'AUTO',
}
}
}
assert expected in should
def test_fuzzy_multi_word(self):
qs = self._filter(data={'q': 'search terms'})
should = qs['query']['function_score']['query']['bool']['should']
expected = {
'match': {
'name': {
'boost': 2, 'prefix_length': 4, 'query': 'search terms',
'fuzziness': 'AUTO',
}
}
}
assert expected in should
def test_no_fuzzy_if_query_too_long(self):
def do_test():
qs = self._filter(data={'q': 'this search query is too long.'})
should = qs['query']['function_score']['query']['bool']['should']
return should
# Make sure there is no fuzzy clause (the search query is too long).
should = do_test()
expected = {
'match': {
'name': {
'boost': 2, 'prefix_length': 4,
'query': 'this search query is too long.',
'fuzziness': 'AUTO',
}
}
}
assert expected not in should
# Re-do the same test but mocking the limit to a higher value, the
# fuzzy query should be present.
with patch.object(
SearchQueryFilter, 'MAX_QUERY_LENGTH_FOR_FUZZY_SEARCH', 100):
should = do_test()
assert expected in should
def test_webextension_boost(self):
create_switch('boost-webextensions-in-search')
# Repeat base test with the switch enabled.
qs = self._test_q()
functions = qs['query']['function_score']['functions']
assert len(functions) == 2
assert functions[1] == {
'weight': 2.0, # WEBEXTENSIONS_WEIGHT,
'filter': {'bool': {'should': [
{'term': {'current_version.files.is_webextension': True}},
{'term': {
'current_version.files.is_mozilla_signed_extension': True
}}
]}}
}
def test_q_exact(self):
qs = self._filter(data={'q': 'Adblock Plus'})
should = qs['query']['function_score']['query']['bool']['should']
expected = {
'term': {
'name.raw': {
'boost': 100, 'value': u'adblock plus',
}
}
}
assert expected in should
class TestReviewedContentFilter(FilterTestsBase):
filter_classes = [ReviewedContentFilter]
def test_status(self):
qs = self._filter(self.req)
must = qs['query']['bool']['must']
must_not = qs['query']['bool']['must_not']
assert {'terms': {'status': amo.REVIEWED_STATUSES}} in must
assert {'exists': {'field': 'current_version'}} in must
assert {'term': {'is_disabled': True}} in must_not
assert {'term': {'is_deleted': True}} in must_not
class TestSortingFilter(FilterTestsBase):
filter_classes = [SortingFilter]
def _reformat_order(self, key):
# elasticsearch-dsl transforms '-something' for us, so we have to
# expect the sort param in this format when we inspect the resulting
# queryset object.
return {key[1:]: {'order': 'desc'}} if key.startswith('-') else key
def test_sort_default(self):
qs = self._filter(data={'q': 'something'})
assert qs['sort'] == [self._reformat_order('_score')]
qs = self._filter()
assert qs['sort'] == [self._reformat_order('-weekly_downloads')]
def test_sort_query(self):
SORTING_PARAMS = copy.copy(SortingFilter.SORTING_PARAMS)
SORTING_PARAMS.pop('random') # Tested separately below.
for param in SORTING_PARAMS:
qs = self._filter(data={'sort': param})
assert qs['sort'] == [self._reformat_order(SORTING_PARAMS[param])]
# Having a search query does not change anything, the requested sort
# takes precedence.
for param in SORTING_PARAMS:
qs = self._filter(data={'q': 'something', 'sort': param})
assert qs['sort'] == [self._reformat_order(SORTING_PARAMS[param])]
# If the sort query is wrong.
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'sort': 'WRONGLOL'})
assert context.exception.detail == ['Invalid "sort" parameter.']
# Same as above but with a search query.
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'q': 'something', 'sort': 'WRONGLOL'})
assert context.exception.detail == ['Invalid "sort" parameter.']
def test_sort_query_multiple(self):
qs = self._filter(data={'sort': ['rating,created']})
assert qs['sort'] == [self._reformat_order('-bayesian_rating'),
self._reformat_order('-created')]
qs = self._filter(data={'sort': 'created,rating'})
assert qs['sort'] == [self._reformat_order('-created'),
self._reformat_order('-bayesian_rating')]
# If the sort query is wrong.
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'sort': ['LOLWRONG,created']})
assert context.exception.detail == ['Invalid "sort" parameter.']
def test_cant_combine_sorts_with_random(self):
expected = 'The "random" "sort" parameter can not be combined.'
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'sort': ['rating,random']})
assert context.exception.detail == [expected]
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'sort': 'random,created'})
assert context.exception.detail == [expected]
def test_sort_random_restrictions(self):
expected = ('The "sort" parameter "random" can only be specified when '
'the "featured" parameter is also present, and the "q" '
'parameter absent.')
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'q': 'something', 'sort': 'random'})
assert context.exception.detail == [expected]
with self.assertRaises(serializers.ValidationError) as context:
self._filter(
data={'q': 'something', 'featured': 'true', 'sort': 'random'})
assert context.exception.detail == [expected]
def test_sort_random(self):
qs = self._filter(data={'featured': 'true', 'sort': 'random'})
# Note: this test does not call AddonFeaturedQueryParam so it won't
# apply the featured filtering. That's tested below in
# TestCombinedFilter.test_filter_featured_sort_random
assert qs['sort'] == ['_score']
assert qs['query']['function_score']['functions'] == [
{'random_score': {}}
]
class TestSearchParameterFilter(FilterTestsBase):
filter_classes = [SearchParameterFilter]
def test_search_by_type_invalid(self):
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'type': unicode(amo.ADDON_EXTENSION + 666)})
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'type': 'nosuchtype'})
assert context.exception.detail == ['Invalid "type" parameter.']
def test_search_by_type_id(self):
qs = self._filter(data={'type': unicode(amo.ADDON_EXTENSION)})
must = qs['query']['bool']['must']
assert {'terms': {'type': [amo.ADDON_EXTENSION]}} in must
qs = self._filter(data={'type': unicode(amo.ADDON_PERSONA)})
must = qs['query']['bool']['must']
assert {'terms': {'type': [amo.ADDON_PERSONA]}} in must
def test_search_by_type_string(self):
qs = self._filter(data={'type': 'extension'})
must = qs['query']['bool']['must']
assert {'terms': {'type': [amo.ADDON_EXTENSION]}} in must
qs = self._filter(data={'type': 'persona'})
must = qs['query']['bool']['must']
assert {'terms': {'type': [amo.ADDON_PERSONA]}} in must
qs = self._filter(data={'type': 'persona,extension'})
must = qs['query']['bool']['must']
assert (
{'terms': {'type': [amo.ADDON_PERSONA, amo.ADDON_EXTENSION]}}
in must)
def test_search_by_app_invalid(self):
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'app': unicode(amo.FIREFOX.id + 666)})
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'app': 'nosuchapp'})
assert context.exception.detail == ['Invalid "app" parameter.']
def test_search_by_app_id(self):
qs = self._filter(data={'app': unicode(amo.FIREFOX.id)})
must = qs['query']['bool']['must']
assert {'term': {'app': amo.FIREFOX.id}} in must
qs = self._filter(data={'app': unicode(amo.THUNDERBIRD.id)})
must = qs['query']['bool']['must']
assert {'term': {'app': amo.THUNDERBIRD.id}} in must
def test_search_by_app_string(self):
qs = self._filter(data={'app': 'firefox'})
must = qs['query']['bool']['must']
assert {'term': {'app': amo.FIREFOX.id}} in must
qs = self._filter(data={'app': 'thunderbird'})
must = qs['query']['bool']['must']
assert {'term': {'app': amo.THUNDERBIRD.id}} in must
def test_search_by_appversion_app_missing(self):
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'appversion': '46.0'})
assert context.exception.detail == ['Invalid "app" parameter.']
def test_search_by_appversion_app_invalid(self):
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'appversion': '46.0',
'app': 'internet_explorer'})
assert context.exception.detail == ['Invalid "app" parameter.']
def test_search_by_appversion_invalid(self):
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'appversion': 'not_a_version',
'app': 'firefox'})
assert context.exception.detail == ['Invalid "appversion" parameter.']
def test_search_by_appversion(self):
qs = self._filter(data={'appversion': '46.0',
'app': 'firefox'})
must = qs['query']['bool']['must']
assert {'term': {'app': amo.FIREFOX.id}} in must
assert {'range': {'current_version.compatible_apps.1.min':
{'lte': 46000000200100}}} in must
assert {'range': {'current_version.compatible_apps.1.max':
{'gte': 46000000000100}}} in must
def test_search_by_platform_invalid(self):
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'platform': unicode(amo.PLATFORM_WIN.id + 42)})
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'platform': 'nosuchplatform'})
assert context.exception.detail == ['Invalid "platform" parameter.']
def test_search_by_platform_id(self):
qs = self._filter(data={'platform': unicode(amo.PLATFORM_WIN.id)})
must = qs['query']['bool']['must']
assert {'terms': {'platforms': [
amo.PLATFORM_WIN.id, amo.PLATFORM_ALL.id]}} in must
qs = self._filter(data={'platform': unicode(amo.PLATFORM_LINUX.id)})
must = qs['query']['bool']['must']
assert {'terms': {'platforms': [
amo.PLATFORM_LINUX.id, amo.PLATFORM_ALL.id]}} in must
def test_search_by_platform_string(self):
qs = self._filter(data={'platform': 'windows'})
must = qs['query']['bool']['must']
assert {'terms': {'platforms': [
amo.PLATFORM_WIN.id, amo.PLATFORM_ALL.id]}} in must
qs = self._filter(data={'platform': 'win'})
must = qs['query']['bool']['must']
assert {'terms': {'platforms': [
amo.PLATFORM_WIN.id, amo.PLATFORM_ALL.id]}} in must
qs = self._filter(data={'platform': 'darwin'})
must = qs['query']['bool']['must']
assert {'terms': {'platforms': [
amo.PLATFORM_MAC.id, amo.PLATFORM_ALL.id]}} in must
qs = self._filter(data={'platform': 'mac'})
must = qs['query']['bool']['must']
assert {'terms': {'platforms': [
amo.PLATFORM_MAC.id, amo.PLATFORM_ALL.id]}} in must
qs = self._filter(data={'platform': 'macosx'})
must = qs['query']['bool']['must']
assert {'terms': {'platforms': [
amo.PLATFORM_MAC.id, amo.PLATFORM_ALL.id]}} in must
qs = self._filter(data={'platform': 'linux'})
must = qs['query']['bool']['must']
assert {'terms': {'platforms': [
amo.PLATFORM_LINUX.id, amo.PLATFORM_ALL.id]}} in must
def test_search_by_category_slug_no_app_or_type(self):
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'category': 'other'})
assert context.exception.detail == ['Invalid "app" parameter.']
def test_search_by_category_id_no_app_or_type(self):
with self.assertRaises(serializers.ValidationError) as context:
self._filter(data={'category': 1})
assert context.exception.detail == ['Invalid "app" parameter.']
def test_search_by_category_slug(self):
category = CATEGORIES[amo.FIREFOX.id][amo.ADDON_EXTENSION]['other']
qs = self._filter(data={
'category': 'other',
'app': 'firefox',
'type': 'extension'
})
must = qs['query']['bool']['must']
assert {'terms': {'category': [category.id]}} in must
def test_search_by_category_slug_multiple_types(self):
category_a = CATEGORIES[amo.FIREFOX.id][amo.ADDON_EXTENSION]['other']
category_b = CATEGORIES[amo.FIREFOX.id][amo.ADDON_PERSONA]['other']
qs = self._filter(data={
'category': 'other',
'app': 'firefox',
'type': 'extension,persona'
})
must = qs['query']['bool']['must']
assert (
{'terms': {'category': [category_a.id, category_b.id]}} in must)
def test_search_by_category_id(self):
qs = self._filter(data={
'category': 1,
'app': 'firefox',
'type': 'extension'
})
must = qs['query']['bool']['must']
assert {'terms': {'category': [1]}} in must
def test_search_by_category_invalid(self):
with self.assertRaises(serializers.ValidationError) as context:
self._filter(
data={'category': 666, 'app': 'firefox', 'type': 'extension'})
assert context.exception.detail == ['Invalid "category" parameter.']
    def test_search_by_tag(self):
        qs = self._filter(data={'tag': 'foo'})
        must = qs['query']['bool']['must']
        assert {'term': {'tags': 'foo'}} in must

        qs = self._filter(data={'tag': 'foo,bar'})
        must = qs['query']['bool']['must']
        assert {'term': {'tags': 'foo'}} in must
        assert {'term': {'tags': 'bar'}} in must

    def test_search_by_author(self):
        qs = self._filter(data={'author': 'fooBar'})
        must = qs['query']['bool']['must']
        assert {'terms': {'listed_authors.username': ['fooBar']}} in must

        qs = self._filter(data={'author': 'foo,bar'})
        must = qs['query']['bool']['must']
        assert {'terms': {'listed_authors.username': ['foo', 'bar']}} in must

    def test_exclude_addons(self):
        qs = self._filter(data={'exclude_addons': 'fooBar'})
        assert 'must' not in qs['query']['bool']
        must_not = qs['query']['bool']['must_not']
        assert must_not == [{'terms': {'slug': [u'fooBar']}}]

        qs = self._filter(data={'exclude_addons': 1})
        assert 'must' not in qs['query']['bool']
        must_not = qs['query']['bool']['must_not']
        assert must_not == [{'ids': {'values': [u'1']}}]

        qs = self._filter(data={'exclude_addons': 'fooBar,1'})
        assert 'must' not in qs['query']['bool']
        must_not = qs['query']['bool']['must_not']
        assert {'ids': {'values': [u'1']}} in must_not
        assert {'terms': {'slug': [u'fooBar']}} in must_not

    def test_search_by_featured_no_app_no_locale(self):
        qs = self._filter(data={'featured': 'true'})
        must = qs['query']['bool']['must']
        assert {'term': {'is_featured': True}} in must

        with self.assertRaises(serializers.ValidationError) as context:
            self._filter(data={'featured': 'false'})
        assert context.exception.detail == ['Invalid "featured" parameter.']

    def test_search_by_featured_yes_app_no_locale(self):
        qs = self._filter(data={'featured': 'true', 'app': 'firefox'})
        must = qs['query']['bool']['must']
        assert {'term': {'is_featured': True}} not in must
        assert must[0] == {'term': {'app': amo.FIREFOX.id}}
        inner = must[1]['nested']['query']['bool']['must']
        assert len(must) == 2
        assert {'term': {'featured_for.application': amo.FIREFOX.id}} in inner

        with self.assertRaises(serializers.ValidationError) as context:
            self._filter(data={'featured': 'true', 'app': 'foobaa'})
        assert context.exception.detail == ['Invalid "app" parameter.']

    def test_search_by_featured_yes_app_yes_locale(self):
        qs = self._filter(data={'featured': 'true', 'app': 'firefox',
                                'lang': 'fr'})
        must = qs['query']['bool']['must']
        assert {'term': {'is_featured': True}} not in must
        assert must[0] == {'term': {'app': amo.FIREFOX.id}}
        inner = must[1]['nested']['query']['bool']['must']
        assert len(must) == 2
        assert {'term': {'featured_for.application': amo.FIREFOX.id}} in inner
        assert {'terms': {'featured_for.locales': ['fr', 'ALL']}} in inner

        with self.assertRaises(serializers.ValidationError) as context:
            self._filter(data={'featured': 'true', 'app': 'foobaa'})
        assert context.exception.detail == ['Invalid "app" parameter.']

    def test_search_by_featured_no_app_yes_locale(self):
        qs = self._filter(data={'featured': 'true', 'lang': 'fr'})
        must = qs['query']['bool']['must']
        assert {'term': {'is_featured': True}} not in must
        inner = must[0]['nested']['query']['bool']['must']
        assert len(must) == 1
        assert {'terms': {'featured_for.locales': ['fr', 'ALL']}} in inner


class TestCombinedFilter(FilterTestsBase):
    """
    Basic test to ensure that when filters are combined they result in the
    expected query structure.
    """
    filter_classes = [SearchQueryFilter, ReviewedContentFilter, SortingFilter]

    def test_combined(self):
        qs = self._filter(data={'q': 'test'})
        filtered = qs['query']['bool']
        assert filtered['must'][2]['function_score']

        must = filtered['must']
        assert {'terms': {'status': amo.REVIEWED_STATUSES}} in must

        must_not = filtered['must_not']
        assert {'term': {'is_disabled': True}} in must_not

        assert qs['sort'] == ['_score']

        should = must[2]['function_score']['query']['bool']['should']
        expected = {
            'match': {
                'name_l10n_english': {
                    'analyzer': 'english', 'boost': 2.5, 'query': u'test',
                    'operator': 'and'
                }
            }
        }
        assert expected in should

    def test_filter_featured_sort_random(self):
        qs = self._filter(data={'featured': 'true', 'sort': 'random'})
        filtered = qs['query']['bool']
        must = filtered['must']
        assert {'terms': {'status': amo.REVIEWED_STATUSES}} in must

        must_not = filtered['must_not']
        assert {'term': {'is_disabled': True}} in must_not

        assert qs['sort'] == ['_score']
        assert filtered['must'][2]['function_score']['functions'] == [
            {'random_score': {}}
        ]
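The filter tests above assert dict membership inside the Elasticsearch bool query's `must` list. As a self-contained sketch of that asserted shape (plain Python, no Elasticsearch client or olympia imports; `build_tag_query` is a hypothetical stand-in for the filter backend, not the real `SearchParameterFilter`):

```python
def build_tag_query(tags):
    """Build an ES-style bool query dict from a comma-separated tag string.

    Mirrors the structure the tests above assert against: one 'term'
    clause per tag, collected in the bool query's 'must' list.
    """
    must = [{'term': {'tags': tag}} for tag in tags.split(',')]
    return {'query': {'bool': {'must': must}}}

qs = build_tag_query('foo,bar')
must = qs['query']['bool']['must']
# Membership checks are order-independent, just like in the tests above.
assert {'term': {'tags': 'foo'}} in must
assert {'term': {'tags': 'bar'}} in must
```

Asserting on membership rather than on the full serialized query keeps such tests resilient to unrelated clauses being added to `must`.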
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models


class Migration(SchemaMigration):

    def forwards(self, orm):
        # Changing field 'FormFieldOption.label'
        db.alter_column(u'customforms_formfieldoption', 'label', self.gf('django.db.models.fields.CharField')(max_length=5000))

    def backwards(self, orm):
        # Changing field 'FormFieldOption.label'
        db.alter_column(u'customforms_formfieldoption', 'label', self.gf('django.db.models.fields.CharField')(max_length=100))

    models = {
        u'customforms.form': {
            'Meta': {'object_name': 'Form'},
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
        },
        u'customforms.formfield': {
            'Meta': {'object_name': 'FormField'},
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
        },
        u'customforms.formfieldoption': {
            'Meta': {'ordering': "['weight']", 'object_name': 'FormFieldOption'},
            'form': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'fields'", 'to': u"orm['customforms.Form']"}),
            'form_field': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'config'", 'to': u"orm['customforms.FormField']"}),
            'hint': ('django.db.models.fields.TextField', [], {}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'label': ('django.db.models.fields.CharField', [], {'max_length': '5000'}),
            'list_field': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
            'obligatory': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
            'options': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
            'weight': ('django.db.models.fields.IntegerField', [], {'default': '10'})
        }
    }

    complete_apps = ['customforms']
# Ansible module to manage CheckPoint Firewall (c) 2019
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#

from __future__ import absolute_import, division, print_function

__metaclass__ = type

import pytest

from units.modules.utils import set_module_args, exit_json, fail_json, AnsibleExitJson

from ansible.module_utils import basic
from ansible.modules.network.check_point import cp_mgmt_dns_domain

OBJECT = {
    "name": ".www.example.com",
    "is_sub_domain": False
}

CREATE_PAYLOAD = {
    "name": ".www.example.com",
    "is_sub_domain": False
}

UPDATE_PAYLOAD = {
    "name": ".www.example.com",
    "is_sub_domain": True
}

OBJECT_AFTER_UPDATE = UPDATE_PAYLOAD

DELETE_PAYLOAD = {
    "name": ".www.example.com",
    "state": "absent"
}

function_path = 'ansible.modules.network.check_point.cp_mgmt_dns_domain.api_call'

api_call_object = 'dns-domain'


class TestCheckpointDnsDomain(object):
    module = cp_mgmt_dns_domain

    @pytest.fixture(autouse=True)
    def module_mock(self, mocker):
        return mocker.patch.multiple(basic.AnsibleModule, exit_json=exit_json, fail_json=fail_json)

    @pytest.fixture
    def connection_mock(self, mocker):
        connection_class_mock = mocker.patch('ansible.module_utils.network.checkpoint.checkpoint.Connection')
        return connection_class_mock.return_value

    def test_create(self, mocker, connection_mock):
        mock_function = mocker.patch(function_path)
        mock_function.return_value = {'changed': True, api_call_object: OBJECT}
        result = self._run_module(CREATE_PAYLOAD)

        assert result['changed']
        assert OBJECT.items() == result[api_call_object].items()

    def test_create_idempotent(self, mocker, connection_mock):
        mock_function = mocker.patch(function_path)
        mock_function.return_value = {'changed': False, api_call_object: OBJECT}
        result = self._run_module(CREATE_PAYLOAD)

        assert not result['changed']

    def test_update(self, mocker, connection_mock):
        mock_function = mocker.patch(function_path)
        mock_function.return_value = {'changed': True, api_call_object: OBJECT_AFTER_UPDATE}
        result = self._run_module(UPDATE_PAYLOAD)

        assert result['changed']
        assert OBJECT_AFTER_UPDATE.items() == result[api_call_object].items()

    def test_update_idempotent(self, mocker, connection_mock):
        mock_function = mocker.patch(function_path)
        mock_function.return_value = {'changed': False, api_call_object: OBJECT_AFTER_UPDATE}
        result = self._run_module(UPDATE_PAYLOAD)

        assert not result['changed']

    def test_delete(self, mocker, connection_mock):
        mock_function = mocker.patch(function_path)
        mock_function.return_value = {'changed': True}
        result = self._run_module(DELETE_PAYLOAD)

        assert result['changed']

    def test_delete_idempotent(self, mocker, connection_mock):
        mock_function = mocker.patch(function_path)
        mock_function.return_value = {'changed': False}
        result = self._run_module(DELETE_PAYLOAD)

        assert not result['changed']

    def _run_module(self, module_args):
        set_module_args(module_args)
        with pytest.raises(AnsibleExitJson) as ex:
            self.module.main()
        return ex.value.args[0]
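The create/update/delete pairs above all exercise the same idempotency contract: a call that changes state reports `changed: True`, and repeating it against the now-matching state reports `changed: False`. A toy stand-in (plain Python, no Ansible; this `api_call` is a hypothetical sketch, not the real CheckPoint module helper) makes that contract explicit:

```python
def api_call(current_state, desired_state):
    """Toy idempotent apply: report a change only when states differ.

    Sketch of the contract the mocked api_call above is tested against;
    the real module also talks to the management server.
    """
    changed = current_state != desired_state
    return {'changed': changed, 'state': desired_state}

# First apply flips is_sub_domain, so it reports a change...
first = api_call({'is_sub_domain': False}, {'is_sub_domain': True})
assert first['changed'] is True

# ...repeating the same desired state is a no-op.
second = api_call(first['state'], {'is_sub_domain': True})
assert second['changed'] is False
```

This is why the tests mock `api_call` twice per scenario: once returning `changed: True` and once `changed: False`, covering both halves of the contract.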
<?php
/*
* This file is part of the Symfony package.
*
* (c) Fabien Potencier <fabien@symfony.com>
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/
namespace Symfony\Component\HttpFoundation;
// Help opcache.preload discover always-needed symbols
class_exists(ResponseHeaderBag::class);
/**
* Response represents an HTTP response.
*
* @author Fabien Potencier <fabien@symfony.com>
*/
class Response
{
public const HTTP_CONTINUE = 100;
public const HTTP_SWITCHING_PROTOCOLS = 101;
public const HTTP_PROCESSING = 102; // RFC2518
public const HTTP_EARLY_HINTS = 103; // RFC8297
public const HTTP_OK = 200;
public const HTTP_CREATED = 201;
public const HTTP_ACCEPTED = 202;
public const HTTP_NON_AUTHORITATIVE_INFORMATION = 203;
public const HTTP_NO_CONTENT = 204;
public const HTTP_RESET_CONTENT = 205;
public const HTTP_PARTIAL_CONTENT = 206;
public const HTTP_MULTI_STATUS = 207; // RFC4918
public const HTTP_ALREADY_REPORTED = 208; // RFC5842
public const HTTP_IM_USED = 226; // RFC3229
public const HTTP_MULTIPLE_CHOICES = 300;
public const HTTP_MOVED_PERMANENTLY = 301;
public const HTTP_FOUND = 302;
public const HTTP_SEE_OTHER = 303;
public const HTTP_NOT_MODIFIED = 304;
public const HTTP_USE_PROXY = 305;
public const HTTP_RESERVED = 306;
public const HTTP_TEMPORARY_REDIRECT = 307;
public const HTTP_PERMANENTLY_REDIRECT = 308; // RFC7238
public const HTTP_BAD_REQUEST = 400;
public const HTTP_UNAUTHORIZED = 401;
public const HTTP_PAYMENT_REQUIRED = 402;
public const HTTP_FORBIDDEN = 403;
public const HTTP_NOT_FOUND = 404;
public const HTTP_METHOD_NOT_ALLOWED = 405;
public const HTTP_NOT_ACCEPTABLE = 406;
public const HTTP_PROXY_AUTHENTICATION_REQUIRED = 407;
public const HTTP_REQUEST_TIMEOUT = 408;
public const HTTP_CONFLICT = 409;
public const HTTP_GONE = 410;
public const HTTP_LENGTH_REQUIRED = 411;
public const HTTP_PRECONDITION_FAILED = 412;
public const HTTP_REQUEST_ENTITY_TOO_LARGE = 413;
public const HTTP_REQUEST_URI_TOO_LONG = 414;
public const HTTP_UNSUPPORTED_MEDIA_TYPE = 415;
public const HTTP_REQUESTED_RANGE_NOT_SATISFIABLE = 416;
public const HTTP_EXPECTATION_FAILED = 417;
public const HTTP_I_AM_A_TEAPOT = 418; // RFC2324
public const HTTP_MISDIRECTED_REQUEST = 421; // RFC7540
public const HTTP_UNPROCESSABLE_ENTITY = 422; // RFC4918
public const HTTP_LOCKED = 423; // RFC4918
public const HTTP_FAILED_DEPENDENCY = 424; // RFC4918
public const HTTP_TOO_EARLY = 425; // RFC-ietf-httpbis-replay-04
public const HTTP_UPGRADE_REQUIRED = 426; // RFC2817
public const HTTP_PRECONDITION_REQUIRED = 428; // RFC6585
public const HTTP_TOO_MANY_REQUESTS = 429; // RFC6585
public const HTTP_REQUEST_HEADER_FIELDS_TOO_LARGE = 431; // RFC6585
public const HTTP_UNAVAILABLE_FOR_LEGAL_REASONS = 451; // RFC7725
public const HTTP_INTERNAL_SERVER_ERROR = 500;
public const HTTP_NOT_IMPLEMENTED = 501;
public const HTTP_BAD_GATEWAY = 502;
public const HTTP_SERVICE_UNAVAILABLE = 503;
public const HTTP_GATEWAY_TIMEOUT = 504;
public const HTTP_VERSION_NOT_SUPPORTED = 505;
public const HTTP_VARIANT_ALSO_NEGOTIATES_EXPERIMENTAL = 506; // RFC2295
public const HTTP_INSUFFICIENT_STORAGE = 507; // RFC4918
public const HTTP_LOOP_DETECTED = 508; // RFC5842
public const HTTP_NOT_EXTENDED = 510; // RFC2774
public const HTTP_NETWORK_AUTHENTICATION_REQUIRED = 511; // RFC6585
/**
* @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
*/
private const HTTP_RESPONSE_CACHE_CONTROL_DIRECTIVES = [
'must_revalidate' => false,
'no_cache' => false,
'no_store' => false,
'no_transform' => false,
'public' => false,
'private' => false,
'proxy_revalidate' => false,
'max_age' => true,
's_maxage' => true,
'stale_if_error' => true, // RFC5861
'stale_while_revalidate' => true, // RFC5861
'immutable' => false,
'last_modified' => true,
'etag' => true,
];
public ResponseHeaderBag $headers;
protected string $content;
protected string $version;
protected int $statusCode;
protected string $statusText;
protected ?string $charset = null;
/**
* Status codes translation table.
*
* The list of codes is complete according to the
* {@link https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml Hypertext Transfer Protocol (HTTP) Status Code Registry}
* (last updated 2021-10-01).
*
* Unless otherwise noted, the status code is defined in RFC2616.
*
* @var array<int, string>
*/
public static array $statusTexts = [
100 => 'Continue',
101 => 'Switching Protocols',
102 => 'Processing', // RFC2518
103 => 'Early Hints',
200 => 'OK',
201 => 'Created',
202 => 'Accepted',
203 => 'Non-Authoritative Information',
204 => 'No Content',
205 => 'Reset Content',
206 => 'Partial Content',
207 => 'Multi-Status', // RFC4918
208 => 'Already Reported', // RFC5842
226 => 'IM Used', // RFC3229
300 => 'Multiple Choices',
301 => 'Moved Permanently',
302 => 'Found',
303 => 'See Other',
304 => 'Not Modified',
305 => 'Use Proxy',
307 => 'Temporary Redirect',
308 => 'Permanent Redirect', // RFC7238
400 => 'Bad Request',
401 => 'Unauthorized',
402 => 'Payment Required',
403 => 'Forbidden',
404 => 'Not Found',
405 => 'Method Not Allowed',
406 => 'Not Acceptable',
407 => 'Proxy Authentication Required',
408 => 'Request Timeout',
409 => 'Conflict',
410 => 'Gone',
411 => 'Length Required',
412 => 'Precondition Failed',
413 => 'Content Too Large', // RFC-ietf-httpbis-semantics
414 => 'URI Too Long',
415 => 'Unsupported Media Type',
416 => 'Range Not Satisfiable',
417 => 'Expectation Failed',
418 => 'I\'m a teapot', // RFC2324
421 => 'Misdirected Request', // RFC7540
422 => 'Unprocessable Content', // RFC-ietf-httpbis-semantics
423 => 'Locked', // RFC4918
424 => 'Failed Dependency', // RFC4918
425 => 'Too Early', // RFC-ietf-httpbis-replay-04
426 => 'Upgrade Required', // RFC2817
428 => 'Precondition Required', // RFC6585
429 => 'Too Many Requests', // RFC6585
431 => 'Request Header Fields Too Large', // RFC6585
451 => 'Unavailable For Legal Reasons', // RFC7725
500 => 'Internal Server Error',
501 => 'Not Implemented',
502 => 'Bad Gateway',
503 => 'Service Unavailable',
504 => 'Gateway Timeout',
505 => 'HTTP Version Not Supported',
506 => 'Variant Also Negotiates', // RFC2295
507 => 'Insufficient Storage', // RFC4918
508 => 'Loop Detected', // RFC5842
510 => 'Not Extended', // RFC2774
511 => 'Network Authentication Required', // RFC6585
];
/**
* Tracks headers already sent in informational responses.
*/
private array $sentHeaders;
/**
* @param int $status The HTTP status code (200 "OK" by default)
*
* @throws \InvalidArgumentException When the HTTP status code is not valid
*/
public function __construct(?string $content = '', int $status = 200, array $headers = [])
{
$this->headers = new ResponseHeaderBag($headers);
$this->setContent($content);
$this->setStatusCode($status);
$this->setProtocolVersion('1.0');
}
/**
* Returns the Response as an HTTP string.
*
* The string representation of the Response is the same as the
* one that will be sent to the client only if the prepare() method
* has been called before.
*
* @see prepare()
*/
public function __toString(): string
{
return
\sprintf('HTTP/%s %s %s', $this->version, $this->statusCode, $this->statusText)."\r\n".
$this->headers."\r\n".
$this->getContent();
}
/**
* Clones the current Response instance.
*/
public function __clone()
{
$this->headers = clone $this->headers;
}
/**
* Prepares the Response before it is sent to the client.
*
* This method tweaks the Response to ensure that it is
* compliant with RFC 2616. Most of the changes are based on
* the Request that is "associated" with this Response.
*
* @return $this
*/
public function prepare(Request $request): static
{
$headers = $this->headers;
if ($this->isInformational() || $this->isEmpty()) {
$this->setContent(null);
$headers->remove('Content-Type');
$headers->remove('Content-Length');
// prevent PHP from sending the Content-Type header based on default_mimetype
ini_set('default_mimetype', '');
} else {
// Content-type based on the Request
if (!$headers->has('Content-Type')) {
$format = $request->getRequestFormat(null);
if (null !== $format && $mimeType = $request->getMimeType($format)) {
$headers->set('Content-Type', $mimeType);
}
}
// Fix Content-Type
$charset = $this->charset ?: 'utf-8';
if (!$headers->has('Content-Type')) {
$headers->set('Content-Type', 'text/html; charset='.$charset);
} elseif (0 === stripos($headers->get('Content-Type') ?? '', 'text/') && false === stripos($headers->get('Content-Type') ?? '', 'charset')) {
// add the charset
$headers->set('Content-Type', $headers->get('Content-Type').'; charset='.$charset);
}
// Fix Content-Length
if ($headers->has('Transfer-Encoding')) {
$headers->remove('Content-Length');
}
if ($request->isMethod('HEAD')) {
// cf. RFC2616 14.13
$length = $headers->get('Content-Length');
$this->setContent(null);
if ($length) {
$headers->set('Content-Length', $length);
}
}
}
// Fix protocol
if ('HTTP/1.0' != $request->server->get('SERVER_PROTOCOL')) {
$this->setProtocolVersion('1.1');
}
// Check if we need to send extra expire info headers
if ('1.0' == $this->getProtocolVersion() && str_contains($headers->get('Cache-Control', ''), 'no-cache')) {
$headers->set('pragma', 'no-cache');
$headers->set('expires', -1);
}
$this->ensureIEOverSSLCompatibility($request);
if ($request->isSecure()) {
foreach ($headers->getCookies() as $cookie) {
$cookie->setSecureDefault(true);
}
}
return $this;
}
/**
* Sends HTTP headers.
*
* @param positive-int|null $statusCode The status code to use, override the statusCode property if set and not null
*
* @return $this
*/
public function sendHeaders(?int $statusCode = null): static
{
// headers have already been sent by the developer
if (headers_sent()) {
if (!\in_array(\PHP_SAPI, ['cli', 'phpdbg', 'embed'], true)) {
$statusCode ??= $this->statusCode;
header(\sprintf('HTTP/%s %s %s', $this->version, $statusCode, $this->statusText), true, $statusCode);
}
return $this;
}
$informationalResponse = $statusCode >= 100 && $statusCode < 200;
if ($informationalResponse && !\function_exists('headers_send')) {
// skip informational responses if not supported by the SAPI
return $this;
}
// headers
foreach ($this->headers->allPreserveCaseWithoutCookies() as $name => $values) {
// As recommended by RFC 8297, PHP automatically copies headers from previous 103 responses, we need to deal with that if headers changed
$previousValues = $this->sentHeaders[$name] ?? null;
if ($previousValues === $values) {
// Header already sent in a previous response, it will be automatically copied in this response by PHP
continue;
}
$replace = 0 === strcasecmp($name, 'Content-Type');
if (null !== $previousValues && array_diff($previousValues, $values)) {
header_remove($name);
$previousValues = null;
}
$newValues = null === $previousValues ? $values : array_diff($values, $previousValues);
foreach ($newValues as $value) {
header($name.': '.$value, $replace, $this->statusCode);
}
if ($informationalResponse) {
$this->sentHeaders[$name] = $values;
}
}
// cookies
foreach ($this->headers->getCookies() as $cookie) {
header('Set-Cookie: '.$cookie, false, $this->statusCode);
}
if ($informationalResponse) {
headers_send($statusCode);
return $this;
}
$statusCode ??= $this->statusCode;
// status
header(\sprintf('HTTP/%s %s %s', $this->version, $statusCode, $this->statusText), true, $statusCode);
return $this;
}
/**
* Sends content for the current web response.
*
* @return $this
*/
public function sendContent(): static
{
echo $this->content;
return $this;
}
/**
* Sends HTTP headers and content.
*
* @param bool $flush Whether output buffers should be flushed
*
* @return $this
*/
public function send(bool $flush = true): static
{
$this->sendHeaders();
$this->sendContent();
if (!$flush) {
return $this;
}
if (\function_exists('fastcgi_finish_request')) {
fastcgi_finish_request();
} elseif (\function_exists('litespeed_finish_request')) {
litespeed_finish_request();
} elseif (!\in_array(\PHP_SAPI, ['cli', 'phpdbg', 'embed'], true)) {
static::closeOutputBuffers(0, true);
flush();
}
return $this;
}
/**
* Sets the response content.
*
* @return $this
*/
public function setContent(?string $content): static
{
$this->content = $content ?? '';
return $this;
}
/**
* Gets the current response content.
*/
public function getContent(): string|false
{
return $this->content;
}
/**
* Sets the HTTP protocol version (1.0 or 1.1).
*
* @return $this
*
* @final
*/
public function setProtocolVersion(string $version): static
{
$this->version = $version;
return $this;
}
/**
* Gets the HTTP protocol version.
*
* @final
*/
public function getProtocolVersion(): string
{
return $this->version;
}
/**
* Sets the response status code.
*
* If the status text is null it will be automatically populated for the known
* status codes and left empty otherwise.
*
* @return $this
*
* @throws \InvalidArgumentException When the HTTP status code is not valid
*
* @final
*/
public function setStatusCode(int $code, ?string $text = null): static
{
$this->statusCode = $code;
if ($this->isInvalid()) {
throw new \InvalidArgumentException(\sprintf('The HTTP status code "%s" is not valid.', $code));
}
if (null === $text) {
$this->statusText = self::$statusTexts[$code] ?? 'unknown status';
return $this;
}
$this->statusText = $text;
return $this;
}
/**
* Retrieves the status code for the current web response.
*
* @final
*/
public function getStatusCode(): int
{
return $this->statusCode;
}
/**
* Sets the response charset.
*
* @return $this
*
* @final
*/
public function setCharset(string $charset): static
{
$this->charset = $charset;
return $this;
}
/**
* Retrieves the response charset.
*
* @final
*/
public function getCharset(): ?string
{
return $this->charset;
}
/**
* Returns true if the response may safely be kept in a shared (surrogate) cache.
*
* Responses marked "private" with an explicit Cache-Control directive are
* considered uncacheable.
*
* Responses with neither a freshness lifetime (Expires, max-age) nor cache
* validator (Last-Modified, ETag) are considered uncacheable because there is
* no way to tell when or how to remove them from the cache.
*
* Note that RFC 7231 and RFC 7234 possibly allow for a more permissive implementation,
* for example "status codes that are defined as cacheable by default [...]
* can be reused by a cache with heuristic expiration unless otherwise indicated"
* (https://tools.ietf.org/html/rfc7231#section-6.1)
*
* @final
*/
public function isCacheable(): bool
{
if (!\in_array($this->statusCode, [200, 203, 300, 301, 302, 404, 410], true)) {
return false;
}
if ($this->headers->hasCacheControlDirective('no-store') || $this->headers->getCacheControlDirective('private')) {
return false;
}
return $this->isValidateable() || $this->isFresh();
}
/**
* Returns true if the response is "fresh".
*
* Fresh responses may be served from cache without any interaction with the
* origin. A response is considered fresh when it includes a Cache-Control/max-age
* indicator or Expires header and the calculated age is less than the freshness lifetime.
*
* @final
*/
public function isFresh(): bool
{
return $this->getTtl() > 0;
}
/**
* Returns true if the response includes headers that can be used to validate
* the response with the origin server using a conditional GET request.
*
* @final
*/
public function isValidateable(): bool
{
return $this->headers->has('Last-Modified') || $this->headers->has('ETag');
}
/**
* Marks the response as "private".
*
* It makes the response ineligible for serving other clients.
*
* @return $this
*
* @final
*/
public function setPrivate(): static
{
$this->headers->removeCacheControlDirective('public');
$this->headers->addCacheControlDirective('private');
return $this;
}
/**
* Marks the response as "public".
*
* It makes the response eligible for serving other clients.
*
* @return $this
*
* @final
*/
public function setPublic(): static
{
$this->headers->addCacheControlDirective('public');
$this->headers->removeCacheControlDirective('private');
return $this;
}
/**
* Marks the response as "immutable".
*
* @return $this
*
* @final
*/
public function setImmutable(bool $immutable = true): static
{
if ($immutable) {
$this->headers->addCacheControlDirective('immutable');
} else {
$this->headers->removeCacheControlDirective('immutable');
}
return $this;
}
/**
* Returns true if the response is marked as "immutable".
*
* @final
*/
public function isImmutable(): bool
{
return $this->headers->hasCacheControlDirective('immutable');
}
/**
* Returns true if the response must be revalidated by shared caches once it has become stale.
*
* This method indicates that the response must not be served stale by a
* cache in any circumstance without first revalidating with the origin.
* When present, the TTL of the response should not be overridden to be
* greater than the value provided by the origin.
*
* @final
*/
public function mustRevalidate(): bool
{
return $this->headers->hasCacheControlDirective('must-revalidate') || $this->headers->hasCacheControlDirective('proxy-revalidate');
}
/**
* Returns the Date header as a DateTime instance.
*
* @throws \RuntimeException When the header is not parseable
*
* @final
*/
public function getDate(): ?\DateTimeImmutable
{
return $this->headers->getDate('Date');
}
/**
* Sets the Date header.
*
* @return $this
*
* @final
*/
public function setDate(\DateTimeInterface $date): static
{
$date = \DateTimeImmutable::createFromInterface($date);
$date = $date->setTimezone(new \DateTimeZone('UTC'));
$this->headers->set('Date', $date->format('D, d M Y H:i:s').' GMT');
return $this;
}
/**
* Returns the age of the response in seconds.
*
* @final
*/
public function getAge(): int
{
if (null !== $age = $this->headers->get('Age')) {
return (int) $age;
}
return max(time() - (int) $this->getDate()->format('U'), 0);
}
/**
* Marks the response stale by setting the Age header to be equal to the maximum age of the response.
*
* @return $this
*/
public function expire(): static
{
if ($this->isFresh()) {
$this->headers->set('Age', $this->getMaxAge());
$this->headers->remove('Expires');
}
return $this;
}
/**
* Returns the value of the Expires header as a DateTime instance.
*
* @final
*/
public function getExpires(): ?\DateTimeImmutable
{
try {
return $this->headers->getDate('Expires');
} catch (\RuntimeException) {
// according to RFC 2616 invalid date formats (e.g. "0" and "-1") must be treated as in the past
return \DateTimeImmutable::createFromFormat('U', time() - 172800);
}
}
/**
* Sets the Expires HTTP header with a DateTime instance.
*
* Passing null as value will remove the header.
*
* @return $this
*
* @final
*/
public function setExpires(?\DateTimeInterface $date): static
{
if (null === $date) {
$this->headers->remove('Expires');
return $this;
}
$date = \DateTimeImmutable::createFromInterface($date);
$date = $date->setTimezone(new \DateTimeZone('UTC'));
$this->headers->set('Expires', $date->format('D, d M Y H:i:s').' GMT');
return $this;
}
/**
* Returns the number of seconds after the time specified in the response's Date
* header when the response should no longer be considered fresh.
*
* First, it checks for a s-maxage directive, then a max-age directive, and then it falls
* back on an expires header. It returns null when no maximum age can be established.
*
* @final
*/
public function getMaxAge(): ?int
{
if ($this->headers->hasCacheControlDirective('s-maxage')) {
return (int) $this->headers->getCacheControlDirective('s-maxage');
}
if ($this->headers->hasCacheControlDirective('max-age')) {
return (int) $this->headers->getCacheControlDirective('max-age');
}
if (null !== $expires = $this->getExpires()) {
$maxAge = (int) $expires->format('U') - (int) $this->getDate()->format('U');
return max($maxAge, 0);
}
return null;
}
/**
* Sets the number of seconds after which the response should no longer be considered fresh.
*
* This method sets the Cache-Control max-age directive.
*
* @return $this
*
* @final
*/
public function setMaxAge(int $value): static
{
$this->headers->addCacheControlDirective('max-age', $value);
return $this;
}
/**
* Sets the number of seconds after which the response should no longer be returned by shared caches when backend is down.
*
* This method sets the Cache-Control stale-if-error directive.
*
* @return $this
*
* @final
*/
public function setStaleIfError(int $value): static
{
$this->headers->addCacheControlDirective('stale-if-error', $value);
return $this;
}
/**
* Sets the number of seconds after which the response should no longer return stale content by shared caches.
*
* This method sets the Cache-Control stale-while-revalidate directive.
*
* @return $this
*
* @final
*/
public function setStaleWhileRevalidate(int $value): static
{
$this->headers->addCacheControlDirective('stale-while-revalidate', $value);
return $this;
}
/**
* Sets the number of seconds after which the response should no longer be considered fresh by shared caches.
*
* This method sets the Cache-Control s-maxage directive.
*
* @return $this
*
* @final
*/
public function setSharedMaxAge(int $value): static
{
$this->setPublic();
$this->headers->addCacheControlDirective('s-maxage', $value);
return $this;
}
/**
* Returns the response's time-to-live in seconds.
*
* It returns null when no freshness information is present in the response.
*
* When the response's TTL is 0, the response may not be served from cache without first
* revalidating with the origin.
*
* @final
*/
public function getTtl(): ?int
{
$maxAge = $this->getMaxAge();
return null !== $maxAge ? max($maxAge - $this->getAge(), 0) : null;
}
/**
* Sets the response's time-to-live for shared caches in seconds.
*
* This method adjusts the Cache-Control/s-maxage directive.
*
* @return $this
*
* @final
*/
public function setTtl(int $seconds): static
{
$this->setSharedMaxAge($this->getAge() + $seconds);
return $this;
}
/**
* Sets the response's time-to-live for private/client caches in seconds.
*
* This method adjusts the Cache-Control/max-age directive.
*
* @return $this
*
* @final
*/
public function setClientTtl(int $seconds): static
{
$this->setMaxAge($this->getAge() + $seconds);
return $this;
}
/**
* Returns the Last-Modified HTTP header as a DateTime instance.
*
* @throws \RuntimeException When the HTTP header is not parseable
*
* @final
*/
public function getLastModified(): ?\DateTimeImmutable
{
return $this->headers->getDate('Last-Modified');
}
/**
* Sets the Last-Modified HTTP header with a DateTime instance.
*
* Passing null as value will remove the header.
*
* @return $this
*
* @final
*/
public function setLastModified(?\DateTimeInterface $date): static
{
if (null === $date) {
$this->headers->remove('Last-Modified');
return $this;
}
$date = \DateTimeImmutable::createFromInterface($date);
$date = $date->setTimezone(new \DateTimeZone('UTC'));
$this->headers->set('Last-Modified', $date->format('D, d M Y H:i:s').' GMT');
return $this;
}
/**
* Returns the literal value of the ETag HTTP header.
*
* @final
*/
public function getEtag(): ?string
{
return $this->headers->get('ETag');
}
/**
* Sets the ETag value.
*
* @param string|null $etag The ETag unique identifier or null to remove the header
* @param bool $weak Whether you want a weak ETag or not
*
* @return $this
*
* @final
*/
public function setEtag(?string $etag, bool $weak = false): static
{
if (null === $etag) {
$this->headers->remove('Etag');
} else {
if (!str_starts_with($etag, '"')) {
$etag = '"'.$etag.'"';
}
$this->headers->set('ETag', (true === $weak ? 'W/' : '').$etag);
}
return $this;
}
/**
* Sets the response's cache headers (validation and/or expiration).
*
* Available options are: must_revalidate, no_cache, no_store, no_transform, public, private, proxy_revalidate, max_age, s_maxage, immutable, last_modified and etag.
*
* @return $this
*
* @throws \InvalidArgumentException
*
* @final
*/
public function setCache(array $options): static
{
if ($diff = array_diff(array_keys($options), array_keys(self::HTTP_RESPONSE_CACHE_CONTROL_DIRECTIVES))) {
throw new \InvalidArgumentException(\sprintf('Response does not support the following options: "%s".', implode('", "', $diff)));
}
if (isset($options['etag'])) {
$this->setEtag($options['etag']);
}
if (isset($options['last_modified'])) {
$this->setLastModified($options['last_modified']);
}
if (isset($options['max_age'])) {
$this->setMaxAge($options['max_age']);
}
if (isset($options['s_maxage'])) {
$this->setSharedMaxAge($options['s_maxage']);
}
if (isset($options['stale_while_revalidate'])) {
$this->setStaleWhileRevalidate($options['stale_while_revalidate']);
}
if (isset($options['stale_if_error'])) {
$this->setStaleIfError($options['stale_if_error']);
}
foreach (self::HTTP_RESPONSE_CACHE_CONTROL_DIRECTIVES as $directive => $hasValue) {
if (!$hasValue && isset($options[$directive])) {
if ($options[$directive]) {
$this->headers->addCacheControlDirective(str_replace('_', '-', $directive));
} else {
$this->headers->removeCacheControlDirective(str_replace('_', '-', $directive));
}
}
}
if (isset($options['public'])) {
if ($options['public']) {
$this->setPublic();
} else {
$this->setPrivate();
}
}
if (isset($options['private'])) {
if ($options['private']) {
$this->setPrivate();
} else {
$this->setPublic();
}
}
return $this;
}
/**
* Modifies the response so that it conforms to the rules defined for a 304 status code.
*
* This sets the status, removes the body, and discards any headers
* that MUST NOT be included in 304 responses.
*
* @return $this
*
* @see https://tools.ietf.org/html/rfc2616#section-10.3.5
*
* @final
*/
public function setNotModified(): static
{
$this->setStatusCode(304);
$this->setContent(null);
// remove headers that MUST NOT be included with 304 Not Modified responses
foreach (['Allow', 'Content-Encoding', 'Content-Language', 'Content-Length', 'Content-MD5', 'Content-Type', 'Last-Modified'] as $header) {
$this->headers->remove($header);
}
return $this;
}
/**
* Returns true if the response includes a Vary header.
*
* @final
*/
public function hasVary(): bool
{
return null !== $this->headers->get('Vary');
}
/**
* Returns an array of header names given in the Vary header.
*
* @final
*/
public function getVary(): array
{
if (!$vary = $this->headers->all('Vary')) {
return [];
}
$ret = [];
foreach ($vary as $item) {
$ret[] = preg_split('/[\s,]+/', $item);
}
return array_merge([], ...$ret);
}
/**
* Sets the Vary header.
*
* @param bool $replace Whether to replace the actual value or not (true by default)
*
* @return $this
*
* @final
*/
public function setVary(string|array $headers, bool $replace = true): static
{
$this->headers->set('Vary', $headers, $replace);
return $this;
}
/**
* Determines if the Response validators (ETag, Last-Modified) match
* a conditional value specified in the Request.
*
* If the Response is not modified, it sets the status code to 304 and
* removes the actual content by calling the setNotModified() method.
*
* @final
*/
public function isNotModified(Request $request): bool
{
if (!$request->isMethodCacheable()) {
return false;
}
$notModified = false;
$lastModified = $this->headers->get('Last-Modified');
$modifiedSince = $request->headers->get('If-Modified-Since');
if (($ifNoneMatchEtags = $request->getETags()) && (null !== $etag = $this->getEtag())) {
if (0 == strncmp($etag, 'W/', 2)) {
$etag = substr($etag, 2);
}
// Use weak comparison as per https://tools.ietf.org/html/rfc7232#section-3.2.
foreach ($ifNoneMatchEtags as $ifNoneMatchEtag) {
if (0 == strncmp($ifNoneMatchEtag, 'W/', 2)) {
$ifNoneMatchEtag = substr($ifNoneMatchEtag, 2);
}
if ($ifNoneMatchEtag === $etag || '*' === $ifNoneMatchEtag) {
$notModified = true;
break;
}
}
}
// Only do If-Modified-Since date comparison when If-None-Match is not present as per https://tools.ietf.org/html/rfc7232#section-3.3.
elseif ($modifiedSince && $lastModified) {
$notModified = strtotime($modifiedSince) >= strtotime($lastModified);
}
if ($notModified) {
$this->setNotModified();
}
return $notModified;
}
/**
* Is response invalid?
*
* @see https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
*
* @final
*/
public function isInvalid(): bool
{
return $this->statusCode < 100 || $this->statusCode >= 600;
}
/**
* Is response informative?
*
* @final
*/
public function isInformational(): bool
{
return $this->statusCode >= 100 && $this->statusCode < 200;
}
/**
* Is response successful?
*
* @final
*/
public function isSuccessful(): bool
{
return $this->statusCode >= 200 && $this->statusCode < 300;
}
/**
* Is the response a redirect?
*
* @final
*/
public function isRedirection(): bool
{
return $this->statusCode >= 300 && $this->statusCode < 400;
}
/**
* Is there a client error?
*
* @final
*/
public function isClientError(): bool
{
return $this->statusCode >= 400 && $this->statusCode < 500;
}
/**
* Was there a server side error?
*
* @final
*/
public function isServerError(): bool
{
return $this->statusCode >= 500 && $this->statusCode < 600;
}
/**
* Is the response OK?
*
* @final
*/
public function isOk(): bool
{
return 200 === $this->statusCode;
}
/**
* Is the response forbidden?
*
* @final
*/
public function isForbidden(): bool
{
return 403 === $this->statusCode;
}
/**
* Is the response a not found error?
*
* @final
*/
public function isNotFound(): bool
{
return 404 === $this->statusCode;
}
/**
* Is the response a redirect of some form?
*
* @final
*/
public function isRedirect(?string $location = null): bool
{
return \in_array($this->statusCode, [201, 301, 302, 303, 307, 308], true) && (null === $location ?: $location == $this->headers->get('Location'));
}
/**
* Is the response empty?
*
* @final
*/
public function isEmpty(): bool
{
return \in_array($this->statusCode, [204, 304], true);
}
/**
* Cleans or flushes output buffers up to target level.
*
* Resulting level can be greater than target level if a non-removable buffer has been encountered.
*
* @final
*/
public static function closeOutputBuffers(int $targetLevel, bool $flush): void
{
$status = ob_get_status(true);
$level = \count($status);
$flags = \PHP_OUTPUT_HANDLER_REMOVABLE | ($flush ? \PHP_OUTPUT_HANDLER_FLUSHABLE : \PHP_OUTPUT_HANDLER_CLEANABLE);
while ($level-- > $targetLevel && ($s = $status[$level]) && (!isset($s['del']) ? !isset($s['flags']) || ($s['flags'] & $flags) === $flags : $s['del'])) {
if ($flush) {
ob_end_flush();
} else {
ob_end_clean();
}
}
}
/**
* Marks a response as safe according to RFC8674.
*
* @see https://tools.ietf.org/html/rfc8674
*/
public function setContentSafe(bool $safe = true): void
{
if ($safe) {
$this->headers->set('Preference-Applied', 'safe');
} elseif ('safe' === $this->headers->get('Preference-Applied')) {
$this->headers->remove('Preference-Applied');
}
$this->setVary('Prefer', false);
}
/**
* Checks if we need to remove Cache-Control for SSL encrypted downloads when using IE < 9.
*
* @see http://support.microsoft.com/kb/323308
*
* @final
*/
protected function ensureIEOverSSLCompatibility(Request $request): void
{
if (false !== stripos($this->headers->get('Content-Disposition') ?? '', 'attachment') && 1 == preg_match('/MSIE (.*?);/i', $request->server->get('HTTP_USER_AGENT') ?? '', $match) && true === $request->isSecure()) {
if ((int) preg_replace('/(MSIE )(.*?);/', '$2', $match[0]) < 9) {
$this->headers->remove('Cache-Control');
}
}
}
} | php | github | https://github.com/symfony/symfony | src/Symfony/Component/HttpFoundation/Response.php |
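The `isNotModified()` method above applies RFC 7232's weak-comparison rule to ETags: a `W/` prefix is stripped from both sides before comparing, and `*` matches anything. The following is an illustrative Python sketch of that rule only (it is not Symfony code, and the function names here are our own):

```python
def strip_weak(etag):
    # "W/" marks a weak validator; it is ignored for weak comparison
    # per RFC 7232 section 3.2
    return etag[2:] if etag.startswith('W/') else etag

def etag_matches(response_etag, if_none_match_etags):
    """Return True when any If-None-Match candidate weakly matches."""
    target = strip_weak(response_etag)
    for candidate in if_none_match_etags:
        candidate = strip_weak(candidate)
        if candidate == target or candidate == '*':
            return True
    return False
```

This mirrors the loop in `isNotModified()`: both the response ETag and each `If-None-Match` candidate have any `W/` prefix removed, then compare byte-for-byte.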
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: BUSL-1.1
package terraform
import (
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/providers"
"github.com/hashicorp/terraform/internal/states"
)
// HookAction is an enum of actions that can be taken as a result of a hook
// callback. This allows you to modify the behavior of Terraform at runtime.
type HookAction byte
const (
// HookActionContinue continues with processing as usual.
HookActionContinue HookAction = iota
// HookActionHalt halts immediately: no more hooks are processed
// and the action that Terraform was about to take is cancelled.
HookActionHalt
)
// HookResourceIdentity is passed to Hook interface methods to fully identify
// the resource instance being operated on. It currently includes the resource
// address and the provider address.
type HookResourceIdentity struct {
Addr addrs.AbsResourceInstance
ProviderAddr addrs.Provider
}
// HookActionIdentity is passed to Hook interface methods to fully identify
// the action being performed.
type HookActionIdentity struct {
Addr addrs.AbsActionInstance
ActionTrigger plans.ActionTrigger
}
func (i *HookActionIdentity) String() string {
return i.Addr.String() + " (triggered by " + i.ActionTrigger.String() + ")"
}
// Hook is the interface that must be implemented to hook into various
// parts of Terraform, allowing you to inspect or change behavior at runtime.
//
// There are MANY hook points into Terraform. If you only want to implement
// some hook points, but not all (which is the likely case), then embed the
// NilHook into your struct, which implements all of the interface but does
// nothing. Then, override only the functions you want to implement.
type Hook interface {
// PreApply and PostApply are called before and after an action for a
// single instance is applied. The error argument in PostApply is the
// error, if any, that was returned from the provider Apply call itself.
PreApply(id HookResourceIdentity, dk addrs.DeposedKey, action plans.Action, priorState, plannedNewState cty.Value) (HookAction, error)
PostApply(id HookResourceIdentity, dk addrs.DeposedKey, newState cty.Value, err error) (HookAction, error)
// PreDiff and PostDiff are called before and after a provider is given
// the opportunity to customize the proposed new state to produce the
// planned new state.
PreDiff(id HookResourceIdentity, dk addrs.DeposedKey, priorState, proposedNewState cty.Value, err error) (HookAction, error)
PostDiff(id HookResourceIdentity, dk addrs.DeposedKey, action plans.Action, priorState, plannedNewState cty.Value, err error) (HookAction, error)
// The provisioning hooks signal both the overall start and end of
// provisioning for a particular instance and of each of the individual
// configured provisioners for each instance. The sequence of these
// for a given instance might look something like this:
//
// PreProvisionInstance(aws_instance.foo[1], ...)
// PreProvisionInstanceStep(aws_instance.foo[1], "file")
// PostProvisionInstanceStep(aws_instance.foo[1], "file", nil)
// PreProvisionInstanceStep(aws_instance.foo[1], "remote-exec")
// ProvisionOutput(aws_instance.foo[1], "remote-exec", "Installing foo...")
// ProvisionOutput(aws_instance.foo[1], "remote-exec", "Configuring bar...")
// PostProvisionInstanceStep(aws_instance.foo[1], "remote-exec", nil)
// PostProvisionInstance(aws_instance.foo[1], ...)
//
// ProvisionOutput is called with output sent back by the provisioners.
// This will be called multiple times as output comes in, with each call
// representing one line of output. It cannot control whether the
// provisioner continues running.
PreProvisionInstance(id HookResourceIdentity, state cty.Value) (HookAction, error)
PostProvisionInstance(id HookResourceIdentity, state cty.Value) (HookAction, error)
PreProvisionInstanceStep(id HookResourceIdentity, typeName string) (HookAction, error)
PostProvisionInstanceStep(id HookResourceIdentity, typeName string, err error) (HookAction, error)
ProvisionOutput(id HookResourceIdentity, typeName string, line string)
// PreRefresh and PostRefresh are called before and after a single
// resource state is refreshed, respectively.
PreRefresh(id HookResourceIdentity, dk addrs.DeposedKey, priorState cty.Value) (HookAction, error)
PostRefresh(id HookResourceIdentity, dk addrs.DeposedKey, priorState cty.Value, newState cty.Value) (HookAction, error)
// PreImportState and PostImportState are called before and after
// (respectively) each state import operation for a given resource address when
// using the legacy import command.
PreImportState(id HookResourceIdentity, importID string) (HookAction, error)
PostImportState(id HookResourceIdentity, imported []providers.ImportedResource) (HookAction, error)
// PrePlanImport and PostPlanImport are called during a plan before and after planning to import
// a new resource using the configuration-driven import workflow.
PrePlanImport(id HookResourceIdentity, importTarget cty.Value) (HookAction, error)
PostPlanImport(id HookResourceIdentity, imported []providers.ImportedResource) (HookAction, error)
// PreApplyImport and PostApplyImport are called during an apply for each imported resource when
// using the configuration-driven import workflow.
PreApplyImport(id HookResourceIdentity, importing plans.ImportingSrc) (HookAction, error)
PostApplyImport(id HookResourceIdentity, importing plans.ImportingSrc) (HookAction, error)
// PreEphemeralOp and PostEphemeralOp are called during an operation on ephemeral resource
// such as opening, renewal or closing
PreEphemeralOp(id HookResourceIdentity, action plans.Action) (HookAction, error)
PostEphemeralOp(id HookResourceIdentity, action plans.Action, opErr error) (HookAction, error)
// PreListQuery and PostListQuery are called during a query operation before and after
// resources are queried from the provider.
PreListQuery(id HookResourceIdentity, inputConfig cty.Value) (HookAction, error)
PostListQuery(id HookResourceIdentity, results plans.QueryResults, identityVersion int64) (HookAction, error)
// StartAction, ProgressAction, and CompleteAction are called during the
// lifecycle of an action invocation.
StartAction(id HookActionIdentity) (HookAction, error)
ProgressAction(id HookActionIdentity, progress string) (HookAction, error)
CompleteAction(id HookActionIdentity, err error) (HookAction, error)
// Stopping is called if an external signal requests that Terraform
// gracefully abort an operation in progress.
//
// This notification might suggest that the user wants Terraform to exit
// ASAP and in that case it's possible that if Terraform runs for too much
// longer then it'll get killed un-gracefully, and so this hook could be
// an opportunity to persist any transient data that would be lost under
// a subsequent kill signal. However, implementations must take care to do
// so in a way that won't cause corruption if the process _is_ killed while
// this hook is still running.
//
// This hook cannot control whether Terraform continues, because the
// graceful shutdown process is typically already running by the time this
// function is called.
Stopping()
// PostStateUpdate is called each time the state is updated. The caller must
// coordinate a lock for the state if necessary, such that the Hook may
// access it freely without any need for additional locks to protect from
// concurrent writes. Implementations which modify or retain the state after
// the call has returned must copy the state.
PostStateUpdate(new *states.State) (HookAction, error)
}
// NilHook is a Hook implementation that does nothing. It exists only to
// simplify implementing hooks. You can embed this into your Hook implementation
// and only implement the functions you are interested in.
type NilHook struct{}
var _ Hook = (*NilHook)(nil)
func (*NilHook) PreApply(id HookResourceIdentity, dk addrs.DeposedKey, action plans.Action, priorState, plannedNewState cty.Value) (HookAction, error) {
return HookActionContinue, nil
}
func (*NilHook) PostApply(id HookResourceIdentity, dk addrs.DeposedKey, newState cty.Value, err error) (HookAction, error) {
return HookActionContinue, nil
}
func (*NilHook) PreDiff(id HookResourceIdentity, dk addrs.DeposedKey, priorState, proposedNewState cty.Value, err error) (HookAction, error) {
return HookActionContinue, nil
}
func (*NilHook) PostDiff(id HookResourceIdentity, dk addrs.DeposedKey, action plans.Action, priorState, plannedNewState cty.Value, err error) (HookAction, error) {
return HookActionContinue, nil
}
func (*NilHook) PreProvisionInstance(id HookResourceIdentity, state cty.Value) (HookAction, error) {
return HookActionContinue, nil
}
func (*NilHook) PostProvisionInstance(id HookResourceIdentity, state cty.Value) (HookAction, error) {
return HookActionContinue, nil
}
func (*NilHook) PreProvisionInstanceStep(id HookResourceIdentity, typeName string) (HookAction, error) {
return HookActionContinue, nil
}
func (*NilHook) PostProvisionInstanceStep(id HookResourceIdentity, typeName string, err error) (HookAction, error) {
return HookActionContinue, nil
}
func (*NilHook) ProvisionOutput(id HookResourceIdentity, typeName string, line string) {
}
func (*NilHook) PreRefresh(id HookResourceIdentity, dk addrs.DeposedKey, priorState cty.Value) (HookAction, error) {
return HookActionContinue, nil
}
func (*NilHook) PostRefresh(id HookResourceIdentity, dk addrs.DeposedKey, priorState cty.Value, newState cty.Value) (HookAction, error) {
return HookActionContinue, nil
}
func (*NilHook) PreImportState(id HookResourceIdentity, importID string) (HookAction, error) {
return HookActionContinue, nil
}
func (*NilHook) PostImportState(id HookResourceIdentity, imported []providers.ImportedResource) (HookAction, error) {
return HookActionContinue, nil
}
func (h *NilHook) PrePlanImport(id HookResourceIdentity, importTarget cty.Value) (HookAction, error) {
return HookActionContinue, nil
}
func (h *NilHook) PostPlanImport(id HookResourceIdentity, imported []providers.ImportedResource) (HookAction, error) {
return HookActionContinue, nil
}
func (h *NilHook) PreApplyImport(id HookResourceIdentity, importing plans.ImportingSrc) (HookAction, error) {
return HookActionContinue, nil
}
func (h *NilHook) PostApplyImport(id HookResourceIdentity, importing plans.ImportingSrc) (HookAction, error) {
return HookActionContinue, nil
}
func (h *NilHook) PreEphemeralOp(id HookResourceIdentity, action plans.Action) (HookAction, error) {
return HookActionContinue, nil
}
func (h *NilHook) PostEphemeralOp(id HookResourceIdentity, action plans.Action, opErr error) (HookAction, error) {
return HookActionContinue, nil
}
func (h *NilHook) PreListQuery(id HookResourceIdentity, inputConfig cty.Value) (HookAction, error) {
return HookActionContinue, nil
}
func (h *NilHook) PostListQuery(id HookResourceIdentity, results plans.QueryResults, identityVersion int64) (HookAction, error) {
return HookActionContinue, nil
}
func (h *NilHook) StartAction(id HookActionIdentity) (HookAction, error) {
return HookActionContinue, nil
}
func (h *NilHook) ProgressAction(id HookActionIdentity, progress string) (HookAction, error) {
return HookActionContinue, nil
}
func (h *NilHook) CompleteAction(id HookActionIdentity, err error) (HookAction, error) {
return HookActionContinue, nil
}
func (*NilHook) Stopping() {
// Does nothing at all by default
}
func (*NilHook) PostStateUpdate(new *states.State) (HookAction, error) {
return HookActionContinue, nil
} | go | github | https://github.com/hashicorp/terraform | internal/terraform/hook.go |
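The `NilHook` embedding pattern above (implement every hook as a no-op, then override only the callbacks you need) can be sketched in Python with plain inheritance. All names in this sketch are hypothetical; it does not import any Terraform code:

```python
# A base class implements every hook point as a no-op that signals
# "continue"; concrete hooks subclass it and override selectively.
CONTINUE, HALT = "continue", "halt"

class NilHook:
    def pre_apply(self, addr, action):
        return CONTINUE
    def post_apply(self, addr, err):
        return CONTINUE
    def stopping(self):
        pass

class CountingHook(NilHook):
    """Overrides a single hook point; everything else stays a no-op."""
    def __init__(self):
        self.applied = 0
    def post_apply(self, addr, err):
        self.applied += 1
        return CONTINUE

hook = CountingHook()
hook.pre_apply("aws_instance.foo[1]", "create")  # inherited no-op
hook.post_apply("aws_instance.foo[1]", None)     # counted override
```

The Go version achieves the same thing with struct embedding (`type MyHook struct { NilHook }`) instead of subclassing.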
## Input
```javascript
function Component() {
const onClick = () => {
// Cannot assign to globals
someUnknownGlobal = true;
moduleLocal = true;
};
// It's possible that this could be an event handler / effect function,
// but we don't know that and optimistically assume it will only be
// called by an event handler or effect, where it is allowed to modify globals
return <div onClick={onClick} />;
}
export const FIXTURE_ENTRYPOINT = {
fn: Component,
params: [{}],
};
```
## Code
```javascript
import { c as _c } from "react/compiler-runtime";
function Component() {
const $ = _c(1);
const onClick = _temp;
let t0;
if ($[0] === Symbol.for("react.memo_cache_sentinel")) {
t0 = <div onClick={onClick} />;
$[0] = t0;
} else {
t0 = $[0];
}
return t0;
}
function _temp() {
someUnknownGlobal = true;
moduleLocal = true;
}
export const FIXTURE_ENTRYPOINT = {
fn: Component,
params: [{}],
};
```
### Eval output
(kind: ok) <div></div> | unknown | github | https://github.com/facebook/react | compiler/packages/babel-plugin-react-compiler/src/__tests__/fixtures/compiler/allow-reassignment-to-global-function-jsx-prop.expect.md |
# start compatibility with IPython Jupyter 4.0+
try:
from jupyter_client import BlockingKernelClient
except ImportError:
from IPython.kernel import BlockingKernelClient
# python3/python2 nonsense
try:
from Queue import Empty
except:
from queue import Empty
import atexit
import subprocess
import uuid
import time
import os
import sys
import json
__dirname = os.path.dirname(os.path.abspath(__file__))
vars_patch = """
import json
try:
import pandas as pd
except:
pd = None
def __get_variables():
if not pd:
print('[]')
variable_names = globals().keys()
data_frames = []
for v in variable_names:
if v.startswith("_"):
continue
if isinstance(globals()[v], pd.DataFrame):
data_frames.append({
"name": v,
"dtype": "DataFrame"
})
print(json.dumps(data_frames))
"""
class Kernel(object):
def __init__(self, active_dir, pyspark):
# kernel config is stored in a dot file with the active directory
config = os.path.join(active_dir, ".kernel-%s.json" % str(uuid.uuid4()))
# right now we're spawning a child process for IPython. we can
# probably work directly with the IPython kernel API, but the docs
# don't really explain how to do it.
log_file = None
if pyspark:
os.environ["IPYTHON_OPTS"] = "kernel -f %s" % config
pyspark = os.path.join(os.environ.get("SPARK_HOME"), "bin/pyspark")
spark_log = os.environ.get("SPARK_LOG", None)
if spark_log:
log_file = open(spark_log, "w")
spark_opts = os.environ.get("SPARK_OPTS", "")
args = [pyspark] + spark_opts.split() # $SPARK_HOME/bin/pyspark <SPARK_OPTS>
p = subprocess.Popen(args, stdout=log_file, stderr=log_file)
else:
args = [sys.executable, '-m', 'IPython', 'kernel', '-f', config]
p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# when __this__ process exits, we're going to remove the ipython config
# file and kill the ipython subprocess
atexit.register(p.terminate)
def remove_config():
if os.path.isfile(config):
os.remove(config)
atexit.register(remove_config)
# i found that connecting to the kernel immediately can fail, so we'll
# wait until the config file exists before moving on
while not os.path.isfile(config):
time.sleep(0.1)
def close_file():
if log_file:
log_file.close()
atexit.register(close_file)
# fire up the kernel with the appropriate config
self.client = BlockingKernelClient(connection_file=config)
self.client.load_connection_file()
self.client.start_channels()
# load our monkeypatches...
self.client.execute("%matplotlib inline")
self.client.execute(vars_patch)
def _run_code(self, code, timeout=0.1):
# this function executes some code and waits for it to completely finish
# before returning. i don't think that this is neccessarily the best
# way to do this, but the IPython documentation isn't very helpful for
# this particular topic.
#
# 1) execute code and grab the ID for that execution thread
# 2) look for messages coming from the "iopub" channel (this is just a
# stream of output)
# 3) when we get a message that is one of the following, save relevant
# data to `data`:
# - execute_result - content from repr
# - stream - content from stdout
# - error - ansii encoded stacktrace
# the final piece is that we check for when the message indicates that
# the kernel is idle and the message's parent is the original execution
# ID (msg_id) that's associated with our executing code. if this is the
# case, we'll return the data and the msg_id and exit
msg_id = self.client.execute(code)
output = { "msg_id": msg_id, "output": None, "image": None, "error": None }
while True:
try:
reply = self.client.get_iopub_msg(timeout=timeout)
except Empty:
continue
if "execution_state" in reply['content']:
if reply['content']['execution_state']=="idle" and reply['parent_header']['msg_id']==msg_id:
if reply['parent_header']['msg_type']=="execute_request":
return output
elif reply['header']['msg_type']=="execute_result":
output['output'] = reply['content']['data'].get('text/plain', '')
elif reply['header']['msg_type']=="display_data":
output['image'] = reply['content']['data'].get('image/png', '')
elif reply['header']['msg_type']=="stream":
output['output'] = reply['content'].get('text', '')
elif reply['header']['msg_type']=="error":
output['error'] = "\n".join(reply['content']['traceback'])
def execute(self, code):
return self._run_code(code)
def complete(self, code, timeout=0.1):
# Call ipython kernel complete, wait for response with the correct msg_id,
# and construct appropriate UI payload.
# See below for an example response from ipython kernel completion for 'el'
#
# {
# 'parent_header':
# {u'username': u'ubuntu', u'version': u'5.0', u'msg_type': u'complete_request',
# u'msg_id': u'5222d158-ada8-474e-88d8-8907eb7cc74c', u'session': u'cda4a03d-a8a1-4e6c-acd0-de62d169772e',
# u'date': datetime.datetime(2015, 5, 7, 15, 25, 8, 796886)},
# 'msg_type': u'complete_reply',
# 'msg_id': u'a3a957d6-5865-4c6f-a0b2-9aa8da718b0d',
# 'content':
# {u'matches': [u'elif', u'else'], u'status': u'ok', u'cursor_start': 0, u'cursor_end': 2, u'metadata': {}},
# 'header':
# {u'username': u'ubuntu', u'version': u'5.0', u'msg_type': u'complete_reply',
# u'msg_id': u'a3a957d6-5865-4c6f-a0b2-9aa8da718b0d', u'session': u'f1491112-7234-4782-8601-b4fb2697a2f6',
# u'date': datetime.datetime(2015, 5, 7, 15, 25, 8, 803470)},
# 'buffers': [],
# 'metadata': {}
# }
#
msg_id = self.client.complete(code)
output = { "msg_id": msg_id, "output": None, "image": None, "error": None }
while True:
try:
reply = self.client.get_shell_msg(timeout=timeout)
except Empty:
continue
if "matches" in reply['content'] and reply['msg_type']=="complete_reply" and reply['parent_header']['msg_id']==msg_id:
results = []
for completion in reply['content']['matches']:
result = {
"value": completion,
"dtype": "---"
}
if "." in code:
result['text'] = ".".join(result['value'].split(".")[1:])
result["dtype"] = "function"
else:
# result['text'] = result['value'].replace(code, '', 1)
result['text'] = result['value']
result["dtype"] = "session variable" # type(globals().get(code)).__name__
results.append(result)
jsonresults = json.dumps(results)
output['output'] = jsonresults
return output
#else:
#Don't know what to do with the rest.
#I've observed parent_header msg_types: kernel_info_request, execute_request
#Just discard for now
def get_dataframes(self):
return self.execute("__get_variables()") | unknown | codeparrot/codeparrot-clean | ||
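The core of `_run_code` above is a filtering loop: drain the iopub stream, ignore any message whose `parent_header` msg_id belongs to a different execution, accumulate output, and stop on the idle marker. Here is a minimal self-contained sketch of that filtering with hand-written fake messages (the message shapes are simplified assumptions, and no real kernel is involved):

```python
def collect_output(messages, msg_id):
    output = {"msg_id": msg_id, "output": None, "error": None}
    for reply in messages:
        content, header = reply["content"], reply["header"]
        if reply["parent_header"].get("msg_id") != msg_id:
            continue  # output from some other execution; ignore it
        if content.get("execution_state") == "idle":
            return output  # kernel finished our request
        if header["msg_type"] == "stream":
            output["output"] = content.get("text", "")
        elif header["msg_type"] == "error":
            output["error"] = "\n".join(content["traceback"])
    return output

fake = [
    {"parent_header": {"msg_id": "other"}, "header": {"msg_type": "stream"},
     "content": {"text": "noise"}},
    {"parent_header": {"msg_id": "m1"}, "header": {"msg_type": "stream"},
     "content": {"text": "42\n"}},
    {"parent_header": {"msg_id": "m1"}, "header": {"msg_type": "status"},
     "content": {"execution_state": "idle"}},
]
```

The real loop additionally blocks on `get_iopub_msg(timeout=...)` and retries on `Empty`, which the sketch replaces with a plain list.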
#
# Copyright (c) 2013 Docker, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from heat.engine import properties
from heat.engine import resource
from heat.openstack.common.gettextutils import _
from heat.openstack.common import log as logging
logger = logging.getLogger(__name__)
DOCKER_INSTALLED = False
# conditionally import so tests can work without having the dependency
# satisfied
try:
import docker
DOCKER_INSTALLED = True
except ImportError:
docker = None
class DockerContainer(resource.Resource):
properties_schema = {
'docker_endpoint': properties.Schema(
properties.Schema.STRING,
_('Docker daemon endpoint (by default the local docker daemon '
'will be used)'),
default=None
),
'hostname': properties.Schema(
properties.Schema.STRING,
_('Hostname of the container'),
default=''
),
'user': properties.Schema(
properties.Schema.STRING,
_('Username or UID'),
default=''
),
'memory': properties.Schema(
properties.Schema.INTEGER,
_('Memory limit (Bytes)'),
default=0
),
'attach_stdin': properties.Schema(
properties.Schema.BOOLEAN,
_('Attach to the process\' standard input'),
default=False
),
'attach_stdout': properties.Schema(
properties.Schema.BOOLEAN,
_('Attach to the process\' standard output'),
default=True
),
'attach_stderr': properties.Schema(
properties.Schema.BOOLEAN,
_('Attach to the process\' standard error'),
default=True
),
'port_specs': properties.Schema(
properties.Schema.LIST,
_('TCP/UDP ports mapping'),
default=None
),
'privileged': properties.Schema(
properties.Schema.BOOLEAN,
_('Enable extended privileges'),
default=False
),
'tty': properties.Schema(
properties.Schema.BOOLEAN,
_('Allocate a pseudo-tty'),
default=False
),
'open_stdin': properties.Schema(
properties.Schema.BOOLEAN,
_('Open stdin'),
default=False
),
'stdin_once': properties.Schema(
properties.Schema.BOOLEAN,
_('If true, close stdin after the first attached client disconnects'),
default=False
),
'env': properties.Schema(
properties.Schema.LIST,
_('Set environment variables'),
default=None
),
'cmd': properties.Schema(
properties.Schema.LIST,
_('Command to run after spawning the container'),
default=[]
),
'dns': properties.Schema(
properties.Schema.LIST,
_('Set custom dns servers'),
default=None
),
'image': properties.Schema(
properties.Schema.STRING,
_('Image name')
),
'volumes': properties.Schema(
properties.Schema.MAP,
_('Create a bind mount'),
default={}
),
'volumes_from': properties.Schema(
properties.Schema.STRING,
_('Mount all specified volumes'),
default=''
),
}
attributes_schema = {
'info': _('Container info'),
'network_info': _('Container network info'),
'network_ip': _('Container ip address'),
'network_gateway': _('Container ip gateway'),
'network_tcp_ports': _('Container TCP ports'),
'network_udp_ports': _('Container UDP ports'),
'logs': _('Container logs'),
'logs_head': _('Container first logs line'),
'logs_tail': _('Container last logs line')
}
def get_client(self):
client = None
if DOCKER_INSTALLED:
endpoint = self.properties.get('docker_endpoint')
if endpoint:
client = docker.Client(endpoint)
else:
client = docker.Client()
return client
def _parse_networkinfo_ports(self, networkinfo):
tcp = []
udp = []
for port, info in networkinfo['Ports'].iteritems():
p = port.split('/')
if not info or len(p) != 2 or 'HostPort' not in info[0]:
continue
port = info[0]['HostPort']
if p[1] == 'tcp':
tcp.append(port)
elif p[1] == 'udp':
udp.append(port)
return (','.join(tcp), ','.join(udp))
def _container_networkinfo(self, client, resource_id):
info = client.inspect_container(self.resource_id)
networkinfo = info['NetworkSettings']
ports = self._parse_networkinfo_ports(networkinfo)
networkinfo['TcpPorts'] = ports[0]
networkinfo['UdpPorts'] = ports[1]
return networkinfo
def _resolve_attribute(self, name):
if not self.resource_id:
return
if name == 'info':
client = self.get_client()
return client.inspect_container(self.resource_id)
if name == 'network_info':
client = self.get_client()
networkinfo = self._container_networkinfo(client, self.resource_id)
return networkinfo
if name == 'network_ip':
client = self.get_client()
networkinfo = self._container_networkinfo(client, self.resource_id)
return networkinfo['IPAddress']
if name == 'network_gateway':
client = self.get_client()
networkinfo = self._container_networkinfo(client, self.resource_id)
return networkinfo['Gateway']
if name == 'network_tcp_ports':
client = self.get_client()
networkinfo = self._container_networkinfo(client, self.resource_id)
return networkinfo['TcpPorts']
if name == 'network_udp_ports':
client = self.get_client()
networkinfo = self._container_networkinfo(client, self.resource_id)
return networkinfo['UdpPorts']
if name == 'logs':
client = self.get_client()
logs = client.logs(self.resource_id)
return logs
if name == 'logs_head':
client = self.get_client()
logs = client.logs(self.resource_id)
return logs.split('\n')[0]
if name == 'logs_tail':
client = self.get_client()
logs = client.logs(self.resource_id)
return logs.split('\n').pop()
def handle_create(self):
args = {
'image': self.properties['image'],
'command': self.properties['cmd'],
'hostname': self.properties['hostname'],
'user': self.properties['user'],
'stdin_open': self.properties['open_stdin'],
'tty': self.properties['tty'],
'mem_limit': self.properties['memory'],
'ports': self.properties['port_specs'],
'environment': self.properties['env'],
'dns': self.properties['dns'],
'volumes': self.properties['volumes'],
'volumes_from': self.properties['volumes_from'],
}
client = self.get_client()
result = client.create_container(**args)
container_id = result['Id']
self.resource_id_set(container_id)
kwargs = {}
if self.properties['privileged']:
kwargs['privileged'] = True
client.start(container_id, **kwargs)
return container_id
def _get_container_status(self, container_id):
client = self.get_client()
info = client.inspect_container(container_id)
return info['State']
def check_create_complete(self, container_id):
status = self._get_container_status(container_id)
return status['Running']
def handle_delete(self):
if self.resource_id is None:
return
client = self.get_client()
client.kill(self.resource_id)
return self.resource_id
def check_delete_complete(self, container_id):
status = self._get_container_status(container_id)
return (not status['Running'])
def handle_suspend(self):
if not self.resource_id:
return
client = self.get_client()
client.stop(self.resource_id)
return self.resource_id
def check_suspend_complete(self, container_id):
status = self._get_container_status(container_id)
return (not status['Running'])
def handle_resume(self):
if not self.resource_id:
return
client = self.get_client()
client.start(self.resource_id)
return self.resource_id
def check_resume_complete(self, container_id):
status = self._get_container_status(container_id)
return status['Running']
def resource_mapping():
return {
'DockerInc::Docker::Container': DockerContainer,
}
def available_resource_mapping():
if DOCKER_INSTALLED:
return resource_mapping()
else:
logger.warn(_("Docker plug-in loaded, but docker lib not installed."))
return {} | unknown | codeparrot/codeparrot-clean | ||
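The `_parse_networkinfo_ports` method above folds Docker's `NetworkSettings['Ports']` mapping into comma-separated TCP and UDP host-port strings. A minimal standalone sketch of the same parsing logic; the sample payload used below is hypothetical (real values come from `client.inspect_container()['NetworkSettings']['Ports']`):

```python
# Standalone sketch of DockerContainer._parse_networkinfo_ports.
# Entries with no host binding (info is None) or malformed port
# keys are skipped, mirroring the plugin's behavior.
def parse_networkinfo_ports(ports):
    tcp, udp = [], []
    for port, info in ports.items():
        proto = port.split('/')
        if not info or len(proto) != 2 or 'HostPort' not in info[0]:
            continue
        host_port = info[0]['HostPort']
        if proto[1] == 'tcp':
            tcp.append(host_port)
        elif proto[1] == 'udp':
            udp.append(host_port)
    return (','.join(tcp), ','.join(udp))
```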
import {browser, element, by} from 'protractor';
describe('Reactive forms', () => {
const nameEditor = element(by.css('app-name-editor'));
const profileEditor = element(by.css('app-profile-editor'));
const nameEditorButton = element(by.cssContainingText('app-root > nav > button', 'Name Editor'));
const profileEditorButton = element(
by.cssContainingText('app-root > nav > button', 'Profile Editor'),
);
beforeAll(() => browser.get(''));
describe('Name Editor', () => {
const nameInput = nameEditor.element(by.css('input'));
const updateButton = nameEditor.element(by.buttonText('Update Name'));
const nameText = 'John Smith';
beforeAll(async () => {
await nameEditorButton.click();
});
beforeEach(async () => {
await nameInput.clear();
});
it('should update the name value when the name control is updated', async () => {
await nameInput.sendKeys(nameText);
const value = await nameInput.getAttribute('value');
expect(value).toBe(nameText);
});
it('should update the name control when the Update Name button is clicked', async () => {
await nameInput.sendKeys(nameText);
const value1 = await nameInput.getAttribute('value');
expect(value1).toBe(nameText);
await updateButton.click();
const value2 = await nameInput.getAttribute('value');
expect(value2).toBe('Nancy');
});
it('should update the displayed control value when the name control updated', async () => {
await nameInput.sendKeys(nameText);
const valueElement = nameEditor.element(by.cssContainingText('p', 'Value:'));
const nameValueElement = await valueElement.getText();
const nameValue = nameValueElement.toString().replace('Value: ', '');
expect(nameValue).toBe(nameText);
});
});
describe('Profile Editor', () => {
const firstNameInput = getInput('firstName');
const streetInput = getInput('street');
const addAliasButton = element(by.buttonText('+ Add another alias'));
const updateButton = profileEditor.element(by.buttonText('Update Profile'));
const profile: Record<string, string | number> = {
firstName: 'John',
lastName: 'Smith',
street: '345 South Lane',
city: 'Northtown',
state: 'XX',
zip: 12345,
};
beforeAll(async () => {
await profileEditorButton.click();
});
beforeEach(async () => {
await browser.get('');
await profileEditorButton.click();
});
it('should be invalid by default', async () => {
expect(await profileEditor.getText()).toContain('Form Status: INVALID');
});
it('should be valid if the First Name is filled in', async () => {
await firstNameInput.clear();
await firstNameInput.sendKeys('John Smith');
expect(await profileEditor.getText()).toContain('Form Status: VALID');
});
it('should update the name when the button is clicked', async () => {
await firstNameInput.clear();
await streetInput.clear();
await firstNameInput.sendKeys('John');
await streetInput.sendKeys('345 Smith Lane');
const firstNameInitial = await firstNameInput.getAttribute('value');
const streetNameInitial = await streetInput.getAttribute('value');
expect(firstNameInitial).toBe('John');
expect(streetNameInitial).toBe('345 Smith Lane');
await updateButton.click();
const nameValue = await firstNameInput.getAttribute('value');
const streetValue = await streetInput.getAttribute('value');
expect(nameValue).toBe('Nancy');
expect(streetValue).toBe('123 Drew Street');
});
it('should add an alias field when the Add Alias button is clicked', async () => {
await addAliasButton.click();
const aliasInputs = profileEditor.all(by.cssContainingText('label', 'Alias'));
expect(await aliasInputs.count()).toBe(2);
});
it('should update the displayed form value when form inputs are updated', async () => {
const aliasText = 'Johnny';
await Promise.all(
Object.keys(profile).map((key) => getInput(key).sendKeys(`${profile[key]}`)),
);
const aliasInput = profileEditor.all(by.css('#alias-0'));
await aliasInput.sendKeys(aliasText);
const formValueElement = profileEditor.all(by.cssContainingText('p', 'Form Value:'));
const formValue = await formValueElement.getText();
const formJson = JSON.parse(formValue.toString().replace('Form Value:', ''));
expect(profile['firstName']).toBe(formJson.firstName);
expect(profile['lastName']).toBe(formJson.lastName);
expect(formJson.aliases[0]).toBe(aliasText);
});
});
function getInput(key: string) {
    return element(by.css(`input[formcontrolname=${key}]`));
}
}); | typescript | github | https://github.com/angular/angular | adev/src/content/examples/reactive-forms/e2e/src/app.e2e-spec.ts |
# -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2011 Numérigraphe SARL.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import fields, osv
from openerp.tools.translate import _
class res_partner_bank(osv.osv):
"""Add fields and behavior for French RIB"""
_inherit = "res.partner.bank"
def _check_key(self, cr, uid, ids):
"""Check the RIB key"""
for bank_acc in self.browse(cr, uid, ids):
# Ignore the accounts of type other than rib
if bank_acc.state != 'rib':
continue
# Fail if the needed values are empty of too short
if (not bank_acc.bank_code
or len(bank_acc.bank_code) != 5
or not bank_acc.office or len(bank_acc.office) != 5
or not bank_acc.rib_acc_number or len(bank_acc.rib_acc_number) != 11
or not bank_acc.key or len(bank_acc.key) != 2):
return False
# Get the rib data (without the key)
rib = "%s%s%s" % (bank_acc.bank_code, bank_acc.office, bank_acc.rib_acc_number)
# Translate letters into numbers according to a specific table
# (notice how s -> 2)
table = dict((ord(a), b) for a, b in zip(
u'abcdefghijklmnopqrstuvwxyz', u'12345678912345678923456789'))
rib = rib.lower().translate(table)
# compute the key
key = 97 - (100 * int(rib)) % 97
if int(bank_acc.key) != key:
raise osv.except_osv(_('Error!'),
_("The RIB key %s does not correspond to the other codes: %s %s %s.") % \
(bank_acc.key, bank_acc.bank_code, bank_acc.office, bank_acc.rib_acc_number) )
if bank_acc.acc_number:
if not self.is_iban_valid(cr, uid, bank_acc.acc_number):
raise osv.except_osv(_('Error!'), _("The IBAN %s is not valid.") % bank_acc.acc_number)
return True
def onchange_bank_id(self, cr, uid, ids, bank_id, context=None):
"""Change the bank code"""
result = super(res_partner_bank, self).onchange_bank_id(cr, uid, ids, bank_id,
context=context)
if bank_id:
value = result.setdefault('value', {})
bank = self.pool.get('res.bank').browse(cr, uid, bank_id,
context=context)
value['bank_code'] = bank.rib_code
return result
_columns = {
'acc_number': fields.char('Account Number', size=64, required=False),
'rib_acc_number': fields.char('RIB account number', size=11, readonly=True,),
'bank_code': fields.char('Bank Code', size=64, readonly=True,),
'office': fields.char('Office Code', size=5, readonly=True,),
'key': fields.char('Key', size=2, readonly=True,
help="The key is a number allowing to check the "
"correctness of the other codes."),
}
_constraints = [(_check_key, 'The RIB and/or IBAN is not valid', ['rib_acc_number', 'bank_code', 'office', 'key'])]
class res_bank(osv.osv):
"""Add the bank code to make it easier to enter RIB data"""
_inherit = 'res.bank'
def name_search(self, cr, user, name, args=None, operator='ilike',
context=None, limit=80):
"""Search by bank code in addition to the standard search"""
# Get the standard results
results = super(res_bank, self).name_search(cr, user,
name, args=args ,operator=operator, context=context, limit=limit)
# Get additional results using the RIB code
ids = self.search(cr, user, [('rib_code', operator, name)],
limit=limit, context=context)
# Merge the results
results = list(set(results + self.name_get(cr, user, ids, context)))
return results
_columns = {
'rib_code': fields.char('RIB Bank Code'),
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4: | unknown | codeparrot/codeparrot-clean | ||
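The `_check_key` constraint above recomputes the two-digit RIB key from the bank, office, and account codes: letters are first mapped to digits by a fixed table, then `key = 97 - (100 * rib) % 97`. A minimal standalone sketch of that computation; the sample codes in the test are made up, so the test only verifies the standard invariant that the full number `rib * 100 + key` is divisible by 97:

```python
# Sketch of the RIB key computation from res_partner_bank._check_key.
# Letters map to digits per the French RIB table (note s -> 2).
LETTER_TABLE = dict((ord(a), b) for a, b in zip(
    u'abcdefghijklmnopqrstuvwxyz', u'12345678912345678923456789'))

def rib_key(bank_code, office, account):
    rib = u'%s%s%s' % (bank_code, office, account)
    rib = rib.lower().translate(LETTER_TABLE)
    return 97 - (100 * int(rib)) % 97
```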
#ifndef SRC_NODE_WEBSTORAGE_H_
#define SRC_NODE_WEBSTORAGE_H_
#if defined(NODE_WANT_INTERNALS) && NODE_WANT_INTERNALS
#include "base_object.h"
#include "node_mem.h"
#include "sqlite3.h"
#include "util.h"
namespace node {
namespace webstorage {
struct conn_deleter {
void operator()(sqlite3* conn) const noexcept {
CHECK_EQ(sqlite3_close(conn), SQLITE_OK);
}
};
using conn_unique_ptr = std::unique_ptr<sqlite3, conn_deleter>;
struct stmt_deleter {
void operator()(sqlite3_stmt* stmt) const noexcept { sqlite3_finalize(stmt); }
};
using stmt_unique_ptr = std::unique_ptr<sqlite3_stmt, stmt_deleter>;
static constexpr std::string_view kInMemoryPath = ":memory:";
class Storage : public BaseObject {
public:
Storage(Environment* env,
v8::Local<v8::Object> object,
std::string_view location);
void MemoryInfo(MemoryTracker* tracker) const override;
static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
v8::Maybe<void> Clear();
v8::MaybeLocal<v8::Array> Enumerate();
v8::MaybeLocal<v8::Value> Length();
v8::MaybeLocal<v8::Value> Load(v8::Local<v8::Name> key);
v8::MaybeLocal<v8::Value> LoadKey(const int index);
v8::Maybe<void> Remove(v8::Local<v8::Name> key);
v8::Maybe<void> Store(v8::Local<v8::Name> key, v8::Local<v8::Value> value);
SET_MEMORY_INFO_NAME(Storage)
SET_SELF_SIZE(Storage)
private:
v8::Maybe<void> Open();
~Storage() override;
std::string location_;
conn_unique_ptr db_;
v8::Global<v8::Map> symbols_;
};
} // namespace webstorage
} // namespace node
#endif // defined(NODE_WANT_INTERNALS) && NODE_WANT_INTERNALS
#endif // SRC_NODE_WEBSTORAGE_H_ | c | github | https://github.com/nodejs/node | src/node_webstorage.h |
import os
import sys
from fnmatch import fnmatch
import logging
import time
from datetime import datetime
import requests.sessions
from requests.adapters import HTTPAdapter
from requests.exceptions import HTTPError
from requests import Response
from clint.textui import progress
import six
import six.moves.urllib as urllib
from . import __version__, session, iarequest, utils
log = logging.getLogger(__name__)
# Item class
# ________________________________________________________________________________________
class Item(object):
"""This class represents an archive.org item. You can use this
class to access item metadata::
>>> import internetarchive
>>> item = internetarchive.Item('stairs')
>>> print(item.metadata)
Or to modify the metadata for an item::
>>> metadata = dict(title='The Stairs')
    >>> item.modify_metadata(metadata)
>>> print(item.metadata['title'])
u'The Stairs'
This class also uses IA's S3-like interface to upload files to an
item. You need to supply your IAS3 credentials in environment
variables in order to upload::
>>> item.upload('myfile.tar', access_key='Y6oUrAcCEs4sK8ey',
... secret_key='youRSECRETKEYzZzZ')
True
You can retrieve S3 keys here: `https://archive.org/account/s3.php
<https://archive.org/account/s3.php>`__
"""
# init()
# ____________________________________________________________________________________
def __init__(self, identifier, metadata_timeout=None, config=None, max_retries=1,
archive_session=None):
"""
:type identifier: str
:param identifier: The globally unique Archive.org identifier
for a given item.
:type metadata_timeout: int
:param metadata_timeout: (optional) Set a timeout for retrieving
an item's metadata.
        :type config: dict
        :param config: (optional) Configuration options for session.
:type max_retries: int
:param max_retries: (optional) Maximum number of times to request
a website if the connection drops. (default: 1)
:type archive_session: :class:`ArchiveSession <ArchiveSession>`
:param archive_session: An :class:`ArchiveSession <ArchiveSession>`
object can be provided via the `archive_session`
parameter.
"""
self.session = archive_session if archive_session else session.get_session(config)
self.protocol = 'https:' if self.session.secure else 'http:'
self.http_session = requests.sessions.Session()
max_retries_adapter = HTTPAdapter(max_retries=max_retries)
self.http_session.mount('{0}//'.format(self.protocol), max_retries_adapter)
self.http_session.cookies = self.session.cookies
self.identifier = identifier
# Default empty attributes.
self.metadata = {}
self.files = []
self.created = None
self.d1 = None
self.d2 = None
self.dir = None
self.files_count = None
self.item_size = None
self.reviews = []
self.server = None
self.uniq = None
self.updated = None
self.tasks = None
self._json = self.get_metadata(metadata_timeout)
self.exists = False if self._json == {} else True
# __repr__()
# ____________________________________________________________________________________
def __repr__(self):
return ('Item(identifier={identifier!r}, '
'exists={exists!r})'.format(**self.__dict__))
# get_metadata()
# ____________________________________________________________________________________
def get_metadata(self, metadata_timeout=None):
"""Get an item's metadata from the `Metadata API
<http://blog.archive.org/2013/07/04/metadata-api/>`__
        :type metadata_timeout: int
        :param metadata_timeout: (optional) Timeout in seconds for the
            metadata request.
        :rtype: dict
        :returns: Metadata API response.
"""
url = '{protocol}//archive.org/metadata/{identifier}'.format(**self.__dict__)
try:
resp = self.http_session.get(url, timeout=metadata_timeout)
resp.raise_for_status()
except HTTPError as e:
error_msg = 'Error retrieving metadata from {0}, {1}'.format(resp.url, e)
log.error(error_msg)
if e.response.status_code == 503:
time.sleep(2.0)
raise HTTPError(error_msg)
metadata = resp.json()
for key in metadata:
setattr(self, key, metadata[key])
return metadata
# iter_files()
# ____________________________________________________________________________________
def iter_files(self):
"""Generator for iterating over files in an item.
:rtype: generator
:returns: A generator that yields :class:`internetarchive.File
<File>` objects.
"""
for file_dict in self.files:
file = File(self, file_dict.get('name'))
yield file
# file()
# ____________________________________________________________________________________
def get_file(self, file_name):
"""Get a :class:`File <File>` object for the named file.
:rtype: :class:`internetarchive.File <File>`
:returns: An :class:`internetarchive.File <File>` object.
"""
for f in self.iter_files():
if f.name == file_name:
return f
# get_files()
# ____________________________________________________________________________________
def get_files(self, files=None, source=None, formats=None, glob_pattern=None):
files = [] if not files else files
source = [] if not source else source
if not isinstance(files, (list, tuple, set)):
files = [files]
if not isinstance(source, (list, tuple, set)):
source = [source]
if not isinstance(formats, (list, tuple, set)):
formats = [formats]
file_objects = []
for f in self.iter_files():
if f.name in files:
file_objects.append(f)
elif f.source in source:
file_objects.append(f)
elif f.format in formats:
file_objects.append(f)
elif glob_pattern:
# Support for | operator.
patterns = glob_pattern.split('|')
if not isinstance(patterns, list):
patterns = [patterns]
for p in patterns:
if fnmatch(f.name, p):
file_objects.append(f)
return file_objects
# download()
# ____________________________________________________________________________________
def download(self, concurrent=None, source=None, formats=None, glob_pattern=None,
dry_run=None, verbose=None, ignore_existing=None, checksum=None,
destdir=None, no_directory=None):
"""Download the entire item into the current working directory.
:type concurrent: bool
:param concurrent: Download files concurrently if ``True``.
:type source: str
:param source: Only download files matching given source.
:type formats: str
:param formats: Only download files matching the given Formats.
:type glob_pattern: str
:param glob_pattern: Only download files matching the given glob
pattern
:type ignore_existing: bool
:param ignore_existing: Overwrite local files if they already
exist.
:type checksum: bool
:param checksum: Skip downloading file based on checksum.
:type no_directory: bool
:param no_directory: Download files to current working
directory rather than creating an item
directory.
:rtype: bool
:returns: True if if files have been downloaded successfully.
"""
concurrent = False if concurrent is None else concurrent
dry_run = False if dry_run is None else dry_run
verbose = False if verbose is None else verbose
ignore_existing = False if ignore_existing is None else ignore_existing
checksum = False if checksum is None else checksum
no_directory = False if no_directory is None else no_directory
if verbose:
sys.stdout.write('{0}:\n'.format(self.identifier))
if self._json.get('is_dark') is True:
sys.stdout.write(' skipping: item is dark.\n')
log.warning('Not downloading item {0}, '
'item is dark'.format(self.identifier))
elif self.metadata == {}:
sys.stdout.write(' skipping: item does not exist.\n')
log.warning('Not downloading item {0}, '
'item does not exist.'.format(self.identifier))
if concurrent:
try:
from gevent import monkey
monkey.patch_socket()
from gevent.pool import Pool
pool = Pool()
except ImportError:
raise ImportError(
"""No module named gevent
                    Downloading files concurrently requires the gevent networking library.
                    gevent and all of its dependencies can be installed with pip:
\tpip install cython git+git://github.com/surfly/gevent.git@1.0rc2#egg=gevent
""")
files = self.iter_files()
if source:
files = self.get_files(source=source)
if formats:
files = self.get_files(formats=formats)
if glob_pattern:
files = self.get_files(glob_pattern=glob_pattern)
if not files and verbose:
sys.stdout.write(' no matching files found, nothing downloaded.\n')
for f in files:
fname = f.name.encode('utf-8')
if no_directory:
path = fname
else:
path = os.path.join(self.identifier, fname)
if dry_run:
sys.stdout.write(f.url + '\n')
continue
if concurrent:
pool.spawn(f.download, path, verbose, ignore_existing, checksum, destdir)
else:
f.download(path, verbose, ignore_existing, checksum, destdir)
if concurrent:
pool.join()
return True
# modify_metadata()
# ____________________________________________________________________________________
def modify_metadata(self, metadata, target=None, append=False, priority=None,
access_key=None, secret_key=None, debug=False):
"""Modify the metadata of an existing item on Archive.org.
Note: The Metadata Write API does not yet comply with the
latest Json-Patch standard. It currently complies with `version 02
<https://tools.ietf.org/html/draft-ietf-appsawg-json-patch-02>`__.
:type metadata: dict
:param metadata: Metadata used to update the item.
:type target: str
:param target: (optional) Set the metadata target to update.
:type priority: int
:param priority: (optional) Set task priority.
Usage::
>>> import internetarchive
>>> item = internetarchive.Item('mapi_test_item1')
>>> md = dict(new_key='new_value', foo=['bar', 'bar2'])
>>> item.modify_metadata(md)
:rtype: dict
:returns: A dictionary containing the status_code and response
returned from the Metadata API.
"""
access_key = self.session.access_key if not access_key else access_key
secret_key = self.session.secret_key if not secret_key else secret_key
target = 'metadata' if target is None else target
url = '{protocol}//archive.org/metadata/{identifier}'.format(**self.__dict__)
request = iarequest.MetadataRequest(
url=url,
metadata=metadata,
source_metadata=self._json.get(target.split('/')[0], {}),
target=target,
priority=priority,
access_key=access_key,
secret_key=secret_key,
append=append,
)
if debug:
return request
prepared_request = request.prepare()
resp = self.http_session.send(prepared_request)
self._json = self.get_metadata()
return resp
# s3_is_overloaded()
# ____________________________________________________________________________________
def s3_is_overloaded(self, access_key=None):
u = 'http://s3.us.archive.org'
p = dict(
check_limit=1,
accesskey=access_key,
bucket=self.identifier,
)
r = self.http_session.get(u, params=p)
j = r.json()
if j.get('over_limit') == 0:
return False
else:
return True
# upload_file()
# ____________________________________________________________________________________
def upload_file(self, body, key=None, metadata=None, headers=None,
access_key=None, secret_key=None, queue_derive=True,
ignore_preexisting_bucket=False, verbose=False, verify=True,
checksum=False, delete=False, retries=None, retries_sleep=None,
debug=False, **kwargs):
"""Upload a single file to an item. The item will be created
if it does not exist.
:type body: Filepath or file-like object.
:param body: File or data to be uploaded.
:type key: str
:param key: (optional) Remote filename.
:type metadata: dict
:param metadata: (optional) Metadata used to create a new item.
:type headers: dict
:param headers: (optional) Add additional IA-S3 headers to request.
:type queue_derive: bool
:param queue_derive: (optional) Set to False to prevent an item from
being derived after upload.
:type ignore_preexisting_bucket: bool
:param ignore_preexisting_bucket: (optional) Destroy and respecify the
metadata for an item
:type verify: bool
:param verify: (optional) Verify local MD5 checksum matches the MD5
checksum of the file received by IAS3.
:type checksum: bool
:param checksum: (optional) Skip based on checksum.
:type delete: bool
:param delete: (optional) Delete local file after the upload has been
successfully verified.
:type retries: int
:param retries: (optional) Number of times to retry the given request
if S3 returns a 503 SlowDown error.
:type retries_sleep: int
:param retries_sleep: (optional) Amount of time to sleep between
``retries``.
:type verbose: bool
:param verbose: (optional) Print progress to stdout.
:type debug: bool
:param debug: (optional) Set to True to print headers to stdout, and
exit without sending the upload request.
Usage::
>>> import internetarchive
>>> item = internetarchive.Item('identifier')
>>> item.upload_file('/path/to/image.jpg',
... key='photos/image1.jpg')
True
"""
# Defaults for empty params.
headers = {} if headers is None else headers
metadata = {} if metadata is None else metadata
access_key = self.session.access_key if access_key is None else access_key
secret_key = self.session.secret_key if secret_key is None else secret_key
retries = 0 if retries is None else retries
retries_sleep = 30 if retries_sleep is None else retries_sleep
if not hasattr(body, 'read'):
body = open(body, 'rb')
if not metadata.get('scanner'):
scanner = 'Internet Archive Python library {0}'.format(__version__)
metadata['scanner'] = scanner
try:
body.seek(0, os.SEEK_END)
size = body.tell()
body.seek(0, os.SEEK_SET)
except IOError:
size = None
if not headers.get('x-archive-size-hint'):
headers['x-archive-size-hint'] = size
key = body.name.split('/')[-1] if key is None else key
base_url = '{protocol}//s3.us.archive.org/{identifier}'.format(**self.__dict__)
url = '{base_url}/{key}'.format(base_url=base_url, key=urllib.parse.quote(key))
# Skip based on checksum.
md5_sum = utils.get_md5(body)
ia_file = self.get_file(key)
if (checksum) and (not self.tasks) and (ia_file) and (ia_file.md5 == md5_sum):
log.info('{f} already exists: {u}'.format(f=key, u=url))
if verbose:
sys.stdout.write(' {f} already exists, skipping.\n'.format(f=key))
if delete:
log.info(
'{f} successfully uploaded to https://archive.org/download/{i}/{f} '
'and verified, deleting '
'local copy'.format(i=self.identifier, f=key)
)
os.remove(body.name)
# Return an empty response object if checksums match.
# TODO: Is there a better way to handle this?
return Response()
# require the Content-MD5 header when delete is True.
if verify or delete:
headers['Content-MD5'] = md5_sum
# Delete retries and sleep_retries from kwargs.
if 'retries' in kwargs:
del kwargs['retries']
if 'retries_sleep' in kwargs:
del kwargs['retries_sleep']
def _build_request():
body.seek(0, os.SEEK_SET)
if verbose:
try:
chunk_size = 1048576
                    expected_size = size // chunk_size + 1
chunks = utils.chunk_generator(body, chunk_size)
progress_generator = progress.bar(chunks, expected_size=expected_size,
label=' uploading {f}: '.format(f=key))
data = utils.IterableToFileAdapter(progress_generator, size)
                except Exception:
sys.stdout.write(' uploading {f}: '.format(f=key))
data = body
else:
data = body
request = iarequest.S3Request(
method='PUT',
url=url,
headers=headers,
data=data,
metadata=metadata,
access_key=access_key,
secret_key=secret_key,
queue_derive=queue_derive,
**kwargs
)
return request
if debug:
return _build_request()
else:
try:
error_msg = ('s3 is overloaded, sleeping for '
'{0} seconds and retrying. '
'{1} retries left.'.format(retries_sleep, retries))
while True:
if retries > 0:
if self.s3_is_overloaded(access_key):
time.sleep(retries_sleep)
log.info(error_msg)
if verbose:
sys.stderr.write(' warning: {0}\n'.format(error_msg))
retries -= 1
continue
request = _build_request()
prepared_request = request.prepare()
response = self.http_session.send(prepared_request, stream=True)
if (response.status_code == 503) and (retries > 0):
log.info(error_msg)
if verbose:
sys.stderr.write(' warning: {0}\n'.format(error_msg))
time.sleep(retries_sleep)
retries -= 1
continue
else:
if response.status_code == 503:
log.info('maximum retries exceeded, upload failed.')
break
response.raise_for_status()
log.info('uploaded {f} to {u}'.format(f=key, u=url))
if delete and response.status_code == 200:
log.info(
'{f} successfully uploaded to '
'https://archive.org/download/{i}/{f} and verified, deleting '
'local copy'.format(i=self.identifier, f=key)
)
os.remove(body.name)
return response
except HTTPError as exc:
error_msg = (' error uploading {0} to {1}, '
'{2}'.format(key, self.identifier, exc))
log.error(error_msg)
if verbose:
sys.stderr.write(error_msg + '\n')
# Raise HTTPError with error message.
raise type(exc)(error_msg)
# upload()
# ____________________________________________________________________________________
def upload(self, files, **kwargs):
"""Upload files to an item. The item will be created if it
does not exist.
:type files: list
:param files: The filepaths or file-like objects to upload.
:type kwargs: dict
:param kwargs: The keyword arguments from the call to
upload_file().
Usage::
>>> import internetarchive
>>> item = internetarchive.Item('identifier')
>>> md = dict(mediatype='image', creator='Jake Johnson')
>>> item.upload('/path/to/image.jpg', metadata=md, queue_derive=False)
True
:rtype: bool
:returns: True if the request was successful and all files were
uploaded, False otherwise.
"""
def iter_directory(directory):
            for path, dirs, files in os.walk(directory):
for f in files:
filepath = os.path.join(path, f)
key = os.path.relpath(filepath, directory)
yield (filepath, key)
if isinstance(files, dict):
files = files.items()
if not isinstance(files, (list, tuple)):
files = [files]
queue_derive = kwargs.get('queue_derive', True)
responses = []
file_index = 0
for f in files:
file_index += 1
if isinstance(f, six.string_types) and os.path.isdir(f):
fdir_index = 0
for filepath, key in iter_directory(f):
# Set derive header if queue_derive is True,
# and this is the last request being made.
fdir_index += 1
if queue_derive is True and file_index >= len(files) \
and fdir_index >= len(os.listdir(f)):
kwargs['queue_derive'] = True
else:
kwargs['queue_derive'] = False
if not f.endswith('/'):
key = '{0}/{1}'.format(f, key)
resp = self.upload_file(filepath, key=key, **kwargs)
responses.append(resp)
else:
# Set derive header if queue_derive is True,
# and this is the last request being made.
if queue_derive is True and file_index >= len(files):
kwargs['queue_derive'] = True
else:
kwargs['queue_derive'] = False
if not isinstance(f, (list, tuple)):
key, body = (None, f)
else:
key, body = f
if key and not isinstance(key, six.string_types):
raise ValueError('Key must be a string.')
resp = self.upload_file(body, key=key, **kwargs)
responses.append(resp)
return responses
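`Item.upload` above relies on a small `iter_directory` helper: walk a local directory and yield `(filepath, key)` pairs, where `key` is the path relative to the directory root (the remote name the file would receive). A standalone sketch of that helper:

```python
# Standalone sketch of the iter_directory helper defined inside
# Item.upload: recursively yield (local path, relative key) pairs.
import os

def iter_directory(directory):
    for path, dirs, files in os.walk(directory):
        for f in files:
            filepath = os.path.join(path, f)
            key = os.path.relpath(filepath, directory)
            yield (filepath, key)
```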
# File class
# ________________________________________________________________________________________
class File(object):
"""This class represents a file in an archive.org item. You
can use this class to access the file metadata::
>>> import internetarchive
>>> item = internetarchive.Item('stairs')
>>> file = internetarchive.File(item, 'stairs.avi')
    >>> print(file.format, file.size)
(u'Cinepack', u'3786730')
Or to download a file::
>>> file.download()
>>> file.download('fabulous_movie_of_stairs.avi')
This class also uses IA's S3-like interface to delete a file
from an item. You need to supply your IAS3 credentials in
environment variables in order to delete::
>>> file.delete(access_key='Y6oUrAcCEs4sK8ey',
... secret_key='youRSECRETKEYzZzZ')
You can retrieve S3 keys here: `https://archive.org/account/s3.php
<https://archive.org/account/s3.php>`__
"""
# init()
# ____________________________________________________________________________________
def __init__(self, item, name):
"""
:type item: Item
:param item: The item that the file is part of.
:type name: str
:param name: The filename of the file.
"""
_file = {}
for f in item.files:
if f.get('name') == name:
_file = f
break
self._item = item
self.identifier = item.identifier
self.name = None
self.size = None
self.source = None
self.format = None
self.md5 = None
self.mtime = None
for key in _file:
setattr(self, key, _file[key])
self.mtime = float(self.mtime) if self.mtime else 0
self.size = int(self.size) if self.size else 0
base_url = '{protocol}//archive.org/download/{identifier}'.format(**item.__dict__)
self.url = '{base_url}/{name}'.format(base_url=base_url,
name=urllib.parse.quote(name.encode('utf-8')))
# __repr__()
# ____________________________________________________________________________________
def __repr__(self):
return ('File(identifier={identifier!r}, '
'filename={name!r}, '
'size={size!r}, '
'source={source!r}, '
'format={format!r})'.format(**self.__dict__))
# download()
# ____________________________________________________________________________________
def download(self, file_path=None, verbose=None, ignore_existing=None, checksum=None,
destdir=None):
"""Download the file into the current working directory.
:type file_path: str
:param file_path: Download file to the given file_path.
:type ignore_existing: bool
:param ignore_existing: Overwrite local files if they already
exist.
:type checksum: bool
:param checksum: Skip downloading file based on checksum.
:type verbose: bool
:param verbose: Print actions to stdout.
:type destdir: str
:param destdir: Download the file into the given destination directory.
"""
verbose = False if verbose is None else verbose
ignore_existing = False if ignore_existing is None else ignore_existing
checksum = False if checksum is None else checksum
file_path = self.name if not file_path else file_path
if destdir:
if not os.path.exists(destdir):
os.mkdir(destdir)
if os.path.isfile(destdir):
raise IOError('{} is not a directory!'.format(destdir))
file_path = os.path.join(destdir, file_path)
# Skip based on mtime and length if no other clobber/skip options specified.
if os.path.exists(file_path) and ignore_existing is False and checksum is False:
st = os.stat(file_path)
if (st.st_mtime == self.mtime) and (st.st_size == self.size) \
or self.name.endswith('_files.xml') and st.st_size != 0:
if verbose:
print(' skipping {0}: already exists.'.format(file_path))
log.info('not downloading file {0}, '
'file already exists.'.format(file_path))
return
if os.path.exists(file_path):
if ignore_existing is False and checksum is False:
raise IOError('file already downloaded: {0}'.format(file_path))
if checksum:
md5_sum = utils.get_md5(open(file_path, 'rb'))
if md5_sum == self.md5:
log.info('not downloading file {0}, '
'file already exists based on checksum.'.format(file_path))
if verbose:
sys.stdout.write(' skipping {0}: already exists based on checksum.\n'.format(file_path))
return
if verbose:
sys.stdout.write(' downloading: {0}\n'.format(file_path))
parent_dir = os.path.dirname(file_path)
if parent_dir != '' and not os.path.exists(parent_dir):
os.makedirs(parent_dir)
try:
response = self._item.http_session.get(self.url, stream=True)
response.raise_for_status()
except HTTPError as e:
raise HTTPError('error downloading {0}, {1}'.format(self.url, e))
with open(file_path, 'wb') as f:
for chunk in response.iter_content(chunk_size=1024):
if chunk:
f.write(chunk)
f.flush()
# Set mtime with mtime from files.xml.
os.utime(file_path, (0, self.mtime))
log.info('downloaded {0}/{1} to {2}'.format(self.identifier,
self.name.encode('utf-8'),
file_path))
# delete()
# ____________________________________________________________________________________
def delete(self, debug=False, verbose=False, cascade_delete=False, access_key=None,
secret_key=None):
"""Delete a file from the Archive. Note: Some files -- such as
<itemname>_meta.xml -- cannot be deleted.
:type debug: bool
:param debug: Set to True to print headers to stdout and exit
without sending the delete request.
:type verbose: bool
:param verbose: Print actions to stdout.
:type cascade_delete: bool
:param cascade_delete: Also deletes files derived from the file,
and files the file was derived from.
"""
url = 'http://s3.us.archive.org/{0}/{1}'.format(self.identifier,
self.name.encode('utf-8'))
access_key = self._item.session.access_key if not access_key else access_key
secret_key = self._item.session.secret_key if not secret_key else secret_key
request = iarequest.S3Request(
method='DELETE',
url=url,
headers={'x-archive-cascade-delete': int(cascade_delete)},
access_key=access_key,
secret_key=secret_key
)
if debug:
return request
else:
if verbose:
msg = ' deleting: {0}'.format(self.name.encode('utf-8'))
if cascade_delete:
msg += ' and all derivative files.\n'
else:
msg += '\n'
sys.stdout.write(msg)
prepared_request = request.prepare()
return self._item.http_session.send(prepared_request) | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/env python3
# -*- python-indent-offset: 2 -*-
# Copyright (c) 2014, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in
# the LICENSE file in the root directory of this source tree. An
# additional grant of patent rights can be found in the PATENTS file
# in the same directory.
# This program reads all the ar-archives (.a files, in practice) on
# the command line and prints the SHA256 digest (base64-encoded for
# brevity) of all the contents of all the archives, considered in the
# order they're given in the archive files and on the command
# line, respectively.
#
# We do _not_ include archive metadata, like UIDs, modtimes, and so on
# in the digest, so we can use the hash we compute for deterministic
# build identification. (We do include the file names
# themselves, however.)
#
# Why not "ar D"? Because we don't always use GNU ar.
#
import sys
import hashlib
import logging
from os.path import basename
from argparse import ArgumentParser
import arpy
from base64 import b64encode
log = logging.getLogger(basename(sys.argv[0]))
def main(argv):
p = ArgumentParser(
prog=basename(argv[0]),
description="Hash the contents of archives")
p.add_argument("--debug", action="store_true",
help="Enable debugging output")
p.add_argument("archives", metavar="ARCHIVES", nargs="*")
args = p.parse_args(argv[1:])
root_logger = logging.getLogger()
logging.basicConfig()
if args.debug:
root_logger.setLevel(logging.DEBUG)
else:
root_logger.setLevel(logging.INFO)
hash = hashlib.sha256()
for archive_filename in args.archives:
with open(archive_filename, "rb") as archive_file:
archive = arpy.Archive(fileobj=archive_file)
log.debug("opened archive %r", archive_filename)
for arfile in archive:
hash.update(arfile.header.name)
nbytes = 0
filehash = hashlib.sha256()
while True:
buf = arfile.read(32768)
if not buf:
break
hash.update(buf)
filehash.update(buf)
nbytes += len(buf)
log.debug("hashed %s/%s %r %s bytes",
archive_filename,
arfile.header.name.decode("utf-8"),
filehash.hexdigest(),
nbytes)
# 128 bits of entropy is enough for anyone
digest = hash.digest()[:16]
log.debug("digest %r", digest)
print(b64encode(digest, b"@_").decode("ascii").rstrip("="))
if __name__ == "__main__":
sys.exit(main(sys.argv)) | unknown | codeparrot/codeparrot-clean | ||
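The digest scheme implemented above — one SHA-256 fed every member name and its raw contents, truncated to 128 bits and base64-encoded with the `@_` alphabet — can be sketched without `arpy` on in-memory members (the member data below is invented for illustration):

```python
import hashlib
from base64 import b64encode

def digest_members(members):
    """Hash (name, content) pairs the way hash-archives does:
    update one SHA-256 with each member name and its raw bytes,
    truncate to 128 bits, base64-encode with the @_ alphabet."""
    h = hashlib.sha256()
    for name, content in members:
        h.update(name)      # member names are part of the digest
        h.update(content)   # metadata (uids, mtimes) is deliberately excluded
    digest = h.digest()[:16]  # 128 bits of entropy is enough
    return b64encode(digest, b"@_").decode("ascii").rstrip("=")

# Hypothetical members standing in for archive contents:
print(digest_members([(b"a.o", b"\x7fELF..."), (b"b.o", b"data")]))
```

Because archive metadata is excluded, the digest stays stable across rebuilds that only touch timestamps or ownership.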
/*
* Copyright 2012-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.boot.gradle.junit;
import java.lang.reflect.Field;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.springframework.boot.testsupport.gradle.testkit.GradleBuild;
import org.springframework.util.Assert;
import org.springframework.util.ReflectionUtils;
/**
* {@link BeforeEachCallback} to set a test class's {@code gradleBuild} field prior to
* test execution.
*
* @author Andy Wilkinson
*/
final class GradleBuildFieldSetter implements BeforeEachCallback {
private final GradleBuild gradleBuild;
GradleBuildFieldSetter(GradleBuild gradleBuild) {
this.gradleBuild = gradleBuild;
}
@Override
public void beforeEach(ExtensionContext context) throws Exception {
Field field = ReflectionUtils.findField(context.getRequiredTestClass(), "gradleBuild");
Assert.notNull(field, "Field named gradleBuild not found in " + context.getRequiredTestClass().getName());
field.setAccessible(true);
field.set(context.getRequiredTestInstance(), this.gradleBuild);
}
} | java | github | https://github.com/spring-projects/spring-boot | build-plugin/spring-boot-gradle-plugin/src/test/java/org/springframework/boot/gradle/junit/GradleBuildFieldSetter.java |
#
# Python Imaging Library
# $Id: GimpPaletteFile.py 2134 2004-10-06 08:55:20Z fredrik $
#
# stuff to read GIMP palette files
#
# History:
# 1997-08-23 fl Created
# 2004-09-07 fl Support GIMP 2.0 palette files.
#
# Copyright (c) Secret Labs AB 1997-2004. All rights reserved.
# Copyright (c) Fredrik Lundh 1997-2004.
#
# See the README file for information on usage and redistribution.
#
import re, string
##
# File handler for GIMP's palette format.
class GimpPaletteFile:
rawmode = "RGB"
def __init__(self, fp):
self.palette = map(lambda i: chr(i)*3, range(256))
if fp.readline()[:12] != "GIMP Palette":
raise SyntaxError, "not a GIMP palette file"
i = 0
while i <= 255:
s = fp.readline()
if not s:
break
# skip fields and comment lines
if re.match("\w+:|#", s):
continue
if len(s) > 100:
raise SyntaxError, "bad palette file"
v = tuple(map(int, string.split(s)[:3]))
if len(v) != 3:
raise ValueError, "bad palette entry"
if 0 <= i <= 255:
self.palette[i] = chr(v[0]) + chr(v[1]) + chr(v[2])
i = i + 1
self.palette = string.join(self.palette, "")
def getpalette(self):
return self.palette, self.rawmode | unknown | codeparrot/codeparrot-clean | ||
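The palette format parsed above is plain text: a `GIMP Palette` header line, optional `Name:`/`Columns:` fields and `#` comments, then lines of whitespace-separated R G B values. A minimal standalone Python 3 reader of the same format (independent of the class above, for illustration):

```python
import io
import re

def parse_gimp_palette(text):
    """Return a list of (r, g, b) tuples from GIMP palette text."""
    lines = io.StringIO(text)
    if not lines.readline().startswith("GIMP Palette"):
        raise SyntaxError("not a GIMP palette file")
    colors = []
    for line in lines:
        # Skip metadata fields ("Name:", "Columns:"), comments, blanks.
        if re.match(r"\w+:|#", line) or not line.strip():
            continue
        parts = line.split()[:3]  # trailing color names are ignored
        colors.append(tuple(int(p) for p in parts))
    return colors

sample = "GIMP Palette\nName: demo\n# comment\n255 0 0 red\n0 255 0 green\n"
print(parse_gimp_palette(sample))
```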
/*
* Copyright 2012-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.boot.context.config;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import org.jspecify.annotations.Nullable;
import org.springframework.boot.origin.Origin;
import org.springframework.core.io.Resource;
import org.springframework.util.Assert;
/**
* {@link ConfigDataNotFoundException} thrown when a {@link ConfigDataResource} cannot be
* found.
*
* @author Phillip Webb
* @since 2.4.0
*/
public class ConfigDataResourceNotFoundException extends ConfigDataNotFoundException {
private final ConfigDataResource resource;
private final @Nullable ConfigDataLocation location;
/**
* Create a new {@link ConfigDataResourceNotFoundException} instance.
* @param resource the resource that could not be found
*/
public ConfigDataResourceNotFoundException(ConfigDataResource resource) {
this(resource, null);
}
/**
* Create a new {@link ConfigDataResourceNotFoundException} instance.
* @param resource the resource that could not be found
* @param cause the exception cause
*/
public ConfigDataResourceNotFoundException(ConfigDataResource resource, @Nullable Throwable cause) {
this(resource, null, cause);
}
private ConfigDataResourceNotFoundException(ConfigDataResource resource, @Nullable ConfigDataLocation location,
@Nullable Throwable cause) {
super(getMessage(resource, location), cause);
Assert.notNull(resource, "'resource' must not be null");
this.resource = resource;
this.location = location;
}
/**
* Return the resource that could not be found.
* @return the resource
*/
public ConfigDataResource getResource() {
return this.resource;
}
/**
* Return the original location that was resolved to determine the resource.
* @return the location or {@code null} if no location is available
*/
public @Nullable ConfigDataLocation getLocation() {
return this.location;
}
@Override
public @Nullable Origin getOrigin() {
return Origin.from(this.location);
}
@Override
public String getReferenceDescription() {
return getReferenceDescription(this.resource, this.location);
}
/**
* Create a new {@link ConfigDataResourceNotFoundException} instance with a location.
* @param location the location to set
* @return a new {@link ConfigDataResourceNotFoundException} instance
*/
ConfigDataResourceNotFoundException withLocation(ConfigDataLocation location) {
return new ConfigDataResourceNotFoundException(this.resource, location, getCause());
}
private static String getMessage(ConfigDataResource resource, @Nullable ConfigDataLocation location) {
return String.format("Config data %s cannot be found", getReferenceDescription(resource, location));
}
private static String getReferenceDescription(ConfigDataResource resource, @Nullable ConfigDataLocation location) {
String description = String.format("resource '%s'", resource);
if (location != null) {
description += String.format(" via location '%s'", location);
}
return description;
}
/**
* Throw a {@link ConfigDataNotFoundException} if the specified {@link Path} does not
* exist.
* @param resource the config data resource
* @param pathToCheck the path to check
*/
public static void throwIfDoesNotExist(ConfigDataResource resource, Path pathToCheck) {
throwIfNot(resource, Files.exists(pathToCheck));
}
/**
* Throw a {@link ConfigDataNotFoundException} if the specified {@link File} does not
* exist.
* @param resource the config data resource
* @param fileToCheck the file to check
*/
public static void throwIfDoesNotExist(ConfigDataResource resource, File fileToCheck) {
throwIfNot(resource, fileToCheck.exists());
}
/**
* Throw a {@link ConfigDataNotFoundException} if the specified {@link Resource} does
* not exist.
* @param resource the config data resource
* @param resourceToCheck the resource to check
*/
public static void throwIfDoesNotExist(ConfigDataResource resource, Resource resourceToCheck) {
throwIfNot(resource, resourceToCheck.exists());
}
private static void throwIfNot(ConfigDataResource resource, boolean check) {
if (!check) {
throw new ConfigDataResourceNotFoundException(resource);
}
}
} | java | github | https://github.com/spring-projects/spring-boot | core/spring-boot/src/main/java/org/springframework/boot/context/config/ConfigDataResourceNotFoundException.java |
# coding=utf-8
from django.core.management.base import BaseCommand
import os, re, sys
class Command(BaseCommand):
help = 'This is for rebooting the web part of the RapidSMS system (which is linked to Cherokee).'
def handle(self, **args):
moi = os.getenv('USER', os.getlogin())
permis = 'root'
redemarrer = None
if moi != permis:
sys.stderr.write(u'Dude, you need to be "%s" (not "%s").' % (permis, moi))
exit(1)
cmd = "ps aux | grep rapidsms | grep runfcgi | grep -v grep"
with os.popen(cmd) as pipe:
for ligne in pipe:
bywho, pid, cpu, mem, vsz, rss, tt, stat, sttm, uptm, cmd = re.split(r'\s+', ligne, 10)
os.kill(int(pid), 9)
redemarrer = cmd.strip()
npid = os.fork()
if npid:
sys.stdout.write(u'Continuing out of sight, executing "%s"; we will end up under PID %d.\n' % (redemarrer, npid))
exit(0)
exit(os.system(redemarrer)) | unknown | codeparrot/codeparrot-clean | ||
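The `re.split(r'\s+', ligne, 10)` call above depends on the 11-column `ps aux` layout: capping the split at 10 keeps the trailing command field — which may itself contain spaces — in one piece. For example (the sample line is invented):

```python
import re

# Invented `ps aux` output line; the command field contains spaces.
ps_line = ("root 1234 0.0 0.1 100 200 ? S 10:00 0:01 "
           "python manage.py runfcgi host=127.0.0.1")
fields = re.split(r"\s+", ps_line, maxsplit=10)
user, pid, cmd = fields[0], fields[1], fields[10]
print(user, pid, cmd)
```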
# coding=utf-8
# Copyright 2018 The Microsoft Research Asia LayoutLM Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Tokenization class for model LayoutLM."""
from ...utils import logging
from ..bert.tokenization_bert_fast import BertTokenizerFast
from .tokenization_layoutlm import LayoutLMTokenizer
logger = logging.get_logger(__name__)
VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt", "tokenizer_file": "tokenizer.json"}
PRETRAINED_VOCAB_FILES_MAP = {
"vocab_file": {
"microsoft/layoutlm-base-uncased": "https://huggingface.co/microsoft/layoutlm-base-uncased/resolve/main/vocab.txt",
"microsoft/layoutlm-large-uncased": "https://huggingface.co/microsoft/layoutlm-large-uncased/resolve/main/vocab.txt",
},
"tokenizer_file": {
"microsoft/layoutlm-base-uncased": "https://huggingface.co/microsoft/layoutlm-base-uncased/resolve/main/tokenizer.json",
"microsoft/layoutlm-large-uncased": "https://huggingface.co/microsoft/layoutlm-large-uncased/resolve/main/tokenizer.json",
},
}
PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
"microsoft/layoutlm-base-uncased": 512,
"microsoft/layoutlm-large-uncased": 512,
}
PRETRAINED_INIT_CONFIGURATION = {
"microsoft/layoutlm-base-uncased": {"do_lower_case": True},
"microsoft/layoutlm-large-uncased": {"do_lower_case": True},
}
class LayoutLMTokenizerFast(BertTokenizerFast):
r"""
Constructs a "Fast" LayoutLMTokenizer.
:class:`~transformers.LayoutLMTokenizerFast` is identical to :class:`~transformers.BertTokenizerFast` and runs
end-to-end tokenization: punctuation splitting + wordpiece.
Refer to superclass :class:`~transformers.BertTokenizerFast` for usage examples and documentation concerning
parameters.
"""
vocab_files_names = VOCAB_FILES_NAMES
pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
pretrained_init_configuration = PRETRAINED_INIT_CONFIGURATION
slow_tokenizer_class = LayoutLMTokenizer | unknown | codeparrot/codeparrot-clean | ||
/**
* @license
* Copyright Google LLC All Rights Reserved.
*
* Use of this source code is governed by an MIT-style license that can be
* found in the LICENSE file at https://angular.dev/license
*/
/*
* Public API Surface of shared-utils
*/
export * from './lib/shared-utils';
export * from './lib/angular-check'; | typescript | github | https://github.com/angular/angular | devtools/projects/shared-utils/src/public-api.ts |
use std::{
future::Future,
pin::Pin,
task::{Context, Poll},
};
use actix_http::{error::PayloadError, Payload};
use bytes::{Bytes, BytesMut};
use futures_core::{ready, Stream};
use pin_project_lite::pin_project;
pin_project! {
pub(crate) struct ReadBody<S> {
#[pin]
pub(crate) stream: Payload<S>,
pub(crate) buf: BytesMut,
pub(crate) limit: usize,
}
}
impl<S> ReadBody<S> {
pub(crate) fn new(stream: Payload<S>, limit: usize) -> Self {
Self {
stream,
buf: BytesMut::new(),
limit,
}
}
}
impl<S> Future for ReadBody<S>
where
S: Stream<Item = Result<Bytes, PayloadError>>,
{
type Output = Result<Bytes, PayloadError>;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let mut this = self.project();
while let Some(chunk) = ready!(this.stream.as_mut().poll_next(cx)?) {
if (this.buf.len() + chunk.len()) > *this.limit {
return Poll::Ready(Err(PayloadError::Overflow));
}
this.buf.extend_from_slice(&chunk);
}
Poll::Ready(Ok(this.buf.split().freeze()))
}
}
#[cfg(test)]
mod tests {
use static_assertions::assert_impl_all;
use super::*;
use crate::any_body::AnyBody;
assert_impl_all!(ReadBody<()>: Unpin);
assert_impl_all!(ReadBody<AnyBody>: Unpin);
} | rust | github | https://github.com/actix/actix-web | awc/src/responses/read_body.rs |
# This file is NOT licensed under the GPLv3, which is the license for the rest
# of YouCompleteMe.
#
# Here's the license text for this file:
#
# This is free and unencumbered software released into the public domain.
#
# Anyone is free to copy, modify, publish, use, compile, sell, or
# distribute this software, either in source code form or as a compiled
# binary, for any purpose, commercial or non-commercial, and by any
# means.
#
# In jurisdictions that recognize copyright laws, the author or authors
# of this software dedicate any and all copyright interest in the
# software to the public domain. We make this dedication for the benefit
# of the public at large and to the detriment of our heirs and
# successors. We intend this dedication to be an overt act of
# relinquishment in perpetuity of all present and future rights to this
# software under copyright law.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
#
# For more information, please refer to <http://unlicense.org/>
import os
import ycm_core
# These are the compilation flags that will be used in case there's no
# compilation database set (by default, one is not set).
# CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR.
flags = [
'-Wall',
'-Wextra',
'-Werror',
'-fexceptions',
'-std=c++11',
# ...and the same thing goes for the magic -x option which specifies the
# language that the files to be compiled are written in. This is mostly
# relevant for c++ headers.
# For a C project, you would set this to 'c' instead of 'c++'.
'-x',
'c++',
'-isystem',
# This path will only work on OS X, but extra paths that don't exist are not
# harmful
'../llvm/tools/clang/include',
'-I',
'.',
]
# Set this to the absolute path to the folder (NOT the file!) containing the
# compile_commands.json file to use that instead of 'flags'. See here for
# more details: http://clang.llvm.org/docs/JSONCompilationDatabase.html
#
# Most projects will NOT need to set this to anything; you can just change the
# 'flags' list of compilation flags. Notice that YCM itself uses that approach.
compilation_database_folder = ''
if compilation_database_folder:
database = ycm_core.CompilationDatabase( compilation_database_folder )
else:
database = None
def DirectoryOfThisScript():
return os.path.dirname( os.path.abspath( __file__ ) )
def MakeRelativePathsInFlagsAbsolute( flags, working_directory ):
if not working_directory:
return list( flags )
new_flags = []
make_next_absolute = False
path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ]
for flag in flags:
new_flag = flag
if make_next_absolute:
make_next_absolute = False
if not flag.startswith( '/' ):
new_flag = os.path.join( working_directory, flag )
for path_flag in path_flags:
if flag == path_flag:
make_next_absolute = True
break
if flag.startswith( path_flag ):
path = flag[ len( path_flag ): ]
new_flag = path_flag + os.path.join( working_directory, path )
break
if new_flag:
new_flags.append( new_flag )
return new_flags
def FlagsForFile( filename, **kwargs ):
if database:
# Bear in mind that compilation_info.compiler_flags_ does NOT return a
# python list, but a "list-like" StringVec object
compilation_info = database.GetCompilationInfoForFile( filename )
final_flags = MakeRelativePathsInFlagsAbsolute(
compilation_info.compiler_flags_,
compilation_info.compiler_working_dir_ )
else:
relative_to = DirectoryOfThisScript()
final_flags = MakeRelativePathsInFlagsAbsolute( flags, relative_to )
return {
'flags': final_flags,
'do_cache': True
} | unknown | codeparrot/codeparrot-clean | ||
#import <ATen/native/metal/MetalConvParams.h>
#import <ATen/native/metal/MetalPrepackOpContext.h>
#include <c10/util/ArrayRef.h>
namespace at::native::metal {
Tensor conv2d(
const Tensor& input,
const Tensor& weight,
const std::optional<at::Tensor>& bias,
IntArrayRef stride,
IntArrayRef padding,
IntArrayRef dilation,
int64_t groups);
namespace prepack {
Tensor conv2d(const Tensor& input, Conv2dOpContext& context);
}
} // namespace at::native::metal | c | github | https://github.com/pytorch/pytorch | aten/src/ATen/native/metal/ops/MetalConvolution.h |
# Copyright 2021 DeepMind Technologies Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Builds CSR matrices which store the MAG graphs."""
import pathlib
from absl import app
from absl import flags
from absl import logging
import numpy as np
import scipy.sparse
# pylint: disable=g-bad-import-order
import data_utils
Path = pathlib.Path
FLAGS = flags.FLAGS
_DATA_FILES_AND_PARAMETERS = {
'author_affiliated_with_institution_edges.npy': {
'content_names': ('author', 'institution'),
'use_boolean': False
},
'author_writes_paper_edges.npy': {
'content_names': ('author', 'paper'),
'use_boolean': False
},
'paper_cites_paper_edges.npy': {
'content_names': ('paper', 'paper'),
'use_boolean': True
},
}
flags.DEFINE_string('data_root', None, 'Data root directory')
flags.DEFINE_boolean('skip_existing', True, 'Skips existing CSR files')
flags.mark_flags_as_required(['data_root'])
def _read_edge_data(path):
try:
return np.load(path, mmap_mode='r')
except FileNotFoundError:
# If the file path can't be found by np.load, use the file handle w/o mmap.
with path.open('rb') as fid:
return np.load(fid)
def _build_coo(edges_data, use_boolean=False):
if use_boolean:
mat_coo = scipy.sparse.coo_matrix(
(np.ones_like(edges_data[1, :],
dtype=bool), (edges_data[0, :], edges_data[1, :])))
else:
mat_coo = scipy.sparse.coo_matrix(
(edges_data[1, :], (edges_data[0, :], edges_data[1, :])))
return mat_coo
def _get_output_paths(directory, content_names, use_boolean):
boolean_str = '_b' if use_boolean else ''
transpose_str = '_t' if len(set(content_names)) == 1 else ''
output_prefix = '_'.join(content_names)
output_prefix_t = '_'.join(content_names[::-1])
output_filename = f'{output_prefix}{boolean_str}.npz'
output_filename_t = f'{output_prefix_t}{boolean_str}{transpose_str}.npz'
output_path = directory / output_filename
output_path_t = directory / output_filename_t
return output_path, output_path_t
def _write_csr(path, csr):
path.parent.mkdir(parents=True, exist_ok=True)
with path.open('wb') as fid:
scipy.sparse.save_npz(fid, csr)
def main(argv):
if len(argv) > 1:
raise app.UsageError('Too many command-line arguments.')
raw_data_dir = Path(FLAGS.data_root) / data_utils.RAW_DIR
preprocessed_dir = Path(FLAGS.data_root) / data_utils.PREPROCESSED_DIR
for input_filename, parameters in _DATA_FILES_AND_PARAMETERS.items():
input_path = raw_data_dir / input_filename
output_path, output_path_t = _get_output_paths(preprocessed_dir,
**parameters)
if FLAGS.skip_existing and output_path.exists() and output_path_t.exists():
# If both files exist, skip. When only one exists, that's handled below.
logging.info(
'%s and %s exist: skipping. Use flag `--skip_existing=False`'
'to force overwrite existing.', output_path, output_path_t)
continue
logging.info('Reading edge data from: %s', input_path)
edge_data = _read_edge_data(input_path)
logging.info('Building CSR matrices')
mat_coo = _build_coo(edge_data, use_boolean=parameters['use_boolean'])
# Convert matrices to CSR and write to disk.
if not FLAGS.skip_existing or not output_path.exists():
logging.info('Writing CSR matrix to: %s', output_path)
mat_csr = mat_coo.tocsr()
_write_csr(output_path, mat_csr)
del mat_csr # Free up memory asap.
else:
logging.info(
'%s exists: skipping. Use flag `--skip_existing=False`'
'to force overwrite existing.', output_path)
if not FLAGS.skip_existing or not output_path_t.exists():
logging.info('Writing (transposed) CSR matrix to: %s', output_path_t)
mat_csr_t = mat_coo.transpose().tocsr()
_write_csr(output_path_t, mat_csr_t)
del mat_csr_t # Free up memory asap.
else:
logging.info(
'%s exists: skipping. Use flag `--skip_existing=False`'
'to force overwrite existing.', output_path_t)
del mat_coo # Free up memory asap.
if __name__ == '__main__':
app.run(main) | unknown | codeparrot/codeparrot-clean | ||
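The `tocsr()` conversion used above turns COO `(row, col)` pairs into the CSR `indptr`/`indices` arrays that get written to disk. The transformation can be illustrated in pure Python on a toy boolean edge list (data invented; real code should keep using scipy):

```python
def coo_to_csr(rows, cols, n_rows):
    """Convert COO (row, col) pairs of a boolean matrix to CSR
    (indptr, indices), mirroring what scipy's tocsr() produces."""
    counts = [0] * n_rows
    for r in rows:
        counts[r] += 1
    indptr = [0]               # indptr[i]:indptr[i+1] spans row i
    for c in counts:
        indptr.append(indptr[-1] + c)
    indices = [0] * len(rows)
    next_slot = list(indptr[:-1])  # next free slot per row
    for r, c in sorted(zip(rows, cols)):
        indices[next_slot[r]] = c
        next_slot[r] += 1
    return indptr, indices

# Edges 0->1, 0->2, 2->0 in a 3-node citation graph (toy data):
print(coo_to_csr([0, 0, 2], [1, 2, 0], 3))
```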
#
# This file is part of pysnmp software.
#
# Copyright (c) 2005-2016, Ilya Etingof <ilya@glas.net>
# License: http://pysnmp.sf.net/license.html
#
from pysnmp.smi.rfc1902 import *
from pysnmp.entity.rfc3413 import ntforg
from pysnmp.hlapi.auth import *
from pysnmp.hlapi.context import *
from pysnmp.hlapi.lcd import *
from pysnmp.hlapi.varbinds import *
from pysnmp.hlapi.asyncore.transport import *
__all__ = ['sendNotification']
vbProcessor = NotificationOriginatorVarBinds()
lcd = NotificationOriginatorLcdConfigurator()
def sendNotification(snmpEngine, authData, transportTarget, contextData,
notifyType, varBinds, cbFun=None, cbCtx=None,
lookupMib=False):
"""Send SNMP notification.
Based on passed parameters, prepares SNMP TRAP or INFORM
notification (:RFC:`1905#section-4.2.6`) and schedules its
transmission by I/O framework at a later point of time.
Parameters
----------
snmpEngine : :py:class:`~pysnmp.hlapi.SnmpEngine`
Class instance representing SNMP engine.
authData : :py:class:`~pysnmp.hlapi.CommunityData` or :py:class:`~pysnmp.hlapi.UsmUserData`
Class instance representing SNMP credentials.
transportTarget : :py:class:`~pysnmp.hlapi.asyncore.UdpTransportTarget` or :py:class:`~pysnmp.hlapi.asyncore.Udp6TransportTarget`
Class instance representing transport type along with SNMP peer
address.
contextData : :py:class:`~pysnmp.hlapi.ContextData`
Class instance representing SNMP ContextEngineId and ContextName
values.
notifyType : str
Indicates type of notification to be sent. Recognized literal
values are *trap* or *inform*.
varBinds: tuple
Single :py:class:`~pysnmp.smi.rfc1902.NotificationType` class
instance representing a minimum sequence of MIB variables
required for particular notification type. Alternatively,
a sequence of :py:class:`~pysnmp.smi.rfc1902.ObjectType`
objects could be passed instead. In the latter case it is up to
the user to ensure proper Notification PDU contents.
Other Parameters
----------------
cbFun : callable
user-supplied callable that is invoked to pass SNMP response
to *INFORM* notification or error to user at a later point of
time. The `cbFun` callable is never invoked for *TRAP* notifications.
cbCtx : object
user-supplied object passing additional parameters to/from
`cbFun`.
lookupMib : bool
`lookupMib` - load MIB and resolve response MIB variables at
the cost of slightly reduced performance. Default is `True`.
Notes
-----
User-supplied `cbFun` callable must have the following call
signature:
* snmpEngine (:py:class:`~pysnmp.hlapi.SnmpEngine`):
Class instance representing SNMP engine.
* sendRequestHandle (int): Unique request identifier. Can be used
for matching multiple ongoing *INFORM* notifications with received
responses.
* errorIndication (str): True value indicates SNMP engine error.
* errorStatus (str): True value indicates SNMP PDU error.
* errorIndex (int): Non-zero value refers to `varBinds[errorIndex-1]`
* varBinds (tuple): A sequence of
:py:class:`~pysnmp.smi.rfc1902.ObjectType` class instances
representing MIB variables returned in SNMP response in exactly
the same order as `varBinds` in request.
* `cbCtx` : Original user-supplied object.
Returns
-------
sendRequestHandle : int
Unique request identifier. Can be used for matching received
responses with ongoing *INFORM* requests. Returns `None` for
*TRAP* notifications.
Raises
------
PySnmpError
Or its derivative indicating that an error occurred while
performing SNMP operation.
Examples
--------
>>> from pysnmp.hlapi.asyncore import *
>>>
>>> snmpEngine = SnmpEngine()
>>> sendNotification(
... snmpEngine,
... CommunityData('public'),
... UdpTransportTarget(('demo.snmplabs.com', 162)),
... ContextData(),
... 'trap',
... NotificationType(ObjectIdentity('SNMPv2-MIB', 'coldStart')),
... )
>>> snmpEngine.transportDispatcher.runDispatcher()
>>>
"""
def __cbFun(snmpEngine, sendRequestHandle, errorIndication,
errorStatus, errorIndex, varBinds, cbCtx):
lookupMib, cbFun, cbCtx = cbCtx
return cbFun and cbFun(
snmpEngine, sendRequestHandle, errorIndication,
errorStatus, errorIndex,
vbProcessor.unmakeVarBinds(
snmpEngine, varBinds, lookupMib
), cbCtx
)
notifyName = lcd.configure(snmpEngine, authData, transportTarget,
notifyType)
return ntforg.NotificationOriginator().sendVarBinds(
snmpEngine, notifyName,
contextData.contextEngineId, contextData.contextName,
vbProcessor.makeVarBinds(snmpEngine, varBinds),
__cbFun, (lookupMib, cbFun, cbCtx)
    )
| unknown | codeparrot/codeparrot-clean | | |
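The `cbFun` call signature documented above can be sketched as a minimal Python callable; the body here (printing each variable binding) is purely illustrative and not part of pysnmp:

```python
def cb_fun(snmpEngine, sendRequestHandle, errorIndication,
           errorStatus, errorIndex, varBinds, cbCtx):
    # Invoked only for *INFORM* notifications, never for *TRAP*.
    if errorIndication:           # engine-level error (e.g. a timeout)
        return False
    if errorStatus:               # PDU error; errorIndex points into varBinds
        return False
    for name, value in varBinds:  # response vars, same order as the request
        print('%s = %s' % (name, value))
    return True
```

A real application would typically match `sendRequestHandle` against the value returned by `sendNotification` when several *INFORM* requests are in flight.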
# Copyright 2012, Jeroen Hoekx <jeroen@hoekx.be>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.plugins.action import ActionBase
class ActionModule(ActionBase):
''' Create inventory groups based on variables '''
# We need to be able to modify the inventory
TRANSFERS_FILES = False
def run(self, tmp=None, task_vars=None):
if task_vars is None:
task_vars = dict()
result = super(ActionModule, self).run(tmp, task_vars)
if 'key' not in self._task.args:
result['failed'] = True
result['msg'] = "the 'key' param is required when using group_by"
return result
group_name = self._task.args.get('key')
group_name = group_name.replace(' ', '-')
result['changed'] = False
result['add_group'] = group_name
        return result
| unknown | codeparrot/codeparrot-clean | | |
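The argument handling above — fail when `key` is missing, otherwise turn spaces into hyphens — can be exercised standalone; `group_by_result` is a hypothetical helper written for illustration, not part of Ansible:

```python
def group_by_result(args):
    # Mirror ActionModule.run(): 'key' is mandatory, and the resulting
    # inventory group name must not contain spaces.
    if 'key' not in args:
        return {'failed': True,
                'msg': "the 'key' param is required when using group_by"}
    group_name = args['key'].replace(' ', '-')
    return {'changed': False, 'add_group': group_name}
```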
///////////////////////////////////////////////////////////////////////////
//
// Copyright (c) 2012, Weta Digital Ltd
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Weta Digital nor the names of
// its contributors may be used to endorse or promote products derived
// from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
///////////////////////////////////////////////////////////////////////////
#ifndef INCLUDED_IMF_COMPOSITEDEEPSCANLINE_H
#define INCLUDED_IMF_COMPOSITEDEEPSCANLINE_H
//-----------------------------------------------------------------------------
//
// Class to composite deep samples into a frame buffer
// Initialise with a deep input part or deep inputfile
// (also supports multiple files and parts, and will
// composite them together, as long as their sizes and channelmaps agree)
//
// Then call setFrameBuffer, and readPixels, exactly as for reading
// regular scanline images.
//
// Restrictions - source file(s) must contain at least Z and alpha channels
// - if multiple files/parts are provided, sizes must match
// - all requested channels will be composited as premultiplied
// - only half and float channels can be requested
//
// This object should not be considered threadsafe
//
// The default compositing engine will give spurious results with overlapping
// volumetric samples - you may derive from DeepCompositing class, override the
// sort_pixel() and composite_pixel() functions, and pass an instance to
// setCompositing().
//
//-----------------------------------------------------------------------------
#include "ImfForward.h"
#include "ImfNamespace.h"
#include "ImfExport.h"
#include <ImathBox.h>
OPENEXR_IMF_INTERNAL_NAMESPACE_HEADER_ENTER
class CompositeDeepScanLine
{
public:
IMF_EXPORT
CompositeDeepScanLine();
IMF_EXPORT
virtual ~CompositeDeepScanLine();
/// set the source data as a part
///@note all parts must remain valid until after last interaction with DeepComp
IMF_EXPORT
void addSource(DeepScanLineInputPart * part);
/// set the source data as a file
    ///@note the file must remain valid until after the last interaction with DeepComp
IMF_EXPORT
void addSource(DeepScanLineInputFile * file);
/////////////////////////////////////////
//
// set the frame buffer for output values
// the buffers specified must be large enough
// to handle the dataWindow()
//
/////////////////////////////////////////
IMF_EXPORT
void setFrameBuffer(const FrameBuffer & fr);
/////////////////////////////////////////
//
// retrieve frameBuffer
//
////////////////////////////////////////
IMF_EXPORT
const FrameBuffer & frameBuffer() const;
//////////////////////////////////////////////////
//
// read scanlines start to end from the source(s)
// storing the result in the frame buffer provided
//
//////////////////////////////////////////////////
IMF_EXPORT
void readPixels(int start,int end);
IMF_EXPORT
int sources() const; // return number of sources
/////////////////////////////////////////////////
//
// retrieve the datawindow
// If multiple parts are specified, this will
// be the union of the dataWindow of all parts
//
////////////////////////////////////////////////
IMF_EXPORT
const IMATH_NAMESPACE::Box2i & dataWindow() const;
//
// override default sorting/compositing operation
// (otherwise an instance of the base class will be used)
//
IMF_EXPORT
void setCompositing(DeepCompositing *);
struct Data;
private :
struct Data *_Data;
CompositeDeepScanLine(const CompositeDeepScanLine &); // not implemented
const CompositeDeepScanLine & operator=(const CompositeDeepScanLine &); // not implemented
};
OPENEXR_IMF_INTERNAL_NAMESPACE_HEADER_EXIT
#endif
| c | github | https://github.com/opencv/opencv | 3rdparty/openexr/IlmImf/ImfCompositeDeepScanLine.h |
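The default behaviour the header describes — sort each pixel's deep samples front to back, then accumulate them with the premultiplied "over" operator — can be modelled in a few lines of Python; this is an illustrative sketch of the algorithm, not the OpenEXR implementation:

```python
def composite_pixel(samples):
    # samples: iterable of (z, alpha, premultiplied_value) tuples, unsorted.
    out_value = 0.0
    out_alpha = 0.0
    for z, alpha, value in sorted(samples):  # nearest sample first
        remaining = 1.0 - out_alpha          # transparency left so far
        out_value += value * remaining
        out_alpha += alpha * remaining
    return out_value, out_alpha
```

As the header warns, a simple model like this gives spurious results for overlapping volumetric samples, which is why the sorting and compositing steps are overridable via `setCompositing()`.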
import unittest
from test import test_support
def funcattrs(**kwds):
def decorate(func):
func.__dict__.update(kwds)
return func
return decorate
class MiscDecorators (object):
@staticmethod
def author(name):
def decorate(func):
func.__dict__['author'] = name
return func
return decorate
# -----------------------------------------------
class DbcheckError (Exception):
def __init__(self, exprstr, func, args, kwds):
# A real version of this would set attributes here
Exception.__init__(self, "dbcheck %r failed (func=%s args=%s kwds=%s)" %
(exprstr, func, args, kwds))
def dbcheck(exprstr, globals=None, locals=None):
"Decorator to implement debugging assertions"
def decorate(func):
expr = compile(exprstr, "dbcheck-%s" % func.func_name, "eval")
def check(*args, **kwds):
if not eval(expr, globals, locals):
raise DbcheckError(exprstr, func, args, kwds)
return func(*args, **kwds)
return check
return decorate
# -----------------------------------------------
def countcalls(counts):
"Decorator to count calls to a function"
def decorate(func):
func_name = func.func_name
counts[func_name] = 0
def call(*args, **kwds):
counts[func_name] += 1
return func(*args, **kwds)
call.func_name = func_name
return call
return decorate
# -----------------------------------------------
def memoize(func):
saved = {}
def call(*args):
try:
return saved[args]
except KeyError:
res = func(*args)
saved[args] = res
return res
except TypeError:
# Unhashable argument
return func(*args)
call.func_name = func.func_name
return call
# -----------------------------------------------
class TestDecorators(unittest.TestCase):
def test_single(self):
class C(object):
@staticmethod
def foo(): return 42
self.assertEqual(C.foo(), 42)
self.assertEqual(C().foo(), 42)
def test_staticmethod_function(self):
@staticmethod
def notamethod(x):
return x
self.assertRaises(TypeError, notamethod, 1)
def test_dotted(self):
decorators = MiscDecorators()
@decorators.author('Cleese')
def foo(): return 42
self.assertEqual(foo(), 42)
self.assertEqual(foo.author, 'Cleese')
def test_argforms(self):
# A few tests of argument passing, as we use restricted form
# of expressions for decorators.
def noteargs(*args, **kwds):
def decorate(func):
setattr(func, 'dbval', (args, kwds))
return func
return decorate
args = ( 'Now', 'is', 'the', 'time' )
kwds = dict(one=1, two=2)
@noteargs(*args, **kwds)
def f1(): return 42
self.assertEqual(f1(), 42)
self.assertEqual(f1.dbval, (args, kwds))
@noteargs('terry', 'gilliam', eric='idle', john='cleese')
def f2(): return 84
self.assertEqual(f2(), 84)
self.assertEqual(f2.dbval, (('terry', 'gilliam'),
dict(eric='idle', john='cleese')))
@noteargs(1, 2,)
def f3(): pass
self.assertEqual(f3.dbval, ((1, 2), {}))
def test_dbcheck(self):
@dbcheck('args[1] is not None')
def f(a, b):
return a + b
self.assertEqual(f(1, 2), 3)
self.assertRaises(DbcheckError, f, 1, None)
def test_memoize(self):
counts = {}
@memoize
@countcalls(counts)
def double(x):
return x * 2
self.assertEqual(double.func_name, 'double')
self.assertEqual(counts, dict(double=0))
# Only the first call with a given argument bumps the call count:
#
self.assertEqual(double(2), 4)
self.assertEqual(counts['double'], 1)
self.assertEqual(double(2), 4)
self.assertEqual(counts['double'], 1)
self.assertEqual(double(3), 6)
self.assertEqual(counts['double'], 2)
# Unhashable arguments do not get memoized:
#
self.assertEqual(double([10]), [10, 10])
self.assertEqual(counts['double'], 3)
self.assertEqual(double([10]), [10, 10])
self.assertEqual(counts['double'], 4)
def test_errors(self):
# Test syntax restrictions - these are all compile-time errors:
#
for expr in [ "1+2", "x[3]", "(1, 2)" ]:
            # Sanity check: is expr a valid expression by itself?
compile(expr, "testexpr", "exec")
codestr = "@%s\ndef f(): pass" % expr
self.assertRaises(SyntaxError, compile, codestr, "test", "exec")
# You can't put multiple decorators on a single line:
#
self.assertRaises(SyntaxError, compile,
"@f1 @f2\ndef f(): pass", "test", "exec")
# Test runtime errors
def unimp(func):
raise NotImplementedError
context = dict(nullval=None, unimp=unimp)
for expr, exc in [ ("undef", NameError),
("nullval", TypeError),
("nullval.attr", AttributeError),
("unimp", NotImplementedError)]:
codestr = "@%s\ndef f(): pass\nassert f() is None" % expr
code = compile(codestr, "test", "exec")
self.assertRaises(exc, eval, code, context)
def test_double(self):
class C(object):
@funcattrs(abc=1, xyz="haha")
@funcattrs(booh=42)
def foo(self): return 42
self.assertEqual(C().foo(), 42)
self.assertEqual(C.foo.abc, 1)
self.assertEqual(C.foo.xyz, "haha")
self.assertEqual(C.foo.booh, 42)
def test_order(self):
# Test that decorators are applied in the proper order to the function
# they are decorating.
def callnum(num):
"""Decorator factory that returns a decorator that replaces the
passed-in function with one that returns the value of 'num'"""
def deco(func):
return lambda: num
return deco
@callnum(2)
@callnum(1)
def foo(): return 42
self.assertEqual(foo(), 2,
"Application order of decorators is incorrect")
def test_eval_order(self):
# Evaluating a decorated function involves four steps for each
# decorator-maker (the function that returns a decorator):
#
# 1: Evaluate the decorator-maker name
# 2: Evaluate the decorator-maker arguments (if any)
# 3: Call the decorator-maker to make a decorator
# 4: Call the decorator
#
# When there are multiple decorators, these steps should be
# performed in the above order for each decorator, but we should
# iterate through the decorators in the reverse of the order they
# appear in the source.
actions = []
def make_decorator(tag):
actions.append('makedec' + tag)
def decorate(func):
actions.append('calldec' + tag)
return func
return decorate
class NameLookupTracer (object):
def __init__(self, index):
self.index = index
def __getattr__(self, fname):
if fname == 'make_decorator':
opname, res = ('evalname', make_decorator)
elif fname == 'arg':
opname, res = ('evalargs', str(self.index))
else:
assert False, "Unknown attrname %s" % fname
actions.append('%s%d' % (opname, self.index))
return res
c1, c2, c3 = map(NameLookupTracer, [ 1, 2, 3 ])
expected_actions = [ 'evalname1', 'evalargs1', 'makedec1',
'evalname2', 'evalargs2', 'makedec2',
'evalname3', 'evalargs3', 'makedec3',
'calldec3', 'calldec2', 'calldec1' ]
actions = []
@c1.make_decorator(c1.arg)
@c2.make_decorator(c2.arg)
@c3.make_decorator(c3.arg)
def foo(): return 42
self.assertEqual(foo(), 42)
self.assertEqual(actions, expected_actions)
# Test the equivalence claim in chapter 7 of the reference manual.
#
actions = []
def bar(): return 42
bar = c1.make_decorator(c1.arg)(c2.make_decorator(c2.arg)(c3.make_decorator(c3.arg)(bar)))
self.assertEqual(bar(), 42)
self.assertEqual(actions, expected_actions)
class TestClassDecorators(unittest.TestCase):
def test_simple(self):
def plain(x):
x.extra = 'Hello'
return x
@plain
class C(object): pass
self.assertEqual(C.extra, 'Hello')
def test_double(self):
def ten(x):
x.extra = 10
return x
def add_five(x):
x.extra += 5
return x
@add_five
@ten
class C(object): pass
self.assertEqual(C.extra, 15)
def test_order(self):
def applied_first(x):
x.extra = 'first'
return x
def applied_second(x):
x.extra = 'second'
return x
@applied_second
@applied_first
class C(object): pass
self.assertEqual(C.extra, 'second')
def test_main():
test_support.run_unittest(TestDecorators)
test_support.run_unittest(TestClassDecorators)
if __name__=="__main__":
    test_main()
| unknown | codeparrot/codeparrot-clean | | |
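The ordering rule these tests verify — decorators apply bottom-up, so a decorator stack is equivalent to nested calls — fits in a short sketch:

```python
def deco(tag):
    def wrap(func):
        return lambda: tag + func()
    return wrap

@deco('outer-')   # applied second
@deco('inner-')   # applied first, closest to the function
def greet():
    return 'hi'

def plain():
    return 'hi'
# Equivalent nested form, per the language reference:
nested = deco('outer-')(deco('inner-')(plain))
```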
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Annotate Ref in Prog with C types by parsing gcc debug output.
// Conversion of debug output to Go types.
package main
import (
"bytes"
"debug/dwarf"
"debug/elf"
"debug/macho"
"debug/pe"
"encoding/binary"
"errors"
"flag"
"fmt"
"go/ast"
"go/parser"
"go/token"
"internal/xcoff"
"math"
"os"
"os/exec"
"path/filepath"
"slices"
"strconv"
"strings"
"sync/atomic"
"unicode"
"unicode/utf8"
"cmd/internal/quoted"
)
var debugDefine = flag.Bool("debug-define", false, "print relevant #defines")
var debugGcc = flag.Bool("debug-gcc", false, "print gcc invocations")
var nameToC = map[string]string{
"schar": "signed char",
"uchar": "unsigned char",
"ushort": "unsigned short",
"uint": "unsigned int",
"ulong": "unsigned long",
"longlong": "long long",
"ulonglong": "unsigned long long",
"complexfloat": "float _Complex",
"complexdouble": "double _Complex",
}
var incomplete = "_cgopackage.Incomplete"
// cname returns the C name to use for C.s.
// The expansions are listed in nameToC and also
// struct_foo becomes "struct foo", and similarly for
// union and enum.
func cname(s string) string {
if t, ok := nameToC[s]; ok {
return t
}
if t, ok := strings.CutPrefix(s, "struct_"); ok {
return "struct " + t
}
if t, ok := strings.CutPrefix(s, "union_"); ok {
return "union " + t
}
if t, ok := strings.CutPrefix(s, "enum_"); ok {
return "enum " + t
}
if t, ok := strings.CutPrefix(s, "sizeof_"); ok {
return "sizeof(" + cname(t) + ")"
}
return s
}
// ProcessCgoDirectives processes the import C preamble:
// 1. discards all #cgo CFLAGS, LDFLAGS, nocallback and noescape directives,
// so they don't make their way into _cgo_export.h.
// 2. parse the nocallback and noescape directives.
func (f *File) ProcessCgoDirectives() {
linesIn := strings.Split(f.Preamble, "\n")
linesOut := make([]string, 0, len(linesIn))
f.NoCallbacks = make(map[string]bool)
f.NoEscapes = make(map[string]bool)
for _, line := range linesIn {
l := strings.TrimSpace(line)
if len(l) < 5 || l[:4] != "#cgo" || !unicode.IsSpace(rune(l[4])) {
linesOut = append(linesOut, line)
} else {
linesOut = append(linesOut, "")
// #cgo (nocallback|noescape) <function name>
if fields := strings.Fields(l); len(fields) == 3 {
directive := fields[1]
funcName := fields[2]
if directive == "nocallback" {
f.NoCallbacks[funcName] = true
} else if directive == "noescape" {
f.NoEscapes[funcName] = true
}
}
}
}
f.Preamble = strings.Join(linesOut, "\n")
}
// addToFlag appends args to flag.
func (p *Package) addToFlag(flag string, args []string) {
if flag == "CFLAGS" {
// We'll also need these when preprocessing for dwarf information.
// However, discard any -g options: we need to be able
// to parse the debug info, so stick to what we expect.
for _, arg := range args {
if !strings.HasPrefix(arg, "-g") {
p.GccOptions = append(p.GccOptions, arg)
}
}
}
if flag == "LDFLAGS" {
p.LdFlags = append(p.LdFlags, args...)
}
}
// splitQuoted splits the string s around each instance of one or more consecutive
// white space characters while taking into account quotes and escaping, and
// returns an array of substrings of s or an empty list if s contains only white space.
// Single quotes and double quotes are recognized to prevent splitting within the
// quoted region, and are removed from the resulting substrings. If a quote in s
// isn't closed err will be set and r will have the unclosed argument as the
// last element. The backslash is used for escaping.
//
// For example, the following string:
//
// `a b:"c d" 'e''f' "g\""`
//
// Would be parsed as:
//
// []string{"a", "b:c d", "ef", `g"`}
func splitQuoted(s string) (r []string, err error) {
var args []string
arg := make([]rune, len(s))
escaped := false
quoted := false
quote := '\x00'
i := 0
for _, r := range s {
switch {
case escaped:
escaped = false
case r == '\\':
escaped = true
continue
case quote != 0:
if r == quote {
quote = 0
continue
}
case r == '"' || r == '\'':
quoted = true
quote = r
continue
case unicode.IsSpace(r):
if quoted || i > 0 {
quoted = false
args = append(args, string(arg[:i]))
i = 0
}
continue
}
arg[i] = r
i++
}
if quoted || i > 0 {
args = append(args, string(arg[:i]))
}
if quote != 0 {
err = errors.New("unclosed quote")
} else if escaped {
err = errors.New("unfinished escaping")
}
return args, err
}
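The quoting rules in the doc comment above can be checked against a standalone model; this Python sketch mirrors the documented behaviour for illustration only and is not the Go implementation:

```python
def split_quoted(s):
    # Split on whitespace, honouring single/double quotes and backslash
    # escapes, following the splitQuoted doc comment.
    args, arg = [], []
    escaped, quote, quoted = False, '', False
    for ch in s:
        if escaped:
            arg.append(ch)
            escaped = False
        elif ch == '\\':
            escaped = True
        elif quote:
            if ch == quote:
                quote = ''
            else:
                arg.append(ch)
        elif ch in '\'"':
            quote = ch
            quoted = True          # remember even an empty quoted string
        elif ch.isspace():
            if quoted or arg:
                args.append(''.join(arg))
                arg, quoted = [], False
        else:
            arg.append(ch)
    if quoted or arg:
        args.append(''.join(arg))
    if quote:
        raise ValueError('unclosed quote')
    if escaped:
        raise ValueError('unfinished escaping')
    return args
```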
// loadDebug runs gcc to load debug information for the File. The debug
// information will be saved to the debugs field of the file, and be
// processed when Translate is called on the file later.
// loadDebug is called concurrently with different files.
func (f *File) loadDebug(p *Package) {
for _, cref := range f.Ref {
// Convert C.ulong to C.unsigned long, etc.
cref.Name.C = cname(cref.Name.Go)
}
ft := fileTypedefs{typedefs: make(map[string]bool)}
numTypedefs := -1
for len(ft.typedefs) > numTypedefs {
numTypedefs = len(ft.typedefs)
// Also ask about any typedefs we've seen so far.
for _, info := range ft.typedefList {
if f.Name[info.typedef] != nil {
continue
}
n := &Name{
Go: info.typedef,
C: info.typedef,
}
f.Name[info.typedef] = n
f.NamePos[n] = info.pos
}
needType := p.guessKinds(f)
if len(needType) > 0 {
f.debugs = append(f.debugs, p.loadDWARF(f, &ft, needType))
}
// In godefs mode we're OK with the typedefs, which
// will presumably also be defined in the file, we
// don't want to resolve them to their base types.
if *godefs {
break
}
}
}
// Translate rewrites f.AST, the original Go input, to remove
// references to the imported package C, replacing them with
// references to the equivalent Go types, functions, and variables.
// Preconditions: File.loadDebug must be called prior to translate.
func (p *Package) Translate(f *File) {
var conv typeConv
conv.Init(p.PtrSize, p.IntSize)
for _, d := range f.debugs {
p.recordTypes(f, d, &conv)
}
p.prepareNames(f)
if p.rewriteCalls(f) {
// Add `import _cgo_unsafe "unsafe"` after the package statement.
f.Edit.Insert(f.offset(f.AST.Name.End()), "; import _cgo_unsafe \"unsafe\"")
}
p.rewriteRef(f)
}
// loadDefines coerces gcc into spitting out the #defines in use
// in the file f and saves relevant renamings in f.Name[name].Define.
// Returns true if env:CC is Clang
func (f *File) loadDefines(gccOptions []string) bool {
var b bytes.Buffer
b.WriteString(builtinProlog)
b.WriteString(f.Preamble)
stdout := gccDefines(b.Bytes(), gccOptions)
var gccIsClang bool
for line := range strings.SplitSeq(stdout, "\n") {
if len(line) < 9 || line[0:7] != "#define" {
continue
}
line = strings.TrimSpace(line[8:])
var key, val string
spaceIndex := strings.Index(line, " ")
tabIndex := strings.Index(line, "\t")
if spaceIndex == -1 && tabIndex == -1 {
continue
} else if tabIndex == -1 || (spaceIndex != -1 && spaceIndex < tabIndex) {
key = line[0:spaceIndex]
val = strings.TrimSpace(line[spaceIndex:])
} else {
key = line[0:tabIndex]
val = strings.TrimSpace(line[tabIndex:])
}
if key == "__clang__" {
gccIsClang = true
}
if n := f.Name[key]; n != nil {
if *debugDefine {
fmt.Fprintf(os.Stderr, "#define %s %s\n", key, val)
}
n.Define = val
}
}
return gccIsClang
}
// guessKinds tricks gcc into revealing the kind of each
// name xxx for the references C.xxx in the Go input.
// The kind is either a constant, type, or variable.
// guessKinds is called concurrently with different files.
func (p *Package) guessKinds(f *File) []*Name {
// Determine kinds for names we already know about,
// like #defines or 'struct foo', before bothering with gcc.
var names, needType []*Name
optional := map[*Name]bool{}
for _, key := range nameKeys(f.Name) {
n := f.Name[key]
// If we've already found this name as a #define
// and we can translate it as a constant value, do so.
if n.Define != "" {
if i, err := strconv.ParseInt(n.Define, 0, 64); err == nil {
n.Kind = "iconst"
// Turn decimal into hex, just for consistency
// with enum-derived constants. Otherwise
// in the cgo -godefs output half the constants
// are in hex and half are in whatever the #define used.
n.Const = fmt.Sprintf("%#x", i)
} else if n.Define[0] == '\'' {
if _, err := parser.ParseExpr(n.Define); err == nil {
n.Kind = "iconst"
n.Const = n.Define
}
} else if n.Define[0] == '"' {
if _, err := parser.ParseExpr(n.Define); err == nil {
n.Kind = "sconst"
n.Const = n.Define
}
}
if n.IsConst() {
continue
}
}
// If this is a struct, union, or enum type name, no need to guess the kind.
if strings.HasPrefix(n.C, "struct ") || strings.HasPrefix(n.C, "union ") || strings.HasPrefix(n.C, "enum ") {
n.Kind = "type"
needType = append(needType, n)
continue
}
if (goos == "darwin" || goos == "ios") && strings.HasSuffix(n.C, "Ref") {
// For FooRef, find out if FooGetTypeID exists.
s := n.C[:len(n.C)-3] + "GetTypeID"
n := &Name{Go: s, C: s}
names = append(names, n)
optional[n] = true
}
// Otherwise, we'll need to find out from gcc.
names = append(names, n)
}
// Bypass gcc if there's nothing left to find out.
if len(names) == 0 {
return needType
}
// Coerce gcc into telling us whether each name is a type, a value, or undeclared.
// For names, find out whether they are integer constants.
// We used to look at specific warning or error messages here, but that tied the
// behavior too closely to specific versions of the compilers.
// Instead, arrange that we can infer what we need from only the presence or absence
// of an error on a specific line.
//
// For each name, we generate these lines, where xxx is the index in toSniff plus one.
//
// #line xxx "not-declared"
// void __cgo_f_xxx_1(void) { __typeof__(name) *__cgo_undefined__1; }
// #line xxx "not-type"
// void __cgo_f_xxx_2(void) { name *__cgo_undefined__2; }
// #line xxx "not-int-const"
// void __cgo_f_xxx_3(void) { enum { __cgo_undefined__3 = (name)*1 }; }
// #line xxx "not-num-const"
// void __cgo_f_xxx_4(void) { static const double __cgo_undefined__4 = (name); }
// #line xxx "not-str-lit"
// void __cgo_f_xxx_5(void) { static const char __cgo_undefined__5[] = (name); }
//
// If we see an error at not-declared:xxx, the corresponding name is not declared.
// If we see an error at not-type:xxx, the corresponding name is not a type.
// If we see an error at not-int-const:xxx, the corresponding name is not an integer constant.
// If we see an error at not-num-const:xxx, the corresponding name is not a number constant.
// If we see an error at not-str-lit:xxx, the corresponding name is not a string literal.
//
// The specific input forms are chosen so that they are valid C syntax regardless of
// whether name denotes a type or an expression.
var b bytes.Buffer
b.WriteString(builtinProlog)
b.WriteString(f.Preamble)
for i, n := range names {
fmt.Fprintf(&b, "#line %d \"not-declared\"\n"+
"void __cgo_f_%d_1(void) { __typeof__(%s) *__cgo_undefined__1; }\n"+
"#line %d \"not-type\"\n"+
"void __cgo_f_%d_2(void) { %s *__cgo_undefined__2; }\n"+
"#line %d \"not-int-const\"\n"+
"void __cgo_f_%d_3(void) { enum { __cgo_undefined__3 = (%s)*1 }; }\n"+
"#line %d \"not-num-const\"\n"+
"void __cgo_f_%d_4(void) { static const double __cgo_undefined__4 = (%s); }\n"+
"#line %d \"not-str-lit\"\n"+
"void __cgo_f_%d_5(void) { static const char __cgo_undefined__5[] = (%s); }\n",
i+1, i+1, n.C,
i+1, i+1, n.C,
i+1, i+1, n.C,
i+1, i+1, n.C,
i+1, i+1, n.C,
)
}
fmt.Fprintf(&b, "#line 1 \"completed\"\n"+
"int __cgo__1 = __cgo__2;\n")
// We need to parse the output from this gcc command, so ensure that it
// doesn't have any ANSI escape sequences in it. (TERM=dumb is
// insufficient; if the user specifies CGO_CFLAGS=-fdiagnostics-color,
// GCC will ignore TERM, and GCC can also be configured at compile-time
// to ignore TERM.)
stderr := p.gccErrors(b.Bytes(), "-fdiagnostics-color=never")
if strings.Contains(stderr, "unrecognized command line option") {
// We're using an old version of GCC that doesn't understand
// -fdiagnostics-color. Those versions can't print color anyway,
// so just rerun without that option.
stderr = p.gccErrors(b.Bytes())
}
if stderr == "" {
fatalf("%s produced no output\non input:\n%s", gccBaseCmd[0], b.Bytes())
}
completed := false
sniff := make([]int, len(names))
const (
notType = 1 << iota
notIntConst
notNumConst
notStrLiteral
notDeclared
)
sawUnmatchedErrors := false
for line := range strings.SplitSeq(stderr, "\n") {
// Ignore warnings and random comments, with one
// exception: newer GCC versions will sometimes emit
// an error on a macro #define with a note referring
// to where the expansion occurs. We care about where
// the expansion occurs, so in that case treat the note
// as an error.
isError := strings.Contains(line, ": error:")
isErrorNote := strings.Contains(line, ": note:") && sawUnmatchedErrors
if !isError && !isErrorNote {
continue
}
c1 := strings.Index(line, ":")
if c1 < 0 {
continue
}
c2 := strings.Index(line[c1+1:], ":")
if c2 < 0 {
continue
}
c2 += c1 + 1
filename := line[:c1]
i, _ := strconv.Atoi(line[c1+1 : c2])
i--
if i < 0 || i >= len(names) {
if isError {
sawUnmatchedErrors = true
}
continue
}
switch filename {
case "completed":
// Strictly speaking, there is no guarantee that seeing the error at completed:1
// (at the end of the file) means we've seen all the errors from earlier in the file,
// but usually it does. Certainly if we don't see the completed:1 error, we did
// not get all the errors we expected.
completed = true
case "not-declared":
sniff[i] |= notDeclared
case "not-type":
sniff[i] |= notType
case "not-int-const":
sniff[i] |= notIntConst
case "not-num-const":
sniff[i] |= notNumConst
case "not-str-lit":
sniff[i] |= notStrLiteral
default:
if isError {
sawUnmatchedErrors = true
}
continue
}
sawUnmatchedErrors = false
}
if !completed {
fatalf("%s did not produce error at completed:1\non input:\n%s\nfull error output:\n%s", gccBaseCmd[0], b.Bytes(), stderr)
}
for i, n := range names {
switch sniff[i] {
default:
			if sniff[i]&notDeclared != 0 && optional[n] {
// Ignore optional undeclared identifiers.
// Don't report an error, and skip adding n to the needType array.
continue
}
error_(f.NamePos[n], "could not determine what C.%s refers to", fixGo(n.Go))
case notStrLiteral | notType:
n.Kind = "iconst"
case notIntConst | notStrLiteral | notType:
n.Kind = "fconst"
case notIntConst | notNumConst | notType:
n.Kind = "sconst"
case notIntConst | notNumConst | notStrLiteral:
n.Kind = "type"
case notIntConst | notNumConst | notStrLiteral | notType:
n.Kind = "not-type"
}
needType = append(needType, n)
}
if nerrors > 0 {
// Check if compiling the preamble by itself causes any errors,
// because the messages we've printed out so far aren't helpful
// to users debugging preamble mistakes. See issue 8442.
preambleErrors := p.gccErrors([]byte(builtinProlog + f.Preamble))
if len(preambleErrors) > 0 {
error_(token.NoPos, "\n%s errors for preamble:\n%s", gccBaseCmd[0], preambleErrors)
}
fatalf("unresolved names")
}
return needType
}
// loadDWARF parses the DWARF debug information generated
// by gcc to learn the details of the constants, variables, and types
// being referred to as C.xxx.
// loadDwarf is called concurrently with different files.
func (p *Package) loadDWARF(f *File, ft *fileTypedefs, names []*Name) *debug {
// Extract the types from the DWARF section of an object
// from a well-formed C program. Gcc only generates DWARF info
// for symbols in the object file, so it is not enough to print the
// preamble and hope the symbols we care about will be there.
// Instead, emit
// __typeof__(names[i]) *__cgo__i;
// for each entry in names and then dereference the type we
// learn for __cgo__i.
var b bytes.Buffer
b.WriteString(builtinProlog)
b.WriteString(f.Preamble)
b.WriteString("#line 1 \"cgo-dwarf-inference\"\n")
for i, n := range names {
fmt.Fprintf(&b, "__typeof__(%s) *__cgo__%d;\n", n.C, i)
if n.Kind == "iconst" {
fmt.Fprintf(&b, "enum { __cgo_enum__%d = %s };\n", i, n.C)
}
}
// We create a data block initialized with the values,
// so we can read them out of the object file.
fmt.Fprintf(&b, "long long __cgodebug_ints[] = {\n")
for _, n := range names {
if n.Kind == "iconst" {
fmt.Fprintf(&b, "\t%s,\n", n.C)
} else {
fmt.Fprintf(&b, "\t0,\n")
}
}
// for the last entry, we cannot use 0, otherwise
// in case all __cgodebug_data is zero initialized,
	// LLVM-based gcc will place it in the __DATA.__common
// zero-filled section (our debug/macho doesn't support
// this)
fmt.Fprintf(&b, "\t1\n")
fmt.Fprintf(&b, "};\n")
// do the same work for floats.
fmt.Fprintf(&b, "double __cgodebug_floats[] = {\n")
for _, n := range names {
if n.Kind == "fconst" {
fmt.Fprintf(&b, "\t%s,\n", n.C)
} else {
fmt.Fprintf(&b, "\t0,\n")
}
}
fmt.Fprintf(&b, "\t1\n")
fmt.Fprintf(&b, "};\n")
// do the same work for strings.
for i, n := range names {
if n.Kind == "sconst" {
fmt.Fprintf(&b, "const char __cgodebug_str__%d[] = %s;\n", i, n.C)
fmt.Fprintf(&b, "const unsigned long long __cgodebug_strlen__%d = sizeof(%s)-1;\n", i, n.C)
}
}
d, ints, floats, strs := p.gccDebug(b.Bytes(), len(names))
// Scan DWARF info for top-level TagVariable entries with AttrName __cgo__i.
types := make([]dwarf.Type, len(names))
r := d.Reader()
for {
e, err := r.Next()
if err != nil {
fatalf("reading DWARF entry: %s", err)
}
if e == nil {
break
}
switch e.Tag {
case dwarf.TagVariable:
name, _ := e.Val(dwarf.AttrName).(string)
// As of https://reviews.llvm.org/D123534, clang
// now emits DW_TAG_variable DIEs that have
// no name (so as to be able to describe the
// type and source locations of constant strings)
// like the second arg in the call below:
//
// myfunction(42, "foo")
//
// If a var has no name we won't see attempts to
// refer to it via "C.<name>", so skip these vars
//
// See issue 53000 for more context.
if name == "" {
break
}
typOff, _ := e.Val(dwarf.AttrType).(dwarf.Offset)
if typOff == 0 {
if e.Val(dwarf.AttrSpecification) != nil {
// Since we are reading all the DWARF,
// assume we will see the variable elsewhere.
break
}
fatalf("malformed DWARF TagVariable entry")
}
if !strings.HasPrefix(name, "__cgo__") {
break
}
typ, err := d.Type(typOff)
if err != nil {
fatalf("loading DWARF type: %s", err)
}
t, ok := typ.(*dwarf.PtrType)
if !ok || t == nil {
fatalf("internal error: %s has non-pointer type", name)
}
i, err := strconv.Atoi(name[7:])
if err != nil {
fatalf("malformed __cgo__ name: %s", name)
}
types[i] = t.Type
ft.recordTypedefs(t.Type, f.NamePos[names[i]])
}
if e.Tag != dwarf.TagCompileUnit {
r.SkipChildren()
}
}
return &debug{names, types, ints, floats, strs}
}
// debug is the data extracted by running an iteration of loadDWARF on a file.
type debug struct {
names []*Name
types []dwarf.Type
ints []int64
floats []float64
strs []string
}
func (p *Package) recordTypes(f *File, data *debug, conv *typeConv) {
names, types, ints, floats, strs := data.names, data.types, data.ints, data.floats, data.strs
// Record types and typedef information.
for i, n := range names {
if strings.HasSuffix(n.Go, "GetTypeID") && types[i].String() == "func() CFTypeID" {
conv.getTypeIDs[n.Go[:len(n.Go)-9]] = true
}
}
for i, n := range names {
if types[i] == nil {
continue
}
pos := f.NamePos[n]
f, fok := types[i].(*dwarf.FuncType)
if n.Kind != "type" && fok {
n.Kind = "func"
n.FuncType = conv.FuncType(f, pos)
} else {
n.Type = conv.Type(types[i], pos)
switch n.Kind {
case "iconst":
if i < len(ints) {
if _, ok := types[i].(*dwarf.UintType); ok {
n.Const = fmt.Sprintf("%#x", uint64(ints[i]))
} else {
n.Const = fmt.Sprintf("%#x", ints[i])
}
}
case "fconst":
if i >= len(floats) {
break
}
switch base(types[i]).(type) {
case *dwarf.IntType, *dwarf.UintType:
// This has an integer type so it's
// not really a floating point
// constant. This can happen when the
// C compiler complains about using
// the value as an integer constant,
// but not as a general constant.
// Treat this as a variable of the
// appropriate type, not a constant,
// to get C-style type handling,
// avoiding the problem that C permits
// uint64(-1) but Go does not.
// See issue 26066.
n.Kind = "var"
default:
n.Const = fmt.Sprintf("%f", floats[i])
}
case "sconst":
if i < len(strs) {
n.Const = fmt.Sprintf("%q", strs[i])
}
}
}
conv.FinishType(pos)
}
}
type fileTypedefs struct {
typedefs map[string]bool // type names that appear in the types of the objects we're interested in
typedefList []typedefInfo
}
// recordTypedefs remembers in ft.typedefs all the typedefs used in dtypes and its children.
func (ft *fileTypedefs) recordTypedefs(dtype dwarf.Type, pos token.Pos) {
ft.recordTypedefs1(dtype, pos, map[dwarf.Type]bool{})
}
func (ft *fileTypedefs) recordTypedefs1(dtype dwarf.Type, pos token.Pos, visited map[dwarf.Type]bool) {
if dtype == nil {
return
}
if visited[dtype] {
return
}
visited[dtype] = true
switch dt := dtype.(type) {
case *dwarf.TypedefType:
if strings.HasPrefix(dt.Name, "__builtin") {
// Don't look inside builtin types. There be dragons.
return
}
if !ft.typedefs[dt.Name] {
ft.typedefs[dt.Name] = true
ft.typedefList = append(ft.typedefList, typedefInfo{dt.Name, pos})
ft.recordTypedefs1(dt.Type, pos, visited)
}
case *dwarf.PtrType:
ft.recordTypedefs1(dt.Type, pos, visited)
case *dwarf.ArrayType:
ft.recordTypedefs1(dt.Type, pos, visited)
case *dwarf.QualType:
ft.recordTypedefs1(dt.Type, pos, visited)
case *dwarf.FuncType:
ft.recordTypedefs1(dt.ReturnType, pos, visited)
for _, a := range dt.ParamType {
ft.recordTypedefs1(a, pos, visited)
}
case *dwarf.StructType:
for _, l := range dt.Field {
ft.recordTypedefs1(l.Type, pos, visited)
}
}
}
// prepareNames finalizes the Kind field of not-type names and sets
// the mangled name of all names.
func (p *Package) prepareNames(f *File) {
for _, n := range f.Name {
if n.Kind == "not-type" {
if n.Define == "" {
n.Kind = "var"
} else {
n.Kind = "macro"
n.FuncType = &FuncType{
Result: n.Type,
Go: &ast.FuncType{
Results: &ast.FieldList{List: []*ast.Field{{Type: n.Type.Go}}},
},
}
}
}
p.mangleName(n)
if n.Kind == "type" && typedef[n.Mangle] == nil {
typedef[n.Mangle] = n.Type
}
}
}
// mangleName does name mangling to translate names
// from the original Go source files to the names
// used in the final Go files generated by cgo.
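// For example (illustrative, following the prefix+Kind+"_"+Go scheme
// below): a call to C.sqrt is mangled to _Cfunc_sqrt, and the type
// C.uint becomes _Ctype_uint. Under gccgo, variables use the prefix
// "C" instead of "_C" so that they are exported as global symbols.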
func (p *Package) mangleName(n *Name) {
// When using gccgo variables have to be
// exported so that they become global symbols
// that the C code can refer to.
prefix := "_C"
if *gccgo && n.IsVar() {
prefix = "C"
}
n.Mangle = prefix + n.Kind + "_" + n.Go
}
func (f *File) isMangledName(s string) bool {
t, ok := strings.CutPrefix(s, "_C")
if !ok {
return false
}
return slices.ContainsFunc(nameKinds, func(k string) bool {
return strings.HasPrefix(t, k+"_")
})
}
// rewriteCalls rewrites all calls that pass pointers to check that
// they follow the rules for passing pointers between Go and C.
// This reports whether the package needs to import unsafe as _cgo_unsafe.
func (p *Package) rewriteCalls(f *File) bool {
needsUnsafe := false
// Walk backward so that in C.f1(C.f2()) we rewrite C.f2 first.
for _, call := range f.Calls {
if call.Done {
continue
}
start := f.offset(call.Call.Pos())
end := f.offset(call.Call.End())
str, nu := p.rewriteCall(f, call)
if str != "" {
f.Edit.Replace(start, end, str)
if nu {
needsUnsafe = true
}
}
}
return needsUnsafe
}
// rewriteCall rewrites one call to add pointer checks.
// If any pointer checks are required, we rewrite the call into a
// function literal that calls _cgoCheckPointer for each pointer
// argument and then calls the original function.
// This returns the rewritten call and whether the package needs to
// import unsafe as _cgo_unsafe.
// If it returns the empty string, the call did not need to be rewritten.
func (p *Package) rewriteCall(f *File, call *Call) (string, bool) {
// This is a call to C.xxx; set goname to "xxx".
// It may have already been mangled by rewriteName.
var goname string
switch fun := call.Call.Fun.(type) {
case *ast.SelectorExpr:
goname = fun.Sel.Name
case *ast.Ident:
goname = strings.TrimPrefix(fun.Name, "_C2func_")
goname = strings.TrimPrefix(goname, "_Cfunc_")
}
if goname == "" || goname == "malloc" {
return "", false
}
name := f.Name[goname]
if name == nil || name.Kind != "func" {
// Probably a type conversion.
return "", false
}
params := name.FuncType.Params
args := call.Call.Args
end := call.Call.End()
// Avoid a crash if the number of arguments doesn't match
// the number of parameters.
// This will be caught when the generated file is compiled.
if len(args) != len(params) {
return "", false
}
any := false
for i, param := range params {
if p.needsPointerCheck(f, param.Go, args[i]) {
any = true
break
}
}
if !any {
return "", false
}
// We need to rewrite this call.
//
// Rewrite C.f(p) to
// func() {
// _cgo0 := p
// _cgoCheckPointer(_cgo0, nil)
// C.f(_cgo0)
// }()
// Using a function literal like this lets us evaluate the
// function arguments only once while doing pointer checks.
// This is particularly useful when passing additional arguments
// to _cgoCheckPointer, as done in checkIndex and checkAddr.
//
// When the function argument is a conversion to unsafe.Pointer,
// we unwrap the conversion before checking the pointer,
// and then wrap again when calling C.f. This lets us check
// the real type of the pointer in some cases. See issue #25941.
//
// When the call to C.f is deferred, we use an additional function
// literal to evaluate the arguments at the right time.
// defer func() func() {
// _cgo0 := p
// return func() {
// _cgoCheckPointer(_cgo0, nil)
// C.f(_cgo0)
// }
// }()()
// This works because the defer statement evaluates the first
// function literal in order to get the function to call.
var sb bytes.Buffer
sb.WriteString("func() ")
if call.Deferred {
sb.WriteString("func() ")
}
needsUnsafe := false
result := false
twoResults := false
if !call.Deferred {
// Check whether this call expects two results.
for _, ref := range f.Ref {
if ref.Expr != &call.Call.Fun {
continue
}
if ref.Context == ctxCall2 {
sb.WriteString("(")
result = true
twoResults = true
}
break
}
// Add the result type, if any.
if name.FuncType.Result != nil {
rtype := p.rewriteUnsafe(name.FuncType.Result.Go)
if rtype != name.FuncType.Result.Go {
needsUnsafe = true
}
sb.WriteString(gofmt(rtype))
result = true
}
// Add the second result type, if any.
if twoResults {
if name.FuncType.Result == nil {
// An explicit void result looks odd but it
// seems to be how cgo has worked historically.
sb.WriteString("_Ctype_void")
}
sb.WriteString(", error)")
}
}
sb.WriteString("{ ")
// Define _cgoN for each argument value.
// Write _cgoCheckPointer calls to sbCheck.
var sbCheck bytes.Buffer
for i, param := range params {
origArg := args[i]
arg, nu := p.mangle(f, &args[i], true)
if nu {
needsUnsafe = true
}
// Use "var x T = ..." syntax to explicitly convert untyped
// constants to the parameter type, to avoid a type mismatch.
ptype := p.rewriteUnsafe(param.Go)
if !p.needsPointerCheck(f, param.Go, args[i]) || param.BadPointer || p.checkUnsafeStringData(args[i]) {
if ptype != param.Go {
needsUnsafe = true
}
fmt.Fprintf(&sb, "var _cgo%d %s = %s; ", i,
gofmt(ptype), gofmtPos(arg, origArg.Pos()))
continue
}
// Check for &a[i].
if p.checkIndex(&sb, &sbCheck, arg, i) {
continue
}
// Check for &x.
if p.checkAddr(&sb, &sbCheck, arg, i) {
continue
}
// Check for a[:].
if p.checkSlice(&sb, &sbCheck, arg, i) {
continue
}
fmt.Fprintf(&sb, "_cgo%d := %s; ", i, gofmtPos(arg, origArg.Pos()))
fmt.Fprintf(&sbCheck, "_cgoCheckPointer(_cgo%d, nil); ", i)
}
if call.Deferred {
sb.WriteString("return func() { ")
}
// Write out the calls to _cgoCheckPointer.
sb.WriteString(sbCheck.String())
if result {
sb.WriteString("return ")
}
m, nu := p.mangle(f, &call.Call.Fun, false)
if nu {
needsUnsafe = true
}
sb.WriteString(gofmtPos(m, end))
sb.WriteString("(")
for i := range params {
if i > 0 {
sb.WriteString(", ")
}
fmt.Fprintf(&sb, "_cgo%d", i)
}
sb.WriteString("); ")
if call.Deferred {
sb.WriteString("}")
}
sb.WriteString("}")
if call.Deferred {
sb.WriteString("()")
}
sb.WriteString("()")
return sb.String(), needsUnsafe
}
// needsPointerCheck reports whether the type t needs a pointer check.
// This is true if t is a pointer and if the value to which it points
// might contain a pointer.
func (p *Package) needsPointerCheck(f *File, t ast.Expr, arg ast.Expr) bool {
// An untyped nil does not need a pointer check, and when
// _cgoCheckPointer returns the untyped nil the type assertion we
// are going to insert will fail. Easier to just skip nil arguments.
// TODO: Note that this fails if nil is shadowed.
if id, ok := arg.(*ast.Ident); ok && id.Name == "nil" {
return false
}
return p.hasPointer(f, t, true)
}
// hasPointer is used by needsPointerCheck. If top is true it reports
// whether t is or contains a pointer that might point to a pointer.
// If top is false it reports whether t is or contains a pointer.
// f may be nil.
func (p *Package) hasPointer(f *File, t ast.Expr, top bool) bool {
switch t := t.(type) {
case *ast.ArrayType:
if t.Len == nil {
if !top {
return true
}
return p.hasPointer(f, t.Elt, false)
}
return p.hasPointer(f, t.Elt, top)
case *ast.StructType:
return slices.ContainsFunc(t.Fields.List, func(field *ast.Field) bool {
return p.hasPointer(f, field.Type, top)
})
case *ast.StarExpr: // Pointer type.
if !top {
return true
}
// Check whether this is a pointer to a C union (or class)
// type that contains a pointer.
if unionWithPointer[t.X] {
return true
}
return p.hasPointer(f, t.X, false)
case *ast.FuncType, *ast.InterfaceType, *ast.MapType, *ast.ChanType:
return true
case *ast.Ident:
// TODO: Handle types defined within function.
for _, d := range p.Decl {
gd, ok := d.(*ast.GenDecl)
if !ok || gd.Tok != token.TYPE {
continue
}
for _, spec := range gd.Specs {
ts, ok := spec.(*ast.TypeSpec)
if !ok {
continue
}
if ts.Name.Name == t.Name {
return p.hasPointer(f, ts.Type, top)
}
}
}
if def := typedef[t.Name]; def != nil {
return p.hasPointer(f, def.Go, top)
}
if t.Name == "string" {
return !top
}
if t.Name == "error" {
return true
}
if t.Name == "any" {
return true
}
if goTypes[t.Name] != nil {
return false
}
// We can't figure out the type. Conservative
// approach is to assume it has a pointer.
return true
case *ast.SelectorExpr:
if l, ok := t.X.(*ast.Ident); !ok || l.Name != "C" {
// Type defined in a different package.
// Conservative approach is to assume it has a
// pointer.
return true
}
if f == nil {
// Conservative approach: assume pointer.
return true
}
name := f.Name[t.Sel.Name]
if name != nil && name.Kind == "type" && name.Type != nil && name.Type.Go != nil {
return p.hasPointer(f, name.Type.Go, top)
}
// We can't figure out the type. Conservative
// approach is to assume it has a pointer.
return true
default:
error_(t.Pos(), "could not understand type %s", gofmt(t))
return true
}
}
// mangle replaces references to C names in arg with the mangled names,
// rewriting calls when it finds them.
// It removes the corresponding references in f.Ref and f.Calls, so that we
// don't try to do the replacement again in rewriteRef or rewriteCall.
// If addPosition is true, add position info to the idents of C names in arg.
func (p *Package) mangle(f *File, arg *ast.Expr, addPosition bool) (ast.Expr, bool) {
needsUnsafe := false
f.walk(arg, ctxExpr, func(f *File, arg any, context astContext) {
px, ok := arg.(*ast.Expr)
if !ok {
return
}
sel, ok := (*px).(*ast.SelectorExpr)
if ok {
if l, ok := sel.X.(*ast.Ident); !ok || l.Name != "C" {
return
}
for _, r := range f.Ref {
if r.Expr == px {
*px = p.rewriteName(f, r, addPosition)
r.Done = true
break
}
}
return
}
call, ok := (*px).(*ast.CallExpr)
if !ok {
return
}
for _, c := range f.Calls {
if !c.Done && c.Call.Lparen == call.Lparen {
cstr, nu := p.rewriteCall(f, c)
if cstr != "" {
// Smuggle the rewritten call through an ident.
*px = ast.NewIdent(cstr)
if nu {
needsUnsafe = true
}
c.Done = true
}
}
}
})
return *arg, needsUnsafe
}
// checkIndex checks whether arg has the form &a[i], possibly inside
// type conversions. If so, then in the general case it writes
//
// _cgoIndexNN := a
// _cgoNN := &cgoIndexNN[i] // with type conversions, if any
//
// to sb, and writes
//
// _cgoCheckPointer(_cgoNN, _cgoIndexNN)
//
// to sbCheck, and returns true. If a is a simple variable or field reference,
// it writes
//
// _cgoIndexNN := &a
//
// and dereferences the uses of _cgoIndexNN. Taking the address avoids
// making a copy of an array.
//
// This tells _cgoCheckPointer to check the complete contents of the
// slice or array being indexed, but no other part of the memory allocation.
func (p *Package) checkIndex(sb, sbCheck *bytes.Buffer, arg ast.Expr, i int) bool {
// Strip type conversions.
x := arg
for {
c, ok := x.(*ast.CallExpr)
if !ok || len(c.Args) != 1 {
break
}
if !p.isType(c.Fun) && !p.isUnsafeData(c.Fun, false) {
break
}
x = c.Args[0]
}
u, ok := x.(*ast.UnaryExpr)
if !ok || u.Op != token.AND {
return false
}
index, ok := u.X.(*ast.IndexExpr)
if !ok {
return false
}
addr := ""
deref := ""
if p.isVariable(index.X) {
addr = "&"
deref = "*"
}
fmt.Fprintf(sb, "_cgoIndex%d := %s%s; ", i, addr, gofmtPos(index.X, index.X.Pos()))
origX := index.X
index.X = ast.NewIdent(fmt.Sprintf("_cgoIndex%d", i))
if deref == "*" {
index.X = &ast.StarExpr{X: index.X}
}
fmt.Fprintf(sb, "_cgo%d := %s; ", i, gofmtPos(arg, arg.Pos()))
index.X = origX
fmt.Fprintf(sbCheck, "_cgoCheckPointer(_cgo%d, %s_cgoIndex%d); ", i, deref, i)
return true
}
// checkAddr checks whether arg has the form &x, possibly inside type
// conversions. If so, it writes
//
// _cgoBaseNN := &x
// _cgoNN := _cgoBaseNN // with type conversions, if any
//
// to sb, and writes
//
// _cgoCheckPointer(_cgoBaseNN, true)
//
// to sbCheck, and returns true. This tells _cgoCheckPointer to check
// just the contents of the pointer being passed, not any other part
// of the memory allocation. This is run after checkIndex, which looks
// for the special case of &a[i], which requires different checks.
func (p *Package) checkAddr(sb, sbCheck *bytes.Buffer, arg ast.Expr, i int) bool {
// Strip type conversions.
px := &arg
for {
c, ok := (*px).(*ast.CallExpr)
if !ok || len(c.Args) != 1 {
break
}
if !p.isType(c.Fun) && !p.isUnsafeData(c.Fun, false) {
break
}
px = &c.Args[0]
}
if u, ok := (*px).(*ast.UnaryExpr); !ok || u.Op != token.AND {
return false
}
fmt.Fprintf(sb, "_cgoBase%d := %s; ", i, gofmtPos(*px, (*px).Pos()))
origX := *px
*px = ast.NewIdent(fmt.Sprintf("_cgoBase%d", i))
fmt.Fprintf(sb, "_cgo%d := %s; ", i, gofmtPos(arg, arg.Pos()))
*px = origX
// Use "0 == 0" to do the right thing in the unlikely event
// that "true" is shadowed.
fmt.Fprintf(sbCheck, "_cgoCheckPointer(_cgoBase%d, 0 == 0); ", i)
return true
}
// checkSlice checks whether arg has the form x[i:j], possibly inside
// type conversions. If so, it writes
//
// _cgoSliceNN := x[i:j]
// _cgoNN := _cgoSliceNN // with type conversions, if any
//
// to sb, and writes
//
// _cgoCheckPointer(_cgoSliceNN, true)
//
// to sbCheck, and returns true. This tells _cgoCheckPointer to check
// just the contents of the slice being passed, not any other part
// of the memory allocation.
func (p *Package) checkSlice(sb, sbCheck *bytes.Buffer, arg ast.Expr, i int) bool {
// Strip type conversions.
px := &arg
for {
c, ok := (*px).(*ast.CallExpr)
if !ok || len(c.Args) != 1 {
break
}
if !p.isType(c.Fun) && !p.isUnsafeData(c.Fun, false) {
break
}
px = &c.Args[0]
}
if _, ok := (*px).(*ast.SliceExpr); !ok {
return false
}
fmt.Fprintf(sb, "_cgoSlice%d := %s; ", i, gofmtPos(*px, (*px).Pos()))
origX := *px
*px = ast.NewIdent(fmt.Sprintf("_cgoSlice%d", i))
fmt.Fprintf(sb, "_cgo%d := %s; ", i, gofmtPos(arg, arg.Pos()))
*px = origX
// Use 0 == 0 to do the right thing in the unlikely event
// that "true" is shadowed.
fmt.Fprintf(sbCheck, "_cgoCheckPointer(_cgoSlice%d, 0 == 0); ", i)
return true
}
// checkUnsafeStringData checks for a call to unsafe.StringData.
// The result of that call can't contain a pointer so there is
// no need to call _cgoCheckPointer.
func (p *Package) checkUnsafeStringData(arg ast.Expr) bool {
x := arg
for {
c, ok := x.(*ast.CallExpr)
if !ok || len(c.Args) != 1 {
break
}
if p.isUnsafeData(c.Fun, true) {
return true
}
if !p.isType(c.Fun) {
break
}
x = c.Args[0]
}
return false
}
// isType reports whether the expression is definitely a type.
// This is conservative--it returns false for an unknown identifier.
func (p *Package) isType(t ast.Expr) bool {
switch t := t.(type) {
case *ast.SelectorExpr:
id, ok := t.X.(*ast.Ident)
if !ok {
return false
}
if id.Name == "unsafe" && t.Sel.Name == "Pointer" {
return true
}
if id.Name == "C" && typedef["_Ctype_"+t.Sel.Name] != nil {
return true
}
return false
case *ast.Ident:
// TODO: This ignores shadowing.
switch t.Name {
case "unsafe.Pointer", "bool", "byte",
"complex64", "complex128",
"error",
"float32", "float64",
"int", "int8", "int16", "int32", "int64",
"rune", "string",
"uint", "uint8", "uint16", "uint32", "uint64", "uintptr":
return true
}
if strings.HasPrefix(t.Name, "_Ctype_") {
return true
}
case *ast.ParenExpr:
return p.isType(t.X)
case *ast.StarExpr:
return p.isType(t.X)
case *ast.ArrayType, *ast.StructType, *ast.FuncType, *ast.InterfaceType,
*ast.MapType, *ast.ChanType:
return true
}
return false
}
// isUnsafeData reports whether the expression is unsafe.StringData
// or unsafe.SliceData. We can ignore these when checking for pointers
// because they don't change whether or not their argument contains
// any Go pointers. If onlyStringData is true we only check for StringData.
func (p *Package) isUnsafeData(x ast.Expr, onlyStringData bool) bool {
st, ok := x.(*ast.SelectorExpr)
if !ok {
return false
}
id, ok := st.X.(*ast.Ident)
if !ok {
return false
}
if id.Name != "unsafe" {
return false
}
if !onlyStringData && st.Sel.Name == "SliceData" {
return true
}
return st.Sel.Name == "StringData"
}
// isVariable reports whether x is a variable, possibly with field references.
func (p *Package) isVariable(x ast.Expr) bool {
switch x := x.(type) {
case *ast.Ident:
return true
case *ast.SelectorExpr:
return p.isVariable(x.X)
case *ast.IndexExpr:
return true
}
return false
}
// rewriteUnsafe returns a version of t with references to unsafe.Pointer
// rewritten to use _cgo_unsafe.Pointer instead.
func (p *Package) rewriteUnsafe(t ast.Expr) ast.Expr {
switch t := t.(type) {
case *ast.Ident:
// We don't see a SelectorExpr for unsafe.Pointer;
// this is created by code in this file.
if t.Name == "unsafe.Pointer" {
return ast.NewIdent("_cgo_unsafe.Pointer")
}
case *ast.ArrayType:
t1 := p.rewriteUnsafe(t.Elt)
if t1 != t.Elt {
r := *t
r.Elt = t1
return &r
}
case *ast.StructType:
changed := false
fields := *t.Fields
fields.List = nil
for _, f := range t.Fields.List {
ft := p.rewriteUnsafe(f.Type)
if ft == f.Type {
fields.List = append(fields.List, f)
} else {
fn := *f
fn.Type = ft
fields.List = append(fields.List, &fn)
changed = true
}
}
if changed {
r := *t
r.Fields = &fields
return &r
}
case *ast.StarExpr: // Pointer type.
x1 := p.rewriteUnsafe(t.X)
if x1 != t.X {
r := *t
r.X = x1
return &r
}
}
return t
}
// rewriteRef rewrites all the C.xxx references in f.AST to refer to the
// Go equivalents, now that we have figured out the meaning of all
// the xxx. In *godefs mode, rewriteRef replaces the names
// with full definitions instead of mangled names.
func (p *Package) rewriteRef(f *File) {
// Keep a list of all the functions, to remove the ones
// only used as expressions and avoid generating bridge
// code for them.
functions := make(map[string]bool)
for _, n := range f.Name {
if n.Kind == "func" {
functions[n.Go] = false
}
}
// Now that we have all the name types filled in,
// scan through the Refs to identify the ones that
// are trying to do a ,err call. Also check that
// functions are only used in calls.
for _, r := range f.Ref {
if r.Name.IsConst() && r.Name.Const == "" {
error_(r.Pos(), "unable to find value of constant C.%s", fixGo(r.Name.Go))
}
if r.Name.Kind == "func" {
switch r.Context {
case ctxCall, ctxCall2:
functions[r.Name.Go] = true
}
}
expr := p.rewriteName(f, r, false)
if *godefs {
// Substitute definition for mangled type name.
if r.Name.Type != nil && r.Name.Kind == "type" {
expr = r.Name.Type.Go
}
if id, ok := expr.(*ast.Ident); ok {
if t := typedef[id.Name]; t != nil {
expr = t.Go
}
if id.Name == r.Name.Mangle && r.Name.Const != "" {
expr = ast.NewIdent(r.Name.Const)
}
}
}
// Copy position information from old expr into new expr,
// in case expression being replaced is first on line.
// See golang.org/issue/6563.
pos := (*r.Expr).Pos()
if x, ok := expr.(*ast.Ident); ok {
expr = &ast.Ident{NamePos: pos, Name: x.Name}
}
// Change AST, because some later processing depends on it,
// and also because -godefs mode still prints the AST.
old := *r.Expr
*r.Expr = expr
// Record source-level edit for cgo output.
if !r.Done {
// Prepend a space in case the earlier code ends
// with '/', which would give us a "//" comment.
repl := " " + gofmtPos(expr, old.Pos())
end := fset.Position(old.End())
// Subtract 1 from the column if we are going to
// append a close parenthesis. That will set the
// correct column for the following characters.
sub := 0
if r.Name.Kind != "type" {
sub = 1
}
if end.Column > sub {
repl = fmt.Sprintf("%s /*line :%d:%d*/", repl, end.Line, end.Column-sub)
}
if r.Name.Kind != "type" {
repl = "(" + repl + ")"
}
f.Edit.Replace(f.offset(old.Pos()), f.offset(old.End()), repl)
}
}
// Remove functions only used as expressions, so their respective
// bridge functions are not generated.
for name, used := range functions {
if !used {
delete(f.Name, name)
}
}
}
// rewriteName returns the expression used to rewrite a reference.
// If addPosition is true, add position info in the ident name.
func (p *Package) rewriteName(f *File, r *Ref, addPosition bool) ast.Expr {
getNewIdent := ast.NewIdent
if addPosition {
getNewIdent = func(newName string) *ast.Ident {
mangledIdent := ast.NewIdent(newName)
if len(newName) == len(r.Name.Go) {
return mangledIdent
}
p := fset.Position((*r.Expr).End())
if p.Column == 0 {
return mangledIdent
}
return ast.NewIdent(fmt.Sprintf("%s /*line :%d:%d*/", newName, p.Line, p.Column))
}
}
var expr ast.Expr = getNewIdent(r.Name.Mangle) // default
switch r.Context {
case ctxCall, ctxCall2:
if r.Name.Kind != "func" {
if r.Name.Kind == "type" {
r.Context = ctxType
if r.Name.Type == nil {
error_(r.Pos(), "invalid conversion to C.%s: undefined C type '%s'", fixGo(r.Name.Go), r.Name.C)
}
break
}
error_(r.Pos(), "call of non-function C.%s", fixGo(r.Name.Go))
break
}
if r.Context == ctxCall2 {
if builtinDefs[r.Name.Go] != "" {
error_(r.Pos(), "no two-result form for C.%s", r.Name.Go)
break
}
// Invent new Name for the two-result function.
n := f.Name["2"+r.Name.Go]
if n == nil {
n = new(Name)
*n = *r.Name
n.AddError = true
n.Mangle = "_C2func_" + n.Go
f.Name["2"+r.Name.Go] = n
}
expr = getNewIdent(n.Mangle)
r.Name = n
break
}
case ctxExpr:
switch r.Name.Kind {
case "func":
if builtinDefs[r.Name.C] != "" {
error_(r.Pos(), "use of builtin '%s' not in function call", fixGo(r.Name.C))
}
// Function is being used in an expression, to e.g. pass around a C function pointer.
// Create a new Name for this Ref which causes the variable to be declared in Go land.
fpName := "fp_" + r.Name.Go
name := f.Name[fpName]
if name == nil {
name = &Name{
Go: fpName,
C: r.Name.C,
Kind: "fpvar",
Type: &Type{Size: p.PtrSize, Align: p.PtrSize, C: c("void*"), Go: ast.NewIdent("unsafe.Pointer")},
}
p.mangleName(name)
f.Name[fpName] = name
}
r.Name = name
// Rewrite into call to _Cgo_ptr to prevent assignments. The _Cgo_ptr
// function is defined in out.go and simply returns its argument. See
// issue 7757.
expr = &ast.CallExpr{
Fun: &ast.Ident{NamePos: (*r.Expr).Pos(), Name: "_Cgo_ptr"},
Args: []ast.Expr{getNewIdent(name.Mangle)},
}
case "type":
// Okay - might be new(T), T(x), Generic[T], etc.
if r.Name.Type == nil {
error_(r.Pos(), "expression C.%s: undefined C type '%s'", fixGo(r.Name.Go), r.Name.C)
}
case "var":
expr = &ast.StarExpr{Star: (*r.Expr).Pos(), X: expr}
case "macro":
expr = &ast.CallExpr{Fun: expr}
}
case ctxSelector:
if r.Name.Kind == "var" {
expr = &ast.StarExpr{Star: (*r.Expr).Pos(), X: expr}
} else {
error_(r.Pos(), "only C variables allowed in selector expression %s", fixGo(r.Name.Go))
}
case ctxType:
if r.Name.Kind != "type" {
error_(r.Pos(), "expression C.%s used as type", fixGo(r.Name.Go))
} else if r.Name.Type == nil {
// Use of C.enum_x, C.struct_x or C.union_x without C definition.
// GCC won't raise an error when using pointers to such unknown types.
error_(r.Pos(), "type C.%s: undefined C type '%s'", fixGo(r.Name.Go), r.Name.C)
}
default:
if r.Name.Kind == "func" {
error_(r.Pos(), "must call C.%s", fixGo(r.Name.Go))
}
}
return expr
}
// gofmtPos returns the gofmt-formatted string for an AST node,
// with a comment setting the position before the node.
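// For example (illustrative), formatting the expression x+1 with a
// position at line 3, column 7 produces "/*line :3:7*/x + 1", so that
// compiler diagnostics for the generated code still point back at the
// original source location.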
func gofmtPos(n ast.Expr, pos token.Pos) string {
s := gofmt(n)
p := fset.Position(pos)
if p.Column == 0 {
return s
}
return fmt.Sprintf("/*line :%d:%d*/%s", p.Line, p.Column, s)
}
// checkGCCBaseCmd returns the start of the compiler command line.
// It uses $CC if set, or else $GCC, or else the compiler recorded
// during the initial build as defaultCC.
// defaultCC is defined in zdefaultcc.go, written by cmd/dist.
//
// The compiler command line is split into arguments on whitespace. Quotes
// are understood, so arguments may contain whitespace.
//
// checkGCCBaseCmd confirms that the compiler exists in PATH, returning
// an error if it does not.
func checkGCCBaseCmd() ([]string, error) {
// Use $CC if set, since that's what the build uses.
value := os.Getenv("CC")
if value == "" {
// Try $GCC if set, since that's what we used to use.
value = os.Getenv("GCC")
}
if value == "" {
value = defaultCC(goos, goarch)
}
args, err := quoted.Split(value)
if err != nil {
return nil, err
}
if len(args) == 0 {
return nil, errors.New("CC not set and no default found")
}
if _, err := exec.LookPath(args[0]); err != nil {
return nil, fmt.Errorf("C compiler %q not found: %v", args[0], err)
}
return args[:len(args):len(args)], nil
}
// gccMachine returns the gcc architecture flags to use, such as
// "-m32", "-m64", or "-marm", depending on GOARCH and GOOS.
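// For example, per the switch below, darwin/amd64 yields
// []string{"-arch", "x86_64", "-m64"}, GOARCH=386 yields
// []string{"-m32"}, and architectures needing no special flags
// yield nil.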
func gccMachine() []string {
switch goarch {
case "amd64":
if goos == "darwin" {
return []string{"-arch", "x86_64", "-m64"}
}
return []string{"-m64"}
case "arm64":
if goos == "darwin" {
return []string{"-arch", "arm64"}
}
case "386":
return []string{"-m32"}
case "arm":
return []string{"-marm"} // not thumb
case "s390":
return []string{"-m31"}
case "s390x":
return []string{"-m64"}
case "mips64", "mips64le":
if gomips64 == "hardfloat" {
return []string{"-mabi=64", "-mhard-float"}
} else if gomips64 == "softfloat" {
return []string{"-mabi=64", "-msoft-float"}
}
case "mips", "mipsle":
if gomips == "hardfloat" {
return []string{"-mabi=32", "-mfp32", "-mhard-float", "-mno-odd-spreg"}
} else if gomips == "softfloat" {
return []string{"-mabi=32", "-msoft-float"}
}
case "loong64":
return []string{"-mabi=lp64d"}
}
return nil
}
var n atomic.Int64
// gccTmp returns a fresh temporary object file name in the output
// directory; the atomic counter keeps names unique across concurrent
// compilations.
func gccTmp() string {
c := strconv.Itoa(int(n.Add(1)))
return filepath.Join(outputDir(), "_cgo_"+c+".o")
}
// gccCmd returns the gcc command line to use for compiling
// the input.
// gccCmd is called concurrently for different files.
func (p *Package) gccCmd(ofile string) []string {
c := append(gccBaseCmd,
"-w", // no warnings
"-Wno-error", // warnings are not errors
"-o"+ofile, // write object to tmp
"-gdwarf-2", // generate DWARF v2 debugging symbols
"-c", // do not link
"-xc", // input language is C
)
if p.GccIsClang {
c = append(c,
"-ferror-limit=0",
// Apple clang version 1.7 (tags/Apple/clang-77) (based on LLVM 2.9svn)
// doesn't have -Wno-unneeded-internal-declaration, so we need yet another
// flag to disable the warning. Yes, really good diagnostics, clang.
"-Wno-unknown-warning-option",
"-Wno-unneeded-internal-declaration",
"-Wno-unused-function",
"-Qunused-arguments",
// Clang embeds prototypes for some builtin functions,
// like malloc and calloc, but all size_t parameters are
// incorrectly typed unsigned long. We work around that
// by disabling the builtin functions (this is safe as
// it won't affect the actual compilation of the C code).
// See: https://golang.org/issue/6506.
"-fno-builtin",
)
}
c = append(c, p.GccOptions...)
c = append(c, gccMachine()...)
if goos == "aix" {
c = append(c, "-maix64")
c = append(c, "-mcmodel=large")
}
// disable LTO so we get an object whose symbols we can read
c = append(c, "-fno-lto")
c = append(c, "-") // read input from standard input
return c
}
// gccDebug runs gcc -gdwarf-2 over the C program stdin and
// returns the corresponding DWARF data and, if present, the debug data blocks.
// gccDebug is called concurrently with different C programs.
func (p *Package) gccDebug(stdin []byte, nnames int) (d *dwarf.Data, ints []int64, floats []float64, strs []string) {
ofile := gccTmp()
runGcc(stdin, p.gccCmd(ofile))
isDebugInts := func(s string) bool {
// Some systems use leading _ to denote non-assembly symbols.
return s == "__cgodebug_ints" || s == "___cgodebug_ints"
}
isDebugFloats := func(s string) bool {
// Some systems use leading _ to denote non-assembly symbols.
return s == "__cgodebug_floats" || s == "___cgodebug_floats"
}
indexOfDebugStr := func(s string) int {
// Some systems use leading _ to denote non-assembly symbols.
if strings.HasPrefix(s, "___") {
s = s[1:]
}
if strings.HasPrefix(s, "__cgodebug_str__") {
if n, err := strconv.Atoi(s[len("__cgodebug_str__"):]); err == nil {
return n
}
}
return -1
}
indexOfDebugStrlen := func(s string) int {
// Some systems use leading _ to denote non-assembly symbols.
if strings.HasPrefix(s, "___") {
s = s[1:]
}
if t, ok := strings.CutPrefix(s, "__cgodebug_strlen__"); ok {
if n, err := strconv.Atoi(t); err == nil {
return n
}
}
return -1
}
strs = make([]string, nnames)
strdata := make(map[int]string, nnames)
strlens := make(map[int]int, nnames)
buildStrings := func() {
for n, strlen := range strlens {
data := strdata[n]
if len(data) <= strlen {
fatalf("invalid string literal")
}
strs[n] = data[:strlen]
}
}
if f, err := macho.Open(ofile); err == nil {
defer f.Close()
d, err := f.DWARF()
if err != nil {
fatalf("cannot load DWARF output from %s: %v", ofile, err)
}
bo := f.ByteOrder
if f.Symtab != nil {
for i := range f.Symtab.Syms {
s := &f.Symtab.Syms[i]
switch {
case isDebugInts(s.Name):
// Found it. Now find data section.
if i := int(s.Sect) - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if sect.Addr <= s.Value && s.Value < sect.Addr+sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[s.Value-sect.Addr:]
ints = make([]int64, len(data)/8)
for i := range ints {
ints[i] = int64(bo.Uint64(data[i*8:]))
}
}
}
}
case isDebugFloats(s.Name):
// Found it. Now find data section.
if i := int(s.Sect) - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if sect.Addr <= s.Value && s.Value < sect.Addr+sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[s.Value-sect.Addr:]
floats = make([]float64, len(data)/8)
for i := range floats {
floats[i] = math.Float64frombits(bo.Uint64(data[i*8:]))
}
}
}
}
default:
if n := indexOfDebugStr(s.Name); n != -1 {
// Found it. Now find data section.
if i := int(s.Sect) - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if sect.Addr <= s.Value && s.Value < sect.Addr+sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[s.Value-sect.Addr:]
strdata[n] = string(data)
}
}
}
break
}
if n := indexOfDebugStrlen(s.Name); n != -1 {
// Found it. Now find data section.
if i := int(s.Sect) - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if sect.Addr <= s.Value && s.Value < sect.Addr+sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[s.Value-sect.Addr:]
strlen := bo.Uint64(data[:8])
if strlen > (1<<(uint(p.IntSize*8)-1) - 1) { // greater than MaxInt?
fatalf("string literal too big")
}
strlens[n] = int(strlen)
}
}
}
break
}
}
}
buildStrings()
}
return d, ints, floats, strs
}
if f, err := elf.Open(ofile); err == nil {
defer f.Close()
d, err := f.DWARF()
if err != nil {
fatalf("cannot load DWARF output from %s: %v", ofile, err)
}
bo := f.ByteOrder
symtab, err := f.Symbols()
if err == nil {
// Check for use of -fsanitize=hwaddress (issue 53285).
removeTag := func(v uint64) uint64 { return v }
if goarch == "arm64" {
for i := range symtab {
if symtab[i].Name == "__hwasan_init" {
// -fsanitize=hwaddress on ARM
// uses the upper byte of a
// memory address as a hardware
// tag. Remove it so that
// we can find the associated
// data.
removeTag = func(v uint64) uint64 { return v &^ (0xff << (64 - 8)) }
break
}
}
}
for i := range symtab {
s := &symtab[i]
switch {
case isDebugInts(s.Name):
// Found it. Now find data section.
if i := int(s.Section); 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
val := removeTag(s.Value)
if sect.Addr <= val && val < sect.Addr+sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[val-sect.Addr:]
ints = make([]int64, len(data)/8)
for i := range ints {
ints[i] = int64(bo.Uint64(data[i*8:]))
}
}
}
}
case isDebugFloats(s.Name):
// Found it. Now find data section.
if i := int(s.Section); 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
val := removeTag(s.Value)
if sect.Addr <= val && val < sect.Addr+sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[val-sect.Addr:]
floats = make([]float64, len(data)/8)
for i := range floats {
floats[i] = math.Float64frombits(bo.Uint64(data[i*8:]))
}
}
}
}
default:
if n := indexOfDebugStr(s.Name); n != -1 {
// Found it. Now find data section.
if i := int(s.Section); 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
val := removeTag(s.Value)
if sect.Addr <= val && val < sect.Addr+sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[val-sect.Addr:]
strdata[n] = string(data)
}
}
}
break
}
if n := indexOfDebugStrlen(s.Name); n != -1 {
// Found it. Now find data section.
if i := int(s.Section); 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
val := removeTag(s.Value)
if sect.Addr <= val && val < sect.Addr+sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[val-sect.Addr:]
strlen := bo.Uint64(data[:8])
if strlen > (1<<(uint(p.IntSize*8)-1) - 1) { // greater than MaxInt?
fatalf("string literal too big")
}
strlens[n] = int(strlen)
}
}
}
break
}
}
}
buildStrings()
}
return d, ints, floats, strs
}
if f, err := pe.Open(ofile); err == nil {
defer f.Close()
d, err := f.DWARF()
if err != nil {
fatalf("cannot load DWARF output from %s: %v", ofile, err)
}
bo := binary.LittleEndian
for _, s := range f.Symbols {
switch {
case isDebugInts(s.Name):
if i := int(s.SectionNumber) - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if s.Value < sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[s.Value:]
ints = make([]int64, len(data)/8)
for i := range ints {
ints[i] = int64(bo.Uint64(data[i*8:]))
}
}
}
}
case isDebugFloats(s.Name):
if i := int(s.SectionNumber) - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if s.Value < sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[s.Value:]
floats = make([]float64, len(data)/8)
for i := range floats {
floats[i] = math.Float64frombits(bo.Uint64(data[i*8:]))
}
}
}
}
default:
if n := indexOfDebugStr(s.Name); n != -1 {
if i := int(s.SectionNumber) - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if s.Value < sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[s.Value:]
strdata[n] = string(data)
}
}
}
break
}
if n := indexOfDebugStrlen(s.Name); n != -1 {
if i := int(s.SectionNumber) - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if s.Value < sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[s.Value:]
strlen := bo.Uint64(data[:8])
if strlen > (1<<(uint(p.IntSize*8)-1) - 1) { // greater than MaxInt?
fatalf("string literal too big")
}
strlens[n] = int(strlen)
}
}
}
break
}
}
}
buildStrings()
return d, ints, floats, strs
}
if f, err := xcoff.Open(ofile); err == nil {
defer f.Close()
d, err := f.DWARF()
if err != nil {
fatalf("cannot load DWARF output from %s: %v", ofile, err)
}
bo := binary.BigEndian
for _, s := range f.Symbols {
switch {
case isDebugInts(s.Name):
if i := s.SectionNumber - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if s.Value < sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[s.Value:]
ints = make([]int64, len(data)/8)
for i := range ints {
ints[i] = int64(bo.Uint64(data[i*8:]))
}
}
}
}
case isDebugFloats(s.Name):
if i := s.SectionNumber - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if s.Value < sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[s.Value:]
floats = make([]float64, len(data)/8)
for i := range floats {
floats[i] = math.Float64frombits(bo.Uint64(data[i*8:]))
}
}
}
}
default:
if n := indexOfDebugStr(s.Name); n != -1 {
if i := s.SectionNumber - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if s.Value < sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[s.Value:]
strdata[n] = string(data)
}
}
}
break
}
if n := indexOfDebugStrlen(s.Name); n != -1 {
if i := s.SectionNumber - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if s.Value < sect.Size {
if sdat, err := sect.Data(); err == nil {
data := sdat[s.Value:]
strlen := bo.Uint64(data[:8])
if strlen > (1<<(uint(p.IntSize*8)-1) - 1) { // greater than MaxInt?
fatalf("string literal too big")
}
strlens[n] = int(strlen)
}
}
}
break
}
}
}
buildStrings()
return d, ints, floats, strs
}
fatalf("cannot parse gcc output %s as ELF, Mach-O, PE, XCOFF object", ofile)
panic("not reached")
}

// gccDefines runs gcc -E -dM -xc - over the C program stdin
// and returns the corresponding standard output, which is the
// #defines that gcc encountered while processing the input
// and its included files.
func gccDefines(stdin []byte, gccOptions []string) string {
base := append(gccBaseCmd, "-E", "-dM", "-xc")
base = append(base, gccMachine()...)
stdout, _ := runGcc(stdin, append(append(base, gccOptions...), "-"))
return stdout
}

// gccErrors runs gcc over the C program stdin and returns
// the errors that gcc prints. That is, this function expects
// gcc to fail.
// gccErrors is called concurrently with different C programs.
func (p *Package) gccErrors(stdin []byte, extraArgs ...string) string {
// TODO(rsc): require failure
args := p.gccCmd(gccTmp())
// Optimization options can confuse the error messages; remove them.
nargs := make([]string, 0, len(args)+len(extraArgs))
for _, arg := range args {
if !strings.HasPrefix(arg, "-O") {
nargs = append(nargs, arg)
}
}
// Force -O0 optimization and append extra arguments, but keep the
// trailing "-" at the end.
li := len(nargs) - 1
last := nargs[li]
nargs[li] = "-O0"
nargs = append(nargs, extraArgs...)
nargs = append(nargs, last)
if *debugGcc {
fmt.Fprintf(os.Stderr, "$ %s <<EOF\n", strings.Join(nargs, " "))
os.Stderr.Write(stdin)
fmt.Fprint(os.Stderr, "EOF\n")
}
stdout, stderr, _ := run(stdin, nargs)
if *debugGcc {
os.Stderr.Write(stdout)
os.Stderr.Write(stderr)
}
return string(stderr)
}

// runGcc runs the gcc command line args with stdin on standard input.
// If the command exits with a non-zero exit status, runGcc prints
// details about what was run and exits.
// Otherwise runGcc returns the data written to standard output and standard error.
// Note that for some of the uses we expect useful data back
// on standard error, but for those uses gcc must still exit 0.
func runGcc(stdin []byte, args []string) (string, string) {
if *debugGcc {
fmt.Fprintf(os.Stderr, "$ %s <<EOF\n", strings.Join(args, " "))
os.Stderr.Write(stdin)
fmt.Fprint(os.Stderr, "EOF\n")
}
stdout, stderr, ok := run(stdin, args)
if *debugGcc {
os.Stderr.Write(stdout)
os.Stderr.Write(stderr)
}
if !ok {
os.Stderr.Write(stderr)
os.Exit(2)
}
return string(stdout), string(stderr)
}
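The subprocess plumbing that runGcc wraps can be sketched on its own. This is a minimal standalone sketch, not cgo's code: it substitutes a POSIX `tr` binary for gcc (an assumption about the host), and shows the same pattern of feeding stdin to a child process while capturing stdout and stderr separately.

```go
// Standalone sketch of the pattern runGcc relies on: feed a byte slice
// to a subprocess on stdin and capture stdout and stderr separately.
// Uses "tr" (assumed available) instead of gcc so it runs anywhere POSIX.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runWithStdin is a hypothetical helper mirroring run/runGcc's shape.
func runWithStdin(stdin []byte, name string, args ...string) (string, string, error) {
	cmd := exec.Command(name, args...)
	cmd.Stdin = bytes.NewReader(stdin)
	var out, errb bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &errb
	err := cmd.Run()
	return out.String(), errb.String(), err
}

func main() {
	stdout, _, err := runWithStdin([]byte("hello"), "tr", "a-z", "A-Z")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(stdout) // HELLO
}
```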

// A typeConv is a translator from dwarf types to Go types
// with equivalent memory layout.
type typeConv struct {
// Cache of already-translated or in-progress types.
m map[string]*Type
// Map from types to incomplete pointers to those types.
ptrs map[string][]*Type
// Keys of ptrs in insertion order (deterministic worklist)
// ptrKeys contains exactly the keys in ptrs.
ptrKeys []dwarf.Type
// Type names X for which there exists an XGetTypeID function with type func() CFTypeID.
getTypeIDs map[string]bool
// incompleteStructs contains C structs that should be marked Incomplete.
incompleteStructs map[string]bool
// Predeclared types.
bool ast.Expr
byte ast.Expr // denotes padding
int8, int16, int32, int64 ast.Expr
uint8, uint16, uint32, uint64, uintptr ast.Expr
float32, float64 ast.Expr
complex64, complex128 ast.Expr
void ast.Expr
string ast.Expr
goVoid ast.Expr // _Ctype_void, denotes C's void
goVoidPtr ast.Expr // unsafe.Pointer or *byte
ptrSize int64
intSize int64
}

var tagGen int
var typedef = make(map[string]*Type)
var goIdent = make(map[string]*ast.Ident)

// unionWithPointer is true for a Go type that represents a C union (or class)
// that may contain a pointer. This is used for cgo pointer checking.
var unionWithPointer = make(map[ast.Expr]bool)

// anonymousStructTag provides a consistent tag for an anonymous struct.
// The same dwarf.StructType pointer will always get the same tag.
var anonymousStructTag = make(map[*dwarf.StructType]string)

func (c *typeConv) Init(ptrSize, intSize int64) {
c.ptrSize = ptrSize
c.intSize = intSize
c.m = make(map[string]*Type)
c.ptrs = make(map[string][]*Type)
c.getTypeIDs = make(map[string]bool)
c.incompleteStructs = make(map[string]bool)
c.bool = c.Ident("bool")
c.byte = c.Ident("byte")
c.int8 = c.Ident("int8")
c.int16 = c.Ident("int16")
c.int32 = c.Ident("int32")
c.int64 = c.Ident("int64")
c.uint8 = c.Ident("uint8")
c.uint16 = c.Ident("uint16")
c.uint32 = c.Ident("uint32")
c.uint64 = c.Ident("uint64")
c.uintptr = c.Ident("uintptr")
c.float32 = c.Ident("float32")
c.float64 = c.Ident("float64")
c.complex64 = c.Ident("complex64")
c.complex128 = c.Ident("complex128")
c.void = c.Ident("void")
c.string = c.Ident("string")
c.goVoid = c.Ident("_Ctype_void")
// Normally cgo translates void* to unsafe.Pointer,
// but for historical reasons -godefs uses *byte instead.
if *godefs {
c.goVoidPtr = &ast.StarExpr{X: c.byte}
} else {
c.goVoidPtr = c.Ident("unsafe.Pointer")
}
}

// base strips away qualifiers and typedefs to get the underlying type.
func base(dt dwarf.Type) dwarf.Type {
for {
if d, ok := dt.(*dwarf.QualType); ok {
dt = d.Type
continue
}
if d, ok := dt.(*dwarf.TypedefType); ok {
dt = d.Type
continue
}
break
}
return dt
}

// unqual strips away qualifiers from a DWARF type.
// In general we don't care about top-level qualifiers.
func unqual(dt dwarf.Type) dwarf.Type {
for {
if d, ok := dt.(*dwarf.QualType); ok {
dt = d.Type
} else {
break
}
}
return dt
}

// Map from dwarf text names to aliases we use in package "C".
var dwarfToName = map[string]string{
"long int": "long",
"long unsigned int": "ulong",
"unsigned int": "uint",
"short unsigned int": "ushort",
"unsigned short": "ushort", // Used by Clang; issue 13129.
"short int": "short",
"long long int": "longlong",
"long long unsigned int": "ulonglong",
"signed char": "schar",
"unsigned char": "uchar",
"unsigned long": "ulong", // Used by Clang 14; issue 53013.
"unsigned long long": "ulonglong", // Used by Clang 14; issue 53013.
}

const signedDelta = 64

// String returns the current type representation. Format arguments
// are assembled within this method so that any changes in mutable
// values are taken into account.
func (tr *TypeRepr) String() string {
if len(tr.Repr) == 0 {
return ""
}
if len(tr.FormatArgs) == 0 {
return tr.Repr
}
return fmt.Sprintf(tr.Repr, tr.FormatArgs...)
}

// Empty reports whether the result of String would be "".
func (tr *TypeRepr) Empty() bool {
return len(tr.Repr) == 0
}

// Set modifies the type representation.
// If fargs are provided, repr is used as a format for fmt.Sprintf.
// Otherwise, repr is used unprocessed as the type representation.
func (tr *TypeRepr) Set(repr string, fargs ...any) {
tr.Repr = repr
tr.FormatArgs = fargs
}

// FinishType completes any outstanding type mapping work.
// In particular, it resolves incomplete pointer types.
func (c *typeConv) FinishType(pos token.Pos) {
// Completing one pointer type might produce more to complete.
// Keep looping until they're all done.
for len(c.ptrKeys) > 0 {
dtype := c.ptrKeys[0]
dtypeKey := dtype.String()
c.ptrKeys = c.ptrKeys[1:]
ptrs := c.ptrs[dtypeKey]
delete(c.ptrs, dtypeKey)
// Note Type might invalidate c.ptrs[dtypeKey].
t := c.Type(dtype, pos)
for _, ptr := range ptrs {
ptr.Go.(*ast.StarExpr).X = t.Go
ptr.C.Set("%s*", t.C)
}
}
}

// Type returns a *Type with the same memory layout as
// dtype when used as the type of a variable or a struct field.
func (c *typeConv) Type(dtype dwarf.Type, pos token.Pos) *Type {
return c.loadType(dtype, pos, "")
}

// loadType recursively loads the requested dtype and its dependency graph.
func (c *typeConv) loadType(dtype dwarf.Type, pos token.Pos, parent string) *Type {
// Always recompute bad pointer typedefs, as the set of such
// typedefs changes as we see more types.
checkCache := true
if dtt, ok := dtype.(*dwarf.TypedefType); ok && c.badPointerTypedef(dtt) {
checkCache = false
}
// The cache key should be relative to its parent.
// See issue https://golang.org/issue/31891
key := parent + " > " + dtype.String()
if checkCache {
if t, ok := c.m[key]; ok {
if t.Go == nil {
fatalf("%s: type conversion loop at %s", lineno(pos), dtype)
}
return t
}
}
t := new(Type)
t.Size = dtype.Size() // note: wrong for array of pointers, corrected below
t.Align = -1
t.C = &TypeRepr{Repr: dtype.Common().Name}
c.m[key] = t
switch dt := dtype.(type) {
default:
fatalf("%s: unexpected type: %s", lineno(pos), dtype)
case *dwarf.AddrType:
if t.Size != c.ptrSize {
fatalf("%s: unexpected: %d-byte address type - %s", lineno(pos), t.Size, dtype)
}
t.Go = c.uintptr
t.Align = t.Size
case *dwarf.ArrayType:
if dt.StrideBitSize > 0 {
// Cannot represent bit-sized elements in Go.
t.Go = c.Opaque(t.Size)
break
}
count := dt.Count
if count == -1 {
// Indicates flexible array member, which Go doesn't support.
// Translate to zero-length array instead.
count = 0
}
sub := c.Type(dt.Type, pos)
t.Align = sub.Align
t.Go = &ast.ArrayType{
Len: c.intExpr(count),
Elt: sub.Go,
}
// Recalculate t.Size now that we know sub.Size.
t.Size = count * sub.Size
t.C.Set("__typeof__(%s[%d])", sub.C, dt.Count)
case *dwarf.BoolType:
t.Go = c.bool
t.Align = 1
case *dwarf.CharType:
if t.Size != 1 {
fatalf("%s: unexpected: %d-byte char type - %s", lineno(pos), t.Size, dtype)
}
t.Go = c.int8
t.Align = 1
case *dwarf.EnumType:
if t.Align = t.Size; t.Align >= c.ptrSize {
t.Align = c.ptrSize
}
t.C.Set("enum " + dt.EnumName)
signed := 0
t.EnumValues = make(map[string]int64)
for _, ev := range dt.Val {
t.EnumValues[ev.Name] = ev.Val
if ev.Val < 0 {
signed = signedDelta
}
}
switch t.Size + int64(signed) {
default:
fatalf("%s: unexpected: %d-byte enum type - %s", lineno(pos), t.Size, dtype)
case 1:
t.Go = c.uint8
case 2:
t.Go = c.uint16
case 4:
t.Go = c.uint32
case 8:
t.Go = c.uint64
case 1 + signedDelta:
t.Go = c.int8
case 2 + signedDelta:
t.Go = c.int16
case 4 + signedDelta:
t.Go = c.int32
case 8 + signedDelta:
t.Go = c.int64
}
case *dwarf.FloatType:
switch t.Size {
default:
fatalf("%s: unexpected: %d-byte float type - %s", lineno(pos), t.Size, dtype)
case 4:
t.Go = c.float32
case 8:
t.Go = c.float64
}
if t.Align = t.Size; t.Align >= c.ptrSize {
t.Align = c.ptrSize
}
case *dwarf.ComplexType:
switch t.Size {
default:
fatalf("%s: unexpected: %d-byte complex type - %s", lineno(pos), t.Size, dtype)
case 8:
t.Go = c.complex64
case 16:
t.Go = c.complex128
}
if t.Align = t.Size / 2; t.Align >= c.ptrSize {
t.Align = c.ptrSize
}
case *dwarf.FuncType:
// No attempt at translation: would enable calls
// directly between worlds, but we need to moderate those.
t.Go = c.uintptr
t.Align = c.ptrSize
case *dwarf.IntType:
if dt.BitSize > 0 {
fatalf("%s: unexpected: %d-bit int type - %s", lineno(pos), dt.BitSize, dtype)
}
if t.Align = t.Size; t.Align >= c.ptrSize {
t.Align = c.ptrSize
}
switch t.Size {
default:
fatalf("%s: unexpected: %d-byte int type - %s", lineno(pos), t.Size, dtype)
case 1:
t.Go = c.int8
case 2:
t.Go = c.int16
case 4:
t.Go = c.int32
case 8:
t.Go = c.int64
case 16:
t.Go = &ast.ArrayType{
Len: c.intExpr(t.Size),
Elt: c.uint8,
}
// t.Align is the alignment of the Go type.
t.Align = 1
}
case *dwarf.PtrType:
// Clang doesn't emit DW_AT_byte_size for pointer types.
if t.Size != c.ptrSize && t.Size != -1 {
fatalf("%s: unexpected: %d-byte pointer type - %s", lineno(pos), t.Size, dtype)
}
t.Size = c.ptrSize
t.Align = c.ptrSize
if _, ok := base(dt.Type).(*dwarf.VoidType); ok {
t.Go = c.goVoidPtr
t.C.Set("void*")
dq := dt.Type
for {
if d, ok := dq.(*dwarf.QualType); ok {
t.C.Set(d.Qual + " " + t.C.String())
dq = d.Type
} else {
break
}
}
break
}
// Placeholder initialization; completed in FinishType.
t.Go = &ast.StarExpr{}
t.C.Set("<incomplete>*")
key := dt.Type.String()
if _, ok := c.ptrs[key]; !ok {
c.ptrKeys = append(c.ptrKeys, dt.Type)
}
c.ptrs[key] = append(c.ptrs[key], t)
case *dwarf.QualType:
t1 := c.Type(dt.Type, pos)
t.Size = t1.Size
t.Align = t1.Align
t.Go = t1.Go
if unionWithPointer[t1.Go] {
unionWithPointer[t.Go] = true
}
t.EnumValues = nil
t.Typedef = ""
t.C.Set("%s "+dt.Qual, t1.C)
return t
case *dwarf.StructType:
// Convert to Go struct, being careful about alignment.
// Have to give it a name to simulate C "struct foo" references.
tag := dt.StructName
if dt.ByteSize < 0 && tag == "" { // opaque unnamed struct - should not be possible
break
}
if tag == "" {
tag = anonymousStructTag[dt]
if tag == "" {
tag = "__" + strconv.Itoa(tagGen)
tagGen++
anonymousStructTag[dt] = tag
}
} else if t.C.Empty() {
t.C.Set(dt.Kind + " " + tag)
}
name := c.Ident("_Ctype_" + dt.Kind + "_" + tag)
t.Go = name // publish before recursive calls
goIdent[name.Name] = name
if dt.ByteSize < 0 {
// Don't override old type
if _, ok := typedef[name.Name]; ok {
break
}
// Size calculation in c.Struct/c.Opaque will die with size=-1 (unknown),
// so execute the basic things that the struct case would do
// other than try to determine a Go representation.
tt := *t
tt.C = &TypeRepr{"%s %s", []any{dt.Kind, tag}}
// We don't know what the representation of this struct is, so don't let
// anyone allocate one on the Go side. As a side effect of this annotation,
// pointers to this type will not be considered pointers in Go. They won't
// get writebarrier-ed or adjusted during a stack copy. This should handle
// all the cases badPointerTypedef used to handle, but hopefully will
// continue to work going forward without any more need for cgo changes.
tt.Go = c.Ident(incomplete)
typedef[name.Name] = &tt
break
}
switch dt.Kind {
case "class", "union":
t.Go = c.Opaque(t.Size)
if c.dwarfHasPointer(dt, pos) {
unionWithPointer[t.Go] = true
}
if t.C.Empty() {
t.C.Set("__typeof__(unsigned char[%d])", t.Size)
}
t.Align = 1 // TODO: should probably base this on field alignment.
typedef[name.Name] = t
case "struct":
g, csyntax, align := c.Struct(dt, pos)
if t.C.Empty() {
t.C.Set(csyntax)
}
t.Align = align
tt := *t
if tag != "" {
tt.C = &TypeRepr{"struct %s", []any{tag}}
}
tt.Go = g
if c.incompleteStructs[tag] {
tt.Go = c.Ident(incomplete)
}
typedef[name.Name] = &tt
}
case *dwarf.TypedefType:
// Record typedef for printing.
if dt.Name == "_GoString_" {
// Special C name for Go string type.
// Knows string layout used by compilers: pointer plus length,
// which rounds up to 2 pointers after alignment.
t.Go = c.string
t.Size = c.ptrSize * 2
t.Align = c.ptrSize
break
}
if dt.Name == "_GoBytes_" {
// Special C name for Go []byte type.
// Knows slice layout used by compilers: pointer, length, cap.
t.Go = c.Ident("[]byte")
t.Size = c.ptrSize + 4 + 4
t.Align = c.ptrSize
break
}
name := c.Ident("_Ctype_" + dt.Name)
goIdent[name.Name] = name
akey := ""
if c.anonymousStructTypedef(dt) {
// only load type recursively for typedefs of anonymous
// structs, see issues 37479 and 37621.
akey = key
}
sub := c.loadType(dt.Type, pos, akey)
if c.badPointerTypedef(dt) {
// Treat this typedef as a uintptr.
s := *sub
s.Go = c.uintptr
s.BadPointer = true
sub = &s
// Make sure we update any previously computed type.
if oldType := typedef[name.Name]; oldType != nil {
oldType.Go = sub.Go
oldType.BadPointer = true
}
}
if c.badVoidPointerTypedef(dt) {
// Treat this typedef as a pointer to a _cgopackage.Incomplete.
s := *sub
s.Go = c.Ident("*" + incomplete)
sub = &s
// Make sure we update any previously computed type.
if oldType := typedef[name.Name]; oldType != nil {
oldType.Go = sub.Go
}
}
// Check for non-pointer "struct <tag>{...}; typedef struct <tag> *<name>"
// typedefs that should be marked Incomplete.
if ptr, ok := dt.Type.(*dwarf.PtrType); ok {
if strct, ok := ptr.Type.(*dwarf.StructType); ok {
if c.badStructPointerTypedef(dt.Name, strct) {
c.incompleteStructs[strct.StructName] = true
// Make sure we update any previously computed type.
name := "_Ctype_struct_" + strct.StructName
if oldType := typedef[name]; oldType != nil {
oldType.Go = c.Ident(incomplete)
}
}
}
}
t.Go = name
t.BadPointer = sub.BadPointer
if unionWithPointer[sub.Go] {
unionWithPointer[t.Go] = true
}
t.Size = sub.Size
t.Align = sub.Align
oldType := typedef[name.Name]
if oldType == nil {
tt := *t
tt.Go = sub.Go
tt.BadPointer = sub.BadPointer
typedef[name.Name] = &tt
}
// If sub.Go.Name is "_Ctype_struct_foo" or "_Ctype_union_foo" or "_Ctype_class_foo",
// use that as the Go form for this typedef too, so that the typedef will be interchangeable
// with the base type.
// In -godefs mode, do this for all typedefs.
if isStructUnionClass(sub.Go) || *godefs {
t.Go = sub.Go
if isStructUnionClass(sub.Go) {
// Use the typedef name for C code.
typedef[sub.Go.(*ast.Ident).Name].C = t.C
}
// If we've seen this typedef before, and it
// was an anonymous struct/union/class before
// too, use the old definition.
// TODO: it would be safer to only do this if
// we verify that the types are the same.
if oldType != nil && isStructUnionClass(oldType.Go) {
t.Go = oldType.Go
}
}
case *dwarf.UcharType:
if t.Size != 1 {
fatalf("%s: unexpected: %d-byte uchar type - %s", lineno(pos), t.Size, dtype)
}
t.Go = c.uint8
t.Align = 1
case *dwarf.UintType:
if dt.BitSize > 0 {
fatalf("%s: unexpected: %d-bit uint type - %s", lineno(pos), dt.BitSize, dtype)
}
if t.Align = t.Size; t.Align >= c.ptrSize {
t.Align = c.ptrSize
}
switch t.Size {
default:
fatalf("%s: unexpected: %d-byte uint type - %s", lineno(pos), t.Size, dtype)
case 1:
t.Go = c.uint8
case 2:
t.Go = c.uint16
case 4:
t.Go = c.uint32
case 8:
t.Go = c.uint64
case 16:
t.Go = &ast.ArrayType{
Len: c.intExpr(t.Size),
Elt: c.uint8,
}
// t.Align is the alignment of the Go type.
t.Align = 1
}
case *dwarf.VoidType:
t.Go = c.goVoid
t.C.Set("void")
t.Align = 1
}
switch dtype.(type) {
case *dwarf.AddrType, *dwarf.BoolType, *dwarf.CharType, *dwarf.ComplexType, *dwarf.IntType, *dwarf.FloatType, *dwarf.UcharType, *dwarf.UintType:
s := dtype.Common().Name
if s != "" {
if ss, ok := dwarfToName[s]; ok {
s = ss
}
s = strings.ReplaceAll(s, " ", "")
name := c.Ident("_Ctype_" + s)
tt := *t
typedef[name.Name] = &tt
if !*godefs {
t.Go = name
}
}
}
if t.Size < 0 {
// Unsized types are [0]byte, unless they're typedefs of other types
// or structs with tags; if so, use the name we've already defined.
t.Size = 0
switch dt := dtype.(type) {
case *dwarf.TypedefType:
// ok
case *dwarf.StructType:
if dt.StructName != "" {
break
}
t.Go = c.Opaque(0)
default:
t.Go = c.Opaque(0)
}
if t.C.Empty() {
t.C.Set("void")
}
}
if t.C.Empty() {
fatalf("%s: internal error: did not create C name for %s", lineno(pos), dtype)
}
return t
}
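The enum branch of loadType folds size and signedness into a single switch key, with signedDelta = 64 keeping the signed and unsigned ranges disjoint. A standalone sketch of that trick (enumGoType is a hypothetical helper, not part of cgo):

```go
// Standalone sketch of loadType's enum-size dispatch: encode "size in
// bytes" plus "has a negative enumerator" in one switch key.
package main

import "fmt"

const signedDelta = 64 // larger than any byte size, so ranges never collide

// enumGoType is illustrative only; cgo assigns ast.Expr values instead.
func enumGoType(size int64, signed bool) string {
	key := size
	if signed {
		key += signedDelta
	}
	switch key {
	case 1:
		return "uint8"
	case 2:
		return "uint16"
	case 4:
		return "uint32"
	case 8:
		return "uint64"
	case 1 + signedDelta:
		return "int8"
	case 2 + signedDelta:
		return "int16"
	case 4 + signedDelta:
		return "int32"
	case 8 + signedDelta:
		return "int64"
	}
	return "" // unexpected size; cgo calls fatalf here
}

func main() {
	fmt.Println(enumGoType(4, false)) // uint32
	fmt.Println(enumGoType(4, true))  // int32
}
```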

// isStructUnionClass reports whether the type described by the Go syntax x
// is a struct, union, or class with a tag.
func isStructUnionClass(x ast.Expr) bool {
id, ok := x.(*ast.Ident)
if !ok {
return false
}
name := id.Name
return strings.HasPrefix(name, "_Ctype_struct_") ||
strings.HasPrefix(name, "_Ctype_union_") ||
strings.HasPrefix(name, "_Ctype_class_")
}

// FuncArg returns a Go type with the same memory layout as
// dtype when used as the type of a C function argument.
func (c *typeConv) FuncArg(dtype dwarf.Type, pos token.Pos) *Type {
t := c.Type(unqual(dtype), pos)
switch dt := dtype.(type) {
case *dwarf.ArrayType:
// Arrays are passed implicitly as pointers in C.
// In Go, we must be explicit.
tr := &TypeRepr{}
tr.Set("%s*", t.C)
return &Type{
Size: c.ptrSize,
Align: c.ptrSize,
Go: &ast.StarExpr{X: t.Go},
C: tr,
}
case *dwarf.TypedefType:
// C has much more relaxed rules than Go for
// implicit type conversions. When the parameter
// is type T defined as *X, simulate a little of the
// laxness of C by making the argument *X instead of T.
if ptr, ok := base(dt.Type).(*dwarf.PtrType); ok {
// Unless the typedef happens to point to void* since
// Go has special rules around using unsafe.Pointer.
if _, void := base(ptr.Type).(*dwarf.VoidType); void {
break
}
// ...or the typedef is one in which we expect bad pointers.
// It will be a uintptr instead of *X.
if c.baseBadPointerTypedef(dt) {
break
}
t = c.Type(ptr, pos)
if t == nil {
return nil
}
// For a struct/union/class, remember the C spelling,
// in case it has __attribute__((unavailable)).
// See issue 2888.
if isStructUnionClass(t.Go) {
t.Typedef = dt.Name
}
}
}
return t
}

// FuncType returns the Go type analogous to dtype.
// There is no guarantee about matching memory layout.
func (c *typeConv) FuncType(dtype *dwarf.FuncType, pos token.Pos) *FuncType {
p := make([]*Type, len(dtype.ParamType))
gp := make([]*ast.Field, len(dtype.ParamType))
for i, f := range dtype.ParamType {
// gcc's DWARF generator outputs a single DotDotDotType parameter for
// function pointers that specify no parameters (e.g. void
// (*__cgo_0)()). Treat this special case as void. This case is
// invalid according to ISO C anyway (i.e. void (*__cgo_1)(...) is not
// legal).
if _, ok := f.(*dwarf.DotDotDotType); ok && i == 0 {
p, gp = nil, nil
break
}
p[i] = c.FuncArg(f, pos)
gp[i] = &ast.Field{Type: p[i].Go}
}
var r *Type
var gr []*ast.Field
if _, ok := base(dtype.ReturnType).(*dwarf.VoidType); ok {
gr = []*ast.Field{{Type: c.goVoid}}
} else if dtype.ReturnType != nil {
r = c.Type(unqual(dtype.ReturnType), pos)
gr = []*ast.Field{{Type: r.Go}}
}
return &FuncType{
Params: p,
Result: r,
Go: &ast.FuncType{
Params: &ast.FieldList{List: gp},
Results: &ast.FieldList{List: gr},
},
}
}

// Identifier
func (c *typeConv) Ident(s string) *ast.Ident {
return ast.NewIdent(s)
}

// Opaque type of n bytes.
func (c *typeConv) Opaque(n int64) ast.Expr {
return &ast.ArrayType{
Len: c.intExpr(n),
Elt: c.byte,
}
}

// Expr for integer n.
func (c *typeConv) intExpr(n int64) ast.Expr {
return &ast.BasicLit{
Kind: token.INT,
Value: strconv.FormatInt(n, 10),
}
}

// Add padding of given size to fld.
func (c *typeConv) pad(fld []*ast.Field, sizes []int64, size int64) ([]*ast.Field, []int64) {
n := len(fld)
fld = fld[0 : n+1]
fld[n] = &ast.Field{Names: []*ast.Ident{c.Ident("_")}, Type: c.Opaque(size)}
sizes = sizes[0 : n+1]
sizes[n] = size
return fld, sizes
}

// Struct conversion: return Go and (gc) C syntax for type.
func (c *typeConv) Struct(dt *dwarf.StructType, pos token.Pos) (expr *ast.StructType, csyntax string, align int64) {
// Minimum alignment for a struct is 1 byte.
align = 1
var buf strings.Builder
buf.WriteString("struct {")
fld := make([]*ast.Field, 0, 2*len(dt.Field)+1) // enough for padding around every field
sizes := make([]int64, 0, 2*len(dt.Field)+1)
off := int64(0)
// Rename struct fields that happen to be named Go keywords into
// _{keyword}. Create a map from C ident -> Go ident. The Go ident will
// be mangled. Any existing identifier that already has the same name on
// the C-side will cause the Go-mangled version to be prefixed with _.
// (e.g. in a struct with fields '_type' and 'type', the latter would be
// rendered as '__type' in Go).
ident := make(map[string]string)
used := make(map[string]bool)
for _, f := range dt.Field {
ident[f.Name] = f.Name
used[f.Name] = true
}
if !*godefs {
for cid, goid := range ident {
if token.Lookup(goid).IsKeyword() {
// Avoid keyword
goid = "_" + goid
// Also avoid existing fields
for _, exist := used[goid]; exist; _, exist = used[goid] {
goid = "_" + goid
}
used[goid] = true
ident[cid] = goid
}
}
}
anon := 0
for _, f := range dt.Field {
name := f.Name
ft := f.Type
// In godefs mode, if this field is a C11
// anonymous union then treat the first field in the
// union as the field in the struct. This handles
// cases like the glibc <sys/resource.h> file; see
// issue 6677.
if *godefs {
if st, ok := f.Type.(*dwarf.StructType); ok && name == "" && st.Kind == "union" && len(st.Field) > 0 && !used[st.Field[0].Name] {
name = st.Field[0].Name
ident[name] = name
ft = st.Field[0].Type
}
}
// TODO: Handle fields that are anonymous structs by
// promoting the fields of the inner struct.
t := c.Type(ft, pos)
tgo := t.Go
size := t.Size
talign := t.Align
if f.BitOffset > 0 || f.BitSize > 0 {
// The layout of bitfields is implementation defined,
// so we don't know how they correspond to Go fields
// even if they are aligned at byte boundaries.
continue
}
if talign > 0 && f.ByteOffset%talign != 0 {
// Drop misaligned fields, the same way we drop integer bit fields.
// The goal is to make available what can be made available.
// Otherwise one bad and unneeded field in an otherwise okay struct
// makes the whole program not compile. Much of the time these
// structs are in system headers that cannot be corrected.
continue
}
// Round off up to talign, assumed to be a power of 2.
origOff := off
off = (off + talign - 1) &^ (talign - 1)
if f.ByteOffset > off {
fld, sizes = c.pad(fld, sizes, f.ByteOffset-origOff)
off = f.ByteOffset
}
if f.ByteOffset < off {
// Drop a packed field that we can't represent.
continue
}
n := len(fld)
fld = fld[0 : n+1]
if name == "" {
name = fmt.Sprintf("anon%d", anon)
anon++
ident[name] = name
}
fld[n] = &ast.Field{Names: []*ast.Ident{c.Ident(ident[name])}, Type: tgo}
sizes = sizes[0 : n+1]
sizes[n] = size
off += size
buf.WriteString(t.C.String())
buf.WriteString(" ")
buf.WriteString(name)
buf.WriteString("; ")
if talign > align {
align = talign
}
}
if off < dt.ByteSize {
fld, sizes = c.pad(fld, sizes, dt.ByteSize-off)
off = dt.ByteSize
}
// If the last field in a non-zero-sized struct is zero-sized
// the compiler is going to pad it by one (see issue 9401).
// We can't permit that, because then the size of the Go
// struct will not be the same as the size of the C struct.
// Our only option in such a case is to remove the field,
// which means that it cannot be referenced from Go.
for off > 0 && sizes[len(sizes)-1] == 0 {
n := len(sizes)
fld = fld[0 : n-1]
sizes = sizes[0 : n-1]
}
if off != dt.ByteSize {
fatalf("%s: struct size calculation error off=%d bytesize=%d", lineno(pos), off, dt.ByteSize)
}
buf.WriteString("}")
csyntax = buf.String()
if *godefs {
godefsFields(fld)
}
expr = &ast.StructType{Fields: &ast.FieldList{List: fld}}
return
}
// dwarfHasPointer reports whether the DWARF type dt contains a pointer.
func (c *typeConv) dwarfHasPointer(dt dwarf.Type, pos token.Pos) bool {
switch dt := dt.(type) {
default:
fatalf("%s: unexpected type: %s", lineno(pos), dt)
return false
case *dwarf.AddrType, *dwarf.BoolType, *dwarf.CharType, *dwarf.EnumType,
*dwarf.FloatType, *dwarf.ComplexType, *dwarf.FuncType,
*dwarf.IntType, *dwarf.UcharType, *dwarf.UintType, *dwarf.VoidType:
return false
case *dwarf.ArrayType:
return c.dwarfHasPointer(dt.Type, pos)
case *dwarf.PtrType:
return true
case *dwarf.QualType:
return c.dwarfHasPointer(dt.Type, pos)
case *dwarf.StructType:
return slices.ContainsFunc(dt.Field, func(f *dwarf.StructField) bool {
return c.dwarfHasPointer(f.Type, pos)
})
case *dwarf.TypedefType:
if dt.Name == "_GoString_" || dt.Name == "_GoBytes_" {
return true
}
return c.dwarfHasPointer(dt.Type, pos)
}
}
func upper(s string) string {
if s == "" {
return ""
}
r, size := utf8.DecodeRuneInString(s)
if r == '_' {
return "X" + s
}
return string(unicode.ToUpper(r)) + s[size:]
}
// godefsFields rewrites field names for use in Go or C definitions.
// It strips leading common prefixes (like tv_ in tv_sec, tv_usec)
// converts names to upper case, and rewrites _ into Pad_godefs_n,
// so that all fields are exported.
func godefsFields(fld []*ast.Field) {
prefix := fieldPrefix(fld)
// Issue 48396: check for duplicate field names.
if prefix != "" {
names := make(map[string]bool)
fldLoop:
for _, f := range fld {
for _, n := range f.Names {
name := n.Name
if name == "_" {
continue
}
if name != prefix {
name = strings.TrimPrefix(n.Name, prefix)
}
name = upper(name)
if names[name] {
// Field name conflict: don't remove prefix.
prefix = ""
break fldLoop
}
names[name] = true
}
}
}
npad := 0
for _, f := range fld {
for _, n := range f.Names {
if n.Name != prefix {
n.Name = strings.TrimPrefix(n.Name, prefix)
}
if n.Name == "_" {
// Use exported name instead.
n.Name = "Pad_cgo_" + strconv.Itoa(npad)
npad++
}
n.Name = upper(n.Name)
}
}
}
// fieldPrefix returns the prefix that should be removed from all the
// field names when generating the C or Go code. For generated
// C, we leave the names as is (tv_sec, tv_usec), since that's what
// people are used to seeing in C. For generated Go code, such as
// package syscall's data structures, we drop a common prefix
// (so sec, usec, which will get turned into Sec, Usec for exporting).
func fieldPrefix(fld []*ast.Field) string {
prefix := ""
for _, f := range fld {
for _, n := range f.Names {
// Ignore field names that don't have the prefix we're
// looking for. It is common in C headers to have fields
// named, say, _pad in an otherwise prefixed header.
// If the struct has 3 fields tv_sec, tv_usec, _pad1, then we
// still want to remove the tv_ prefix.
// The check for "orig_" here handles orig_eax in the
// x86 ptrace register sets, which otherwise have all fields
// with reg_ prefixes.
if strings.HasPrefix(n.Name, "orig_") || strings.HasPrefix(n.Name, "_") {
continue
}
i := strings.Index(n.Name, "_")
if i < 0 {
continue
}
if prefix == "" {
prefix = n.Name[:i+1]
} else if prefix != n.Name[:i+1] {
return ""
}
}
}
return prefix
}
// anonymousStructTypedef reports whether dt is a C typedef for an anonymous
// struct.
func (c *typeConv) anonymousStructTypedef(dt *dwarf.TypedefType) bool {
st, ok := dt.Type.(*dwarf.StructType)
return ok && st.StructName == ""
}
// badPointerTypedef reports whether dt is a C typedef that should not be
// considered a pointer in Go. A typedef is bad if C code sometimes stores
// non-pointers in this type.
// TODO: Currently our best solution is to find these manually and list them as
// they come up. A better solution is desired.
// Note: DEPRECATED. There is now a better solution. Search for incomplete in this file.
func (c *typeConv) badPointerTypedef(dt *dwarf.TypedefType) bool {
if c.badCFType(dt) {
return true
}
if c.badJNI(dt) {
return true
}
if c.badEGLType(dt) {
return true
}
return false
}
// badVoidPointerTypedef is like badPointerTypeDef, but for "void *" typedefs that should be _cgopackage.Incomplete.
func (c *typeConv) badVoidPointerTypedef(dt *dwarf.TypedefType) bool {
// Match the Windows HANDLE type (#42018).
if goos != "windows" || dt.Name != "HANDLE" {
return false
}
// Check that the typedef is "typedef void *<name>".
if ptr, ok := dt.Type.(*dwarf.PtrType); ok {
if _, ok := ptr.Type.(*dwarf.VoidType); ok {
return true
}
}
return false
}
// badStructPointerTypedef is like badVoidPointerTypedef but for structs.
func (c *typeConv) badStructPointerTypedef(name string, dt *dwarf.StructType) bool {
// Windows handle types can all potentially contain non-pointers.
// badVoidPointerTypedef handles the "void *" HANDLE type, but other
// handles are defined as
//
// struct <name>__{int unused;}; typedef struct <name>__ *name;
//
// by the DECLARE_HANDLE macro in STRICT mode. The macro is declared in
// the Windows ntdef.h header,
//
// https://github.com/tpn/winsdk-10/blob/master/Include/10.0.16299.0/shared/ntdef.h#L779
if goos != "windows" {
return false
}
if len(dt.Field) != 1 {
return false
}
if dt.StructName != name+"__" {
return false
}
if f := dt.Field[0]; f.Name != "unused" || f.Type.Common().Name != "int" {
return false
}
return true
}
// baseBadPointerTypedef reports whether the base of a chain of typedefs is a bad typedef
// as badPointerTypedef reports.
func (c *typeConv) baseBadPointerTypedef(dt *dwarf.TypedefType) bool {
for {
if t, ok := dt.Type.(*dwarf.TypedefType); ok {
dt = t
continue
}
break
}
return c.badPointerTypedef(dt)
}
func (c *typeConv) badCFType(dt *dwarf.TypedefType) bool {
// The real bad types are CFNumberRef and CFDateRef.
// Sometimes non-pointers are stored in these types.
// CFTypeRef is a supertype of those, so it can have bad pointers in it as well.
// We return true for the other *Ref types just so casting between them is easier.
// We identify the correct set of types as those ending in Ref and for which
// there exists a corresponding GetTypeID function.
// See comment below for details about the bad pointers.
if goos != "darwin" && goos != "ios" {
return false
}
s := dt.Name
if !strings.HasSuffix(s, "Ref") {
return false
}
s = s[:len(s)-3]
if s == "CFType" {
return true
}
if c.getTypeIDs[s] {
return true
}
if i := strings.Index(s, "Mutable"); i >= 0 && c.getTypeIDs[s[:i]+s[i+7:]] {
// Mutable and immutable variants share a type ID.
return true
}
return false
}
// Comment from Darwin's CFInternal.h
/*
// Tagged pointer support
// Low-bit set means tagged object, next 3 bits (currently)
// define the tagged object class, next 4 bits are for type
// information for the specific tagged object class. Thus,
// the low byte is for type info, and the rest of a pointer
// (32 or 64-bit) is for payload, whatever the tagged class.
//
// Note that the specific integers used to identify the
// specific tagged classes can and will change from release
// to release (that's why this stuff is in CF*Internal*.h),
// as can the definition of type info vs payload above.
//
#if __LP64__
#define CF_IS_TAGGED_OBJ(PTR) ((uintptr_t)(PTR) & 0x1)
#define CF_TAGGED_OBJ_TYPE(PTR) ((uintptr_t)(PTR) & 0xF)
#else
#define CF_IS_TAGGED_OBJ(PTR) 0
#define CF_TAGGED_OBJ_TYPE(PTR) 0
#endif
enum {
kCFTaggedObjectID_Invalid = 0,
kCFTaggedObjectID_Atom = (0 << 1) + 1,
kCFTaggedObjectID_Undefined3 = (1 << 1) + 1,
kCFTaggedObjectID_Undefined2 = (2 << 1) + 1,
kCFTaggedObjectID_Integer = (3 << 1) + 1,
kCFTaggedObjectID_DateTS = (4 << 1) + 1,
kCFTaggedObjectID_ManagedObjectID = (5 << 1) + 1, // Core Data
kCFTaggedObjectID_Date = (6 << 1) + 1,
kCFTaggedObjectID_Undefined7 = (7 << 1) + 1,
};
*/
func (c *typeConv) badJNI(dt *dwarf.TypedefType) bool {
// In Dalvik and ART, the jobject type in the JNI interface of the JVM has the
// property that it is sometimes (always?) a small integer instead of a real pointer.
// Note: although only the android JVMs are bad in this respect, we declare the JNI types
// bad regardless of platform, so the same Go code compiles on both android and non-android.
if parent, ok := jniTypes[dt.Name]; ok {
// Try to make sure we're talking about a JNI type, not just some random user's
// type that happens to use the same name.
// C doesn't have the notion of a package, so it's hard to be certain.
// Walk up to jobject, checking each typedef on the way.
w := dt
for parent != "" {
t, ok := w.Type.(*dwarf.TypedefType)
if !ok || t.Name != parent {
return false
}
w = t
parent, ok = jniTypes[w.Name]
if !ok {
return false
}
}
// Check that the typedef is either:
// 1:
// struct _jobject;
// typedef struct _jobject *jobject;
// 2: (in NDK16 in C++)
// class _jobject {};
// typedef _jobject* jobject;
// 3: (in NDK16 in C)
// typedef void* jobject;
if ptr, ok := w.Type.(*dwarf.PtrType); ok {
switch v := ptr.Type.(type) {
case *dwarf.VoidType:
return true
case *dwarf.StructType:
if v.StructName == "_jobject" && len(v.Field) == 0 {
switch v.Kind {
case "struct":
if v.Incomplete {
return true
}
case "class":
if !v.Incomplete {
return true
}
}
}
}
}
}
return false
}
func (c *typeConv) badEGLType(dt *dwarf.TypedefType) bool {
if dt.Name != "EGLDisplay" && dt.Name != "EGLConfig" {
return false
}
// Check that the typedef is "typedef void *<name>".
if ptr, ok := dt.Type.(*dwarf.PtrType); ok {
if _, ok := ptr.Type.(*dwarf.VoidType); ok {
return true
}
}
return false
}
// jniTypes maps from JNI types that we want to be uintptrs, to the underlying type to which
// they are mapped. The base "jobject" maps to the empty string.
var jniTypes = map[string]string{
"jobject": "",
"jclass": "jobject",
"jthrowable": "jobject",
"jstring": "jobject",
"jarray": "jobject",
"jbooleanArray": "jarray",
"jbyteArray": "jarray",
"jcharArray": "jarray",
"jshortArray": "jarray",
"jintArray": "jarray",
"jlongArray": "jarray",
"jfloatArray": "jarray",
"jdoubleArray": "jarray",
"jobjectArray": "jarray",
"jweak": "jobject",
} | go | github | https://github.com/golang/go | src/cmd/cgo/gcc.go |
import gym
import numpy
y = .97 #Discount Rate
learnRate = .2
totalEps = 1000
def updateQMat(q, reward, state, action, newState):
futureReward = max(q[newState][:])
q[state][action] = q[state][action] + learnRate * (reward + y * futureReward - q[state][action])
return
success = False
lastFailEp = -1
firstSuccEp = -1
successEps = 0
env = gym.make('FrozenLake-v0')
#env = wrappers.Monitor(env, '/tmp/recording', force=True) #Records performance data
qMatrix = numpy.zeros((env.observation_space.n, env.action_space.n)) #Initialize qMatrix to 0s
for i in range(totalEps):
observation = env.reset()
#Loop through episode, one timestep at a time
for t in range(env.spec.tags.get('wrapper_config.TimeLimit.max_episode_steps')):
#Create an array of random estimated rewards representing each action
#with the possible range of rewards decreasing with each episode
randomActions = numpy.random.randn(1, env.action_space.n)*(1/(i+1))
#Choose either the action with max expected reward, or a random action
#according to randomActions array. With each episode, the random actions
#will become less chosen.
action = numpy.argmax(qMatrix[observation][:] + randomActions)
        oldObservation = observation
observation, reward, done, info = env.step(action) #Perform the action
if done and reward == 0:
reward = -1 #Edit reward to negative in the case of falling in a hole
updateQMat(qMatrix, reward, oldObservation, action, observation) #Update the Q-Matrix
#env.render()
if done:
if reward == 1:
successEps += 1
if success == False:
success = True
firstSuccEp = i
else:
lastFailEp = i
#print("Episode finished after {} timesteps".format(t+1))
break
print()
print("Percentage of successful episodes: " + str(successEps/totalEps * 100) + "%")
print()
print("First successful episode: " + str(firstSuccEp))
print()
print("Last failed episode: " + str(lastFailEp))
env.close()
print("qMatrix=%s\n" % qMatrix)
print("Policy is %s\n" % numpy.argmax(qMatrix,axis=1).reshape(4,4))
#gym.upload('/tmp/recording', api_key='sk_fVhBRLT7S7e4MoHswIH5wg') #Uploads performance data | unknown | codeparrot/codeparrot-clean | ||
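The heart of the script above is the temporal-difference update inside `updateQMat`. A minimal, self-contained sketch of that same rule, separated from the gym loop — the function and variable names here are illustrative, not part of the script:

```python
import numpy as np

def q_update(q, state, action, reward, new_state, lr=0.2, gamma=0.97):
    """One tabular Q-learning step: move Q(s, a) toward the TD target."""
    target = reward + gamma * np.max(q[new_state])
    q[state, action] += lr * (target - q[state, action])
    return q

# Toy check on a 2-state, 2-action table: a reward of 1.0 in state 0
# pulls Q(0, 1) up from zero by lr * reward = 0.2, because the future
# term gamma * max(q[1]) is zero while q[1] is still all zeros.
q = np.zeros((2, 2))
q = q_update(q, state=0, action=1, reward=1.0, new_state=1)
print(q[0, 1])  # 0.2
```

Repeating the step with the same transition keeps pulling Q(0, 1) toward the target of 1.0 at rate `lr`, which is exactly what the episode loop does across many timesteps.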
/* Copyright 2017 - 2025 R. Thomas
* Copyright 2017 - 2025 Quarkslab
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef LIEF_MACHO_THREAD_COMMAND_H
#define LIEF_MACHO_THREAD_COMMAND_H
#include <vector>
#include <ostream>
#include "LIEF/visibility.h"
#include "LIEF/span.hpp"
#include "LIEF/MachO/LoadCommand.hpp"
#include "LIEF/MachO/Header.hpp"
namespace LIEF {
namespace MachO {
class BinaryParser;
namespace details {
struct thread_command;
}
/// Class that represents the LC_THREAD / LC_UNIXTHREAD commands and that
/// can be used to get the binary entrypoint when the LC_MAIN (MainCommand) is not present
///
/// Generally speaking, this command aims at defining the original state
/// of the main thread which includes the registers' values
class LIEF_API ThreadCommand : public LoadCommand {
friend class BinaryParser;
public:
ThreadCommand() = default;
ThreadCommand(const details::thread_command& cmd,
Header::CPU_TYPE arch = Header::CPU_TYPE::ANY);
ThreadCommand(uint32_t flavor, uint32_t count,
Header::CPU_TYPE arch= Header::CPU_TYPE::ANY);
ThreadCommand& operator=(const ThreadCommand& copy) = default;
ThreadCommand(const ThreadCommand& copy) = default;
std::unique_ptr<LoadCommand> clone() const override {
return std::unique_ptr<ThreadCommand>(new ThreadCommand(*this));
}
~ThreadCommand() override = default;
/// Integer that defines a special *flavor* for the thread.
///
/// The meaning of this value depends on the architecture(). The list of
/// the values can be found in the XNU kernel files:
/// - xnu/osfmk/mach/arm/thread_status.h for the ARM/AArch64 architectures
/// - xnu/osfmk/mach/i386/thread_status.h for the x86/x86-64 architectures
uint32_t flavor() const {
return flavor_;
}
/// Size of the thread state data with 32-bits alignment.
///
/// This value should match state().size()
uint32_t count() const {
return count_;
}
/// The CPU architecture that is targeted by this ThreadCommand
Header::CPU_TYPE architecture() const {
return architecture_;
}
/// The actual thread state as a vector of bytes. Depending on the architecture(),
/// these data can be casted into x86_thread_state_t, x86_thread_state64_t, ...
span<const uint8_t> state() const {
return state_;
}
span<uint8_t> state() {
return state_;
}
/// Return the initial Program Counter regardless of the underlying architecture.
/// This value, when non null, can be used to determine the binary's entrypoint.
///
/// Underneath, it works by looking for the PC register value in the state() data
uint64_t pc() const;
void state(std::vector<uint8_t> state) {
state_ = std::move(state);
}
void flavor(uint32_t flavor) {
flavor_ = flavor;
}
void count(uint32_t count) {
count_ = count;
}
void architecture(Header::CPU_TYPE arch) {
architecture_ = arch;
}
void accept(Visitor& visitor) const override;
std::ostream& print(std::ostream& os) const override;
static bool classof(const LoadCommand* cmd) {
const LoadCommand::TYPE type = cmd->command();
return type == LoadCommand::TYPE::THREAD ||
type == LoadCommand::TYPE::UNIXTHREAD;
}
private:
uint32_t flavor_ = 0;
uint32_t count_ = 0;
Header::CPU_TYPE architecture_ = Header::CPU_TYPE::ANY;
std::vector<uint8_t> state_;
};
}
}
#endif | unknown | github | https://github.com/nodejs/node | deps/LIEF/include/LIEF/MachO/ThreadCommand.hpp |
# Copyright 2014 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test classes for code snippet for modeling article."""
import pytest
import keyproperty_models as models
def test_models(testbed):
name = 'Takashi Matsuo'
contact = models.Contact(name=name)
contact.put()
contact = contact.key.get()
assert contact.name == name
# This test fails because of the eventual consistency nature of
# HRD. We configure HRD consistency for the test datastore stub to
# match the production behavior.
@pytest.mark.xfail
# [START failing_test]
def test_fails(testbed):
    contact = models.Contact(name='Example')
    contact.put()
    models.PhoneNumber(
        contact=contact.key,
phone_type='home',
number='(650) 555 - 2200').put()
numbers = contact.phone_numbers.fetch()
assert 1 == len(numbers)
# [END failing_test] | unknown | codeparrot/codeparrot-clean | ||
# This is a helper module for test_threaded_import. The test imports this
# module, and this module tries to run various Python library functions in
# their own thread, as a side effect of being imported. If the spawned
# thread doesn't complete in TIMEOUT seconds, an "appeared to hang" message
# is appended to the module-global `errors` list. That list remains empty
# if (and only if) all functions tested complete.
TIMEOUT = 10
import threading
import tempfile
import os.path
errors = []
# This class merely runs a function in its own thread T. The thread importing
# this module holds the import lock, so if the function called by T tries
# to do its own imports it will block waiting for this module's import
# to complete.
class Worker(threading.Thread):
def __init__(self, function, args):
threading.Thread.__init__(self)
self.function = function
self.args = args
def run(self):
self.function(*self.args)
for name, func, args in [
# Bug 147376: TemporaryFile hung on Windows, starting in Python 2.4.
("tempfile.TemporaryFile", lambda: tempfile.TemporaryFile().close(), ()),
# The real cause for bug 147376: ntpath.abspath() caused the hang.
("os.path.abspath", os.path.abspath, ('.',)),
]:
t = Worker(func, args)
t.start()
t.join(TIMEOUT)
if t.is_alive():
errors.append("%s appeared to hang" % name) | unknown | codeparrot/codeparrot-clean | ||
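The join-then-`is_alive()` pattern used above generalizes into a small helper for detecting calls that appear to hang. A sketch of the same approach — `run_with_timeout` is an illustrative name, not something defined in the module:

```python
import threading
import time

def run_with_timeout(func, args=(), timeout=1.0):
    """Run func in a worker thread; report whether it finished in time.

    Mirrors the hang detection above: start the thread, join() with a
    timeout, then check is_alive() to see whether the call got stuck.
    """
    t = threading.Thread(target=func, args=args, daemon=True)
    t.start()
    t.join(timeout)
    return not t.is_alive()

print(run_with_timeout(lambda: None))           # True: returns immediately
print(run_with_timeout(time.sleep, (5,), 0.1))  # False: still sleeping at timeout
```

The `daemon=True` flag matters in this sketch: a worker that really is hung would otherwise keep the process alive after the check reports failure.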
from __future__ import unicode_literals
import datetime
import unittest
from django.apps.registry import Apps
from django.core.exceptions import ValidationError
from django.db import models
from django.test import TestCase
from .models import (
CustomPKModel, FlexibleDatePost, ModelToValidate, Post, UniqueErrorsModel,
UniqueFieldsModel, UniqueForDateModel, UniqueTogetherModel,
)
class GetUniqueCheckTests(unittest.TestCase):
def test_unique_fields_get_collected(self):
m = UniqueFieldsModel()
self.assertEqual(
([(UniqueFieldsModel, ('id',)),
(UniqueFieldsModel, ('unique_charfield',)),
(UniqueFieldsModel, ('unique_integerfield',))],
[]),
m._get_unique_checks()
)
def test_unique_together_gets_picked_up_and_converted_to_tuple(self):
m = UniqueTogetherModel()
self.assertEqual(
([(UniqueTogetherModel, ('ifield', 'cfield')),
(UniqueTogetherModel, ('ifield', 'efield')),
(UniqueTogetherModel, ('id',)), ],
[]),
m._get_unique_checks()
)
def test_unique_together_normalization(self):
"""
Test the Meta.unique_together normalization with different sorts of
objects.
"""
data = {
'2-tuple': (('foo', 'bar'), (('foo', 'bar'),)),
'list': (['foo', 'bar'], (('foo', 'bar'),)),
'already normalized': ((('foo', 'bar'), ('bar', 'baz')),
(('foo', 'bar'), ('bar', 'baz'))),
'set': ({('foo', 'bar'), ('bar', 'baz')}, # Ref #21469
(('foo', 'bar'), ('bar', 'baz'))),
}
for test_name, (unique_together, normalized) in data.items():
class M(models.Model):
foo = models.IntegerField()
bar = models.IntegerField()
baz = models.IntegerField()
Meta = type(str('Meta'), (), {
'unique_together': unique_together,
'apps': Apps()
})
checks, _ = M()._get_unique_checks()
for t in normalized:
check = (M, t)
self.assertIn(check, checks)
def test_primary_key_is_considered_unique(self):
m = CustomPKModel()
self.assertEqual(([(CustomPKModel, ('my_pk_field',))], []), m._get_unique_checks())
def test_unique_for_date_gets_picked_up(self):
m = UniqueForDateModel()
self.assertEqual((
[(UniqueForDateModel, ('id',))],
[(UniqueForDateModel, 'date', 'count', 'start_date'),
(UniqueForDateModel, 'year', 'count', 'end_date'),
(UniqueForDateModel, 'month', 'order', 'end_date')]
), m._get_unique_checks()
)
def test_unique_for_date_exclusion(self):
m = UniqueForDateModel()
self.assertEqual((
[(UniqueForDateModel, ('id',))],
[(UniqueForDateModel, 'year', 'count', 'end_date'),
(UniqueForDateModel, 'month', 'order', 'end_date')]
), m._get_unique_checks(exclude='start_date')
)
class PerformUniqueChecksTest(TestCase):
def test_primary_key_unique_check_not_performed_when_adding_and_pk_not_specified(self):
# Regression test for #12560
with self.assertNumQueries(0):
mtv = ModelToValidate(number=10, name='Some Name')
setattr(mtv, '_adding', True)
mtv.full_clean()
def test_primary_key_unique_check_performed_when_adding_and_pk_specified(self):
# Regression test for #12560
with self.assertNumQueries(1):
mtv = ModelToValidate(number=10, name='Some Name', id=123)
setattr(mtv, '_adding', True)
mtv.full_clean()
def test_primary_key_unique_check_not_performed_when_not_adding(self):
# Regression test for #12132
with self.assertNumQueries(0):
mtv = ModelToValidate(number=10, name='Some Name')
mtv.full_clean()
def test_unique_for_date(self):
Post.objects.create(
title="Django 1.0 is released", slug="Django 1.0",
subtitle="Finally", posted=datetime.date(2008, 9, 3),
)
p = Post(title="Django 1.0 is released", posted=datetime.date(2008, 9, 3))
with self.assertRaises(ValidationError) as cm:
p.full_clean()
self.assertEqual(cm.exception.message_dict, {'title': ['Title must be unique for Posted date.']})
# Should work without errors
p = Post(title="Work on Django 1.1 begins", posted=datetime.date(2008, 9, 3))
p.full_clean()
# Should work without errors
p = Post(title="Django 1.0 is released", posted=datetime.datetime(2008, 9, 4))
p.full_clean()
p = Post(slug="Django 1.0", posted=datetime.datetime(2008, 1, 1))
with self.assertRaises(ValidationError) as cm:
p.full_clean()
self.assertEqual(cm.exception.message_dict, {'slug': ['Slug must be unique for Posted year.']})
p = Post(subtitle="Finally", posted=datetime.datetime(2008, 9, 30))
with self.assertRaises(ValidationError) as cm:
p.full_clean()
self.assertEqual(cm.exception.message_dict, {'subtitle': ['Subtitle must be unique for Posted month.']})
p = Post(title="Django 1.0 is released")
with self.assertRaises(ValidationError) as cm:
p.full_clean()
self.assertEqual(cm.exception.message_dict, {'posted': ['This field cannot be null.']})
def test_unique_for_date_with_nullable_date(self):
FlexibleDatePost.objects.create(
title="Django 1.0 is released", slug="Django 1.0",
subtitle="Finally", posted=datetime.date(2008, 9, 3),
)
p = FlexibleDatePost(title="Django 1.0 is released")
try:
p.full_clean()
except ValidationError:
self.fail("unique_for_date checks shouldn't trigger when the associated DateField is None.")
p = FlexibleDatePost(slug="Django 1.0")
try:
p.full_clean()
except ValidationError:
self.fail("unique_for_year checks shouldn't trigger when the associated DateField is None.")
p = FlexibleDatePost(subtitle="Finally")
try:
p.full_clean()
except ValidationError:
self.fail("unique_for_month checks shouldn't trigger when the associated DateField is None.")
def test_unique_errors(self):
UniqueErrorsModel.objects.create(name='Some Name', no=10)
m = UniqueErrorsModel(name='Some Name', no=11)
with self.assertRaises(ValidationError) as cm:
m.full_clean()
self.assertEqual(cm.exception.message_dict, {'name': ['Custom unique name message.']})
m = UniqueErrorsModel(name='Some Other Name', no=10)
with self.assertRaises(ValidationError) as cm:
m.full_clean()
self.assertEqual(cm.exception.message_dict, {'no': ['Custom unique number message.']}) | unknown | codeparrot/codeparrot-clean | ||
function Component() {
const x = 4;
const get4 = () => {
while (bar()) {
if (baz) {
bar();
}
}
return () => x;
};
return get4;
} | javascript | github | https://github.com/facebook/react | compiler/packages/babel-plugin-react-compiler/src/__tests__/fixtures/compiler/rewrite-phis-in-lambda-capture-context.js |
{
"kind": "Dashboard",
"apiVersion": "dashboard.grafana.app/v2beta1",
"metadata": {
"name": "v40.refresh_empty_string.v42"
},
"spec": {
"annotations": [
{
"kind": "AnnotationQuery",
"spec": {
"query": {
"kind": "DataQuery",
"group": "grafana",
"version": "v0",
"datasource": {
"name": "-- Grafana --"
},
"spec": {}
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations \u0026 Alerts",
"builtIn": true,
"legacyOptions": {
"type": "dashboard"
}
}
}
],
"cursorSync": "Off",
"editable": true,
"elements": {},
"layout": {
"kind": "GridLayout",
"spec": {
"items": []
}
},
"links": [],
"liveNow": false,
"preload": false,
"tags": [],
"timeSettings": {
"timezone": "",
"from": "now-6h",
"to": "now",
"autoRefresh": "",
"autoRefreshIntervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"hideTimepicker": false,
"fiscalYearStartMonth": 0
},
"title": "Empty String Refresh Test Dashboard",
"variables": []
},
"status": {}
} | json | github | https://github.com/grafana/grafana | apps/dashboard/pkg/migration/conversion/testdata/input/migrated_dashboards_from_v0_to_v2/v2beta1.v40.refresh_empty_string.json |
#[cfg_attr(all(test, assert_no_panic), no_panic::no_panic)]
pub fn scalbnf(x: f32, n: i32) -> f32 {
super::generic::scalbn(x, n)
} | rust | github | https://github.com/nodejs/node | deps/crates/vendor/libm/src/math/scalbnf.rs |
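The wrapped `scalbn` computes x · 2ⁿ exactly by adjusting the float's exponent rather than multiplying. Python's `math.ldexp` is the same operation, which makes for a quick cross-language sanity check of the semantics:

```python
import math

# scalbn(x, n) == x * 2**n, done by shifting the binary exponent,
# so the result is exact whenever it is representable.
print(math.ldexp(1.5, 4))    # 24.0
print(math.ldexp(24.0, -4))  # 1.5 (exactly invertible here)
```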
/*
MIT License http://www.opensource.org/licenses/mit-license.php
Author Florent Cailhol @ooflorent
*/
"use strict";
const WebpackError = require("./WebpackError");
/** @typedef {import("./Compiler")} Compiler */
const PLUGIN_NAME = "WarnDeprecatedOptionPlugin";
class WarnDeprecatedOptionPlugin {
/**
* Create an instance of the plugin
* @param {string} option the target option
* @param {string | number} value the deprecated option value
* @param {string} suggestion the suggestion replacement
*/
constructor(option, value, suggestion) {
this.option = option;
this.value = value;
this.suggestion = suggestion;
}
/**
* Apply the plugin
* @param {Compiler} compiler the compiler instance
* @returns {void}
*/
apply(compiler) {
compiler.hooks.thisCompilation.tap(PLUGIN_NAME, (compilation) => {
compilation.warnings.push(
new DeprecatedOptionWarning(this.option, this.value, this.suggestion)
);
});
}
}
class DeprecatedOptionWarning extends WebpackError {
/**
* Create an instance deprecated option warning
* @param {string} option the target option
* @param {string | number} value the deprecated option value
* @param {string} suggestion the suggestion replacement
*/
constructor(option, value, suggestion) {
super();
/** @type {string} */
this.name = "DeprecatedOptionWarning";
this.message =
"configuration\n" +
`The value '${value}' for option '${option}' is deprecated. ` +
`Use '${suggestion}' instead.`;
}
}
module.exports = WarnDeprecatedOptionPlugin; | javascript | github | https://github.com/webpack/webpack | lib/WarnDeprecatedOptionPlugin.js |
data = (
'Lang ', # 0x00
'Kan ', # 0x01
'Lao ', # 0x02
'Lai ', # 0x03
'Xian ', # 0x04
'Que ', # 0x05
'Kong ', # 0x06
'Chong ', # 0x07
'Chong ', # 0x08
'Ta ', # 0x09
'Lin ', # 0x0a
'Hua ', # 0x0b
'Ju ', # 0x0c
'Lai ', # 0x0d
'Qi ', # 0x0e
'Min ', # 0x0f
'Kun ', # 0x10
'Kun ', # 0x11
'Zu ', # 0x12
'Gu ', # 0x13
'Cui ', # 0x14
'Ya ', # 0x15
'Ya ', # 0x16
'Gang ', # 0x17
'Lun ', # 0x18
'Lun ', # 0x19
'Leng ', # 0x1a
'Jue ', # 0x1b
'Duo ', # 0x1c
'Zheng ', # 0x1d
'Guo ', # 0x1e
'Yin ', # 0x1f
'Dong ', # 0x20
'Han ', # 0x21
'Zheng ', # 0x22
'Wei ', # 0x23
'Yao ', # 0x24
'Pi ', # 0x25
'Yan ', # 0x26
'Song ', # 0x27
'Jie ', # 0x28
'Beng ', # 0x29
'Zu ', # 0x2a
'Jue ', # 0x2b
'Dong ', # 0x2c
'Zhan ', # 0x2d
'Gu ', # 0x2e
'Yin ', # 0x2f
'[?] ', # 0x30
'Ze ', # 0x31
'Huang ', # 0x32
'Yu ', # 0x33
'Wei ', # 0x34
'Yang ', # 0x35
'Feng ', # 0x36
'Qiu ', # 0x37
'Dun ', # 0x38
'Ti ', # 0x39
'Yi ', # 0x3a
'Zhi ', # 0x3b
'Shi ', # 0x3c
'Zai ', # 0x3d
'Yao ', # 0x3e
'E ', # 0x3f
'Zhu ', # 0x40
'Kan ', # 0x41
'Lu ', # 0x42
'Yan ', # 0x43
'Mei ', # 0x44
'Gan ', # 0x45
'Ji ', # 0x46
'Ji ', # 0x47
'Huan ', # 0x48
'Ting ', # 0x49
'Sheng ', # 0x4a
'Mei ', # 0x4b
'Qian ', # 0x4c
'Wu ', # 0x4d
'Yu ', # 0x4e
'Zong ', # 0x4f
'Lan ', # 0x50
'Jue ', # 0x51
'Yan ', # 0x52
'Yan ', # 0x53
'Wei ', # 0x54
'Zong ', # 0x55
'Cha ', # 0x56
'Sui ', # 0x57
'Rong ', # 0x58
'Yamashina ', # 0x59
'Qin ', # 0x5a
'Yu ', # 0x5b
'Kewashii ', # 0x5c
'Lou ', # 0x5d
'Tu ', # 0x5e
'Dui ', # 0x5f
'Xi ', # 0x60
'Weng ', # 0x61
'Cang ', # 0x62
'Dang ', # 0x63
'Hong ', # 0x64
'Jie ', # 0x65
'Ai ', # 0x66
'Liu ', # 0x67
'Wu ', # 0x68
'Song ', # 0x69
'Qiao ', # 0x6a
'Zi ', # 0x6b
'Wei ', # 0x6c
'Beng ', # 0x6d
'Dian ', # 0x6e
'Cuo ', # 0x6f
'Qian ', # 0x70
'Yong ', # 0x71
'Nie ', # 0x72
'Cuo ', # 0x73
'Ji ', # 0x74
'[?] ', # 0x75
'Tao ', # 0x76
'Song ', # 0x77
'Zong ', # 0x78
'Jiang ', # 0x79
'Liao ', # 0x7a
'Kang ', # 0x7b
'Chan ', # 0x7c
'Die ', # 0x7d
'Cen ', # 0x7e
'Ding ', # 0x7f
'Tu ', # 0x80
'Lou ', # 0x81
'Zhang ', # 0x82
'Zhan ', # 0x83
'Zhan ', # 0x84
'Ao ', # 0x85
'Cao ', # 0x86
'Qu ', # 0x87
'Qiang ', # 0x88
'Zui ', # 0x89
'Zui ', # 0x8a
'Dao ', # 0x8b
'Dao ', # 0x8c
'Xi ', # 0x8d
'Yu ', # 0x8e
'Bo ', # 0x8f
'Long ', # 0x90
'Xiang ', # 0x91
'Ceng ', # 0x92
'Bo ', # 0x93
'Qin ', # 0x94
'Jiao ', # 0x95
'Yan ', # 0x96
'Lao ', # 0x97
'Zhan ', # 0x98
'Lin ', # 0x99
'Liao ', # 0x9a
'Liao ', # 0x9b
'Jin ', # 0x9c
'Deng ', # 0x9d
'Duo ', # 0x9e
'Zun ', # 0x9f
'Jiao ', # 0xa0
'Gui ', # 0xa1
'Yao ', # 0xa2
'Qiao ', # 0xa3
'Yao ', # 0xa4
'Jue ', # 0xa5
'Zhan ', # 0xa6
'Yi ', # 0xa7
'Xue ', # 0xa8
'Nao ', # 0xa9
'Ye ', # 0xaa
'Ye ', # 0xab
'Yi ', # 0xac
'E ', # 0xad
'Xian ', # 0xae
'Ji ', # 0xaf
'Xie ', # 0xb0
'Ke ', # 0xb1
'Xi ', # 0xb2
'Di ', # 0xb3
'Ao ', # 0xb4
'Zui ', # 0xb5
'[?] ', # 0xb6
'Ni ', # 0xb7
'Rong ', # 0xb8
'Dao ', # 0xb9
'Ling ', # 0xba
'Za ', # 0xbb
'Yu ', # 0xbc
'Yue ', # 0xbd
'Yin ', # 0xbe
'[?] ', # 0xbf
'Jie ', # 0xc0
'Li ', # 0xc1
'Sui ', # 0xc2
'Long ', # 0xc3
'Long ', # 0xc4
'Dian ', # 0xc5
'Ying ', # 0xc6
'Xi ', # 0xc7
'Ju ', # 0xc8
'Chan ', # 0xc9
'Ying ', # 0xca
'Kui ', # 0xcb
'Yan ', # 0xcc
'Wei ', # 0xcd
'Nao ', # 0xce
'Quan ', # 0xcf
'Chao ', # 0xd0
'Cuan ', # 0xd1
'Luan ', # 0xd2
'Dian ', # 0xd3
'Dian ', # 0xd4
'[?] ', # 0xd5
'Yan ', # 0xd6
'Yan ', # 0xd7
'Yan ', # 0xd8
'Nao ', # 0xd9
'Yan ', # 0xda
'Chuan ', # 0xdb
'Gui ', # 0xdc
'Chuan ', # 0xdd
'Zhou ', # 0xde
'Huang ', # 0xdf
'Jing ', # 0xe0
'Xun ', # 0xe1
'Chao ', # 0xe2
'Chao ', # 0xe3
'Lie ', # 0xe4
'Gong ', # 0xe5
'Zuo ', # 0xe6
'Qiao ', # 0xe7
'Ju ', # 0xe8
'Gong ', # 0xe9
'Kek ', # 0xea
'Wu ', # 0xeb
'Pwu ', # 0xec
'Pwu ', # 0xed
'Chai ', # 0xee
'Qiu ', # 0xef
'Qiu ', # 0xf0
'Ji ', # 0xf1
'Yi ', # 0xf2
'Si ', # 0xf3
'Ba ', # 0xf4
'Zhi ', # 0xf5
'Zhao ', # 0xf6
'Xiang ', # 0xf7
'Yi ', # 0xf8
'Jin ', # 0xf9
'Xun ', # 0xfa
'Juan ', # 0xfb
'Phas ', # 0xfc
'Xun ', # 0xfd
'Jin ', # 0xfe
'Fu ', # 0xff
) | unknown | codeparrot/codeparrot-clean | ||
{
"assetSchedule": "{{count}} de {{total}} ativos atualizados",
"dagActions": {
"delete": {
"button": "Excluir Dag",
      "warning": "Isso removerá todos os metadados relacionados ao Dag, incluindo Execuções e Tarefas."
}
},
"favoriteDag": "Dag Favorito",
"filters": {
"allRunTypes": "Todos os Tipos de Execução",
"allStates": "Todos os Estados",
"favorite": {
"all": "Todos",
"favorite": "Favorito",
"unfavorite": "Remover dos Favoritos"
},
"paused": {
"active": "Ativo",
"all": "Todos",
"paused": "Pausado"
},
"runIdPatternFilter": "Pesquisar Execuções de Dag"
},
"ownerLink": "Link do Proprietário para {{owner}}",
"runAndTaskActions": {
"affectedTasks": {
"noItemsFound": "Nenhuma tarefa encontrada.",
"title": "Tarefas Afetadas: {{count}}"
},
"clear": {
"button": "Limpar {{type}}",
"buttonTooltip": "Pressione shift+c para limpar",
"error": "Falha ao limpar {{type}}",
"title": "Limpar {{type}}"
},
"delete": {
"button": "Excluir {{type}}",
"dialog": {
"resourceName": "{{type}} {{id}}",
"title": "Excluir {{type}}",
        "warning": "Isso removerá todos os metadados relacionados ao {{type}}."
},
"error": "Erro ao excluir {{type}}",
"success": {
"description": "A solicitação de exclusão do {{type}} foi bem-sucedida.",
"title": "{{type}} Excluído com Sucesso"
}
},
"markAs": {
"button": "Marcar {{type}} como...",
"buttonTooltip": {
"failed": "Pressione shift+f para marcar como falha",
"success": "Pressione shift+s para marcar como sucesso"
},
"title": "Marcar {{type}} como {{state}}"
},
"options": {
"downstream": "Downstream",
"existingTasks": "Limpar tarefas existentes",
"future": "Futuro",
"onlyFailed": "Limpar somente tarefas falhadas",
"past": "Passado",
"queueNew": "Enfileirar novas tarefas",
"runOnLatestVersion": "Executar com a versão mais recente do pacote",
"upstream": "Upstream"
}
},
"search": {
"advanced": "Pesquisa Avançada",
"clear": "Limpar pesquisa",
"dags": "Pesquisar Dags",
"hotkey": "+K",
"tasks": "Pesquisar Tarefas"
},
"sort": {
"displayName": {
"asc": "Ordenar por Nome (A-Z)",
"desc": "Ordenar por Nome (Z-A)"
},
"lastRunStartDate": {
"asc": "Ordenar por Data de Início da Última Execução (Mais Antiga-Mais Recente)",
"desc": "Ordenar por Data de Início da Última Execução (Mais Recente-Mais Antiga)"
},
"lastRunState": {
"asc": "Ordenar por Estado da Última Execução (A-Z)",
"desc": "Ordenar por Estado da Última Execução (Z-A)"
},
"nextDagRun": {
"asc": "Ordenar por Próxima Execução do Dag (Mais Antiga-Mais Recente)",
"desc": "Ordenar por Próxima Execução do Dag (Mais Recente-Mais Antiga)"
},
"placeholder": "Ordenar por"
},
"unfavoriteDag": "Remover Dag dos Favoritos"
} | json | github | https://github.com/apache/airflow | airflow-core/src/airflow/ui/public/i18n/locales/pt/dags.json |
/*
* Copyright 2002-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.beans.factory.aot;
import java.util.function.BiConsumer;
import java.util.function.Supplier;
import javax.lang.model.element.Modifier;
import org.assertj.core.api.ThrowingConsumer;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
import org.springframework.aot.generate.GeneratedClass;
import org.springframework.aot.hint.ExecutableHint;
import org.springframework.aot.hint.ExecutableMode;
import org.springframework.aot.hint.ReflectionHints;
import org.springframework.aot.hint.TypeHint;
import org.springframework.aot.test.generate.TestGenerationContext;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.DefaultListableBeanFactory;
import org.springframework.beans.factory.support.InstanceSupplier;
import org.springframework.beans.factory.support.RegisteredBean;
import org.springframework.beans.factory.support.RegisteredBean.InstantiationDescriptor;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.beans.testfixture.beans.TestBean;
import org.springframework.beans.testfixture.beans.TestBeanWithPrivateConstructor;
import org.springframework.beans.testfixture.beans.factory.aot.DefaultSimpleBeanContract;
import org.springframework.beans.testfixture.beans.factory.aot.DeferredTypeBuilder;
import org.springframework.beans.testfixture.beans.factory.aot.SimpleBean;
import org.springframework.beans.testfixture.beans.factory.aot.SimpleBeanContract;
import org.springframework.beans.testfixture.beans.factory.generator.InnerComponentConfiguration;
import org.springframework.beans.testfixture.beans.factory.generator.InnerComponentConfiguration.EnvironmentAwareComponent;
import org.springframework.beans.testfixture.beans.factory.generator.InnerComponentConfiguration.EnvironmentAwareComponentWithoutPublicConstructor;
import org.springframework.beans.testfixture.beans.factory.generator.InnerComponentConfiguration.NoDependencyComponent;
import org.springframework.beans.testfixture.beans.factory.generator.InnerComponentConfiguration.NoDependencyComponentWithoutPublicConstructor;
import org.springframework.beans.testfixture.beans.factory.generator.SimpleConfiguration;
import org.springframework.beans.testfixture.beans.factory.generator.deprecation.DeprecatedBean;
import org.springframework.beans.testfixture.beans.factory.generator.deprecation.DeprecatedConstructor;
import org.springframework.beans.testfixture.beans.factory.generator.deprecation.DeprecatedForRemovalBean;
import org.springframework.beans.testfixture.beans.factory.generator.deprecation.DeprecatedForRemovalConstructor;
import org.springframework.beans.testfixture.beans.factory.generator.deprecation.DeprecatedForRemovalMemberConfiguration;
import org.springframework.beans.testfixture.beans.factory.generator.deprecation.DeprecatedMemberConfiguration;
import org.springframework.beans.testfixture.beans.factory.generator.factory.NumberHolder;
import org.springframework.beans.testfixture.beans.factory.generator.factory.NumberHolderFactoryBean;
import org.springframework.beans.testfixture.beans.factory.generator.factory.SampleFactory;
import org.springframework.beans.testfixture.beans.factory.generator.injection.InjectionComponent;
import org.springframework.core.env.StandardEnvironment;
import org.springframework.core.test.tools.Compiled;
import org.springframework.core.test.tools.TestCompiler;
import org.springframework.javapoet.CodeBlock;
import org.springframework.javapoet.MethodSpec;
import org.springframework.javapoet.ParameterizedTypeName;
import org.springframework.util.ReflectionUtils;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatNoException;
/**
* Tests for {@link InstanceSupplierCodeGenerator}.
*
* @author Phillip Webb
* @author Stephane Nicoll
*/
class InstanceSupplierCodeGeneratorTests {
private final TestGenerationContext generationContext;
private final DefaultListableBeanFactory beanFactory;
InstanceSupplierCodeGeneratorTests() {
this.generationContext = new TestGenerationContext();
this.beanFactory = new DefaultListableBeanFactory();
}
@Test
void generateWhenHasDefaultConstructor() {
BeanDefinition beanDefinition = new RootBeanDefinition(TestBean.class);
compile(beanDefinition, (instanceSupplier, compiled) -> {
TestBean bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(TestBean.class);
assertThat(compiled.getSourceFile())
.contains("InstanceSupplier.using(TestBean::new)");
});
assertThat(getReflectionHints().getTypeHint(TestBean.class)).isNotNull();
}
@Test
void generateWhenHasConstructorWithParameter() {
BeanDefinition beanDefinition = new RootBeanDefinition(InjectionComponent.class);
this.beanFactory.registerSingleton("injected", "injected");
compile(beanDefinition, (instanceSupplier, compiled) -> {
InjectionComponent bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(InjectionComponent.class).extracting("bean").isEqualTo("injected");
});
assertThat(getReflectionHints().getTypeHint(InjectionComponent.class)).isNotNull();
}
@Test
void generateWhenHasConstructorWithInnerClassAndDefaultConstructor() {
RootBeanDefinition beanDefinition = new RootBeanDefinition(NoDependencyComponent.class);
this.beanFactory.registerSingleton("configuration", new InnerComponentConfiguration());
compile(beanDefinition, (instanceSupplier, compiled) -> {
Object bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(NoDependencyComponent.class);
assertThat(compiled.getSourceFile()).contains(
"getBeanFactory().getBean(InnerComponentConfiguration.class).new NoDependencyComponent()");
});
assertThat(getReflectionHints().getTypeHint(NoDependencyComponent.class)).isNotNull();
}
@Test
void generateWhenHasConstructorWithInnerClassAndParameter() {
BeanDefinition beanDefinition = new RootBeanDefinition(EnvironmentAwareComponent.class);
StandardEnvironment environment = new StandardEnvironment();
this.beanFactory.registerSingleton("configuration", new InnerComponentConfiguration());
this.beanFactory.registerSingleton("environment", environment);
compile(beanDefinition, (instanceSupplier, compiled) -> {
Object bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(EnvironmentAwareComponent.class);
assertThat(bean).hasFieldOrPropertyWithValue("environment", environment);
assertThat(compiled.getSourceFile()).contains(
"getBeanFactory().getBean(InnerComponentConfiguration.class).new EnvironmentAwareComponent(");
});
assertThat(getReflectionHints().getTypeHint(EnvironmentAwareComponent.class)).isNotNull();
}
@Test
void generateWhenHasNonPublicConstructorWithInnerClassAndDefaultConstructor() {
RootBeanDefinition beanDefinition = new RootBeanDefinition(NoDependencyComponentWithoutPublicConstructor.class);
this.beanFactory.registerSingleton("configuration", new InnerComponentConfiguration());
compile(beanDefinition, (instanceSupplier, compiled) -> {
Object bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(NoDependencyComponentWithoutPublicConstructor.class);
assertThat(compiled.getSourceFile()).doesNotContain(
"getBeanFactory().getBean(InnerComponentConfiguration.class)");
});
assertThat(getReflectionHints().getTypeHint(NoDependencyComponentWithoutPublicConstructor.class))
.satisfies(hasConstructorWithMode(ExecutableMode.INVOKE));
}
@Test
void generateWhenHasNonPublicConstructorWithInnerClassAndParameter() {
BeanDefinition beanDefinition = new RootBeanDefinition(EnvironmentAwareComponentWithoutPublicConstructor.class);
StandardEnvironment environment = new StandardEnvironment();
this.beanFactory.registerSingleton("configuration", new InnerComponentConfiguration());
this.beanFactory.registerSingleton("environment", environment);
compile(beanDefinition, (instanceSupplier, compiled) -> {
Object bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(EnvironmentAwareComponentWithoutPublicConstructor.class);
assertThat(bean).hasFieldOrPropertyWithValue("environment", environment);
assertThat(compiled.getSourceFile()).doesNotContain(
"getBeanFactory().getBean(InnerComponentConfiguration.class)");
});
assertThat(getReflectionHints().getTypeHint(EnvironmentAwareComponentWithoutPublicConstructor.class))
.satisfies(hasConstructorWithMode(ExecutableMode.INVOKE));
}
@Test
void generateWhenHasConstructorWithGeneric() {
BeanDefinition beanDefinition = new RootBeanDefinition(NumberHolderFactoryBean.class);
this.beanFactory.registerSingleton("number", 123);
compile(beanDefinition, (instanceSupplier, compiled) -> {
NumberHolder<?> bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(NumberHolder.class);
assertThat(bean).extracting("number").isNull(); // No property actually set
assertThat(compiled.getSourceFile()).contains("NumberHolderFactoryBean::new");
});
assertThat(getReflectionHints().getTypeHint(NumberHolderFactoryBean.class)).isNotNull();
}
@Test
void generateWhenHasPrivateConstructor() {
BeanDefinition beanDefinition = new RootBeanDefinition(TestBeanWithPrivateConstructor.class);
compile(beanDefinition, (instanceSupplier, compiled) -> {
TestBeanWithPrivateConstructor bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(TestBeanWithPrivateConstructor.class);
assertThat(compiled.getSourceFile())
.contains("return BeanInstanceSupplier.<TestBeanWithPrivateConstructor>forConstructor();");
});
assertThat(getReflectionHints().getTypeHint(TestBeanWithPrivateConstructor.class))
.satisfies(hasConstructorWithMode(ExecutableMode.INVOKE));
}
@Test
void generateWhenHasFactoryMethodWithNoArg() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(String.class)
.setFactoryMethodOnBean("stringBean", "config").getBeanDefinition();
this.beanFactory.registerBeanDefinition("config", BeanDefinitionBuilder
.genericBeanDefinition(SimpleConfiguration.class).getBeanDefinition());
compile(beanDefinition, (instanceSupplier, compiled) -> {
String bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(String.class);
assertThat(bean).isEqualTo("Hello");
assertThat(compiled.getSourceFile()).contains(
"getBeanFactory().getBean(\"config\", SimpleConfiguration.class).stringBean()");
});
assertThat(getReflectionHints().getTypeHint(SimpleConfiguration.class)).isNotNull();
}
@Test
void generateWhenHasFactoryMethodOnInterface() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(SimpleBean.class)
.setFactoryMethodOnBean("simpleBean", "config").getBeanDefinition();
this.beanFactory.registerBeanDefinition("config", BeanDefinitionBuilder
.rootBeanDefinition(DefaultSimpleBeanContract.class).getBeanDefinition());
compile(beanDefinition, (instanceSupplier, compiled) -> {
Object bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(SimpleBean.class);
assertThat(compiled.getSourceFile()).contains(
"getBeanFactory().getBean(\"config\", DefaultSimpleBeanContract.class).simpleBean()");
});
assertThat(getReflectionHints().getTypeHint(SimpleBeanContract.class)).isNotNull();
}
@Test
void generateWhenHasPrivateStaticFactoryMethodWithNoArg() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(String.class)
.setFactoryMethodOnBean("privateStaticStringBean", "config")
.getBeanDefinition();
this.beanFactory.registerBeanDefinition("config", BeanDefinitionBuilder
.genericBeanDefinition(SimpleConfiguration.class).getBeanDefinition());
compile(beanDefinition, (instanceSupplier, compiled) -> {
String bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(String.class);
assertThat(bean).isEqualTo("Hello");
assertThat(compiled.getSourceFile())
.contains("forFactoryMethod")
.doesNotContain("withGenerator");
});
assertThat(getReflectionHints().getTypeHint(SimpleConfiguration.class))
.satisfies(hasMethodWithMode(ExecutableMode.INVOKE));
}
@Test
void generateWhenHasStaticFactoryMethodWithNoArg() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(SimpleConfiguration.class)
.setFactoryMethod("integerBean").getBeanDefinition();
compile(beanDefinition, (instanceSupplier, compiled) -> {
Integer bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(Integer.class);
assertThat(bean).isEqualTo(42);
assertThat(compiled.getSourceFile())
.contains("(registeredBean) -> SimpleConfiguration.integerBean()");
});
assertThat(getReflectionHints().getTypeHint(SimpleConfiguration.class)).isNotNull();
}
@Test
void generateWhenHasStaticFactoryMethodWithArg() {
RootBeanDefinition beanDefinition = (RootBeanDefinition) BeanDefinitionBuilder
.rootBeanDefinition(SimpleConfiguration.class)
.setFactoryMethod("create").getBeanDefinition();
beanDefinition.setResolvedFactoryMethod(ReflectionUtils
.findMethod(SampleFactory.class, "create", Number.class, String.class));
this.beanFactory.registerSingleton("number", 42);
this.beanFactory.registerSingleton("string", "test");
compile(beanDefinition, (instanceSupplier, compiled) -> {
String bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(String.class);
assertThat(bean).isEqualTo("42test");
assertThat(compiled.getSourceFile()).contains("SampleFactory.create(");
});
assertThat(getReflectionHints().getTypeHint(SampleFactory.class)).isNotNull();
}
@Test
void generateWhenHasFactoryMethodCheckedException() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(Integer.class)
.setFactoryMethodOnBean("throwingIntegerBean", "config")
.getBeanDefinition();
this.beanFactory.registerBeanDefinition("config", BeanDefinitionBuilder
.genericBeanDefinition(SimpleConfiguration.class).getBeanDefinition());
compile(beanDefinition, (instanceSupplier, compiled) -> {
Integer bean = getBean(beanDefinition, instanceSupplier);
assertThat(bean).isInstanceOf(Integer.class);
assertThat(bean).isEqualTo(42);
assertThat(compiled.getSourceFile()).doesNotContain(") throws Exception {");
});
assertThat(getReflectionHints().getTypeHint(SimpleConfiguration.class)).isNotNull();
}
private ReflectionHints getReflectionHints() {
return this.generationContext.getRuntimeHints().reflection();
}
private ThrowingConsumer<TypeHint> hasConstructorWithMode(ExecutableMode mode) {
return hint -> assertThat(hint.constructors()).anySatisfy(hasMode(mode));
}
private ThrowingConsumer<TypeHint> hasMethodWithMode(ExecutableMode mode) {
return hint -> assertThat(hint.methods()).anySatisfy(hasMode(mode));
}
private ThrowingConsumer<ExecutableHint> hasMode(ExecutableMode mode) {
return hint -> assertThat(hint.getMode()).isEqualTo(mode);
}
@SuppressWarnings("unchecked")
private <T> T getBean(BeanDefinition beanDefinition, InstanceSupplier<?> instanceSupplier) {
((RootBeanDefinition) beanDefinition).setInstanceSupplier(instanceSupplier);
this.beanFactory.registerBeanDefinition("testBean", beanDefinition);
return (T) this.beanFactory.getBean("testBean");
}
private void compile(BeanDefinition beanDefinition, BiConsumer<InstanceSupplier<?>, Compiled> result) {
compile(TestCompiler.forSystem(), beanDefinition, result);
}
private void compile(TestCompiler testCompiler, BeanDefinition beanDefinition,
BiConsumer<InstanceSupplier<?>, Compiled> result) {
DefaultListableBeanFactory freshBeanFactory = new DefaultListableBeanFactory(this.beanFactory);
freshBeanFactory.registerBeanDefinition("testBean", beanDefinition);
RegisteredBean registeredBean = RegisteredBean.of(freshBeanFactory, "testBean");
DeferredTypeBuilder typeBuilder = new DeferredTypeBuilder();
GeneratedClass generateClass = this.generationContext.getGeneratedClasses().addForFeature("TestCode", typeBuilder);
InstanceSupplierCodeGenerator generator = new InstanceSupplierCodeGenerator(
this.generationContext, generateClass.getName(),
generateClass.getMethods(), false);
InstantiationDescriptor instantiationDescriptor = registeredBean.resolveInstantiationDescriptor();
assertThat(instantiationDescriptor).isNotNull();
CodeBlock generatedCode = generator.generateCode(registeredBean, instantiationDescriptor);
typeBuilder.set(type -> {
type.addModifiers(Modifier.PUBLIC);
type.addSuperinterface(ParameterizedTypeName.get(Supplier.class, InstanceSupplier.class));
type.addMethod(MethodSpec.methodBuilder("get")
.addModifiers(Modifier.PUBLIC)
.returns(InstanceSupplier.class)
.addStatement("return $L", generatedCode).build());
});
this.generationContext.writeGeneratedContent();
testCompiler.with(this.generationContext).compile(compiled -> result.accept(
(InstanceSupplier<?>) compiled.getInstance(Supplier.class).get(), compiled));
}
@Nested
@SuppressWarnings("deprecation")
class DeprecationTests {
private static final TestCompiler TEST_COMPILER = TestCompiler.forSystem()
.withCompilerOptions("-Xlint:all", "-Xlint:-rawtypes", "-Werror");
@Test
@Disabled("Need to move to a separate method so that the warning can be suppressed")
void generateWhenTargetClassIsDeprecated() {
compileAndCheckWarnings(new RootBeanDefinition(DeprecatedBean.class));
}
@Test
void generateWhenTargetConstructorIsDeprecated() {
compileAndCheckWarnings(new RootBeanDefinition(DeprecatedConstructor.class));
}
@Test
void generateWhenTargetFactoryMethodIsDeprecated() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(String.class)
.setFactoryMethodOnBean("deprecatedString", "config").getBeanDefinition();
beanFactory.registerBeanDefinition("config", BeanDefinitionBuilder
.genericBeanDefinition(DeprecatedMemberConfiguration.class).getBeanDefinition());
compileAndCheckWarnings(beanDefinition);
}
@Test
void generateWhenTargetFactoryMethodParameterIsDeprecated() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(String.class)
.setFactoryMethodOnBean("deprecatedParameter", "config").getBeanDefinition();
beanFactory.registerBeanDefinition("config", BeanDefinitionBuilder
.genericBeanDefinition(DeprecatedMemberConfiguration.class).getBeanDefinition());
beanFactory.registerBeanDefinition("parameter", new RootBeanDefinition(DeprecatedBean.class));
compileAndCheckWarnings(beanDefinition);
}
@Test
void generateWhenTargetFactoryMethodReturnTypeIsDeprecated() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(DeprecatedBean.class)
.setFactoryMethodOnBean("deprecatedReturnType", "config").getBeanDefinition();
beanFactory.registerBeanDefinition("config", BeanDefinitionBuilder
.genericBeanDefinition(DeprecatedMemberConfiguration.class).getBeanDefinition());
compileAndCheckWarnings(beanDefinition);
}
@Test
void generateWhenTargetFactoryMethodIsProtectedAndReturnTypeIsDeprecated() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(DeprecatedBean.class)
.setFactoryMethodOnBean("deprecatedReturnTypeProtected", "config").getBeanDefinition();
beanFactory.registerBeanDefinition("config", BeanDefinitionBuilder
.genericBeanDefinition(DeprecatedMemberConfiguration.class).getBeanDefinition());
compileAndCheckWarnings(beanDefinition);
}
private void compileAndCheckWarnings(BeanDefinition beanDefinition) {
assertThatNoException().isThrownBy(() -> compile(TEST_COMPILER, beanDefinition,
((instanceSupplier, compiled) -> {})));
}
}
@Nested
@SuppressWarnings("removal")
class DeprecationForRemovalTests {
private static final TestCompiler TEST_COMPILER = TestCompiler.forSystem()
.withCompilerOptions("-Xlint:all", "-Xlint:-rawtypes", "-Werror");
@Test
@Disabled("Need to move to a separate method so that the warning can be suppressed")
void generateWhenTargetClassIsDeprecatedForRemoval() {
compileAndCheckWarnings(new RootBeanDefinition(DeprecatedForRemovalBean.class));
}
@Test
void generateWhenTargetConstructorIsDeprecatedForRemoval() {
compileAndCheckWarnings(new RootBeanDefinition(DeprecatedForRemovalConstructor.class));
}
@Test
void generateWhenTargetFactoryMethodIsDeprecatedForRemoval() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(String.class)
.setFactoryMethodOnBean("deprecatedString", "config").getBeanDefinition();
beanFactory.registerBeanDefinition("config", BeanDefinitionBuilder
.genericBeanDefinition(DeprecatedForRemovalMemberConfiguration.class).getBeanDefinition());
compileAndCheckWarnings(beanDefinition);
}
@Test
void generateWhenTargetFactoryMethodParameterIsDeprecatedForRemoval() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(String.class)
.setFactoryMethodOnBean("deprecatedParameter", "config").getBeanDefinition();
beanFactory.registerBeanDefinition("config", BeanDefinitionBuilder
.genericBeanDefinition(DeprecatedForRemovalMemberConfiguration.class).getBeanDefinition());
beanFactory.registerBeanDefinition("parameter", new RootBeanDefinition(DeprecatedForRemovalBean.class));
compileAndCheckWarnings(beanDefinition);
}
@Test
void generateWhenTargetFactoryMethodReturnTypeIsDeprecatedForRemoval() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(DeprecatedForRemovalBean.class)
.setFactoryMethodOnBean("deprecatedReturnType", "config").getBeanDefinition();
beanFactory.registerBeanDefinition("config", BeanDefinitionBuilder
.genericBeanDefinition(DeprecatedForRemovalMemberConfiguration.class).getBeanDefinition());
compileAndCheckWarnings(beanDefinition);
}
@Test
void generateWhenTargetFactoryMethodIsProtectedAndReturnTypeIsDeprecatedForRemoval() {
BeanDefinition beanDefinition = BeanDefinitionBuilder
.rootBeanDefinition(DeprecatedForRemovalBean.class)
.setFactoryMethodOnBean("deprecatedReturnTypeProtected", "config").getBeanDefinition();
beanFactory.registerBeanDefinition("config", BeanDefinitionBuilder
.genericBeanDefinition(DeprecatedForRemovalMemberConfiguration.class).getBeanDefinition());
compileAndCheckWarnings(beanDefinition);
}
private void compileAndCheckWarnings(BeanDefinition beanDefinition) {
assertThatNoException().isThrownBy(() -> compile(TEST_COMPILER, beanDefinition,
((instanceSupplier, compiled) -> {})));
}
}
} | java | github | https://github.com/spring-projects/spring-framework | spring-beans/src/test/java/org/springframework/beans/factory/aot/InstanceSupplierCodeGeneratorTests.java |
// Copyright IBM Corp. 2016, 2025
// SPDX-License-Identifier: BUSL-1.1
//go:build !enterprise
package pki
import (
"github.com/hashicorp/vault/builtin/logical/pki/issuing"
)
//go:generate go run github.com/hashicorp/vault/tools/stubmaker
func (b *backend) adjustInputBundle(input *inputBundle) {}
func entValidateRole(b *backend, entry *issuing.RoleEntry, operation string) ([]string, error) {
return nil, nil
} | go | github | https://github.com/hashicorp/vault | builtin/logical/pki/common_criteria_stubs_oss.go |
#!/usr/bin/env python
# vim: sts=4 sw=4 et
# This is a component of EMC
# gladevcp Copyright 2010 Chris Morley
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
""" Python / GLADE based Virtual Control Panel for EMC
A virtual control panel (VCP) is used to display and control
HAL pins.
Usage: gladevcp -g position -c compname -H halfile -x windowid myfile.glade
compname is the name of the HAL component to be created.
halfile contains hal commands to be executed with halcmd after the hal component is ready
The name of the HAL pins associated with the VCP will begin with 'compname.'
myfile.glade is an XML file which specifies the layout of the VCP.
    -g option allows setting of the initial position of the panel
"""
import sys, os, subprocess
import traceback
import hal
from optparse import Option, OptionParser
import ConfigParser
import gtk
import gtk.glade
import gobject
import signal
global remote_ok
try:
import glib
import gtk.gdk
import zmq
from message_pb2 import Container
from types_pb2 import *
except ImportError,msg:
    print >> sys.stderr, "gladevcp: can't operate remotely - import error: %s" % (msg)
remote_ok = False
else:
remote_ok = True
from gladevcp.gladebuilder import GladeBuilder
from gladevcp import xembed
import gladevcp.makepins
from hal_glib import GRemoteComponent
options = [ Option( '-c', dest='component', metavar='NAME'
, help="Set component name to NAME. Default is basename of UI file")
, Option( '-d', action='store_true', dest='debug'
, help="Enable debug output")
, Option( '-g', dest='geometry', default="", help="""Set geometry WIDTHxHEIGHT+XOFFSET+YOFFSET.
Values are in pixel units, XOFFSET/YOFFSET is referenced from top left of screen
use -g WIDTHxHEIGHT for just setting size or -g +XOFFSET+YOFFSET for just position""")
, Option( '-H', dest='halfile', metavar='FILE'
, help="execute hal statements from FILE with halcmd after the component is set up and ready")
          , Option( '-m', dest='maximum', default=False, help="Force panel window to maximize")
, Option( '-r', dest='gtk_rc', default="",
help="read custom GTK rc file to set widget style")
, Option( '-R', dest='gtk_workaround', action='store_false',default=True,
help="disable workaround for GTK bug to properly read ~/.gtkrc-2.0 gtkrc files")
, Option( '-t', dest='theme', default="", help="Set gtk theme. Default is system theme")
, Option( '-x', dest='parent', type=int, metavar='XID'
, help="Reparent gladevcp into an existing window XID instead of creating a new top level window")
, Option( '-u', dest='usermod', action='append', default=[], metavar='FILE'
, help='Use FILEs as additional user defined modules with handlers')
, Option( '-I', dest='instance', default=-1, metavar='remote HAL instance to connect to'
, help='connect to a particular HAL instance (default 0)')
, Option( '-S', dest='svc_uuid', default=None, metavar='UUID of remote HAL instance'
, help='connect to haltalk server by giving its UUID')
, Option( '-E', action='store_true', dest='use_mki'
, metavar='use MKUUID from MACHINEKIT_INI'
, help='local case - use the current MKUUID')
, Option( '-N', action='store_true', dest='remote',default=False
, metavar='connect to remote haltalk server using TCP'
, help='enable remote operation via TCP')
, Option( '-C', dest='halrcmd_uri', default=None, metavar='zeroMQ URI of remote HALrcmd service'
, help='connect to remote haltalk server by giving its HALrcmd URI')
, Option( '-M', dest='halrcomp_uri', default=None, metavar='zeroMQ URI of remote HALrcomp service'
, help='connect to remote haltalk server by giving its HALrcomp URI')
, Option( '-D', dest='rcdebug', default=0, metavar='debug level for remote components'
          , help='set to > 0 to trace message exchange for remote components')
, Option( '-P', dest='pinginterval', default=3, metavar='seconds to ping haltalk'
, help='normally 3 secs')
, Option( '-U', dest='useropts', action='append', metavar='USEROPT', default=[]
, help='pass USEROPTs to Python modules')
]
signal_func = 'on_unix_signal'
gladevcp_debug = 0
def dbg(str):
global gladevcp_debug
if not gladevcp_debug: return
print str
def on_window_destroy(widget, data=None):
gtk.main_quit()
class Trampoline(object):
def __init__(self,methods):
self.methods = methods
def __call__(self, *a, **kw):
for m in self.methods:
m(*a, **kw)
def load_handlers(usermod,halcomp,builder,useropts,compname):
hdl_func = 'get_handlers'
def add_handler(method, f):
if method in handlers:
handlers[method].append(f)
else:
handlers[method] = [f]
handlers = {}
for u in usermod:
(directory,filename) = os.path.split(u)
(basename,extension) = os.path.splitext(filename)
if directory == '':
directory = '.'
if directory not in sys.path:
sys.path.insert(0,directory)
dbg('adding import dir %s' % directory)
try:
mod = __import__(basename)
except ImportError,msg:
print "module '%s' skipped - import error: %s" %(basename,msg)
continue
dbg("module '%s' imported OK" % mod.__name__)
try:
# look for 'get_handlers' function
h = getattr(mod,hdl_func,None)
if h and callable(h):
dbg("module '%s' : '%s' function found" % (mod.__name__,hdl_func))
objlist = h(halcomp,builder,useropts, compname)
else:
# the module has no get_handlers() callable.
# in this case we permit any callable except class Objects in the module to register as handler
dbg("module '%s': no '%s' function - registering only functions as callbacks" % (mod.__name__,hdl_func))
objlist = [mod]
# extract callback candidates
for object in objlist:
dbg("Registering handlers in module %s object %s" % (mod.__name__, object))
                if isinstance(object, dict):
                    methods = object.items()
else:
methods = map(lambda n: (n, getattr(object, n, None)), dir(object))
for method,f in methods:
if method.startswith('_'):
continue
if callable(f):
dbg("Register callback '%s' in %s" % (method, object))
add_handler(method, f)
except Exception, e:
print "gladevcp: trouble looking for handlers in '%s': %s" %(basename, e)
traceback.print_exc()
# Wrap lists in Trampoline, unwrap single functions
for n,v in list(handlers.items()):
if len(v) == 1:
handlers[n] = v[0]
else:
handlers[n] = Trampoline(v)
return handlers
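The list-wrapping step above means one signal name can fan out to handlers contributed by several user modules. A minimal self-contained sketch of that pattern (handler names here are illustrative, not part of gladevcp):

```python
# Self-contained sketch of the handler-merging pattern above: a local
# copy of Trampoline fans one signal out to every registered callable.
class Trampoline(object):
    def __init__(self, methods):
        self.methods = methods

    def __call__(self, *a, **kw):
        for m in self.methods:
            m(*a, **kw)

calls = []
handlers = {'on_button_press': [lambda w: calls.append('mod1'),
                                lambda w: calls.append('mod2')],
            'on_destroy': [lambda w: calls.append('quit')]}

# wrap lists in Trampoline, unwrap single functions (as load_handlers does)
for name, fns in list(handlers.items()):
    handlers[name] = fns[0] if len(fns) == 1 else Trampoline(fns)

handlers['on_button_press']('fake-widget')   # both modules see the signal
```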
def main():
""" creates a HAL component.
    parses a glade XML file with gtk.builder or libglade
calls gladevcp.makepins with the specified XML file
to create pins and register callbacks.
main window must be called "window1"
"""
global gladevcp_debug
(progdir, progname) = os.path.split(sys.argv[0])
usage = "usage: %prog [options] myfile.ui"
parser = OptionParser(usage=usage)
parser.disable_interspersed_args()
parser.add_options(options)
(opts, args) = parser.parse_args()
if not args:
parser.print_help()
sys.exit(1)
gladevcp_debug = debug = opts.debug
xmlname = args[0]
if opts.instance > -1 and not remote_ok:
        print >> sys.stderr, "gladevcp: can't operate remotely - modules missing"
sys.exit(1)
#if there was no component name specified use the xml file name
if opts.component is None:
opts.component = os.path.splitext(os.path.basename(xmlname))[0]
    #try loading as a gtk.builder project
try:
builder = gtk.Builder()
builder.add_from_file(xmlname)
except:
try:
            # try loading as a libglade project
dbg("**** GLADE VCP INFO: Not a builder project, trying to load as a lib glade project")
builder = gtk.glade.XML(xmlname)
builder = GladeBuilder(builder)
except Exception,e:
print >> sys.stderr, "**** GLADE VCP ERROR: With xml file: %s : %s" % (xmlname,e)
sys.exit(0)
window = builder.get_object("window1")
window.set_title(opts.component)
if opts.instance != -1:
print >> sys.stderr, "*** GLADE VCP ERROR: the -I option is deprecated, either use:"
        print >> sys.stderr, "-S <uuid> for zeroconf lookup, or"
print >> sys.stderr, "-C <halrcmd_uri> -M <halrcomp_uri> for explicit URI's "
sys.exit(0)
if opts.svc_uuid and (opts.halrcmd_uri or opts.halrcomp_uri):
        print >> sys.stderr, "*** GLADE VCP ERROR: use either -S <uuid> or -C/-M, but not both"
sys.exit(0)
if not (opts.svc_uuid or opts.use_mki or opts.halrcmd_uri or opts.halrcomp_uri): # local
try:
import hal
halcomp = hal.component(opts.component)
except:
print >> sys.stderr, "*** GLADE VCP ERROR: Asking for a HAL component using a name that already exists."
sys.exit(0)
panel = gladevcp.makepins.GladePanel( halcomp, xmlname, builder, None)
else:
if opts.rcdebug: print "remote uuid=%s halrcmd=%s halrcomp=%s" % (opts.svc_uuid,opts.halrcmd_uri,opts.halrcomp_uri)
if opts.use_mki:
mki = ConfigParser.ConfigParser()
mki.read(os.getenv("MACHINEKIT_INI"))
uuid = mki.get("MACHINEKIT", "MKUUID")
else:
uuid = opts.svc_uuid
halcomp = GRemoteComponent(opts.component,
builder,
halrcmd_uri=opts.halrcmd_uri,
halrcomp_uri=opts.halrcomp_uri,
uuid=uuid,
instance=opts.instance,
period=int(opts.pinginterval),
remote=int(opts.remote),
debug=int(opts.rcdebug))
panel = gladevcp.makepins.GladePanel( halcomp, xmlname, builder, None)
# no discovery, so bind right away
if not (opts.use_mki or opts.svc_uuid):
halcomp.bind()
# else bind() is called once all URI's discovered and connected
# this should really be done with a signal, and bind() done in reaction to
# the signal
# at this point, any glade HL widgets and their pins are set up.
handlers = load_handlers(opts.usermod,halcomp,builder,opts.useropts,opts.component)
builder.connect_signals(handlers)
if opts.parent:
# block X errors since gdk error handling silently exits the
# program without even the atexit handler given a chance
gtk.gdk.error_trap_push()
window = xembed.reparent(window, opts.parent)
forward = os.environ.get('AXIS_FORWARD_EVENTS_TO', None)
if forward:
xembed.keyboard_forward(window, forward)
window.connect("destroy", on_window_destroy)
window.show()
# for window resize and or position options
if "+" in opts.geometry:
try:
j = opts.geometry.partition("+")
pos = j[2].partition("+")
window.move( int(pos[0]), int(pos[2]) )
except:
print >> sys.stderr, "**** GLADE VCP ERROR: With window position data"
parser.print_usage()
sys.exit(1)
if "x" in opts.geometry:
try:
if "+" in opts.geometry:
j = opts.geometry.partition("+")
t = j[0].partition("x")
else:
                t = opts.geometry.partition("x")
window.resize( int(t[0]), int(t[2]) )
except:
print >> sys.stderr, "**** GLADE VCP ERROR: With window resize data"
parser.print_usage()
sys.exit(1)
if opts.gtk_workaround:
# work around https://bugs.launchpad.net/ubuntu/+source/pygtk/+bug/507739
# this makes widget and widget_class matches in gtkrc and theme files actually work
dbg( "activating GTK bug workaround for gtkrc files")
for o in builder.get_objects():
if isinstance(o, gtk.Widget):
# retrieving the name works only for GtkBuilder files, not for
# libglade files, so be cautious about it
name = gtk.Buildable.get_name(o)
if name: o.set_name(name)
if opts.gtk_rc:
dbg( "**** GLADE VCP INFO: %s reading gtkrc file '%s'" %(opts.component,opts.gtk_rc))
gtk.rc_add_default_file(opts.gtk_rc)
gtk.rc_parse(opts.gtk_rc)
if opts.theme:
dbg("**** GLADE VCP INFO: Switching %s to '%s' theme" %(opts.component,opts.theme))
settings = gtk.settings_get_default()
settings.set_string_property("gtk-theme-name", opts.theme, "")
    # This needs to be done after geometry moves so on dual screens the window maximizes to the actual used screen size.
if opts.maximum:
window.window.maximize()
if opts.instance > -1:
# zmq setup incantations
print "setup for remote instance",opts.instance
pass
if opts.halfile:
cmd = ["halcmd", "-f", opts.halfile]
res = subprocess.call(cmd, stdout=sys.stdout, stderr=sys.stderr)
if res:
print >> sys.stderr, "'%s' exited with %d" %(' '.join(cmd), res)
sys.exit(res)
# User components are set up so report that we are ready
halcomp.ready()
if handlers.has_key(signal_func):
dbg("Register callback '%s' for SIGINT and SIGTERM" %(signal_func))
signal.signal(signal.SIGTERM, handlers[signal_func])
signal.signal(signal.SIGINT, handlers[signal_func])
try:
gtk.main()
except KeyboardInterrupt:
sys.exit(0)
finally:
halcomp.exit()
if opts.parent:
gtk.gdk.flush()
error = gtk.gdk.error_trap_pop()
if error:
print >> sys.stderr, "**** GLADE VCP ERROR: X Protocol Error: %s" % str(error)
if __name__ == '__main__':
main() | unknown | codeparrot/codeparrot-clean | ||
import itertools
import functools
import numpy as np
try:
import bottleneck as bn
_USE_BOTTLENECK = True
except ImportError: # pragma: no cover
_USE_BOTTLENECK = False
import pandas.hashtable as _hash
from pandas import compat, lib, algos, tslib
from pandas.compat import builtins
from pandas.core.common import (isnull, notnull, _values_from_object,
_maybe_upcast_putmask,
ensure_float, _ensure_float64,
_ensure_int64, _ensure_object,
is_float, is_integer, is_complex,
is_float_dtype,
is_complex_dtype, is_integer_dtype,
is_bool_dtype, is_object_dtype,
is_datetime64_dtype, is_timedelta64_dtype,
is_datetime_or_timedelta_dtype, _get_dtype,
is_int_or_datetime_dtype, is_any_int_dtype,
_int64_max)
class disallow(object):
def __init__(self, *dtypes):
super(disallow, self).__init__()
self.dtypes = tuple(np.dtype(dtype).type for dtype in dtypes)
def check(self, obj):
return hasattr(obj, 'dtype') and issubclass(obj.dtype.type,
self.dtypes)
def __call__(self, f):
@functools.wraps(f)
def _f(*args, **kwargs):
obj_iter = itertools.chain(args, compat.itervalues(kwargs))
if any(self.check(obj) for obj in obj_iter):
raise TypeError('reduction operation {0!r} not allowed for '
'this dtype'.format(f.__name__.replace('nan',
'')))
try:
return f(*args, **kwargs)
except ValueError as e:
# we want to transform an object array
# ValueError message to the more typical TypeError
# e.g. this is normally a disallowed function on
# object arrays that contain strings
if is_object_dtype(args[0]):
raise TypeError(e)
raise
return _f
class bottleneck_switch(object):
def __init__(self, zero_value=None, **kwargs):
self.zero_value = zero_value
self.kwargs = kwargs
def __call__(self, alt):
bn_name = alt.__name__
try:
bn_func = getattr(bn, bn_name)
except (AttributeError, NameError): # pragma: no cover
bn_func = None
@functools.wraps(alt)
def f(values, axis=None, skipna=True, **kwds):
if len(self.kwargs) > 0:
for k, v in compat.iteritems(self.kwargs):
if k not in kwds:
kwds[k] = v
try:
if self.zero_value is not None and values.size == 0:
if values.ndim == 1:
# wrap the 0's if needed
if is_timedelta64_dtype(values):
return lib.Timedelta(0)
return 0
else:
result_shape = (values.shape[:axis] +
values.shape[axis + 1:])
result = np.empty(result_shape)
result.fill(0)
return result
if _USE_BOTTLENECK and skipna and _bn_ok_dtype(values.dtype,
bn_name):
result = bn_func(values, axis=axis, **kwds)
# prefer to treat inf/-inf as NA, but must compute the func
# twice :(
if _has_infs(result):
result = alt(values, axis=axis, skipna=skipna, **kwds)
else:
result = alt(values, axis=axis, skipna=skipna, **kwds)
except Exception:
try:
result = alt(values, axis=axis, skipna=skipna, **kwds)
except ValueError as e:
# we want to transform an object array
# ValueError message to the more typical TypeError
# e.g. this is normally a disallowed function on
# object arrays that contain strings
if is_object_dtype(values):
raise TypeError(e)
raise
return result
return f
def _bn_ok_dtype(dt, name):
# Bottleneck chokes on datetime64
if (not is_object_dtype(dt) and
not is_datetime_or_timedelta_dtype(dt)):
# bottleneck does not properly upcast during the sum
# so can overflow
if name == 'nansum':
if dt.itemsize < 8:
return False
return True
return False
def _has_infs(result):
if isinstance(result, np.ndarray):
if result.dtype == 'f8':
return lib.has_infs_f8(result.ravel())
elif result.dtype == 'f4':
return lib.has_infs_f4(result.ravel())
try:
return np.isinf(result).any()
except (TypeError, NotImplementedError) as e:
# if it doesn't support infs, then it can't have infs
return False
def _get_fill_value(dtype, fill_value=None, fill_value_typ=None):
""" return the correct fill value for the dtype of the values """
if fill_value is not None:
return fill_value
if _na_ok_dtype(dtype):
if fill_value_typ is None:
return np.nan
else:
if fill_value_typ == '+inf':
return np.inf
else:
return -np.inf
else:
if fill_value_typ is None:
return tslib.iNaT
else:
if fill_value_typ == '+inf':
# need the max int here
return _int64_max
else:
return tslib.iNaT
def _get_values(values, skipna, fill_value=None, fill_value_typ=None,
isfinite=False, copy=True):
""" utility to get the values view, mask, dtype
if necessary copy and mask using the specified fill_value
copy = True will force the copy """
values = _values_from_object(values)
if isfinite:
mask = _isfinite(values)
else:
mask = isnull(values)
dtype = values.dtype
dtype_ok = _na_ok_dtype(dtype)
# get our fill value (in case we need to provide an alternative
# dtype for it)
fill_value = _get_fill_value(dtype, fill_value=fill_value,
fill_value_typ=fill_value_typ)
if skipna:
if copy:
values = values.copy()
if dtype_ok:
np.putmask(values, mask, fill_value)
# promote if needed
else:
values, changed = _maybe_upcast_putmask(values, mask, fill_value)
elif copy:
values = values.copy()
values = _view_if_needed(values)
# return a platform independent precision dtype
dtype_max = dtype
if is_integer_dtype(dtype) or is_bool_dtype(dtype):
dtype_max = np.int64
elif is_float_dtype(dtype):
dtype_max = np.float64
return values, mask, dtype, dtype_max
def _isfinite(values):
if is_datetime_or_timedelta_dtype(values):
return isnull(values)
if (is_complex_dtype(values) or is_float_dtype(values) or
is_integer_dtype(values) or is_bool_dtype(values)):
return ~np.isfinite(values)
return ~np.isfinite(values.astype('float64'))
def _na_ok_dtype(dtype):
return not is_int_or_datetime_dtype(dtype)
def _view_if_needed(values):
if is_datetime_or_timedelta_dtype(values):
return values.view(np.int64)
return values
def _wrap_results(result, dtype):
""" wrap our results if needed """
if is_datetime64_dtype(dtype):
if not isinstance(result, np.ndarray):
result = lib.Timestamp(result)
else:
result = result.view(dtype)
elif is_timedelta64_dtype(dtype):
if not isinstance(result, np.ndarray):
# raise if we have a timedelta64[ns] which is too large
if np.fabs(result) > _int64_max:
raise ValueError("overflow in timedelta operation")
result = lib.Timedelta(result, unit='ns')
else:
result = result.astype('i8').view(dtype)
return result
def nanany(values, axis=None, skipna=True):
values, mask, dtype, _ = _get_values(values, skipna, False, copy=skipna)
return values.any(axis)
def nanall(values, axis=None, skipna=True):
values, mask, dtype, _ = _get_values(values, skipna, True, copy=skipna)
return values.all(axis)
@disallow('M8')
@bottleneck_switch(zero_value=0)
def nansum(values, axis=None, skipna=True):
values, mask, dtype, dtype_max = _get_values(values, skipna, 0)
dtype_sum = dtype_max
if is_float_dtype(dtype):
dtype_sum = dtype
elif is_timedelta64_dtype(dtype):
dtype_sum = np.float64
the_sum = values.sum(axis, dtype=dtype_sum)
the_sum = _maybe_null_out(the_sum, axis, mask)
return _wrap_results(the_sum, dtype)
@disallow('M8')
@bottleneck_switch()
def nanmean(values, axis=None, skipna=True):
values, mask, dtype, dtype_max = _get_values(values, skipna, 0)
dtype_sum = dtype_max
dtype_count = np.float64
if is_integer_dtype(dtype) or is_timedelta64_dtype(dtype):
dtype_sum = np.float64
elif is_float_dtype(dtype):
dtype_sum = dtype
dtype_count = dtype
count = _get_counts(mask, axis, dtype=dtype_count)
the_sum = _ensure_numeric(values.sum(axis, dtype=dtype_sum))
if axis is not None and getattr(the_sum, 'ndim', False):
the_mean = the_sum / count
ct_mask = count == 0
if ct_mask.any():
the_mean[ct_mask] = np.nan
else:
the_mean = the_sum / count if count > 0 else np.nan
return _wrap_results(the_mean, dtype)
@disallow('M8')
@bottleneck_switch()
def nanmedian(values, axis=None, skipna=True):
values, mask, dtype, dtype_max = _get_values(values, skipna)
def get_median(x):
mask = notnull(x)
if not skipna and not mask.all():
return np.nan
return algos.median(_values_from_object(x[mask]))
if not is_float_dtype(values):
values = values.astype('f8')
values[mask] = np.nan
if axis is None:
values = values.ravel()
notempty = values.size
# an array from a frame
if values.ndim > 1:
# there's a non-empty array to apply over otherwise numpy raises
if notempty:
return _wrap_results(np.apply_along_axis(get_median, axis, values), dtype)
# must return the correct shape, but median is not defined for the
# empty set so return nans of shape "everything but the passed axis"
# since "axis" is where the reduction would occur if we had a nonempty
# array
shp = np.array(values.shape)
dims = np.arange(values.ndim)
ret = np.empty(shp[dims != axis])
ret.fill(np.nan)
return _wrap_results(ret, dtype)
# otherwise return a scalar value
return _wrap_results(get_median(values) if notempty else np.nan, dtype)
def _get_counts_nanvar(mask, axis, ddof, dtype=float):
dtype = _get_dtype(dtype)
count = _get_counts(mask, axis, dtype=dtype)
d = count - dtype.type(ddof)
# always return NaN, never inf
if np.isscalar(count):
if count <= ddof:
count = np.nan
d = np.nan
else:
mask2 = count <= ddof
if mask2.any():
np.putmask(d, mask2, np.nan)
np.putmask(count, mask2, np.nan)
return count, d
@disallow('M8')
@bottleneck_switch(ddof=1)
def nanstd(values, axis=None, skipna=True, ddof=1):
result = np.sqrt(nanvar(values, axis=axis, skipna=skipna, ddof=ddof))
return _wrap_results(result, values.dtype)
@disallow('M8')
@bottleneck_switch(ddof=1)
def nanvar(values, axis=None, skipna=True, ddof=1):
dtype = values.dtype
mask = isnull(values)
if is_any_int_dtype(values):
values = values.astype('f8')
values[mask] = np.nan
if is_float_dtype(values):
count, d = _get_counts_nanvar(mask, axis, ddof, values.dtype)
else:
count, d = _get_counts_nanvar(mask, axis, ddof)
if skipna:
values = values.copy()
np.putmask(values, mask, 0)
# xref GH10242
# Compute variance via two-pass algorithm, which is stable against
# cancellation errors and relatively accurate for small numbers of
# observations.
#
# See https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
avg = _ensure_numeric(values.sum(axis=axis, dtype=np.float64)) / count
if axis is not None:
avg = np.expand_dims(avg, axis)
sqr = _ensure_numeric((avg - values) ** 2)
np.putmask(sqr, mask, 0)
result = sqr.sum(axis=axis, dtype=np.float64) / d
# Return variance as np.float64 (the datatype used in the accumulator),
# unless we were dealing with a float array, in which case use the same
# precision as the original values array.
if is_float_dtype(dtype):
result = result.astype(dtype)
return _wrap_results(result, values.dtype)
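The comment block inside nanvar references the classic two-pass algorithm; here is a dependency-free sketch of that algorithm on plain Python lists (pandas itself runs it on ndarrays with NaN masking and the NA-adjusted divisor d):

```python
def twopass_var(values, ddof=1):
    """Two-pass variance: compute the mean first, then sum squared
    deviations from it.  Numerically stabler than the one-pass
    E[x^2] - E[x]^2 formula, which suffers catastrophic cancellation
    when values are large and close together."""
    n = len(values)
    avg = sum(values) / float(n)               # first pass: mean
    ss = sum((v - avg) ** 2 for v in values)   # second pass: deviations
    return ss / (n - ddof)
```

With ddof=1 this matches the `d` divisor used above once the NA counts are taken into account.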
@disallow('M8', 'm8')
def nansem(values, axis=None, skipna=True, ddof=1):
    # first call raises early on invalid dtypes, before the float cast below
    var = nanvar(values, axis, skipna, ddof=ddof)
mask = isnull(values)
if not is_float_dtype(values.dtype):
values = values.astype('f8')
count, _ = _get_counts_nanvar(mask, axis, ddof, values.dtype)
var = nanvar(values, axis, skipna, ddof=ddof)
return np.sqrt(var) / np.sqrt(count)
def _nanminmax(meth, fill_value_typ):
@bottleneck_switch()
def reduction(values, axis=None, skipna=True):
values, mask, dtype, dtype_max = _get_values(
values,
skipna,
fill_value_typ=fill_value_typ,
)
if ((axis is not None and values.shape[axis] == 0)
or values.size == 0):
try:
result = getattr(values, meth)(axis, dtype=dtype_max)
result.fill(np.nan)
except:
result = np.nan
else:
result = getattr(values, meth)(axis)
result = _wrap_results(result, dtype)
return _maybe_null_out(result, axis, mask)
reduction.__name__ = 'nan' + meth
return reduction
nanmin = _nanminmax('min', fill_value_typ='+inf')
nanmax = _nanminmax('max', fill_value_typ='-inf')
def nanargmax(values, axis=None, skipna=True):
"""
Returns -1 in the NA case
"""
values, mask, dtype, _ = _get_values(values, skipna, fill_value_typ='-inf',
isfinite=True)
result = values.argmax(axis)
result = _maybe_arg_null_out(result, axis, mask, skipna)
return result
def nanargmin(values, axis=None, skipna=True):
"""
Returns -1 in the NA case
"""
values, mask, dtype, _ = _get_values(values, skipna, fill_value_typ='+inf',
isfinite=True)
result = values.argmin(axis)
result = _maybe_arg_null_out(result, axis, mask, skipna)
return result
@disallow('M8','m8')
def nanskew(values, axis=None, skipna=True):
mask = isnull(values)
if not is_float_dtype(values.dtype):
values = values.astype('f8')
count = _get_counts(mask, axis)
else:
count = _get_counts(mask, axis, dtype=values.dtype)
if skipna:
values = values.copy()
np.putmask(values, mask, 0)
typ = values.dtype.type
A = values.sum(axis) / count
B = (values ** 2).sum(axis) / count - A ** typ(2)
C = (values ** 3).sum(axis) / count - A ** typ(3) - typ(3) * A * B
# floating point error
B = _zero_out_fperr(B)
C = _zero_out_fperr(C)
result = ((np.sqrt(count * count - count) * C) /
((count - typ(2)) * np.sqrt(B) ** typ(3)))
if isinstance(result, np.ndarray):
result = np.where(B == 0, 0, result)
result[count < 3] = np.nan
return result
else:
result = 0 if B == 0 else result
if count < 3:
return np.nan
return result
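nanskew assembles the adjusted Fisher-Pearson estimator from the raw power sums A, B, C. The same quantity computed directly from central moments makes the formula easier to check (this is a verification sketch, not the pandas code path):

```python
def sample_skew(values):
    """Adjusted Fisher-Pearson skewness:
        G1 = (m3 / m2**1.5) * sqrt(n*(n-1)) / (n-2)
    where m2, m3 are the biased central moments -- the same B and C
    that nanskew derives from raw power sums."""
    n = float(len(values))
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    return (m3 / m2 ** 1.5) * (n * (n - 1)) ** 0.5 / (n - 2)
```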
@disallow('M8','m8')
def nankurt(values, axis=None, skipna=True):
mask = isnull(values)
if not is_float_dtype(values.dtype):
values = values.astype('f8')
count = _get_counts(mask, axis)
else:
count = _get_counts(mask, axis, dtype=values.dtype)
if skipna:
values = values.copy()
np.putmask(values, mask, 0)
typ = values.dtype.type
A = values.sum(axis) / count
B = (values ** 2).sum(axis) / count - A ** typ(2)
C = (values ** 3).sum(axis) / count - A ** typ(3) - typ(3) * A * B
D = (values ** 4).sum(axis) / count - A ** typ(4) - typ(6) * B * A * A - typ(4) * C * A
B = _zero_out_fperr(B)
D = _zero_out_fperr(D)
if not isinstance(B, np.ndarray):
# if B is a scalar, check these corner cases first before doing division
if count < 4:
return np.nan
if B == 0:
return 0
result = (((count * count - typ(1)) * D / (B * B) - typ(3) * ((count - typ(1)) ** typ(2))) /
((count - typ(2)) * (count - typ(3))))
if isinstance(result, np.ndarray):
result = np.where(B == 0, 0, result)
result[count < 4] = np.nan
return result
@disallow('M8','m8')
def nanprod(values, axis=None, skipna=True):
mask = isnull(values)
if skipna and not is_any_int_dtype(values):
values = values.copy()
values[mask] = 1
result = values.prod(axis)
return _maybe_null_out(result, axis, mask)
def _maybe_arg_null_out(result, axis, mask, skipna):
# helper function for nanargmin/nanargmax
if axis is None or not getattr(result, 'ndim', False):
if skipna:
if mask.all():
result = -1
else:
if mask.any():
result = -1
else:
if skipna:
na_mask = mask.all(axis)
else:
na_mask = mask.any(axis)
if na_mask.any():
result[na_mask] = -1
return result
def _get_counts(mask, axis, dtype=float):
dtype = _get_dtype(dtype)
if axis is None:
return dtype.type(mask.size - mask.sum())
count = mask.shape[axis] - mask.sum(axis)
if np.isscalar(count):
return dtype.type(count)
try:
return count.astype(dtype)
except AttributeError:
return np.array(count, dtype=dtype)
def _maybe_null_out(result, axis, mask):
if axis is not None and getattr(result, 'ndim', False):
null_mask = (mask.shape[axis] - mask.sum(axis)) == 0
if np.any(null_mask):
if np.iscomplexobj(result):
result = result.astype('c16')
else:
result = result.astype('f8')
result[null_mask] = np.nan
elif result is not tslib.NaT:
null_mask = mask.size - mask.sum()
if null_mask == 0:
result = np.nan
return result
def _zero_out_fperr(arg):
if isinstance(arg, np.ndarray):
return np.where(np.abs(arg) < 1e-14, 0, arg)
else:
return arg.dtype.type(0) if np.abs(arg) < 1e-14 else arg
@disallow('M8','m8')
def nancorr(a, b, method='pearson', min_periods=None):
"""
a, b: ndarrays
"""
if len(a) != len(b):
raise AssertionError('Operands to nancorr must have same size')
if min_periods is None:
min_periods = 1
valid = notnull(a) & notnull(b)
if not valid.all():
a = a[valid]
b = b[valid]
if len(a) < min_periods:
return np.nan
f = get_corr_func(method)
return f(a, b)
def get_corr_func(method):
if method in ['kendall', 'spearman']:
from scipy.stats import kendalltau, spearmanr
def _pearson(a, b):
return np.corrcoef(a, b)[0, 1]
def _kendall(a, b):
rs = kendalltau(a, b)
if isinstance(rs, tuple):
return rs[0]
return rs
def _spearman(a, b):
return spearmanr(a, b)[0]
_cor_methods = {
'pearson': _pearson,
'kendall': _kendall,
'spearman': _spearman
}
return _cor_methods[method]
@disallow('M8','m8')
def nancov(a, b, min_periods=None):
if len(a) != len(b):
raise AssertionError('Operands to nancov must have same size')
if min_periods is None:
min_periods = 1
valid = notnull(a) & notnull(b)
if not valid.all():
a = a[valid]
b = b[valid]
if len(a) < min_periods:
return np.nan
return np.cov(a, b)[0, 1]
def _ensure_numeric(x):
if isinstance(x, np.ndarray):
if is_integer_dtype(x) or is_bool_dtype(x):
x = x.astype(np.float64)
elif is_object_dtype(x):
try:
x = x.astype(np.complex128)
except:
x = x.astype(np.float64)
else:
if not np.any(x.imag):
x = x.real
elif not (is_float(x) or is_integer(x) or is_complex(x)):
try:
x = float(x)
except Exception:
try:
x = complex(x)
except Exception:
raise TypeError('Could not convert %s to numeric' % str(x))
return x
# NA-friendly array comparisons
import operator
def make_nancomp(op):
def f(x, y):
xmask = isnull(x)
ymask = isnull(y)
mask = xmask | ymask
result = op(x, y)
if mask.any():
if is_bool_dtype(result):
result = result.astype('O')
np.putmask(result, mask, np.nan)
return result
return f
nangt = make_nancomp(operator.gt)
nange = make_nancomp(operator.ge)
nanlt = make_nancomp(operator.lt)
nanle = make_nancomp(operator.le)
naneq = make_nancomp(operator.eq)
nanne = make_nancomp(operator.ne)
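make_nancomp builds NA-propagating comparisons: wherever either operand is missing, the boolean result is replaced by NaN (which forces object dtype). The same idea in a list-based sketch, using None as the missing-value marker instead of NaN masks:

```python
import operator

def make_nancomp_listwise(op):
    """NA-propagating elementwise comparison: positions where either
    input is missing (None here) yield None instead of a bool."""
    def f(xs, ys):
        return [None if (x is None or y is None) else op(x, y)
                for x, y in zip(xs, ys)]
    return f

nangt_listwise = make_nancomp_listwise(operator.gt)
out = nangt_listwise([1, None, 3], [0, 5, None])   # NA in -> NA out
```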
def unique1d(values):
"""
Hash table-based unique
"""
if np.issubdtype(values.dtype, np.floating):
table = _hash.Float64HashTable(len(values))
uniques = np.array(table.unique(_ensure_float64(values)),
dtype=np.float64)
elif np.issubdtype(values.dtype, np.datetime64):
table = _hash.Int64HashTable(len(values))
uniques = table.unique(_ensure_int64(values))
uniques = uniques.view('M8[ns]')
elif np.issubdtype(values.dtype, np.timedelta64):
table = _hash.Int64HashTable(len(values))
uniques = table.unique(_ensure_int64(values))
uniques = uniques.view('m8[ns]')
elif np.issubdtype(values.dtype, np.integer):
table = _hash.Int64HashTable(len(values))
uniques = table.unique(_ensure_int64(values))
else:
table = _hash.PyObjectHashTable(len(values))
uniques = table.unique(_ensure_object(values))
return uniques | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/env python
# coding=utf-8
import ast
import requests
import socket
import sys
import threading
from datetime import datetime
class PortScanner(object):
threads = []
def __init__(self, config, logfile):
socket_codes = self.getSocketCodes()
start_time = datetime.now()
out = "PortScanner started at {}\n\n".format(start_time)
print(out)
if logfile:
logfile.write(out)
logfile.flush()
for address in config:
if config[address]:
t = threading.Thread(target=self.scanPorts,
args=(address, config[address], logfile, True))
self.threads.append(t)
t.start()
else:
allports = range(1,1025)
sliced_ports = []
while allports: # getting ports in batches of 20
slice = allports[:20]
sliced_ports.append(slice)
allports = [p for p in allports if p not in slice]
for ports in sliced_ports:
t=threading.Thread(target=self.scanPorts,
args=(address, ports, logfile, False))
self.threads.append(t)
t.start()
        # wait for every scanner thread to finish
        for t in self.threads:
            t.join()
end_time=datetime.now()
out="Scanning completed in {}".format(end_time - start_time)
print(out)
if logfile:
logfile.write(out)
logfile.close()
def scanPorts(self, host, ports, logfile, detailed):
'''
Tries to open a socket and logs the result.
host, ports:
the address and the ports to test
logfile:
the file in which to write the results
detailed:
if True, the TCP result code is logged for every port.
if False, only the open ports will be registered.
'''
try:
if detailed:
codes=self.getSocketCodes()
for port in ports:
                    sock=socket.socket(
                        socket.AF_INET, socket.SOCK_STREAM)
                    result=sock.connect_ex((host, int(port)))
                    sock.close()
out="[{0}] Port {1} returned code {2}: {3}\n".format(host,
port,
result,
codes[result])
print out
if logfile:
logfile.write(out)
logfile.flush()
else:
for port in ports:
print("testing {0}:{1}".format(host, port))
sock=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
result=sock.connect_ex((host, int(port)))
sock.close()
if result == 0:
out='[{0}] Port {1} is open\n'.format(host, port)
print out
if logfile:
logfile.write(out)
logfile.flush()
except KeyboardInterrupt:
            out = "WARNING: You pressed Ctrl+C.\n"
if logfile:
logfile.write(out)
logfile.flush()
except socket.gaierror:
            out = 'Hostname could not be resolved. Exiting\n'
if logfile:
logfile.write(out)
logfile.flush()
sys.exit()
except socket.error:
print "Couldn't connect to server"
sys.exit()
def getSocketCodes(self):
res=requests.get(
"https://gist.githubusercontent.com/d33pcode/2542a87dd80ba35dbffd2cffbb65b53a/raw/8a137eae6bd56ad0e55d8ea3cf1b590ef25698fe/socketcodes.txt")
return ast.literal_eval(res.content)
def main():
test_list=readConf('addresslist.conf')
output=open('scan.log', 'w')
PortScanner(test_list, output)
def readConf(path):
with open(path, 'r') as f:
content=f.read()
return ast.literal_eval(content)
if __name__ == '__main__':
main() | unknown | codeparrot/codeparrot-clean | ||
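The scanner's core test is socket.connect_ex, which returns 0 when the TCP handshake succeeds and an errno value otherwise, instead of raising. Its behaviour can be exercised against a throwaway local listener, with the port number chosen by the OS:

```python
import socket

# Listener on an OS-assigned localhost port; probe it the same way
# scanPorts does, then probe again after the listener is gone.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(('127.0.0.1', 0))            # port 0: let the OS pick
listener.listen(1)
port = listener.getsockname()[1]

probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
open_result = probe.connect_ex(('127.0.0.1', port))    # 0 == open
probe.close()
listener.close()

probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
closed_result = probe.connect_ex(('127.0.0.1', port))  # errno (e.g. ECONNREFUSED)
probe.close()
```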
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from heatclient.v1 import resource_types
from heatclient.v1 import services
from heatclient.v1 import stacks
from openstack_dashboard.test.test_data import utils
# A slightly hacked up copy of a sample cloudformation template for testing.
TEMPLATE = """
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "AWS CloudFormation Sample Template.",
"Parameters": {
"KeyName": {
"Description": "Name of an EC2 Key Pair to enable SSH access to the instances",
"Type": "String"
},
"InstanceType": {
"Description": "WebServer EC2 instance type",
"Type": "String",
"Default": "m1.small",
"AllowedValues": [
"m1.tiny",
"m1.small",
"m1.medium",
"m1.large",
"m1.xlarge"
],
"ConstraintDescription": "must be a valid EC2 instance type."
},
"DBName": {
"Default": "wordpress",
"Description": "The WordPress database name",
"Type": "String",
"MinLength": "1",
"MaxLength": "64",
"AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*",
"ConstraintDescription": "must begin with a letter and..."
},
"DBUsername": {
"Default": "admin",
"NoEcho": "true",
"Description": "The WordPress database admin account username",
"Type": "String",
"MinLength": "1",
"MaxLength": "16",
"AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*",
"ConstraintDescription": "must begin with a letter and..."
},
"DBPassword": {
"Default": "admin",
"NoEcho": "true",
"Description": "The WordPress database admin account password",
"Type": "String",
"MinLength": "1",
"MaxLength": "41",
"AllowedPattern": "[a-zA-Z0-9]*",
"ConstraintDescription": "must contain only alphanumeric characters."
},
"DBRootPassword": {
"Default": "admin",
"NoEcho": "true",
"Description": "Root password for MySQL",
"Type": "String",
"MinLength": "1",
"MaxLength": "41",
"AllowedPattern": "[a-zA-Z0-9]*",
"ConstraintDescription": "must contain only alphanumeric characters."
},
"LinuxDistribution": {
"Default": "F17",
"Description": "Distribution of choice",
"Type": "String",
"AllowedValues": [
"F18",
"F17",
"U10",
"RHEL-6.1",
"RHEL-6.2",
"RHEL-6.3"
]
},
"Network": {
"Type": "String",
"CustomConstraint": "neutron.network"
}
},
"Mappings": {
"AWSInstanceType2Arch": {
"m1.tiny": {
"Arch": "32"
},
"m1.small": {
"Arch": "64"
},
"m1.medium": {
"Arch": "64"
},
"m1.large": {
"Arch": "64"
},
"m1.xlarge": {
"Arch": "64"
}
},
"DistroArch2AMI": {
"F18": {
"32": "F18-i386-cfntools",
"64": "F18-x86_64-cfntools"
},
"F17": {
"32": "F17-i386-cfntools",
"64": "F17-x86_64-cfntools"
},
"U10": {
"32": "U10-i386-cfntools",
"64": "U10-x86_64-cfntools"
},
"RHEL-6.1": {
"32": "rhel61-i386-cfntools",
"64": "rhel61-x86_64-cfntools"
},
"RHEL-6.2": {
"32": "rhel62-i386-cfntools",
"64": "rhel62-x86_64-cfntools"
},
"RHEL-6.3": {
"32": "rhel63-i386-cfntools",
"64": "rhel63-x86_64-cfntools"
}
}
},
"Resources": {
"WikiDatabase": {
"Type": "AWS::EC2::Instance",
"Metadata": {
"AWS::CloudFormation::Init": {
"config": {
"packages": {
"yum": {
"mysql": [],
"mysql-server": [],
"httpd": [],
"wordpress": []
}
},
"services": {
"systemd": {
"mysqld": {
"enabled": "true",
"ensureRunning": "true"
},
"httpd": {
"enabled": "true",
"ensureRunning": "true"
}
}
}
}
}
},
"Properties": {
"ImageId": {
"Fn::FindInMap": [
"DistroArch2AMI",
{
"Ref": "LinuxDistribution"
},
{
"Fn::FindInMap": [
"AWSInstanceType2Arch",
{
"Ref": "InstanceType"
},
"Arch"
]
}
]
},
"InstanceType": {
"Ref": "InstanceType"
},
"KeyName": {
"Ref": "KeyName"
},
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"",
[
"#!/bin/bash -v\n",
"/opt/aws/bin/cfn-init\n"
]
]
}
}
}
}
},
"Outputs": {
"WebsiteURL": {
"Value": {
"Fn::Join": [
"",
[
"http://",
{
"Fn::GetAtt": [
"WikiDatabase",
"PublicIp"
]
},
"/wordpress"
]
]
},
"Description": "URL for Wordpress wiki"
}
}
}
"""
VALIDATE = """
{
"Description": "AWS CloudFormation Sample Template.",
"Parameters": {
"DBUsername": {
"Type": "String",
"Description": "The WordPress database admin account username",
"Default": "admin",
"MinLength": "1",
"AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*",
"NoEcho": "true",
"MaxLength": "16",
"ConstraintDescription": "must begin with a letter and..."
},
"LinuxDistribution": {
"Default": "F17",
"Type": "String",
"Description": "Distribution of choice",
"AllowedValues": [
"F18",
"F17",
"U10",
"RHEL-6.1",
"RHEL-6.2",
"RHEL-6.3"
]
},
"DBRootPassword": {
"Type": "String",
"Description": "Root password for MySQL",
"Default": "admin",
"MinLength": "1",
"AllowedPattern": "[a-zA-Z0-9]*",
"NoEcho": "true",
"MaxLength": "41",
"ConstraintDescription": "must contain only alphanumeric characters."
},
"KeyName": {
"Type": "String",
"Description": "Name of an EC2 Key Pair to enable SSH access to the instances"
},
"DBName": {
"Type": "String",
"Description": "The WordPress database name",
"Default": "wordpress",
"MinLength": "1",
"AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*",
"MaxLength": "64",
"ConstraintDescription": "must begin with a letter and..."
},
"DBPassword": {
"Type": "String",
"Description": "The WordPress database admin account password",
"Default": "admin",
"MinLength": "1",
"AllowedPattern": "[a-zA-Z0-9]*",
"NoEcho": "true",
"MaxLength": "41",
"ConstraintDescription": "must contain only alphanumeric characters."
},
"InstanceType": {
"Default": "m1.small",
"Type": "String",
"ConstraintDescription": "must be a valid EC2 instance type.",
"Description": "WebServer EC2 instance type",
"AllowedValues": [
"m1.tiny",
"m1.small",
"m1.medium",
"m1.large",
"m1.xlarge"
]
},
"Network": {
"Type": "String",
"CustomConstraint": "neutron.network"
}
}
}
"""
ENVIRONMENT = """
parameters:
InstanceType: m1.xsmall
db_password: verybadpass
KeyName: heat_key
"""
class Environment(object):
def __init__(self, data):
self.data = data
class Template(object):
def __init__(self, data, validate):
self.data = data
self.validate = validate
def data(TEST):
TEST.stacks = utils.TestDataContainer()
TEST.stack_templates = utils.TestDataContainer()
TEST.stack_environments = utils.TestDataContainer()
TEST.resource_types = utils.TestDataContainer()
TEST.heat_services = utils.TestDataContainer()
# Services
service_1 = services.Service(services.ServiceManager(None), {
"status": "up",
"binary": "heat-engine",
"report_interval": 60,
"engine_id": "2f7b5a9b-c50b-4b01-8248-f89f5fb338d1",
"created_at": "2015-02-06T03:23:32.000000",
"hostname": "mrkanag",
"updated_at": "2015-02-20T09:49:52.000000",
"topic": "engine",
"host": "engine-1",
"deleted_at": None,
"id": "1efd7015-5016-4caa-b5c8-12438af7b100"
})
service_2 = services.Service(services.ServiceManager(None), {
"status": "up",
"binary": "heat-engine",
"report_interval": 60,
"engine_id": "2f7b5a9b-c50b-4b01-8248-f89f5fb338d2",
"created_at": "2015-02-06T03:23:32.000000",
"hostname": "mrkanag",
"updated_at": "2015-02-20T09:49:52.000000",
"topic": "engine",
"host": "engine-2",
"deleted_at": None,
"id": "1efd7015-5016-4caa-b5c8-12438af7b100"
})
TEST.heat_services.add(service_1)
TEST.heat_services.add(service_2)
# Data return by heatclient.
TEST.api_resource_types = utils.TestDataContainer()
for i in range(10):
stack_data = {
"description": "No description",
"links": [{
"href": "http://192.168.1.70:8004/v1/"
"051c727ee67040d6a7b7812708485a97/"
"stacks/stack-1211-38/"
"05b4f39f-ea96-4d91-910c-e758c078a089",
"rel": "self"
}],
"parameters": {
'DBUsername': '******',
'InstanceType': 'm1.small',
'AWS::StackId': (
'arn:openstack:heat::2ce287:stacks/teststack/88553ec'),
'DBRootPassword': '******',
'AWS::StackName': "teststack{0}".format(i),
'DBPassword': '******',
'AWS::Region': 'ap-southeast-1',
'DBName': u'wordpress'
},
"stack_status_reason": "Stack successfully created",
"stack_name": "stack-test{0}".format(i),
"creation_time": "2013-04-22T00:11:39Z",
"updated_time": "2013-04-22T00:11:39Z",
"stack_status": "CREATE_COMPLETE",
"id": "05b4f39f-ea96-4d91-910c-e758c078a089{0}".format(i)
}
stack = stacks.Stack(stacks.StackManager(None), stack_data)
TEST.stacks.add(stack)
TEST.stack_templates.add(Template(TEMPLATE, VALIDATE))
TEST.stack_environments.add(Environment(ENVIRONMENT))
# Resource types list
r_type_1 = {
"resource_type": "AWS::CloudFormation::Stack",
"attributes": {},
"properties": {
"Parameters": {
"description":
"The set of parameters passed to this nested stack.",
"immutable": False,
"required": False,
"type": "map",
"update_allowed": True},
"TemplateURL": {
"description": "The URL of a template that specifies"
" the stack to be created as a resource.",
"immutable": False,
"required": True,
"type": "string",
"update_allowed": True},
"TimeoutInMinutes": {
"description": "The length of time, in minutes,"
" to wait for the nested stack creation.",
"immutable": False,
"required": False,
"type": "number",
"update_allowed": True}
}
}
r_type_2 = {
"resource_type": "OS::Heat::CloudConfig",
"attributes": {
"config": {
"description": "The config value of the software config."}
},
"properties": {
"cloud_config": {
"description": "Map representing the cloud-config data"
" structure which will be formatted as YAML.",
"immutable": False,
"required": False,
"type": "map",
"update_allowed": False}
}
}
r_types_list = [r_type_1, r_type_2]
for rt in r_types_list:
r_type = resource_types.ResourceType(
resource_types.ResourceTypeManager(None), rt['resource_type'])
TEST.resource_types.add(r_type)
TEST.api_resource_types.add(rt) | unknown | codeparrot/codeparrot-clean | ||
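The template above resolves its `ImageId` through a nested `Fn::FindInMap`: the inner lookup maps the instance type to an architecture via `AWSInstanceType2Arch`, and the outer lookup maps distribution plus architecture to an AMI name via `DistroArch2AMI`. A minimal sketch of that two-level resolution, using a subset of the mapping data from the template (`find_in_map` and `resolve_image_id` are illustrative helpers, not part of any heat or boto API):

```python
# Sketch of CloudFormation's nested Fn::FindInMap resolution, with mapping
# data copied from the template above. Helper names are illustrative only.
AWS_INSTANCE_TYPE_2_ARCH = {
    "m1.tiny": {"Arch": "32"},
    "m1.small": {"Arch": "64"},
}

DISTRO_ARCH_2_AMI = {
    "F17": {"32": "F17-i386-cfntools", "64": "F17-x86_64-cfntools"},
    "F18": {"32": "F18-i386-cfntools", "64": "F18-x86_64-cfntools"},
}

def find_in_map(mapping, top_key, second_key):
    """Two-level map lookup, mirroring Fn::FindInMap semantics."""
    return mapping[top_key][second_key]

def resolve_image_id(distribution, instance_type):
    # Inner FindInMap: instance type -> architecture.
    arch = find_in_map(AWS_INSTANCE_TYPE_2_ARCH, instance_type, "Arch")
    # Outer FindInMap: distribution + architecture -> AMI name.
    return find_in_map(DISTRO_ARCH_2_AMI, distribution, arch)
```

With the template defaults (`F17`, `m1.small`) this resolves to the 64-bit F17 image, matching what Heat would pick.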
import unittest as real_unittest
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
from django.db.models import get_app, get_apps
from django.test import _doctest as doctest
from django.test.utils import setup_test_environment, teardown_test_environment
from django.test.testcases import OutputChecker, DocTestRunner, TestCase
from django.utils import unittest
from django.utils.importlib import import_module
from django.utils.module_loading import module_has_submodule
__all__ = ('DjangoTestRunner', 'DjangoTestSuiteRunner')
# The module name for tests outside models.py
TEST_MODULE = 'tests'
doctestOutputChecker = OutputChecker()
class DjangoTestRunner(unittest.TextTestRunner):
def __init__(self, *args, **kwargs):
import warnings
warnings.warn(
"DjangoTestRunner is deprecated; its functionality is "
"indistinguishable from TextTestRunner",
DeprecationWarning
)
super(DjangoTestRunner, self).__init__(*args, **kwargs)
def get_tests(app_module):
parts = app_module.__name__.split('.')
prefix, last = parts[:-1], parts[-1]
try:
test_module = import_module('.'.join(prefix + [TEST_MODULE]))
except ImportError:
# Couldn't import tests.py. Was it due to a missing file, or
# due to an import error in a tests.py that actually exists?
# app_module either points to a models.py file, or models/__init__.py
# Tests are therefore either in same directory, or one level up
if last == 'models':
app_root = import_module('.'.join(prefix))
else:
app_root = app_module
if not module_has_submodule(app_root, TEST_MODULE):
test_module = None
else:
# The module exists, so there must be an import error in the test
# module itself.
raise
return test_module
def build_suite(app_module):
"""
Create a complete Django test suite for the provided application module.
"""
suite = unittest.TestSuite()
# Load unit and doctests in the models.py module. If module has
# a suite() method, use it. Otherwise build the test suite ourselves.
if hasattr(app_module, 'suite'):
suite.addTest(app_module.suite())
else:
suite.addTest(unittest.defaultTestLoader.loadTestsFromModule(
app_module))
try:
suite.addTest(doctest.DocTestSuite(app_module,
checker=doctestOutputChecker,
runner=DocTestRunner))
except ValueError:
# No doc tests in models.py
pass
# Check to see if a separate 'tests' module exists parallel to the
# models module
test_module = get_tests(app_module)
if test_module:
# Load unit and doctests in the tests.py module. If module has
# a suite() method, use it. Otherwise build the test suite ourselves.
if hasattr(test_module, 'suite'):
suite.addTest(test_module.suite())
else:
suite.addTest(unittest.defaultTestLoader.loadTestsFromModule(
test_module))
try:
suite.addTest(doctest.DocTestSuite(
test_module, checker=doctestOutputChecker,
runner=DocTestRunner))
except ValueError:
# No doc tests in tests.py
pass
return suite
def build_test(label):
"""
Construct a test case with the specified label. Label should be of the
form model.TestClass or model.TestClass.test_method. Returns an
instantiated test or test suite corresponding to the label provided.
"""
parts = label.split('.')
if len(parts) < 2 or len(parts) > 3:
raise ValueError("Test label '%s' should be of the form app.TestCase "
"or app.TestCase.test_method" % label)
#
# First, look for TestCase instances with a name that matches
#
app_module = get_app(parts[0])
test_module = get_tests(app_module)
TestClass = getattr(app_module, parts[1], None)
# Couldn't find the test class in models.py; look in tests.py
if TestClass is None:
if test_module:
TestClass = getattr(test_module, parts[1], None)
try:
if issubclass(TestClass, (unittest.TestCase, real_unittest.TestCase)):
if len(parts) == 2: # label is app.TestClass
try:
return unittest.TestLoader().loadTestsFromTestCase(
TestClass)
except TypeError:
raise ValueError(
"Test label '%s' does not refer to a test class"
% label)
else: # label is app.TestClass.test_method
return TestClass(parts[2])
except TypeError:
# TestClass isn't a TestClass - it must be a method or normal class
pass
#
# If there isn't a TestCase, look for a doctest that matches
#
tests = []
for module in app_module, test_module:
try:
doctests = doctest.DocTestSuite(module,
checker=doctestOutputChecker,
runner=DocTestRunner)
# Now iterate over the suite, looking for doctests whose name
# matches the pattern that was given
for test in doctests:
if test._dt_test.name in (
'%s.%s' % (module.__name__, '.'.join(parts[1:])),
'%s.__test__.%s' % (
module.__name__, '.'.join(parts[1:]))):
tests.append(test)
except ValueError:
# No doctests found.
pass
# If no tests were found, then we were given a bad test label.
if not tests:
raise ValueError("Test label '%s' does not refer to a test" % label)
# Construct a suite out of the tests that matched.
return unittest.TestSuite(tests)
def partition_suite(suite, classes, bins):
"""
Partitions a test suite by test type.
classes is a sequence of types
bins is a sequence of TestSuites, one more than classes
Tests of type classes[i] are added to bins[i],
tests with no match found in classes are placed in bins[-1]
"""
for test in suite:
if isinstance(test, unittest.TestSuite):
partition_suite(test, classes, bins)
else:
for i in range(len(classes)):
if isinstance(test, classes[i]):
bins[i].addTest(test)
break
else:
bins[-1].addTest(test)
def reorder_suite(suite, classes):
"""
Reorders a test suite by test type.
`classes` is a sequence of types
All tests of type classes[0] are placed first, then tests of type
classes[1], etc. Tests with no match in classes are placed last.
"""
class_count = len(classes)
bins = [unittest.TestSuite() for i in range(class_count+1)]
partition_suite(suite, classes, bins)
for i in range(class_count):
bins[0].addTests(bins[i+1])
return bins[0]
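`partition_suite` and `reorder_suite` above amount to a stable bucket sort over test types: each test goes into the bin of the first matching class, unmatched tests go last, and relative order is preserved within each bin. The same idea in a standalone sketch over plain objects (the sample item types are illustrative):

```python
# Standalone sketch of the partition/reorder idea above: bucket items by the
# first matching type in `classes`, unmatched items last, preserving the
# relative order inside each bucket.
def reorder(items, classes):
    bins = [[] for _ in range(len(classes) + 1)]
    for item in items:
        for i, cls in enumerate(classes):
            if isinstance(item, cls):
                bins[i].append(item)
                break
        else:
            # No class matched: the item goes in the final bucket.
            bins[-1].append(item)
    return [item for b in bins for item in b]
```

For example, `reorder([1, "a", True, 2.0, "b"], (bool, str))` yields the bool first, then the strings in their original order, then everything else.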
def dependency_ordered(test_databases, dependencies):
"""Reorder test_databases into an order that honors the dependencies
described in TEST_DEPENDENCIES.
"""
ordered_test_databases = []
resolved_databases = set()
while test_databases:
changed = False
deferred = []
while test_databases:
signature, (db_name, aliases) = test_databases.pop()
dependencies_satisfied = True
for alias in aliases:
if alias in dependencies:
if all(a in resolved_databases
for a in dependencies[alias]):
# all dependencies for this alias are satisfied
dependencies.pop(alias)
resolved_databases.add(alias)
else:
dependencies_satisfied = False
else:
resolved_databases.add(alias)
if dependencies_satisfied:
ordered_test_databases.append((signature, (db_name, aliases)))
changed = True
else:
deferred.append((signature, (db_name, aliases)))
if not changed:
raise ImproperlyConfigured(
"Circular dependency in TEST_DEPENDENCIES")
test_databases = deferred
return ordered_test_databases
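`dependency_ordered` above works by repeatedly deferring entries whose dependencies are not yet resolved; a full pass that makes no progress means the dependency graph contains a cycle, which is how the `ImproperlyConfigured` case is detected. That deferred-resolution loop in a self-contained sketch (names and the `ValueError` are illustrative stand-ins):

```python
# Sketch of the deferred-resolution loop used by dependency_ordered above:
# keep deferring names until their dependencies are resolved; a pass with no
# progress means the dependency graph has a cycle.
def order_by_dependencies(names, deps):
    ordered, resolved = [], set()
    pending = list(names)
    while pending:
        deferred = []
        progressed = False
        for name in pending:
            if all(d in resolved for d in deps.get(name, [])):
                ordered.append(name)
                resolved.add(name)
                progressed = True
            else:
                deferred.append(name)
        if not progressed:
            raise ValueError("Circular dependency among: %s" % deferred)
        pending = deferred
    return ordered
```

A chain `a -> b -> c` comes out as `["c", "b", "a"]`; a two-node cycle raises after the first fruitless pass.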
class DjangoTestSuiteRunner(object):
def __init__(self, verbosity=1, interactive=True, failfast=True, **kwargs):
self.verbosity = verbosity
self.interactive = interactive
self.failfast = failfast
def setup_test_environment(self, **kwargs):
setup_test_environment()
settings.DEBUG = False
unittest.installHandler()
def build_suite(self, test_labels, extra_tests=None, **kwargs):
suite = unittest.TestSuite()
if test_labels:
for label in test_labels:
if '.' in label:
suite.addTest(build_test(label))
else:
app = get_app(label)
suite.addTest(build_suite(app))
else:
for app in get_apps():
suite.addTest(build_suite(app))
if extra_tests:
for test in extra_tests:
suite.addTest(test)
return reorder_suite(suite, (TestCase,))
def setup_databases(self, **kwargs):
from django.db import connections, DEFAULT_DB_ALIAS
# First pass -- work out which databases actually need to be created,
# and which ones are test mirrors or duplicate entries in DATABASES
mirrored_aliases = {}
test_databases = {}
dependencies = {}
for alias in connections:
connection = connections[alias]
if connection.settings_dict['TEST_MIRROR']:
# If the database is marked as a test mirror, save
# the alias.
mirrored_aliases[alias] = (
connection.settings_dict['TEST_MIRROR'])
else:
# Store a tuple with DB parameters that uniquely identify it.
# If we have two aliases with the same values for that tuple,
# we only need to create the test database once.
item = test_databases.setdefault(
connection.creation.test_db_signature(),
(connection.settings_dict['NAME'], [])
)
item[1].append(alias)
if 'TEST_DEPENDENCIES' in connection.settings_dict:
dependencies[alias] = (
connection.settings_dict['TEST_DEPENDENCIES'])
else:
if alias != DEFAULT_DB_ALIAS:
dependencies[alias] = connection.settings_dict.get(
'TEST_DEPENDENCIES', [DEFAULT_DB_ALIAS])
# Second pass -- actually create the databases.
old_names = []
mirrors = []
for signature, (db_name, aliases) in dependency_ordered(
test_databases.items(), dependencies):
# Actually create the database for the first connection
connection = connections[aliases[0]]
old_names.append((connection, db_name, True))
test_db_name = connection.creation.create_test_db(
self.verbosity, autoclobber=not self.interactive)
for alias in aliases[1:]:
connection = connections[alias]
if db_name:
old_names.append((connection, db_name, False))
connection.settings_dict['NAME'] = test_db_name
else:
# If settings_dict['NAME'] isn't defined, we have a backend
# where the name isn't important -- e.g., SQLite, which
# uses :memory:. Force create the database instead of
# assuming it's a duplicate.
old_names.append((connection, db_name, True))
connection.creation.create_test_db(
self.verbosity, autoclobber=not self.interactive)
for alias, mirror_alias in mirrored_aliases.items():
mirrors.append((alias, connections[alias].settings_dict['NAME']))
connections[alias].settings_dict['NAME'] = (
connections[mirror_alias].settings_dict['NAME'])
connections[alias].features = connections[mirror_alias].features
return old_names, mirrors
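The first pass of `setup_databases` above groups connection aliases by a signature so that each distinct test database is created only once, using `dict.setdefault` to get-or-create the shared `(name, aliases)` entry. A minimal sketch of that grouping pattern (the signature tuples are illustrative, not real `test_db_signature()` output):

```python
# Minimal sketch of the setdefault grouping pattern in setup_databases above:
# aliases sharing a signature collect into one (name, aliases) entry, so the
# underlying test database is created only once. Signatures are illustrative.
def group_by_signature(connections):
    grouped = {}
    for alias, (signature, name) in connections.items():
        # Get-or-create the entry for this signature, then record the alias.
        item = grouped.setdefault(signature, (name, []))
        item[1].append(alias)
    return grouped
```

Two aliases pointing at the same database (for instance a primary and a read replica configured identically) end up as one entry with both aliases listed.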
def run_suite(self, suite, **kwargs):
return unittest.TextTestRunner(
verbosity=self.verbosity, failfast=self.failfast).run(suite)
def teardown_databases(self, old_config, **kwargs):
"""
Destroys all the non-mirror databases.
"""
old_names, mirrors = old_config
for connection, old_name, destroy in old_names:
if destroy:
connection.creation.destroy_test_db(old_name, self.verbosity)
def teardown_test_environment(self, **kwargs):
unittest.removeHandler()
teardown_test_environment()
def suite_result(self, suite, result, **kwargs):
return len(result.failures) + len(result.errors)
def run_tests(self, test_labels, extra_tests=None, **kwargs):
"""
Run the unit tests for all the test labels in the provided list.
Labels must be of the form:
- app.TestClass.test_method
Run a single specific test method
- app.TestClass
Run all the test methods in a given class
- app
Search for doctests and unittests in the named application.
When looking for tests, the test runner will look in the models and
tests modules for the application.
A list of 'extra' tests may also be provided; these tests
will be added to the test suite.
Returns the number of tests that failed.
"""
self.setup_test_environment()
suite = self.build_suite(test_labels, extra_tests)
old_config = self.setup_databases()
result = self.run_suite(suite)
self.teardown_databases(old_config)
self.teardown_test_environment()
return self.suite_result(suite, result) | unknown | codeparrot/codeparrot-clean | ||
from ctypes import c_void_p
from django.contrib.gis.geos.error import GEOSException
# Trying to import GDAL libraries, if available. Have to place in
# try/except since this package may be used outside GeoDjango.
try:
from django.contrib.gis import gdal
except ImportError:
# A 'dummy' gdal module.
class GDALInfo(object):
HAS_GDAL = False
gdal = GDALInfo()
# NumPy supported?
try:
import numpy
except ImportError:
numpy = False
class GEOSBase(object):
"""
Base object for GEOS objects that has a pointer access property
that controls access to the underlying C pointer.
"""
# Initially the pointer is NULL.
_ptr = None
# Default allowed pointer type.
ptr_type = c_void_p
# Pointer access property.
def _get_ptr(self):
# Raise an exception if the pointer isn't valid; we don't
# want to be passing NULL pointers to routines --
# that's very bad.
if self._ptr: return self._ptr
else: raise GEOSException('NULL GEOS %s pointer encountered.' % self.__class__.__name__)
def _set_ptr(self, ptr):
# Only allow the pointer to be set with pointers of the
# compatible type or None (NULL).
if ptr is None or isinstance(ptr, self.ptr_type):
self._ptr = ptr
else:
raise TypeError('Incompatible pointer type')
# Property for controlling access to the GEOS object pointers. Using
# this raises an exception when the pointer is NULL, thus preventing
# the C library from attempting to access an invalid memory location.
ptr = property(_get_ptr, _set_ptr) | unknown | codeparrot/codeparrot-clean | ||
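The `ptr` property above funnels every pointer access through a validity check, so a NULL pointer raises a Python exception before it can reach the C library. The same guarded-pointer pattern in a self-contained sketch, with a plain `ValueError` standing in for `GEOSException` (the class name is illustrative):

```python
from ctypes import c_void_p

# Self-contained sketch of the guarded-pointer property above. ValueError
# stands in for GEOSException; PtrHolder is an illustrative name.
class PtrHolder(object):
    _ptr = None          # NULL until explicitly set
    ptr_type = c_void_p  # only this pointer type may be assigned

    def _get_ptr(self):
        # A NULL c_void_p is falsy, so this catches unset pointers.
        if self._ptr:
            return self._ptr
        raise ValueError("NULL pointer encountered.")

    def _set_ptr(self, ptr):
        # Only None (NULL) or the compatible pointer type may be assigned.
        if ptr is None or isinstance(ptr, self.ptr_type):
            self._ptr = ptr
        else:
            raise TypeError("Incompatible pointer type")

    ptr = property(_get_ptr, _set_ptr)
```

Reading `.ptr` before assignment raises immediately, and assigning anything but a `c_void_p` (or `None`) is rejected, which is exactly the protection the GEOS wrapper relies on.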
"""
This module defines the SArray class which provides the
ability to create, access and manipulate a remote scalable array object.
SArray acts similarly to pandas.Series but without indexing.
The data is immutable, homogeneous, and is stored on the GraphLab Server side.
"""
'''
Copyright (C) 2015 Dato, Inc.
All rights reserved.
This software may be modified and distributed under the terms
of the BSD license. See the DATO-PYTHON-LICENSE file for details.
'''
import graphlab.connect as _mt
import graphlab.connect.main as glconnect
from graphlab.cython.cy_type_utils import pytype_from_dtype, infer_type_of_list, is_numeric_type
from graphlab.cython.cy_sarray import UnitySArrayProxy
from graphlab.cython.context import debug_trace as cython_context
from graphlab.util import _make_internal_url, _is_callable
import graphlab as gl
import inspect
import math
from graphlab.deps import numpy, HAS_NUMPY
from graphlab.deps import pandas, HAS_PANDAS
import time
import array
import datetime
import graphlab.meta as meta
import itertools
import warnings
__all__ = ['SArray']
def _create_sequential_sarray(size, start=0, reverse=False):
if type(size) is not int:
raise TypeError("size must be int")
if type(start) is not int:
raise TypeError("start must be int")
if type(reverse) is not bool:
raise TypeError("reverse must be bool")
with cython_context():
return SArray(_proxy=glconnect.get_unity().create_sequential_sarray(size, start, reverse))
class SArray(object):
"""
An immutable, homogeneously typed array object backed by persistent storage.
SArray is scaled to hold data that are much larger than the machine's main
memory. It fully supports missing values and random access. The
data backing an SArray is located on the same machine as the GraphLab
Server process. Each column in an :py:class:`~graphlab.SFrame` is an
SArray.
Parameters
----------
data : list | numpy.ndarray | pandas.Series | string
The input data. If this is a list, numpy.ndarray, or pandas.Series,
the data in the list is converted and stored in an SArray.
Alternatively if this is a string, it is interpreted as a path (or
url) to a text file. Each line of the text file is loaded as a
separate row. If ``data`` is a directory where an SArray was previously
saved, this is loaded as an SArray read directly out of that
directory.
dtype : {None, int, float, str, list, array.array, dict, datetime.datetime, graphlab.Image}, optional
The data type of the SArray. If not specified (None), we attempt to
infer it from the input. If it is a numpy array or a Pandas series, the
dtype of the array/series is used. If it is a list, the dtype is
inferred from the inner list. If it is a URL or path to a text file, we
default the dtype to str.
ignore_cast_failure : bool, optional
If True, ignores casting failures but warns when elements cannot be
casted into the specified dtype.
Notes
-----
- If ``data`` is pandas.Series, the index will be ignored.
- The datetime is based on the Boost datetime format (see http://www.boost.org/doc/libs/1_48_0/doc/html/date_time/date_time_io.html
for details)
- When working with the GraphLab EC2 instance (see
:py:func:`graphlab.aws.launch_EC2()`), an SArray cannot be constructed
using local file path, because it involves a potentially large amount of
data transfer from client to server. However, it is still okay to use a
remote file path. See the examples below. The same restriction applies to
:py:class:`~graphlab.SGraph` and :py:class:`~graphlab.SFrame`.
Examples
--------
SArray can be constructed in various ways:
Construct an SArray from list.
>>> from graphlab import SArray
>>> sa = SArray(data=[1,2,3,4,5], dtype=int)
Construct an SArray from numpy.ndarray.
>>> sa = SArray(data=numpy.asarray([1,2,3,4,5]), dtype=int)
or:
>>> sa = SArray(numpy.asarray([1,2,3,4,5]), int)
Construct an SArray from pandas.Series.
>>> sa = SArray(data=pd.Series([1,2,3,4,5]), dtype=int)
or:
>>> sa = SArray(pd.Series([1,2,3,4,5]), int)
If the type is not specified, automatic inference is attempted:
>>> SArray(data=[1,2,3,4,5]).dtype()
int
>>> SArray(data=[1,2,3,4,5.0]).dtype()
float
The SArray supports standard datatypes such as: integer, float and string.
It also supports three higher level datatypes: float arrays, dict
and list (array of arbitrary types).
Create an SArray from a list of strings:
>>> sa = SArray(data=['a','b'])
Create an SArray from a list of float arrays;
>>> sa = SArray([[1,2,3], [3,4,5]])
Create an SArray from a list of lists:
>>> sa = SArray(data=[['a', 1, {'work': 3}], [2, 2.0]])
Create an SArray from a list of dictionaries:
>>> sa = SArray(data=[{'a':1, 'b': 2}, {'b':2, 'c': 1}])
Create an SArray from a list of datetime objects:
>>> sa = SArray(data=[datetime.datetime(2011, 10, 20, 9, 30, 10)])
Construct an SArray from local text file. (Only works for local server).
>>> sa = SArray('/tmp/a_to_z.txt.gz')
Construct an SArray from a text file downloaded from a URL.
>>> sa = SArray('http://s3-us-west-2.amazonaws.com/testdatasets/a_to_z.txt.gz')
**Numeric Operators**
SArrays support a large number of vectorized operations on numeric types.
For instance:
>>> sa = SArray([1,1,1,1,1])
>>> sb = SArray([2,2,2,2,2])
>>> sc = sa + sb
>>> sc
dtype: int
Rows: 5
[3, 3, 3, 3, 3]
>>> sc + 2
dtype: int
Rows: 5
[5, 5, 5, 5, 5]
Operators which are supported include all numeric operators (+,-,*,/), as
well as comparison operators (>, >=, <, <=), and logical operators (&, |).
For instance:
>>> sa = SArray([1,2,3,4,5])
>>> (sa >= 2) & (sa <= 4)
dtype: int
Rows: 5
[0, 1, 1, 1, 0]
The numeric operators (+,-,*,/) also work on array types:
>>> sa = SArray(data=[[1.0,1.0], [2.0,2.0]])
>>> sa + 1
dtype: list
Rows: 2
[array('f', [2.0, 2.0]), array('f', [3.0, 3.0])]
>>> sa + sa
dtype: list
Rows: 2
[array('f', [2.0, 2.0]), array('f', [4.0, 4.0])]
The addition operator (+) can also be used for string concatenation:
>>> sa = SArray(data=['a','b'])
>>> sa + "x"
dtype: str
Rows: 2
['ax', 'bx']
This can be useful for performing type interpretation of lists or
dictionaries stored as strings:
>>> sa = SArray(data=['a,b','c,d'])
>>> ("[" + sa + "]").astype(list) # adding brackets make it look like a list
dtype: list
Rows: 2
[['a', 'b'], ['c', 'd']]
All comparison operations and boolean operators are supported and emit
binary SArrays.
>>> sa = SArray([1,2,3,4,5])
>>> sa >= 2
dtype: int
Rows: 5
[0, 1, 1, 1, 1]
>>> (sa >= 2) & (sa <= 4)
dtype: int
Rows: 5
[0, 1, 1, 1, 0]
**Element Access and Slicing**
SArrays can be accessed by integer keys just like a regular python list.
Such operations may not be fast on large datasets so looping over an SArray
should be avoided.
>>> sa = SArray([1,2,3,4,5])
>>> sa[0]
1
>>> sa[2]
3
>>> sa[5]
IndexError: SFrame index out of range
Negative indices can be used to access elements from the tail of the array
>>> sa[-1] # returns the last element
5
>>> sa[-2] # returns the second to last element
4
The SArray also supports the full range of python slicing operators:
>>> sa[1000:] # Returns an SArray containing rows 1000 to the end
>>> sa[:1000] # Returns an SArray containing rows 0 to row 999 inclusive
>>> sa[0:1000:2] # Returns an SArray containing rows 0 to row 1000 in steps of 2
>>> sa[-100:] # Returns an SArray containing last 100 rows
>>> sa[-100:len(sa):2] # Returns an SArray containing last 100 rows in steps of 2
**Logical Filter**
An SArray can be filtered using
>>> array[binary_filter]
where array and binary_filter are SArrays of the same length. The result is
a new SArray which contains only elements of 'array' where its matching row
in the binary_filter is non zero.
This permits the use of boolean operators that can be used to perform
logical filtering operations. For instance:
>>> sa = SArray([1,2,3,4,5])
>>> sa[(sa >= 2) & (sa <= 4)]
dtype: int
Rows: 3
[2, 3, 4]
This can also be used more generally to provide filtering capability which
is otherwise not expressible with simple boolean functions. For instance:
>>> sa = SArray([1,2,3,4,5])
>>> sa[sa.apply(lambda x: math.log(x) <= 1)]
dtype: int
Rows: 2
[1, 2]
This is equivalent to
>>> sa.filter(lambda x: math.log(x) <= 1)
dtype: int
Rows: 2
[1, 2]
**Iteration**
The SArray is also iterable, but not efficiently since this involves a
streaming transmission of data from the server to the client. This should
not be used for large data.
>>> sa = SArray([1,2,3,4,5])
>>> [i + 1 for i in sa]
[2, 3, 4, 5, 6]
This can be used to convert an SArray to a list:
>>> sa = SArray([1,2,3,4,5])
>>> l = list(sa)
>>> l
[1, 2, 3, 4, 5]
"""
def __init__(self, data=[], dtype=None, ignore_cast_failure=False, _proxy=None):
"""
__init__(data=list(), dtype=None, ignore_cast_failure=False)
Construct a new SArray. The source of data includes: list,
numpy.ndarray, pandas.Series, and urls.
"""
_mt._get_metric_tracker().track('sarray.init')
if dtype is not None and type(dtype) != type:
raise TypeError('dtype must be a type, e.g. use int rather than \'int\'')
if (_proxy):
self.__proxy__ = _proxy
elif type(data) == SArray:
self.__proxy__ = data.__proxy__
else:
self.__proxy__ = UnitySArrayProxy(glconnect.get_client())
# we need to perform type inference
if dtype is None:
if (isinstance(data, list)):
# if it is a list, Get the first type and make sure
# the remaining items are all of the same type
dtype = infer_type_of_list(data)
elif isinstance(data, array.array):
dtype = infer_type_of_list(data)
elif HAS_PANDAS and isinstance(data, pandas.Series):
# if it is a pandas series get the dtype of the series
dtype = pytype_from_dtype(data.dtype)
if dtype == object:
# we need to get a bit more fine grained than that
dtype = infer_type_of_list(data)
elif HAS_NUMPY and isinstance(data, numpy.ndarray):
# if it is a numpy array, get the dtype of the array
dtype = pytype_from_dtype(data.dtype)
if dtype == object:
# we need to get a bit more fine grained than that
dtype = infer_type_of_list(data)
if len(data.shape) == 2:
# we need to make it an array or a list
if dtype == float or dtype == int:
dtype = array.array
else:
dtype = list
elif len(data.shape) > 2:
raise TypeError("Cannot convert Numpy arrays of greater than 2 dimensions")
elif (isinstance(data, str) or isinstance(data, unicode)):
# if it is a file, we default to string
dtype = str
if HAS_PANDAS and isinstance(data, pandas.Series):
with cython_context():
self.__proxy__.load_from_iterable(data.values, dtype, ignore_cast_failure)
elif (HAS_NUMPY and isinstance(data, numpy.ndarray)) or isinstance(data, list) or isinstance(data, array.array):
with cython_context():
self.__proxy__.load_from_iterable(data, dtype, ignore_cast_failure)
elif (isinstance(data, str) or isinstance(data, unicode)):
internal_url = _make_internal_url(data)
with cython_context():
self.__proxy__.load_autodetect(internal_url, dtype)
else:
raise TypeError("Unexpected data source. " \
"Possible data source types are: list, " \
"numpy.ndarray, pandas.Series, and string(url)")
@classmethod
def from_const(cls, value, size):
"""
Constructs an SArray of size with a const value.
Parameters
----------
value : [int | float | str | array.array | list | dict | datetime]
The value to fill the SArray
size : int
The size of the SArray
Examples
--------
Construct an SArray consisting of 10 zeroes:
>>> graphlab.SArray.from_const(0, 10)
"""
assert type(size) is int and size >= 0, "size must be a positive int"
if (type(value) not in [type(None), int, float, str, array.array, list, dict, datetime.datetime]):
raise TypeError('Cannot create sarray of value type %s' % str(type(value)))
proxy = UnitySArrayProxy(glconnect.get_client())
proxy.load_from_const(value, size)
return cls(_proxy=proxy)
@classmethod
def from_sequence(cls, *args):
"""
from_sequence(start=0, stop)
Create an SArray from sequence
.. sourcecode:: python
Construct an SArray of integer values from 0 to 999
>>> gl.SArray.from_sequence(1000)
This is equivalent, but more efficient than:
>>> gl.SArray(range(1000))
Construct an SArray of integer values from 10 to 999
>>> gl.SArray.from_sequence(10, 1000)
This is equivalent, but more efficient than:
>>> gl.SArray(range(10, 1000))
Parameters
----------
start : int, optional
The start of the sequence. The sequence will contain this value.
stop : int
The end of the sequence. The sequence will not contain this value.
"""
start = None
stop = None
# fill with args. This checks for from_sequence(100), from_sequence(10,100)
if len(args) == 1:
stop = args[0]
elif len(args) == 2:
start = args[0]
stop = args[1]
if stop is None and start is None:
raise TypeError("from_sequence expects at least 1 argument. got 0")
elif start is None:
return _create_sequential_sarray(stop)
else:
size = stop - start
# this matches the behavior of range
# i.e. range(100,10) just returns an empty array
if (size < 0):
size = 0
return _create_sequential_sarray(size, start)
@classmethod
def from_avro(cls, filename):
"""
Construct an SArray from an Avro file. The SArray type is determined by
the schema of the Avro file.
Parameters
----------
filename : str
The Avro file to load into an SArray.
Examples
--------
Construct an SArray from a local Avro file named 'data.avro':
>>> graphlab.SArray.from_avro('/data/data.avro')
Notes
-----
Currently only supports direct loading of files on the local filesystem.
References
----------
- `Avro Specification <http://avro.apache.org/docs/1.7.7/spec.html>`_
"""
_mt._get_metric_tracker().track('sarray.from_avro')
proxy = UnitySArrayProxy(glconnect.get_client())
proxy.load_from_avro(filename)
return cls(_proxy = proxy)
def __get_content_identifier__(self):
"""
Returns the unique identifier of the content that backs the SArray
Notes
-----
Meant for internal use only.
"""
with cython_context():
return self.__proxy__.get_content_identifier()
def save(self, filename, format=None):
"""
Saves the SArray to file.
The saved SArray will be in a directory named with the `filename`
parameter.
Parameters
----------
filename : string
A local path or a remote URL. If format is 'text', it will be
saved as a text file. If format is 'binary', a directory will be
created at the location which will contain the SArray.
format : {'binary', 'text', 'csv'}, optional
Format in which to save the SArray. Binary saved SArrays can be
loaded much faster and without any format conversion losses.
'text' and 'csv' are synonymous: each SArray row will be written
as a single line in an output text file. If not given, the format
is inferred from the filename: if the name ends with '.csv', '.txt'
or '.csv.gz', it is saved in 'csv' format, otherwise in 'binary'
format.
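Examples
--------
>>> sa = graphlab.SArray([1, 2, 3])
>>> sa.save('my_array')       # 'my_array' is a hypothetical path; binary format
>>> sa.save('my_array.csv')   # format inferred from the extension; csv format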
"""
if format is None:
if filename.endswith(('.csv', '.csv.gz', '.txt')):
format = 'text'
else:
format = 'binary'
if format == 'binary':
with cython_context():
self.__proxy__.save(_make_internal_url(filename))
elif format == 'text':
sf = gl.SFrame({'X1':self})
with cython_context():
sf.__proxy__.save_as_csv(_make_internal_url(filename), {'header':False})
def _escape_space(self,s):
return "".join([ch.encode('string_escape') if ch.isspace() else ch for ch in s])
def __repr__(self):
"""
Returns a string description of the SArray.
"""
ret = "dtype: " + str(self.dtype().__name__) + "\n"
ret = ret + "Rows: " + str(self.size()) + "\n"
ret = ret + self.__str__()
return ret
def __str__(self):
"""
Returns a string containing the first 100 elements of the array.
"""
# If sarray is image, take head of elements casted to string.
if self.dtype() == gl.data_structures.image.Image:
headln = str(list(self._head_str(100)))
else:
headln = self._escape_space(str(list(self.head(100))))
headln = unicode(headln.decode('string_escape'),'utf-8',errors='replace').encode('utf-8')
if (self.size() > 100):
# cut the last close bracket
# and replace it with ...
headln = headln[0:-1] + ", ... ]"
return headln
def __nonzero__(self):
"""
Returns true if the array is not empty.
"""
return self.size() != 0
def __len__(self):
"""
Returns the length of the array
"""
return self.size()
def __iter__(self):
"""
Provides an iterator to the contents of the array.
"""
def generator():
elems_at_a_time = 262144
self.__proxy__.begin_iterator()
ret = self.__proxy__.iterator_get_next(elems_at_a_time)
while(True):
for j in ret:
yield j
if len(ret) == elems_at_a_time:
ret = self.__proxy__.iterator_get_next(elems_at_a_time)
else:
break
return generator()
def __add__(self, other):
"""
If other is a scalar value, adds it to the current array, returning
the new result. If other is an SArray, performs an element-wise
addition of the two arrays.
"""
with cython_context():
if type(other) is SArray:
return SArray(_proxy = self.__proxy__.vector_operator(other.__proxy__, '+'))
else:
return SArray(_proxy = self.__proxy__.left_scalar_operator(other, '+'))
def __sub__(self, other):
"""
If other is a scalar value, subtracts it from the current array, returning
the new result. If other is an SArray, performs an element-wise
subtraction of the two arrays.
"""
with cython_context():
if type(other) is SArray:
return SArray(_proxy = self.__proxy__.vector_operator(other.__proxy__, '-'))
else:
return SArray(_proxy = self.__proxy__.left_scalar_operator(other, '-'))
def __mul__(self, other):
"""
If other is a scalar value, multiplies it to the current array, returning
the new result. If other is an SArray, performs an element-wise
multiplication of the two arrays.
"""
with cython_context():
if type(other) is SArray:
return SArray(_proxy = self.__proxy__.vector_operator(other.__proxy__, '*'))
else:
return SArray(_proxy = self.__proxy__.left_scalar_operator(other, '*'))
def __div__(self, other):
"""
If other is a scalar value, divides each element of the current array
by the value, returning the result. If other is an SArray, performs
an element-wise division of the two arrays.
"""
with cython_context():
if type(other) is SArray:
return SArray(_proxy = self.__proxy__.vector_operator(other.__proxy__, '/'))
else:
return SArray(_proxy = self.__proxy__.left_scalar_operator(other, '/'))
def __lt__(self, other):
"""
If other is a scalar value, compares each element of the current array
by the value, returning the result. If other is an SArray, performs
an element-wise comparison of the two arrays.
"""
with cython_context():
if type(other) is SArray:
return SArray(_proxy = self.__proxy__.vector_operator(other.__proxy__, '<'))
else:
return SArray(_proxy = self.__proxy__.left_scalar_operator(other, '<'))
def __gt__(self, other):
"""
If other is a scalar value, compares each element of the current array
by the value, returning the result. If other is an SArray, performs
an element-wise comparison of the two arrays.
"""
with cython_context():
if type(other) is SArray:
return SArray(_proxy = self.__proxy__.vector_operator(other.__proxy__, '>'))
else:
return SArray(_proxy = self.__proxy__.left_scalar_operator(other, '>'))
def __le__(self, other):
"""
If other is a scalar value, compares each element of the current array
by the value, returning the result. If other is an SArray, performs
an element-wise comparison of the two arrays.
"""
with cython_context():
if type(other) is SArray:
return SArray(_proxy = self.__proxy__.vector_operator(other.__proxy__, '<='))
else:
return SArray(_proxy = self.__proxy__.left_scalar_operator(other, '<='))
def __ge__(self, other):
"""
If other is a scalar value, compares each element of the current array
by the value, returning the result. If other is an SArray, performs
an element-wise comparison of the two arrays.
"""
with cython_context():
if type(other) is SArray:
return SArray(_proxy = self.__proxy__.vector_operator(other.__proxy__, '>='))
else:
return SArray(_proxy = self.__proxy__.left_scalar_operator(other, '>='))
def __radd__(self, other):
"""
Adds a scalar value to the current array.
Returned array has the same type as the array on the right hand side
"""
with cython_context():
return SArray(_proxy = self.__proxy__.right_scalar_operator(other, '+'))
def __rsub__(self, other):
"""
Subtracts a scalar value from the current array.
Returned array has the same type as the array on the right hand side
"""
with cython_context():
return SArray(_proxy = self.__proxy__.right_scalar_operator(other, '-'))
def __rmul__(self, other):
"""
Multiplies a scalar value to the current array.
Returned array has the same type as the array on the right hand side
"""
with cython_context():
return SArray(_proxy = self.__proxy__.right_scalar_operator(other, '*'))
def __rdiv__(self, other):
"""
Divides a scalar value by each element in the array
Returned array has the same type as the array on the right hand side
"""
with cython_context():
return SArray(_proxy = self.__proxy__.right_scalar_operator(other, '/'))
def __eq__(self, other):
"""
If other is a scalar value, compares each element of the current array
by the value, returning the new result. If other is an SArray, performs
an element-wise comparison of the two arrays.
"""
with cython_context():
if type(other) is SArray:
return SArray(_proxy = self.__proxy__.vector_operator(other.__proxy__, '=='))
else:
return SArray(_proxy = self.__proxy__.left_scalar_operator(other, '=='))
def __ne__(self, other):
"""
If other is a scalar value, compares each element of the current array
by the value, returning the new result. If other is an SArray, performs
an element-wise comparison of the two arrays.
"""
with cython_context():
if type(other) is SArray:
return SArray(_proxy = self.__proxy__.vector_operator(other.__proxy__, '!='))
else:
return SArray(_proxy = self.__proxy__.left_scalar_operator(other, '!='))
def __and__(self, other):
"""
Perform a logical element-wise 'and' against another SArray.
"""
if type(other) is SArray:
with cython_context():
return SArray(_proxy = self.__proxy__.vector_operator(other.__proxy__, '&'))
else:
raise TypeError("SArray can only perform logical and against another SArray")
def __or__(self, other):
"""
Perform a logical element-wise 'or' against another SArray.
"""
if type(other) is SArray:
with cython_context():
return SArray(_proxy = self.__proxy__.vector_operator(other.__proxy__, '|'))
else:
raise TypeError("SArray can only perform logical or against another SArray")
def __getitem__(self, other):
"""
If the key is an SArray of identical length, this function performs a
logical filter: i.e. it subselects all the elements in this array
where the corresponding value in the other array evaluates to true.
If the key is an integer this returns a single row of
the SArray. If the key is a slice, this returns an SArray with the
sliced rows. See the GraphLab Create User Guide for usage examples.
"""
sa_len = len(self)
if type(other) is int:
if other < 0:
other += sa_len
if other >= sa_len:
raise IndexError("SArray index out of range")
try:
lb, ub, value_list = self._getitem_cache
if lb <= other < ub:
return value_list[other - lb]
except AttributeError:
pass
# Not in cache, need to grab it
block_size = 1024 * (32 if self.dtype() in [int, long, float] else 4)
block_num = int(other // block_size)
lb = block_num * block_size
ub = min(sa_len, lb + block_size)
val_list = list(SArray(_proxy = self.__proxy__.copy_range(lb, 1, ub)))
self._getitem_cache = (lb, ub, val_list)
return val_list[other - lb]
elif type(other) is SArray:
if len(other) != sa_len:
raise IndexError("Cannot perform logical indexing on arrays of different length.")
with cython_context():
return SArray(_proxy = self.__proxy__.logical_filter(other.__proxy__))
elif type(other) is slice:
start = other.start
stop = other.stop
step = other.step
if start is None:
start = 0
if stop is None:
stop = sa_len
if step is None:
step = 1
# handle negative indices
if start < 0:
start = sa_len + start
if stop < 0:
stop = sa_len + stop
return SArray(_proxy = self.__proxy__.copy_range(start, step, stop))
else:
raise IndexError("Invalid type to use for indexing")
def __materialize__(self):
"""
For a SArray that is lazily evaluated, force persist this sarray
to disk, committing all lazy evaluated operations.
"""
with cython_context():
self.__proxy__.materialize()
def __is_materialized__(self):
"""
Returns whether or not the sarray has been materialized.
"""
return self.__proxy__.is_materialized()
def size(self):
"""
The size of the SArray.
"""
return self.__proxy__.size()
def dtype(self):
"""
The data type of the SArray.
Returns
-------
out : type
The type of the SArray.
Examples
--------
>>> sa = gl.SArray(["The quick brown fox jumps over the lazy dog."])
>>> sa.dtype()
str
>>> sa = gl.SArray(range(10))
>>> sa.dtype()
int
"""
return self.__proxy__.dtype()
def head(self, n=10):
"""
Returns an SArray which contains the first n rows of this SArray.
Parameters
----------
n : int
The number of rows to fetch.
Returns
-------
out : SArray
A new SArray which contains the first n rows of the current SArray.
Examples
--------
>>> gl.SArray(range(10)).head(5)
dtype: int
Rows: 5
[0, 1, 2, 3, 4]
"""
return SArray(_proxy=self.__proxy__.head(n))
def vector_slice(self, start, end=None):
"""
If this SArray contains vectors or recursive types, this returns a new SArray
containing each individual vector sliced between start (inclusive) and end (exclusive).
Parameters
----------
start : int
The start position of the slice.
end : int, optional.
The end position of the slice. Note that the end position
is NOT included in the slice. Thus a g.vector_slice(1,3) will extract
entries in position 1 and 2.
Returns
-------
out : SArray
Each individual vector sliced according to the arguments.
Examples
--------
If g is a vector of floats:
>>> g = SArray([[1,2,3],[2,3,4]])
>>> g
dtype: array
Rows: 2
[array('d', [1.0, 2.0, 3.0]), array('d', [2.0, 3.0, 4.0])]
>>> g.vector_slice(0) # extracts the first element of each vector
dtype: float
Rows: 2
[1.0, 2.0]
>>> g.vector_slice(0, 2) # extracts the first two elements of each vector
dtype: array.array
Rows: 2
[array('d', [1.0, 2.0]), array('d', [2.0, 3.0])]
If a vector cannot be sliced, the result will be None:
>>> g = SArray([[1],[1,2],[1,2,3]])
>>> g
dtype: array.array
Rows: 3
[array('d', [1.0]), array('d', [1.0, 2.0]), array('d', [1.0, 2.0, 3.0])]
>>> g.vector_slice(2)
dtype: float
Rows: 3
[None, None, 3.0]
>>> g.vector_slice(0,2)
dtype: list
Rows: 3
[None, array('d', [1.0, 2.0]), array('d', [1.0, 2.0])]
If g is a vector of mixed types (float, int, str, array, list, etc.):
>>> g = SArray([['a',1,1.0],['b',2,2.0]])
>>> g
dtype: list
Rows: 2
[['a', 1, 1.0], ['b', 2, 2.0]]
>>> g.vector_slice(0) # extracts the first element of each vector
dtype: list
Rows: 2
[['a'], ['b']]
"""
if (self.dtype() != array.array) and (self.dtype() != list):
raise RuntimeError("Only Vector type can be sliced")
if end is None:
end = start + 1
with cython_context():
return SArray(_proxy=self.__proxy__.vector_slice(start, end))
def _count_words(self, to_lower=True):
"""
For documentation, see graphlab.text_analytics.count_ngrams().
"""
if (self.dtype() != str):
raise TypeError("Only SArray of string type is supported for counting bag of words")
_mt._get_metric_tracker().track('sarray.count_words')
# construct options, will extend over time
options = dict()
options["to_lower"] = to_lower == True
with cython_context():
return SArray(_proxy=self.__proxy__.count_bag_of_words(options))
def _count_ngrams(self, n=2, method="word", to_lower=True, ignore_space=True):
"""
For documentation, see graphlab.text_analytics.count_ngrams().
"""
if (self.dtype() != str):
raise TypeError("Only SArray of string type is supported for counting n-grams")
if (type(n) != int):
raise TypeError("Input 'n' must be of type int")
if (n < 1):
raise ValueError("Input 'n' must be greater than 0")
if (n > 5):
warnings.warn("It is unusual for n-grams to be of size larger than 5.")
_mt._get_metric_tracker().track('sarray.count_ngrams', properties={'n':n, 'method':method})
# construct options, will extend over time
options = dict()
options["to_lower"] = to_lower == True
options["ignore_space"] = ignore_space == True
if method == "word":
with cython_context():
return SArray(_proxy=self.__proxy__.count_ngrams(n, options ))
elif method == "character" :
with cython_context():
return SArray(_proxy=self.__proxy__.count_character_ngrams(n, options ))
else:
raise ValueError("Invalid 'method' input value. Please input either 'word' or 'character'.")
def dict_trim_by_keys(self, keys, exclude=True):
"""
Filter an SArray of dictionary type by the given keys. By default, all
keys that are in the provided list in ``keys`` are *excluded* from the
returned SArray.
Parameters
----------
keys : list
A collection of keys to trim down the elements in the SArray.
exclude : bool, optional
If True, all keys that are in the input key list are removed. If
False, only keys that are in the input key list are retained.
Returns
-------
out : SArray
A SArray of dictionary type, with each dictionary element trimmed
according to the input criteria.
See Also
--------
dict_trim_by_values
Examples
--------
>>> sa = graphlab.SArray([{"this":1, "is":1, "dog":2},
{"this": 2, "are": 2, "cat": 1}])
>>> sa.dict_trim_by_keys(["this", "is", "and", "are"], exclude=True)
dtype: dict
Rows: 2
[{'dog': 2}, {'cat': 1}]
"""
if isinstance(keys, str) or (not hasattr(keys, "__iter__")):
keys = [keys]
_mt._get_metric_tracker().track('sarray.dict_trim_by_keys')
with cython_context():
return SArray(_proxy=self.__proxy__.dict_trim_by_keys(keys, exclude))
def dict_trim_by_values(self, lower=None, upper=None):
"""
Filter dictionary values to a given range (inclusive). Trimming is only
performed on values which can be compared to the bound values. Fails on
SArrays whose data type is not ``dict``.
Parameters
----------
lower : int or long or float, optional
The lowest dictionary value that would be retained in the result. If
not given, lower bound is not applied.
upper : int or long or float, optional
The highest dictionary value that would be retained in the result.
If not given, upper bound is not applied.
Returns
-------
out : SArray
An SArray of dictionary type, with each dict element trimmed
according to the input criteria.
See Also
--------
dict_trim_by_keys
Examples
--------
>>> sa = graphlab.SArray([{"this":1, "is":5, "dog":7},
{"this": 2, "are": 1, "cat": 5}])
>>> sa.dict_trim_by_values(2,5)
dtype: dict
Rows: 2
[{'is': 5}, {'this': 2, 'cat': 5}]
>>> sa.dict_trim_by_values(upper=5)
dtype: dict
Rows: 2
[{'this': 1, 'is': 5}, {'this': 2, 'are': 1, 'cat': 5}]
"""
if lower is not None and (not is_numeric_type(type(lower))):
raise TypeError("lower bound has to be a numeric value")
if upper is not None and (not is_numeric_type(type(upper))):
raise TypeError("upper bound has to be a numeric value")
_mt._get_metric_tracker().track('sarray.dict_trim_by_values')
with cython_context():
return SArray(_proxy=self.__proxy__.dict_trim_by_values(lower, upper))
def dict_keys(self):
"""
Create an SArray that contains all the keys from each dictionary
element as a list. Fails on SArrays whose data type is not ``dict``.
Returns
-------
out : SArray
A SArray of list type, where each element is a list of keys
from the input SArray element.
See Also
--------
dict_values
Examples
---------
>>> sa = graphlab.SArray([{"this":1, "is":5, "dog":7},
{"this": 2, "are": 1, "cat": 5}])
>>> sa.dict_keys()
dtype: list
Rows: 2
[['this', 'is', 'dog'], ['this', 'are', 'cat']]
"""
_mt._get_metric_tracker().track('sarray.dict_keys')
with cython_context():
return SArray(_proxy=self.__proxy__.dict_keys())
def dict_values(self):
"""
Create an SArray that contains all the values from each dictionary
element as a list. Fails on SArrays whose data type is not ``dict``.
Returns
-------
out : SArray
A SArray of list type, where each element is a list of values
from the input SArray element.
See Also
--------
dict_keys
Examples
--------
>>> sa = graphlab.SArray([{"this":1, "is":5, "dog":7},
{"this": 2, "are": 1, "cat": 5}])
>>> sa.dict_values()
dtype: list
Rows: 2
[[1, 5, 7], [2, 1, 5]]
"""
_mt._get_metric_tracker().track('sarray.dict_values')
with cython_context():
return SArray(_proxy=self.__proxy__.dict_values())
def dict_has_any_keys(self, keys):
"""
Create a boolean SArray by checking the keys of an SArray of
dictionaries. An element of the output SArray is True if the
corresponding input element's dictionary has any of the given keys.
Fails on SArrays whose data type is not ``dict``.
Parameters
----------
keys : list
A list of key values to check each dictionary against.
Returns
-------
out : SArray
A SArray of int type, where each element indicates whether the
input SArray element contains any key in the input list.
See Also
--------
dict_has_all_keys
Examples
--------
>>> sa = graphlab.SArray([{"this":1, "is":5, "dog":7}, {"animal":1},
{"this": 2, "are": 1, "cat": 5}])
>>> sa.dict_has_any_keys(["is", "this", "are"])
dtype: int
Rows: 3
[1, 0, 1]
"""
if isinstance(keys, str) or (not hasattr(keys, "__iter__")):
keys = [keys]
_mt._get_metric_tracker().track('sarray.dict_has_any_keys')
with cython_context():
return SArray(_proxy=self.__proxy__.dict_has_any_keys(keys))
def dict_has_all_keys(self, keys):
"""
Create a boolean SArray by checking the keys of an SArray of
dictionaries. An element of the output SArray is True if the
corresponding input element's dictionary has all of the given keys.
Fails on SArrays whose data type is not ``dict``.
Parameters
----------
keys : list
A list of key values to check each dictionary against.
Returns
-------
out : SArray
A SArray of int type, where each element indicates whether the
input SArray element contains all keys in the input list.
See Also
--------
dict_has_any_keys
Examples
--------
>>> sa = graphlab.SArray([{"this":1, "is":5, "dog":7},
{"this": 2, "are": 1, "cat": 5}])
>>> sa.dict_has_all_keys(["is", "this"])
dtype: int
Rows: 2
[1, 0]
"""
if isinstance(keys, str) or (not hasattr(keys, "__iter__")):
keys = [keys]
_mt._get_metric_tracker().track('sarray.dict_has_all_keys')
with cython_context():
return SArray(_proxy=self.__proxy__.dict_has_all_keys(keys))
def apply(self, fn, dtype=None, skip_undefined=True, seed=None,
_lua_translate=False):
"""
apply(fn, dtype=None, skip_undefined=True, seed=None)
Transform each element of the SArray by a given function. The result
SArray is of type ``dtype``. ``fn`` should be a function that returns
exactly one value which can be cast into the type specified by
``dtype``. If ``dtype`` is not specified, the first 100 elements of the
SArray are used to make a guess about the data type.
Parameters
----------
fn : function
The function to transform each element. Must return exactly one
value which can be cast into the type specified by ``dtype``.
This can also be a toolkit extension function which is compiled
as a native shared library using SDK.
dtype : {None, int, float, str, list, array.array, dict, graphlab.Image}, optional
The data type of the new SArray. If ``None``, the first 100 elements
of the array are used to guess the target data type.
skip_undefined : bool, optional
If True, will not apply ``fn`` to any undefined values.
seed : int, optional
Used as the seed if a random number generator is included in ``fn``.
Returns
-------
out : SArray
The SArray transformed by ``fn``. Each element of the SArray is of
type ``dtype``.
See Also
--------
SFrame.apply
Examples
--------
>>> sa = graphlab.SArray([1,2,3])
>>> sa.apply(lambda x: x*2)
dtype: int
Rows: 3
[2, 4, 6]
Using native toolkit extension function:
.. code-block:: c++
#include <graphlab/sdk/toolkit_function_macros.hpp>
#include <cmath>
using namespace graphlab;
double logx(const flexible_type& x, double base) {
return log((double)(x)) / log(base);
}
BEGIN_FUNCTION_REGISTRATION
REGISTER_FUNCTION(logx, "x", "base");
END_FUNCTION_REGISTRATION
compiled into example.so
>>> import example
>>> sa = graphlab.SArray([1,2,4])
>>> sa.apply(lambda x: example.logx(x, 2))
dtype: float
Rows: 3
[0.0, 1.0, 2.0]
"""
if (type(fn) == str):
fn = "LUA" + fn
if dtype is None:
raise TypeError("dtype must be specified for a lua function")
else:
assert _is_callable(fn), "Input must be a function"
dryrun = [fn(i) for i in self.head(100) if i is not None]
import traceback
if dtype is None:
dtype = infer_type_of_list(dryrun)
if not seed:
seed = time.time()
# log metric
_mt._get_metric_tracker().track('sarray.apply')
# First phase test if it is a toolkit function
nativefn = None
try:
import graphlab.extensions as extensions
nativefn = extensions._build_native_function_call(fn)
except:
# failures are fine; we just fall through to the next few phases
pass
if nativefn is not None:
# this is a toolkit lambda. We can do something about it
with cython_context():
return SArray(_proxy=self.__proxy__.transform_native(nativefn, dtype, skip_undefined, seed))
# Second phase. Try lua compilation if possible
try:
# try compilation
if _lua_translate:
# its a function
print "Attempting Lua Translation"
import graphlab.Lua_Translator
import ast
import StringIO
def isalambda(v):
return isinstance(v, type(lambda: None)) and v.__name__ == '<lambda>'
output = StringIO.StringIO()
translator = gl.Lua_Translator.translator_NodeVisitor(output)
ast_node = None
try:
if not isalambda(fn):
ast_node = ast.parse(inspect.getsource(fn))
translator.rename_function[fn.__name__] = "__lambda__transfer__"
except:
pass
try:
if ast_node is None:
print "Cannot translate. Trying again from byte code decompilation"
ast_node = meta.decompiler.decompile_func(fn)
translator.rename_function[""] = "__lambda__transfer__"
except:
pass
if ast_node is None:
raise ValueError("Unable to get source of function")
ftype = gl.Lua_Translator.FunctionType()
selftype = self.dtype()
if selftype == list:
ftype.input_type = tuple([[]])
elif selftype == dict:
ftype.input_type = tuple([{}])
elif selftype == array.array:
ftype.input_type = tuple([[float]])
else:
ftype.input_type = tuple([selftype])
translator.function_known_types["__lambda__transfer__"] = ftype
translator.translate_ast(ast_node)
print "Lua Translation Success"
print output.getvalue()
fn = "LUA" + output.getvalue()
except Exception as e:
print traceback.format_exc()
print "Lua Translation Failed"
print e
except:
print traceback.format_exc()
print "Lua Translation Failed"
with cython_context():
return SArray(_proxy=self.__proxy__.transform(fn, dtype, skip_undefined, seed))
def filter(self, fn, skip_undefined=True, seed=None):
"""
Filter this SArray by a function.
Returns a new SArray filtered by this SArray. If `fn` evaluates an
element to true, this element is copied to the new SArray. If not, it
isn't. Throws an exception if the return type of `fn` is not castable
to a boolean value.
Parameters
----------
fn : function
Function that filters the SArray. Must evaluate to bool or int.
skip_undefined : bool, optional
If True, will not apply fn to any undefined values.
seed : int, optional
Used as the seed if a random number generator is included in fn.
Returns
-------
out : SArray
The SArray filtered by fn. Each element of the SArray is of
type int.
Examples
--------
>>> sa = graphlab.SArray([1,2,3])
>>> sa.filter(lambda x: x < 3)
dtype: int
Rows: 2
[1, 2]
"""
assert inspect.isfunction(fn), "Input must be a function"
if not seed:
seed = time.time()
_mt._get_metric_tracker().track('sarray.filter')
with cython_context():
return SArray(_proxy=self.__proxy__.filter(fn, skip_undefined, seed))
def sample(self, fraction, seed=None):
"""
Create an SArray which contains a subsample of the current SArray.
Parameters
----------
fraction : float
The fraction of the rows to fetch. Must be between 0 and 1.
seed : int
The random seed for the random number generator.
Returns
-------
out : SArray
The new SArray which contains the subsampled rows.
Examples
--------
>>> sa = graphlab.SArray(range(10))
>>> sa.sample(.3)
dtype: int
Rows: 3
[2, 6, 9]
"""
if (fraction > 1 or fraction < 0):
raise ValueError('Invalid sampling rate: ' + str(fraction))
if (self.size() == 0):
return SArray()
if not seed:
seed = time.time()
_mt._get_metric_tracker().track('sarray.sample')
with cython_context():
return SArray(_proxy=self.__proxy__.sample(fraction, seed))
def _save_as_text(self, url):
"""
Save the SArray to disk as text file.
"""
raise NotImplementedError
def all(self):
"""
Return True if every element of the SArray evaluates to True. For
numeric SArrays zeros and missing values (``None``) evaluate to False,
while all non-zero, non-missing values evaluate to True. For string,
list, and dictionary SArrays, empty values (zero length strings, lists
or dictionaries) or missing values (``None``) evaluate to False. All
other values evaluate to True.
Returns True on an empty SArray.
Returns
-------
out : bool
See Also
--------
any
Examples
--------
>>> graphlab.SArray([1, None]).all()
False
>>> graphlab.SArray([1, 0]).all()
False
>>> graphlab.SArray([1, 2]).all()
True
>>> graphlab.SArray(["hello", "world"]).all()
True
>>> graphlab.SArray(["hello", ""]).all()
False
>>> graphlab.SArray([]).all()
True
"""
with cython_context():
return self.__proxy__.all()
def any(self):
"""
Return True if any element of the SArray evaluates to True. For numeric
SArrays any non-zero value evaluates to True. For string, list, and
dictionary SArrays, any element of non-zero length evaluates to True.
Returns False on an empty SArray.
Returns
-------
out : bool
See Also
--------
all
Examples
--------
>>> graphlab.SArray([1, None]).any()
True
>>> graphlab.SArray([1, 0]).any()
True
>>> graphlab.SArray([0, 0]).any()
False
>>> graphlab.SArray(["hello", "world"]).any()
True
>>> graphlab.SArray(["hello", ""]).any()
True
>>> graphlab.SArray(["", ""]).any()
False
>>> graphlab.SArray([]).any()
False
"""
with cython_context():
return self.__proxy__.any()
def max(self):
"""
Get maximum numeric value in SArray.
Returns None on an empty SArray. Raises an exception if called on an
SArray with non-numeric type.
Returns
-------
out : type of SArray
Maximum value of SArray
See Also
--------
min
Examples
--------
>>> graphlab.SArray([14, 62, 83, 72, 77, 96, 5, 25, 69, 66]).max()
96
"""
with cython_context():
return self.__proxy__.max()
def min(self):
"""
Get minimum numeric value in SArray.
Returns None on an empty SArray. Raises an exception if called on an
SArray with non-numeric type.
Returns
-------
out : type of SArray
Minimum value of SArray
See Also
--------
max
Examples
--------
>>> graphlab.SArray([14, 62, 83, 72, 77, 96, 5, 25, 69, 66]).min()
5
"""
with cython_context():
return self.__proxy__.min()
def sum(self):
"""
Sum of all values in this SArray.
Raises an exception if called on an SArray of strings, lists, or
dictionaries. If the SArray contains numeric arrays (array.array) and
all the arrays are the same length, the sum over all the arrays will be
returned. Returns None on an empty SArray. For large values, this may
overflow without warning.
Returns
-------
out : type of SArray
Sum of all values in SArray
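Examples
--------
>>> graphlab.SArray([1, 2, 3]).sum()
6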
"""
with cython_context():
return self.__proxy__.sum()
def mean(self):
"""
Mean of all the values in the SArray, or mean image.
Returns None on an empty SArray. Raises an exception if called on an
SArray with non-numeric type or non-Image type.
Returns
-------
out : float | graphlab.Image
Mean of all values in SArray, or image holding per-pixel mean
across the input SArray.
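Examples
--------
>>> graphlab.SArray([1, 2, 3]).mean()
2.0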
"""
with cython_context():
if self.dtype() == gl.Image:
import graphlab.extensions as extensions
return extensions.generate_mean(self)
else:
return self.__proxy__.mean()
def std(self, ddof=0):
"""
Standard deviation of all the values in the SArray.
Returns None on an empty SArray. Raises an exception if called on an
SArray with non-numeric type or if `ddof` >= length of SArray.
Parameters
----------
ddof : int, optional
"delta degrees of freedom" used in the standard deviation calculation.
Returns
-------
out : float
The standard deviation of all the values.
"""
with cython_context():
return self.__proxy__.std(ddof)
def var(self, ddof=0):
"""
Variance of all the values in the SArray.
Returns None on an empty SArray. Raises an exception if called on an
SArray with non-numeric type or if `ddof` >= length of SArray.
Parameters
----------
ddof : int, optional
"delta degrees of freedom" in the variance calculation.
Returns
-------
out : float
Variance of all values in SArray.
"""
with cython_context():
return self.__proxy__.var(ddof)
def num_missing(self):
"""
Number of missing elements in the SArray.
Returns
-------
out : int
Number of missing values.
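Examples
--------
>>> graphlab.SArray([1, None, 3]).num_missing()
1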
"""
with cython_context():
return self.__proxy__.num_missing()
def nnz(self):
"""
Number of non-zero elements in the SArray.
Returns
-------
out : int
Number of non-zero elements.
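Examples
--------
>>> graphlab.SArray([0, 1, 2, 0]).nnz()
2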
"""
with cython_context():
return self.__proxy__.nnz()
def datetime_to_str(self,str_format="%Y-%m-%dT%H:%M:%S%ZP"):
"""
Create a new SArray with all the values cast to str. The string format is
specified by the 'str_format' parameter.
Parameters
----------
str_format : str
The format to output the string. Default format is "%Y-%m-%dT%H:%M:%S%ZP".
Returns
-------
out : SArray[str]
The SArray converted to the type 'str'.
Examples
--------
>>> dt = datetime.datetime(2011, 10, 20, 9, 30, 10, tzinfo=GMT(-5))
>>> sa = graphlab.SArray([dt])
>>> sa.datetime_to_str("%e %b %Y %T %ZP")
dtype: str
Rows: 1
[20 Oct 2011 09:30:10 GMT-05:00]
See Also
----------
str_to_datetime
References
----------
[1] Boost date time to string conversion guide (http://www.boost.org/doc/libs/1_48_0/doc/html/date_time/date_time_io.html)
"""
if(self.dtype() != datetime.datetime):
raise TypeError("datetime_to_str expects an SArray of datetime as input")
_mt._get_metric_tracker().track('sarray.datetime_to_str')
with cython_context():
return SArray(_proxy=self.__proxy__.datetime_to_str(str_format))
def str_to_datetime(self,str_format="%Y-%m-%dT%H:%M:%S%ZP"):
"""
Create a new SArray with all the values cast to datetime. The string format is
specified by the 'str_format' parameter.
Parameters
----------
str_format : str
The string format of the input SArray. Default format is "%Y-%m-%dT%H:%M:%S%ZP".
Returns
-------
out : SArray[datetime.datetime]
The SArray converted to the type 'datetime'.
Examples
--------
>>> sa = graphlab.SArray(["20-Oct-2011 09:30:10 GMT-05:30"])
>>> sa.str_to_datetime("%d-%b-%Y %H:%M:%S %ZP")
dtype: datetime
Rows: 1
datetime.datetime(2011, 10, 20, 9, 30, 10, tzinfo=GMT(-5.5))
See Also
----------
datetime_to_str
References
----------
[1] boost date time to string conversion guide (http://www.boost.org/doc/libs/1_48_0/doc/html/date_time/date_time_io.html)
"""
if(self.dtype() != str):
raise TypeError("str_to_datetime expects an SArray of str as input")
_mt._get_metric_tracker().track('sarray.str_to_datetime')
with cython_context():
return SArray(_proxy=self.__proxy__.str_to_datetime(str_format))
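For comparison, a fixed-format parse like the docstring's example can be approximated with the standard library's `strptime`. This is a hedged sketch only: `strptime` uses C-style directives rather than the Boost directives accepted by `str_to_datetime`, and it has no `%ZP` equivalent, so the timezone suffix is not handled here:

```python
import datetime

def parse_naive(s, fmt="%d-%b-%Y %H:%M:%S"):
    # Stdlib approximation; timezone offsets (Boost's %ZP) are not parsed.
    return datetime.datetime.strptime(s, fmt)
```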
def pixel_array_to_image(self, width, height, channels, undefined_on_failure=True, allow_rounding=False):
"""
Create a new SArray with all the values cast to :py:class:`graphlab.image.Image`
of uniform size.
Parameters
----------
width : int
The width of the new images.
height : int
The height of the new images.
channels : int
Number of channels of the new images.
undefined_on_failure : bool, optional, default True
If True, return None type instead of Image type in failure instances.
If False, raises error upon failure.
allow_rounding : bool, optional, default False
If True, rounds non-integer values when converting to Image type.
If False, raises error upon rounding.
Returns
-------
out : SArray[graphlab.Image]
The SArray converted to the type 'graphlab.Image'.
See Also
--------
astype, str_to_datetime, datetime_to_str
Examples
--------
The MNIST data is scaled from 0 to 1, but our image type only loads integer pixel values
from 0 to 255. If we just convert without scaling, all values below one would be cast to
0.
>>> mnist_array = graphlab.SArray('http://s3.amazonaws.com/dato-datasets/mnist/mnist_vec_sarray')
>>> scaled_mnist_array = mnist_array * 255
>>> mnist_img_sarray = gl.SArray.pixel_array_to_image(scaled_mnist_array, 28, 28, 1, allow_rounding = True)
"""
if(self.dtype() != array.array):
raise TypeError("pixel_array_to_image expects an SArray of arrays as input")
num_to_test = 10
num_test = min(self.size(), num_to_test)
mod_values = [val % 1 for x in range(num_test) for val in self[x]]
out_of_range_values = [(val > 255 or val < 0) for x in range(num_test) for val in self[x]]
if sum(mod_values) != 0.0 and not allow_rounding:
raise ValueError("There are non-integer values in the array data. Images only support integer data values between 0 and 255. To permit rounding, set the 'allow_rounding' parameter to True.")
if sum(out_of_range_values) != 0:
raise ValueError("There are values outside the range of 0 to 255. Images only support integer data values between 0 and 255.")
_mt._get_metric_tracker().track('sarray.pixel_array_to_img')
import graphlab.extensions as extensions
return extensions.vector_sarray_to_image_sarray(self, width, height, channels, undefined_on_failure)
def _head_str(self, num_rows):
"""
Returns the head of the SArray cast to string.
"""
import graphlab.extensions as extensions
return extensions._head_str(self, num_rows)
def astype(self, dtype, undefined_on_failure=False):
"""
Create a new SArray with all values cast to the given type. Throws an
exception if the types are not castable to the given type.
Parameters
----------
dtype : {int, float, str, list, array.array, dict, datetime.datetime}
The type to cast the elements to in SArray
undefined_on_failure: bool, optional
If set to True, runtime cast failures will be emitted as missing
values rather than failing.
Returns
-------
out : SArray [dtype]
The SArray converted to the type ``dtype``.
Notes
-----
- The string parsing techniques used to handle conversion to dictionary
and list types are quite generic and permit a variety of interesting
formats to be interpreted. For instance, a JSON string can usually be
interpreted as a list or a dictionary type. See the examples below.
- For datetime-to-string and string-to-datetime conversions,
use sa.datetime_to_str() and sa.str_to_datetime() functions.
- For array.array to graphlab.Image conversions, use sa.pixel_array_to_image()
Examples
--------
>>> sa = graphlab.SArray(['1','2','3','4'])
>>> sa.astype(int)
dtype: int
Rows: 4
[1, 2, 3, 4]
Given an SArray of strings that look like dicts, convert to a dictionary
type:
>>> sa = graphlab.SArray(['{1:2 3:4}', '{a:b c:d}'])
>>> sa.astype(dict)
dtype: dict
Rows: 2
[{1: 2, 3: 4}, {'a': 'b', 'c': 'd'}]
"""
_mt._get_metric_tracker().track('sarray.astype.%s' % str(dtype.__name__))
if (dtype == gl.Image) and (self.dtype() == array.array):
raise TypeError("Cannot cast from array type to image type with sarray.astype(). Please use sarray.pixel_array_to_image() instead.")
with cython_context():
return SArray(_proxy=self.__proxy__.astype(dtype, undefined_on_failure))
def clip(self, lower=float('nan'), upper=float('nan')):
"""
Create a new SArray with each value clipped to be within the given
bounds.
In this case, "clipped" means that values below the lower bound will be
set to the lower bound value. Values above the upper bound will be set
to the upper bound value. This function can operate on SArrays of
numeric type as well as array type, in which case each individual
element in each array is clipped. By default ``lower`` and ``upper`` are
set to ``float('nan')`` which indicates the respective bound should be
ignored. The method fails if invoked on an SArray of non-numeric type.
Parameters
----------
lower : int or float, optional
The lower bound used to clip. Ignored if equal to ``float('nan')``
(the default).
upper : int or float, optional
The upper bound used to clip. Ignored if equal to ``float('nan')``
(the default).
Returns
-------
out : SArray
See Also
--------
clip_lower, clip_upper
Examples
--------
>>> sa = graphlab.SArray([1,2,3])
>>> sa.clip(2,2)
dtype: int
Rows: 3
[2, 2, 2]
"""
with cython_context():
return SArray(_proxy=self.__proxy__.clip(lower, upper))
def clip_lower(self, threshold):
"""
Create new SArray with all values clipped to the given lower bound. This
function can operate on numeric arrays, as well as vector arrays, in
which case each individual element in each vector is clipped. Throws an
exception if the SArray is empty or the types are non-numeric.
Parameters
----------
threshold : float
The lower bound used to clip values.
Returns
-------
out : SArray
See Also
--------
clip, clip_upper
Examples
--------
>>> sa = graphlab.SArray([1,2,3])
>>> sa.clip_lower(2)
dtype: int
Rows: 3
[2, 2, 3]
"""
with cython_context():
return SArray(_proxy=self.__proxy__.clip(threshold, float('nan')))
def clip_upper(self, threshold):
"""
Create new SArray with all values clipped to the given upper bound. This
function can operate on numeric arrays, as well as vector arrays, in
which case each individual element in each vector is clipped.
Parameters
----------
threshold : float
The upper bound used to clip values.
Returns
-------
out : SArray
See Also
--------
clip, clip_lower
Examples
--------
>>> sa = graphlab.SArray([1,2,3])
>>> sa.clip_upper(2)
dtype: int
Rows: 3
[1, 2, 2]
"""
with cython_context():
return SArray(_proxy=self.__proxy__.clip(float('nan'), threshold))
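The clipping rule shared by `clip`, `clip_lower`, and `clip_upper` can be sketched per element in pure Python, with a `float('nan')` bound meaning "no bound on this side" (illustrative only; the engine applies this element-wise, including inside array values):

```python
import math

def clip_value(x, lower=float('nan'), upper=float('nan')):
    # NaN bounds are treated as "no bound on this side".
    if not math.isnan(lower) and x < lower:
        return lower
    if not math.isnan(upper) and x > upper:
        return upper
    return x
```

`clip_lower(t)` then corresponds to `clip(t, nan)` and `clip_upper(t)` to `clip(nan, t)`, matching the proxy calls above.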
def tail(self, n=10):
"""
Get an SArray that contains the last n elements in the SArray.
Parameters
----------
n : int
The number of elements to fetch
Returns
-------
out : SArray
A new SArray which contains the last n rows of the current SArray.
"""
with cython_context():
return SArray(_proxy=self.__proxy__.tail(n))
def dropna(self):
"""
Create new SArray containing only the non-missing values of the
SArray.
A missing value shows up in an SArray as 'None'. This will also drop
float('nan').
Returns
-------
out : SArray
The new SArray with missing values removed.
"""
_mt._get_metric_tracker().track('sarray.dropna')
with cython_context():
return SArray(_proxy = self.__proxy__.drop_missing_values())
def fillna(self, value):
"""
Create new SArray with all missing values (None or NaN) filled in
with the given value.
The size of the new SArray will be the same as the original SArray. If
the given value is not the same type as the values in the SArray,
`fillna` will attempt to convert the value to the original SArray's
type. If this fails, an error will be raised.
Parameters
----------
value : type convertible to SArray's type
The value used to replace all missing values
Returns
-------
out : SArray
A new SArray with all missing values filled
"""
_mt._get_metric_tracker().track('sarray.fillna')
with cython_context():
return SArray(_proxy = self.__proxy__.fill_missing_values(value))
def topk_index(self, topk=10, reverse=False):
"""
Create an SArray indicating which elements are in the top k.
Entries are '1' if the corresponding element in the current SArray is a
part of the top k elements, and '0' if that corresponding element is
not. Order is descending by default.
Parameters
----------
topk : int
The number of elements to determine if 'top'
reverse: bool
If True, return the topk elements in ascending order
Returns
-------
out : SArray (of type int)
Notes
-----
This is used internally by SFrame's topk function.
"""
with cython_context():
return SArray(_proxy = self.__proxy__.topk_index(topk, reverse))
def sketch_summary(self, background=False, sub_sketch_keys=None):
"""
Summary statistics that can be calculated with one pass over the SArray.
Returns a graphlab.Sketch object which can be further queried for many
descriptive statistics over this SArray. Many of the statistics are
approximate. See the :class:`~graphlab.Sketch` documentation for more
detail.
Parameters
----------
background : boolean, optional
If True, the sketch construction will return immediately and the
sketch will be constructed in the background. While this is going on,
the sketch can be queried incrementally, but at a performance penalty.
Defaults to False.
sub_sketch_keys: int | str | list of int | list of str, optional
For SArray of dict type, also constructs sketches for a given set of keys,
For SArray of array type, also constructs sketches for the given indexes.
The sub sketches may be queried using:
:py:func:`~graphlab.Sketch.element_sub_sketch()`
Defaults to None in which case no subsketches will be constructed.
Returns
-------
out : Sketch
Sketch object that contains descriptive statistics for this SArray.
Many of the statistics are approximate.
"""
from graphlab.data_structures.sketch import Sketch
if (self.dtype() == gl.data_structures.image.Image):
raise TypeError("sketch_summary() is not supported for arrays of image type")
if (type(background) != bool):
raise TypeError("'background' parameter has to be a boolean value")
if (sub_sketch_keys is not None):
if (self.dtype() != dict and self.dtype() != array.array):
raise TypeError("sub_sketch_keys is only supported for SArray of dictionary or array type")
if not hasattr(sub_sketch_keys, "__iter__"):
sub_sketch_keys = [sub_sketch_keys]
value_types = set([type(i) for i in sub_sketch_keys])
if (len(value_types) != 1):
raise ValueError("sub_sketch_keys member values need to have the same type.")
value_type = value_types.pop()
if (self.dtype() == dict and value_type != str):
raise TypeError("Only string value(s) can be passed to sub_sketch_keys for SArray of dictionary type. "+
"For dictionary types, sketch summary is computed by casting keys to string values.")
if (self.dtype() == array.array and value_type != int):
raise TypeError("Only int value(s) can be passed to sub_sketch_keys for SArray of array type")
else:
sub_sketch_keys = list()
_mt._get_metric_tracker().track('sarray.sketch_summary')
return Sketch(self, background, sub_sketch_keys = sub_sketch_keys)
def append(self, other):
"""
Append an SArray to the current SArray. Creates a new SArray with the
rows from both SArrays. Both SArrays must be of the same type.
Parameters
----------
other : SArray
Another SArray whose rows are appended to current SArray.
Returns
-------
out : SArray
A new SArray that contains rows from both SArrays, with rows from
the ``other`` SArray coming after all rows from the current SArray.
See Also
--------
SFrame.append
Examples
--------
>>> sa = graphlab.SArray([1, 2, 3])
>>> sa2 = graphlab.SArray([4, 5, 6])
>>> sa.append(sa2)
dtype: int
Rows: 6
[1, 2, 3, 4, 5, 6]
"""
_mt._get_metric_tracker().track('sarray.append')
if type(other) is not SArray:
raise RuntimeError("SArray append can only work with SArray")
if self.dtype() != other.dtype():
raise RuntimeError("Data types in both SArrays have to be the same")
with cython_context():
other.__materialize__()
return SArray(_proxy = self.__proxy__.append(other.__proxy__))
def unique(self):
"""
Get all unique values in the current SArray.
Raises a TypeError if the SArray is of dictionary type. Will not
necessarily preserve the order of the given SArray in the new SArray.
Returns
-------
out : SArray
A new SArray that contains the unique values of the current SArray.
See Also
--------
SFrame.unique
"""
_mt._get_metric_tracker().track('sarray.unique')
tmp_sf = gl.SFrame()
tmp_sf.add_column(self, 'X1')
res = tmp_sf.groupby('X1',{})
return SArray(_proxy=res['X1'].__proxy__)
@gl._check_canvas_enabled
def show(self, view=None):
"""
show(view=None)
Visualize the SArray with GraphLab Create :mod:`~graphlab.canvas`. This function starts Canvas
if it is not already running. If the SArray has already been plotted,
this function will update the plot.
Parameters
----------
view : str, optional
The name of the SFrame view to show. Can be one of:
- None: Use the default (depends on the dtype of the SArray).
- 'Categorical': Shows most frequent items in this SArray, sorted
by frequency. Only valid for str, int, or float dtypes.
- 'Numeric': Shows a histogram (distribution of values) for the
SArray. Only valid for int or float dtypes.
- 'Dictionary': Shows a cross filterable list of keys (categorical)
and values (categorical or numeric). Only valid for dict dtype.
- 'Array': Shows a Numeric view, filterable by sub-column (index).
Only valid for array.array dtype.
- 'List': Shows a Categorical view, aggregated across all sub-
columns (indices). Only valid for list dtype.
Returns
-------
view : graphlab.canvas.view.View
An object representing the GraphLab Canvas view
See Also
--------
canvas
Examples
--------
Suppose 'sa' is an SArray, we can view it in GraphLab Canvas using:
>>> sa.show()
If 'sa' is a numeric (int or float) SArray, we can view it as
a categorical variable using:
>>> sa.show(view='Categorical')
"""
import graphlab.canvas
import graphlab.canvas.inspect
import graphlab.canvas.views.sarray
graphlab.canvas.inspect.find_vars(self)
return graphlab.canvas.show(graphlab.canvas.views.sarray.SArrayView(self, params={
'view': view
}))
def item_length(self):
"""
Length of each element in the current SArray.
Only works on SArrays of dict, array, or list type. If a given element
is a missing value, then the output element is also a missing value.
This function is equivalent to the following but more performant:
sa_item_len = sa.apply(lambda x: len(x) if x is not None else None)
Returns
-------
out_sf : SArray
A new SArray, each element in the SArray is the len of the corresponding
items in original SArray.
Examples
--------
>>> sa = SArray([
... {"is_restaurant": 1, "is_electronics": 0},
... {"is_restaurant": 1, "is_retail": 1, "is_electronics": 0},
... {"is_restaurant": 0, "is_retail": 1, "is_electronics": 0},
... {"is_restaurant": 0},
... {"is_restaurant": 1, "is_electronics": 1},
... None])
>>> sa.item_length()
dtype: int
Rows: 6
[2, 3, 3, 1, 2, None]
"""
if (self.dtype() not in [list, dict, array.array]):
raise TypeError("item_length() is only applicable for SArray of type list, dict and array.")
_mt._get_metric_tracker().track('sarray.item_length')
with cython_context():
return SArray(_proxy = self.__proxy__.item_length())
def split_datetime(self, column_name_prefix = "X", limit=None, tzone=False):
"""
Splits an SArray of datetime type to multiple columns, return a
new SFrame that contains expanded columns. A SArray of datetime will be
split by default into an SFrame of 6 columns, one for each
year/month/day/hour/minute/second element.
column naming:
When splitting a SArray of datetime type, new columns are named:
prefix.year, prefix.month, etc. The prefix is set by the parameter
"column_name_prefix" and defaults to 'X'. If column_name_prefix is
None or empty, then no prefix is used.
Timezone column:
If the tzone parameter is True, timezone information is represented as
one additional column: a float holding the offset in hours from
GMT(0.0)/UTC.
Parameters
----------
column_name_prefix: str, optional
If provided, expanded column names would start with the given prefix.
Defaults to "X".
limit: list[str], optional
Limits the set of datetime elements to expand.
Elements are 'year','month','day','hour','minute',
and 'second'.
tzone: bool, optional
A boolean parameter that determines whether to show timezone column or not.
Defaults to False.
Returns
-------
out : SFrame
A new SFrame that contains all expanded columns
Examples
--------
To expand only day and year elements of a datetime SArray
>>> sa = SArray(
[datetime(2011, 1, 21, 7, 7, 21, tzinfo=GMT(0)),
datetime(2010, 2, 5, 7, 8, 21, tzinfo=GMT(4.5))])
>>> sa.split_datetime(column_name_prefix=None,limit=['day','year'])
Columns:
day int
year int
Rows: 2
Data:
+-------+--------+
| day | year |
+-------+--------+
| 21 | 2011 |
| 5 | 2010 |
+-------+--------+
[2 rows x 2 columns]
To expand only year and tzone elements of a datetime SArray,
with the tzone column represented as a float. Columns are named with the
prefix: 'Y.column_name'.
>>> sa.split_datetime(column_name_prefix="Y",limit=['year'],tzone=True)
Columns:
Y.year int
Y.tzone float
Rows: 2
Data:
+----------+---------+
| Y.year | Y.tzone |
+----------+---------+
| 2011 | 0.0 |
| 2010 | 4.5 |
+----------+---------+
[2 rows x 2 columns]
"""
if self.dtype() != datetime.datetime:
raise TypeError("Only column of datetime type is supported.")
if column_name_prefix is None:
column_name_prefix = ""
if type(column_name_prefix) != str:
raise TypeError("'column_name_prefix' must be a string")
# convert limit to column_keys
if limit is not None:
if (not hasattr(limit, '__iter__')):
raise TypeError("'limit' must be a list")
name_types = set([type(i) for i in limit])
if (len(name_types) != 1):
raise TypeError("'limit' contains values that are different types")
if (name_types.pop() != str):
raise TypeError("'limit' must contain string values.")
if len(set(limit)) != len(limit):
raise ValueError("'limit' contains duplicate values")
column_types = []
if limit is not None:
column_types = list()
for i in limit:
column_types.append(int)
else:
limit = ['year','month','day','hour','minute','second']
column_types = [int, int, int, int, int, int]
if tzone:
limit += ['tzone']
column_types += [float]
_mt._get_metric_tracker().track('sarray.split_datetime')
with cython_context():
return gl.SFrame(_proxy=self.__proxy__.expand(column_name_prefix, limit, column_types))
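The expansion performed by `split_datetime` can be sketched without the engine: build one column per requested element, plus an optional tzone column holding the UTC offset in hours. The helper name below is illustrative, not part of the API:

```python
import datetime

def split_datetime_sketch(dts, limit=('year', 'month', 'day', 'hour', 'minute', 'second'),
                          prefix="X", tzone=False):
    # Build one column (list) per requested datetime element.
    cols = {}
    for e in limit:
        name = "%s.%s" % (prefix, e) if prefix else e
        cols[name] = [getattr(d, e) for d in dts]
    if tzone:
        name = "%s.tzone" % prefix if prefix else "tzone"
        # Offset from UTC in hours, as a float; naive datetimes map to 0.0.
        cols[name] = [d.utcoffset().total_seconds() / 3600.0
                      if d.utcoffset() is not None else 0.0 for d in dts]
    return cols
```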
def unpack(self, column_name_prefix = "X", column_types=None, na_value=None, limit=None):
"""
Convert an SArray of list, array, or dict type to an SFrame with
multiple columns.
`unpack` expands an SArray using the values of each list/array/dict as
elements in a new SFrame of multiple columns. For example, an SArray of
lists each of length 4 will be expanded into an SFrame of 4 columns,
one for each list element. An SArray of lists/arrays of varying size
will be expanded to a number of columns equal to the longest list/array.
An SArray of dictionaries will be expanded into as many columns as
there are keys.
When unpacking an SArray of list or array type, new columns are named:
`column_name_prefix`.0, `column_name_prefix`.1, etc. If unpacking a
column of dict type, unpacked columns are named
`column_name_prefix`.key1, `column_name_prefix`.key2, etc.
When unpacking an SArray of list or dictionary types, missing values in
the original element remain as missing values in the resultant columns.
If the `na_value` parameter is specified, all values equal to this
given value are also replaced with missing values. In an SArray of
array.array type, NaN is interpreted as a missing value.
:py:func:`graphlab.SFrame.pack_columns()` reverses the effect of unpack.
Parameters
----------
column_name_prefix: str, optional
If provided, unpacked column names would start with the given prefix.
column_types: list[type], optional
Column types for the unpacked columns. If not provided, column
types are automatically inferred from first 100 rows. Defaults to
None.
na_value: optional
Convert all values that are equal to `na_value` to
missing value if specified.
limit: list, optional
Limits the set of list/array/dict keys to unpack.
For list/array SArrays, 'limit' must contain integer indices.
For dict SArray, 'limit' must contain dictionary keys.
Returns
-------
out : SFrame
A new SFrame that contains all unpacked columns
Examples
--------
To unpack a dict SArray
>>> sa = SArray([{ 'word': 'a', 'count': 1},
... { 'word': 'cat', 'count': 2},
... { 'word': 'is', 'count': 3},
... { 'word': 'coming','count': 4}])
Normal case of unpacking SArray of type dict:
>>> sa.unpack(column_name_prefix=None)
Columns:
count int
word str
<BLANKLINE>
Rows: 4
<BLANKLINE>
Data:
+-------+--------+
| count | word |
+-------+--------+
| 1 | a |
| 2 | cat |
| 3 | is |
| 4 | coming |
+-------+--------+
[4 rows x 2 columns]
<BLANKLINE>
Unpack only keys with 'word':
>>> sa.unpack(limit=['word'])
Columns:
X.word str
<BLANKLINE>
Rows: 4
<BLANKLINE>
Data:
+--------+
| X.word |
+--------+
| a |
| cat |
| is |
| coming |
+--------+
[4 rows x 1 columns]
<BLANKLINE>
>>> sa2 = SArray([
... [1, 0, 1],
... [1, 1, 1],
... [0, 1]])
Convert all zeros to missing values:
>>> sa2.unpack(column_types=[int, int, int], na_value=0)
Columns:
X.0 int
X.1 int
X.2 int
<BLANKLINE>
Rows: 3
<BLANKLINE>
Data:
+------+------+------+
| X.0 | X.1 | X.2 |
+------+------+------+
| 1 | None | 1 |
| 1 | 1 | 1 |
| None | 1 | None |
+------+------+------+
[3 rows x 3 columns]
<BLANKLINE>
"""
if self.dtype() not in [dict, array.array, list]:
raise TypeError("Only SArray of dict/list/array type supports unpack")
if column_name_prefix is None:
column_name_prefix = ""
if type(column_name_prefix) != str:
raise TypeError("'column_name_prefix' must be a string")
# validate 'limit'
if limit is not None:
if (not hasattr(limit, '__iter__')):
raise TypeError("'limit' must be a list")
name_types = set([type(i) for i in limit])
if (len(name_types) != 1):
raise TypeError("'limit' contains values that are different types")
# limit value should be numeric if unpacking sarray.array value
if (self.dtype() != dict) and (name_types.pop() != int):
raise TypeError("'limit' must contain integer values.")
if len(set(limit)) != len(limit):
raise ValueError("'limit' contains duplicate values")
if column_types is not None:
if not hasattr(column_types, '__iter__'):
raise TypeError("column_types must be a list")
for column_type in column_types:
if (column_type not in (int, float, str, list, dict, array.array)):
raise TypeError("column_types contains unsupported types. Supported types are ['float', 'int', 'list', 'dict', 'str', 'array.array']")
if limit is not None:
if len(limit) != len(column_types):
raise ValueError("limit and column_types do not have the same length")
elif self.dtype() == dict:
raise ValueError("if 'column_types' is given, 'limit' has to be provided to unpack dict type.")
else:
limit = range(len(column_types))
else:
head_rows = self.head(100).dropna()
lengths = [len(i) for i in head_rows]
if len(lengths) == 0 or max(lengths) == 0:
raise RuntimeError("Cannot infer the number of items from the SArray; the SArray may be empty. Please explicitly provide column types.")
# infer column types for dict type at server side, for list and array, infer from client side
if self.dtype() != dict:
length = max(lengths)
if limit is None:
limit = range(length)
else:
# adjust the length
length = len(limit)
if self.dtype() == array.array:
column_types = [float for i in range(length)]
else:
column_types = list()
for i in limit:
t = [(x[i] if ((x is not None) and len(x) > i) else None) for x in head_rows]
column_types.append(infer_type_of_list(t))
_mt._get_metric_tracker().track('sarray.unpack')
with cython_context():
if (self.dtype() == dict and column_types is None):
limit = limit if limit is not None else []
return gl.SFrame(_proxy=self.__proxy__.unpack_dict(column_name_prefix, limit, na_value))
else:
return gl.SFrame(_proxy=self.__proxy__.unpack(column_name_prefix, limit, column_types, na_value))
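The dict-unpacking path can be sketched in plain Python: collect the union of keys across rows, then build one column per key, with missing keys (and values equal to `na_value`) becoming None. The helper name is illustrative only:

```python
def unpack_dicts_sketch(rows, prefix="X", na_value=None):
    # Union of keys across all non-missing rows, sorted for determinism.
    keys = sorted(set(k for row in rows if row is not None for k in row))
    def name(k):
        return "%s.%s" % (prefix, k) if prefix else str(k)
    cols = {}
    for k in keys:
        col = [(row.get(k) if row is not None else None) for row in rows]
        if na_value is not None:
            # Values equal to na_value are also treated as missing.
            col = [None if v == na_value else v for v in col]
        cols[name(k)] = col
    return cols
```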
def sort(self, ascending=True):
"""
Sort all values in this SArray.
Sort only works for SArrays of type str, int, float, and
datetime.datetime; otherwise a TypeError is raised. Creates a new,
sorted SArray.
Parameters
----------
ascending : boolean, optional
If True, the SArray values are sorted in ascending order; otherwise,
in descending order.
Returns
-------
out: SArray
Examples
--------
>>> sa = SArray([3,2,1])
>>> sa.sort()
dtype: int
Rows: 3
[1, 2, 3]
"""
if self.dtype() not in (int, float, str, datetime.datetime):
raise TypeError("Only sarray with type (int, float, str, datetime.datetime) can be sorted")
sf = gl.SFrame()
sf['a'] = self
return sf.sort('a', ascending)['a'] | unknown | codeparrot/codeparrot-clean | ||
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: snap.proto
package snappb
import (
fmt "fmt"
io "io"
math "math"
math_bits "math/bits"
proto "github.com/golang/protobuf/proto"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
type Snapshot struct {
Crc *uint32 `protobuf:"varint,1,opt,name=crc" json:"crc,omitempty"`
Data []byte `protobuf:"bytes,2,opt,name=data" json:"data,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Snapshot) Reset() { *m = Snapshot{} }
func (m *Snapshot) String() string { return proto.CompactTextString(m) }
func (*Snapshot) ProtoMessage() {}
func (*Snapshot) Descriptor() ([]byte, []int) {
return fileDescriptor_f2e3c045ebf84d00, []int{0}
}
func (m *Snapshot) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *Snapshot) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_Snapshot.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *Snapshot) XXX_Merge(src proto.Message) {
xxx_messageInfo_Snapshot.Merge(m, src)
}
func (m *Snapshot) XXX_Size() int {
return m.Size()
}
func (m *Snapshot) XXX_DiscardUnknown() {
xxx_messageInfo_Snapshot.DiscardUnknown(m)
}
var xxx_messageInfo_Snapshot proto.InternalMessageInfo
func (m *Snapshot) GetCrc() uint32 {
if m != nil && m.Crc != nil {
return *m.Crc
}
return 0
}
func (m *Snapshot) GetData() []byte {
if m != nil {
return m.Data
}
return nil
}
func init() {
proto.RegisterType((*Snapshot)(nil), "snappb.snapshot")
}
func init() { proto.RegisterFile("snap.proto", fileDescriptor_f2e3c045ebf84d00) }
var fileDescriptor_f2e3c045ebf84d00 = []byte{
// 140 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x2a, 0xce, 0x4b, 0x2c,
0xd0, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x03, 0xb1, 0x0b, 0x92, 0x94, 0x0c, 0xb8, 0x38,
0x40, 0xac, 0xe2, 0x8c, 0xfc, 0x12, 0x21, 0x01, 0x2e, 0xe6, 0xe4, 0xa2, 0x64, 0x09, 0x46, 0x05,
0x46, 0x0d, 0xde, 0x20, 0x10, 0x53, 0x48, 0x88, 0x8b, 0x25, 0x25, 0xb1, 0x24, 0x51, 0x82, 0x49,
0x81, 0x51, 0x83, 0x27, 0x08, 0xcc, 0x76, 0x72, 0x3b, 0xf1, 0x48, 0x8e, 0xf1, 0xc2, 0x23, 0x39,
0xc6, 0x07, 0x8f, 0xe4, 0x18, 0x67, 0x3c, 0x96, 0x63, 0x88, 0x32, 0x49, 0xcf, 0xd7, 0x4b, 0x2d,
0x49, 0x4e, 0xd1, 0xcb, 0xcc, 0xd7, 0x07, 0xd1, 0xfa, 0xc5, 0xa9, 0x45, 0x65, 0xa9, 0x45, 0xfa,
0x65, 0xc6, 0x60, 0x2e, 0x94, 0x97, 0x58, 0x90, 0xa9, 0x0f, 0xb2, 0x4a, 0x1f, 0x62, 0x33, 0x20,
0x00, 0x00, 0xff, 0xff, 0x64, 0x15, 0x9e, 0x77, 0x8e, 0x00, 0x00, 0x00,
}
func (m *Snapshot) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *Snapshot) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Snapshot) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.XXX_unrecognized != nil {
i -= len(m.XXX_unrecognized)
copy(dAtA[i:], m.XXX_unrecognized)
}
if m.Data != nil {
i -= len(m.Data)
copy(dAtA[i:], m.Data)
i = encodeVarintSnap(dAtA, i, uint64(len(m.Data)))
i--
dAtA[i] = 0x12
}
if m.Crc != nil {
i = encodeVarintSnap(dAtA, i, uint64(*m.Crc))
i--
dAtA[i] = 0x8
}
return len(dAtA) - i, nil
}
func encodeVarintSnap(dAtA []byte, offset int, v uint64) int {
offset -= sovSnap(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
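`encodeVarintSnap` above writes the standard protobuf base-128 varint backwards into a pre-sized buffer. As an illustration only (this is a generated Go file; the sketch below is Python and not part of it), the forward encode/decode looks like:

```python
def encode_varint(v):
    # Protobuf base-128 varint: 7 payload bits per byte, MSB = continuation.
    out = bytearray()
    while v >= 0x80:
        out.append((v & 0x7F) | 0x80)
        v >>= 7
    out.append(v)
    return bytes(out)

def decode_varint(data, i=0):
    # Returns (value, next_index); mirrors the shift-accumulate loops in Unmarshal.
    result = shift = 0
    while True:
        b = data[i]
        i += 1
        result |= (b & 0x7F) << shift
        if b < 0x80:
            return result, i
        shift += 7
```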
func (m *Snapshot) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Crc != nil {
n += 1 + sovSnap(uint64(*m.Crc))
}
if m.Data != nil {
l = len(m.Data)
n += 1 + l + sovSnap(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func sovSnap(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
func sozSnap(x uint64) (n int) {
return sovSnap(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *Snapshot) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowSnap
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: snapshot: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: snapshot: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Crc", wireType)
}
var v uint32
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowSnap
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
}
m.Crc = &v
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
}
var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowSnap
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if byteLen < 0 {
return ErrInvalidLengthSnap
}
postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthSnap
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...)
if m.Data == nil {
m.Data = []byte{}
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipSnap(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthSnap
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipSnap(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowSnap
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowSnap
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if dAtA[iNdEx-1] < 0x80 {
break
}
}
case 1:
iNdEx += 8
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowSnap
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if length < 0 {
return 0, ErrInvalidLengthSnap
}
iNdEx += length
case 3:
depth++
case 4:
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupSnap
}
depth--
case 5:
iNdEx += 4
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthSnap
}
if depth == 0 {
return iNdEx, nil
}
}
return 0, io.ErrUnexpectedEOF
}
var (
ErrInvalidLengthSnap = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowSnap = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupSnap = fmt.Errorf("proto: unexpected end of group")
) | go | github | https://github.com/etcd-io/etcd | server/etcdserver/api/snap/snappb/snap.pb.go |
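A side note on the generated helpers above: `sovSnap` returns the byte length of a protobuf varint as `ceil(bits/7)`, where `x|1` guarantees that zero still occupies one byte. A minimal Python sketch of the same formula, cross-checked against a straightforward varint encoder (illustrative only, not part of the generated file):

```python
def varint_size(x: int) -> int:
    # Mirrors sovSnap: (bits.Len64(x|1) + 6) / 7 in the Go code above.
    return ((x | 1).bit_length() + 6) // 7

def encode_varint(x: int) -> bytes:
    # Standard protobuf base-128 varint: 7 payload bits per byte,
    # high bit set on every byte except the last.
    out = bytearray()
    while True:
        b = x & 0x7F
        x >>= 7
        if x:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

# The closed-form size matches the actual encoded length.
for v in (0, 1, 127, 128, 300, 2**32, 2**63):
    assert varint_size(v) == len(encode_varint(v))
```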
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
db.execute("create unique index email on auth_user (email)")
def backwards(self, orm):
db.execute("drop index email on auth_user")
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'about': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'avatar_type': ('django.db.models.fields.CharField', [], {'default': "'n'", 'max_length': '1'}),
'bronze': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'consecutive_days_visit_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'country': ('django_countries.fields.CountryField', [], {'max_length': '2', 'blank': 'True'}),
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'date_of_birth': ('django.db.models.fields.DateField', [], {'null': 'True', 'blank': 'True'}),
'display_tag_filter_strategy': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'email_isvalid': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'email_key': ('django.db.models.fields.CharField', [], {'max_length': '32', 'null': 'True'}),
'email_tag_filter_strategy': ('django.db.models.fields.SmallIntegerField', [], {'default': '1'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'gold': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'gravatar': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'ignored_tags': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'interesting_tags': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'last_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'location': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'new_response_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'questions_per_page': ('django.db.models.fields.SmallIntegerField', [], {'default': '10'}),
'real_name': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'reputation': ('django.db.models.fields.PositiveIntegerField', [], {'default': '1'}),
'seen_response_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'show_country': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'silver': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'status': ('django.db.models.fields.CharField', [], {'default': "'w'", 'max_length': '2'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'}),
'website': ('django.db.models.fields.URLField', [], {'max_length': '200', 'blank': 'True'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'student.registration': {
'Meta': {'object_name': 'Registration', 'db_table': "'auth_registration'"},
'activation_key': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '32', 'db_index': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'unique': 'True'})
},
'student.userprofile': {
'Meta': {'object_name': 'UserProfile', 'db_table': "'auth_userprofile'"},
'courseware': ('django.db.models.fields.CharField', [], {'default': "'course.xml'", 'max_length': '255', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '255', 'blank': 'True'}),
'location': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '255', 'blank': 'True'}),
'meta': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '255', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'unique': 'True'})
},
'student.usertestgroup': {
'Meta': {'object_name': 'UserTestGroup'},
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '32', 'db_index': 'True'}),
'users': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.User']", 'db_index': 'True', 'symmetrical': 'False'})
}
}
complete_apps = ['student'] | unknown | codeparrot/codeparrot-clean | ||
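The migration above enforces email uniqueness with raw SQL rather than a model field option. A minimal sqlite3 sketch of the same constraint (the migration targets MySQL; sqlite is used here only because it runs in-memory):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table auth_user (id integer primary key, email text)")
# Same index the migration creates with db.execute(...).
conn.execute("create unique index email on auth_user (email)")

conn.execute("insert into auth_user (email) values ('a@example.com')")
try:
    conn.execute("insert into auth_user (email) values ('a@example.com')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

# The unique index rejects the second row with the same email.
assert duplicate_allowed is False
```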
from typing import TYPE_CHECKING, Any
from langchain_classic._api import create_importer
if TYPE_CHECKING:
from langchain_community.vectorstores import Milvus
# Create a way to dynamically look up deprecated imports.
# Used to consolidate logic for raising deprecation warnings and
# handling optional imports.
DEPRECATED_LOOKUP = {"Milvus": "langchain_community.vectorstores"}
_import_attribute = create_importer(__package__, deprecated_lookups=DEPRECATED_LOOKUP)
def __getattr__(name: str) -> Any:
"""Look up attributes dynamically."""
return _import_attribute(name)
__all__ = [
"Milvus",
] | python | github | https://github.com/langchain-ai/langchain | libs/langchain/langchain_classic/vectorstores/milvus.py |
# -*- coding: utf-8 -*-
"""
***************************************************************************
r_sum.py
---------------------
Date : December 2012
Copyright : (C) 2012 by Victor Olaya
Email : volayaf at gmail dot com
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Victor Olaya'
__date__ = 'December 2012'
__copyright__ = '(C) 2012, Victor Olaya'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '$Format:%H$'
import HtmlReportPostProcessor
def postProcessResults(alg):
HtmlReportPostProcessor.postProcessResults(alg) | unknown | codeparrot/codeparrot-clean | ||
use std::cell::RefCell;
use std::collections::{BTreeSet, HashMap};
use std::fmt;
use std::str::FromStr;
use proc_macro::Span;
use proc_macro2::{Ident, TokenStream};
use quote::{ToTokens, format_ident, quote};
use syn::parse::ParseStream;
use syn::punctuated::Punctuated;
use syn::spanned::Spanned;
use syn::{Attribute, Field, LitStr, Meta, Path, Token, Type, TypeTuple, parenthesized};
use synstructure::{BindingInfo, VariantInfo};
use super::error::invalid_attr;
use crate::diagnostics::error::{
DiagnosticDeriveError, span_err, throw_invalid_attr, throw_span_err,
};
use crate::diagnostics::message::Message;
thread_local! {
pub(crate) static CODE_IDENT_COUNT: RefCell<u32> = RefCell::new(0);
}
/// Returns an ident of the form `__code_N` where `N` is incremented once with every call.
pub(crate) fn new_code_ident() -> syn::Ident {
CODE_IDENT_COUNT.with(|count| {
let ident = format_ident!("__code_{}", *count.borrow());
*count.borrow_mut() += 1;
ident
})
}
/// Checks whether the type name of `ty` matches `name`.
///
/// Given some struct at `a::b::c::Foo`, this will return true for `c::Foo`, `b::c::Foo`, or
/// `a::b::c::Foo`. This reasonably allows qualified names to be used in the macro.
pub(crate) fn type_matches_path(ty: &Type, name: &[&str]) -> bool {
if let Type::Path(ty) = ty {
ty.path
.segments
.iter()
.map(|s| s.ident.to_string())
.rev()
.zip(name.iter().rev())
.all(|(x, y)| &x.as_str() == y)
} else {
false
}
}
/// Checks whether the type `ty` is `()`.
pub(crate) fn type_is_unit(ty: &Type) -> bool {
if let Type::Tuple(TypeTuple { elems, .. }) = ty { elems.is_empty() } else { false }
}
/// Checks whether the type `ty` is `bool`.
pub(crate) fn type_is_bool(ty: &Type) -> bool {
type_matches_path(ty, &["bool"])
}
/// Reports a type error for field with `attr`.
pub(crate) fn report_type_error(
attr: &Attribute,
ty_name: &str,
) -> Result<!, DiagnosticDeriveError> {
let name = attr.path().segments.last().unwrap().ident.to_string();
let meta = &attr.meta;
throw_span_err!(
attr.span().unwrap(),
&format!(
"the `#[{}{}]` attribute can only be applied to fields of type {}",
name,
match meta {
Meta::Path(_) => "",
Meta::NameValue(_) => " = ...",
Meta::List(_) => "(...)",
},
ty_name
)
);
}
/// Reports an error if the field's type does not match `path`.
fn report_error_if_not_applied_to_ty(
attr: &Attribute,
info: &FieldInfo<'_>,
path: &[&str],
ty_name: &str,
) -> Result<(), DiagnosticDeriveError> {
if !type_matches_path(info.ty.inner_type(), path) {
report_type_error(attr, ty_name)?;
}
Ok(())
}
/// Reports an error if the field's type is not `Applicability`.
pub(crate) fn report_error_if_not_applied_to_applicability(
attr: &Attribute,
info: &FieldInfo<'_>,
) -> Result<(), DiagnosticDeriveError> {
report_error_if_not_applied_to_ty(
attr,
info,
&["rustc_errors", "Applicability"],
"`Applicability`",
)
}
/// Reports an error if the field's type is not `Span`.
pub(crate) fn report_error_if_not_applied_to_span(
attr: &Attribute,
info: &FieldInfo<'_>,
) -> Result<(), DiagnosticDeriveError> {
if !type_matches_path(info.ty.inner_type(), &["rustc_span", "Span"])
&& !type_matches_path(info.ty.inner_type(), &["rustc_errors", "MultiSpan"])
{
report_type_error(attr, "`Span` or `MultiSpan`")?;
}
Ok(())
}
/// Inner type of a field and type of wrapper.
#[derive(Copy, Clone)]
pub(crate) enum FieldInnerTy<'ty> {
/// Field is wrapped in a `Option<$inner>`.
Option(&'ty Type),
/// Field is wrapped in a `Vec<$inner>`.
Vec(&'ty Type),
/// Field isn't wrapped in an outer type.
Plain(&'ty Type),
}
impl<'ty> FieldInnerTy<'ty> {
/// Returns inner type for a field, if there is one.
///
/// - If `ty` is an `Option<Inner>`, returns `FieldInnerTy::Option(Inner)`.
/// - If `ty` is a `Vec<Inner>`, returns `FieldInnerTy::Vec(Inner)`.
/// - Otherwise returns `FieldInnerTy::Plain(ty)`.
pub(crate) fn from_type(ty: &'ty Type) -> Self {
fn single_generic_type(ty: &Type) -> &Type {
let Type::Path(ty_path) = ty else {
panic!("expected path type");
};
let path = &ty_path.path;
let ty = path.segments.last().unwrap();
let syn::PathArguments::AngleBracketed(bracketed) = &ty.arguments else {
panic!("expected bracketed generic arguments");
};
assert_eq!(bracketed.args.len(), 1);
let syn::GenericArgument::Type(ty) = &bracketed.args[0] else {
panic!("expected generic parameter to be a type generic");
};
ty
}
if type_matches_path(ty, &["std", "option", "Option"]) {
FieldInnerTy::Option(single_generic_type(ty))
} else if type_matches_path(ty, &["std", "vec", "Vec"]) {
FieldInnerTy::Vec(single_generic_type(ty))
} else {
FieldInnerTy::Plain(ty)
}
}
/// Returns `true` if `FieldInnerTy::with` will result in iteration for this inner type (i.e.
/// that cloning might be required for values moved in the loop body).
pub(crate) fn will_iterate(&self) -> bool {
match self {
FieldInnerTy::Vec(..) => true,
FieldInnerTy::Option(..) | FieldInnerTy::Plain(_) => false,
}
}
/// Returns the inner type.
pub(crate) fn inner_type(&self) -> &'ty Type {
match self {
FieldInnerTy::Option(inner) | FieldInnerTy::Vec(inner) | FieldInnerTy::Plain(inner) => {
inner
}
}
}
/// Surrounds `inner` with destructured wrapper type, exposing inner type as `binding`.
pub(crate) fn with(&self, binding: impl ToTokens, inner: impl ToTokens) -> TokenStream {
match self {
FieldInnerTy::Option(..) => quote! {
if let Some(#binding) = #binding {
#inner
}
},
FieldInnerTy::Vec(..) => quote! {
for #binding in #binding {
#inner
}
},
FieldInnerTy::Plain(t) if type_is_bool(t) => quote! {
if #binding {
#inner
}
},
FieldInnerTy::Plain(..) => quote! { #inner },
}
}
pub(crate) fn span(&self) -> proc_macro2::Span {
match self {
FieldInnerTy::Option(ty) | FieldInnerTy::Vec(ty) | FieldInnerTy::Plain(ty) => ty.span(),
}
}
}
/// Field information passed to the builder. Deliberately omits attrs to discourage the
/// `generate_*` methods from walking the attributes themselves.
pub(crate) struct FieldInfo<'a> {
pub(crate) binding: &'a BindingInfo<'a>,
pub(crate) ty: FieldInnerTy<'a>,
pub(crate) span: &'a proc_macro2::Span,
}
/// Small helper trait for abstracting over `Option` fields that contain a value and a `Span`
/// for error reporting if they are set more than once.
pub(crate) trait SetOnce<T> {
fn set_once(&mut self, value: T, span: Span);
fn value(self) -> Option<T>;
fn value_ref(&self) -> Option<&T>;
}
/// An [`Option<T>`] that keeps track of the span that caused it to be set; used with [`SetOnce`].
pub(super) type SpannedOption<T> = Option<(T, Span)>;
impl<T> SetOnce<T> for SpannedOption<T> {
fn set_once(&mut self, value: T, span: Span) {
match self {
None => {
*self = Some((value, span));
}
Some((_, prev_span)) => {
span_err(span, "attribute specified multiple times")
.span_note(*prev_span, "previously specified here")
.emit();
}
}
}
fn value(self) -> Option<T> {
self.map(|(v, _)| v)
}
fn value_ref(&self) -> Option<&T> {
self.as_ref().map(|(v, _)| v)
}
}
pub(super) type FieldMap = HashMap<String, TokenStream>;
/// In the strings in the attributes supplied to this macro, we want callers to be able to
/// reference fields in the format string. For example:
///
/// ```ignore (not-usage-example)
/// /// Suggest `==` when users wrote `===`.
/// #[suggestion("example message", code = "{lhs} == {rhs}")]
/// struct NotJavaScriptEq {
/// #[primary_span]
/// span: Span,
/// lhs: Ident,
/// rhs: Ident,
/// }
/// ```
///
/// We want to automatically pick up that `{lhs}` refers to `self.lhs` and `{rhs}` refers to
/// `self.rhs`, then generate this call to `format!`:
///
/// ```ignore (not-usage-example)
/// format!("{lhs} == {rhs}", lhs = self.lhs, rhs = self.rhs)
/// ```
///
/// This function builds the entire call to `format!`.
pub(super) fn build_format(
field_map: &FieldMap,
input: &str,
span: proc_macro2::Span,
) -> TokenStream {
// This set is used later to generate the final format string. To keep builds reproducible,
// the iteration order needs to be deterministic, hence why we use a `BTreeSet` here
// instead of a `HashSet`.
let mut referenced_fields: BTreeSet<String> = BTreeSet::new();
// At this point, we can start parsing the format string.
let mut it = input.chars().peekable();
// Once the start of a format string has been found, process the format string and spit out
// the referenced fields. Leaves `it` sitting on the closing brace of the format string, so
// the next call to `it.next()` retrieves the next character.
while let Some(c) = it.next() {
if c != '{' {
continue;
}
if *it.peek().unwrap_or(&'\0') == '{' {
assert_eq!(it.next().unwrap(), '{');
continue;
}
let mut eat_argument = || -> Option<String> {
let mut result = String::new();
// Format specifiers look like:
//
// format := '{' [ argument ] [ ':' format_spec ] '}' .
//
// Therefore, we only need to eat until ':' or '}' to find the argument.
while let Some(c) = it.next() {
result.push(c);
let next = *it.peek().unwrap_or(&'\0');
if next == '}' {
break;
} else if next == ':' {
// Eat the ':' character.
assert_eq!(it.next().unwrap(), ':');
break;
}
}
// Eat until (and including) the matching '}'
while it.next()? != '}' {
continue;
}
Some(result)
};
if let Some(referenced_field) = eat_argument() {
referenced_fields.insert(referenced_field);
}
}
// At this point, `referenced_fields` contains a set of the unique fields that were
// referenced in the format string. Generate the corresponding "x = self.x" format
// string parameters:
let args = referenced_fields.into_iter().map(|field: String| {
let field_ident = format_ident!("{}", field);
let value = match field_map.get(&field) {
Some(value) => value.clone(),
// This field doesn't exist. Emit a diagnostic.
None => {
span_err(span.unwrap(), format!("`{field}` doesn't refer to a field on this type"))
.emit();
quote! {
"{#field}"
}
}
};
quote! {
#field_ident = #value
}
});
quote! {
format!(#input #(,#args)*)
}
}
/// `Applicability` of a suggestion - mirrors `rustc_errors::Applicability` - and used to represent
/// the user's selection of applicability if specified in an attribute.
#[derive(Clone, Copy)]
pub(crate) enum Applicability {
MachineApplicable,
MaybeIncorrect,
HasPlaceholders,
Unspecified,
}
impl FromStr for Applicability {
type Err = ();
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"machine-applicable" => Ok(Applicability::MachineApplicable),
"maybe-incorrect" => Ok(Applicability::MaybeIncorrect),
"has-placeholders" => Ok(Applicability::HasPlaceholders),
"unspecified" => Ok(Applicability::Unspecified),
_ => Err(()),
}
}
}
impl quote::ToTokens for Applicability {
fn to_tokens(&self, tokens: &mut TokenStream) {
tokens.extend(match self {
Applicability::MachineApplicable => {
quote! { rustc_errors::Applicability::MachineApplicable }
}
Applicability::MaybeIncorrect => {
quote! { rustc_errors::Applicability::MaybeIncorrect }
}
Applicability::HasPlaceholders => {
quote! { rustc_errors::Applicability::HasPlaceholders }
}
Applicability::Unspecified => {
quote! { rustc_errors::Applicability::Unspecified }
}
});
}
}
/// Build the mapping of field names to fields. This allows attributes to peek values from
/// other fields.
pub(super) fn build_field_mapping(variant: &VariantInfo<'_>) -> HashMap<String, TokenStream> {
let mut fields_map = FieldMap::new();
for binding in variant.bindings() {
if let Some(ident) = &binding.ast().ident {
fields_map.insert(ident.to_string(), quote! { #binding });
}
}
fields_map
}
#[derive(Copy, Clone, Debug)]
pub(super) enum AllowMultipleAlternatives {
No,
Yes,
}
fn parse_suggestion_values(
nested: ParseStream<'_>,
allow_multiple: AllowMultipleAlternatives,
) -> syn::Result<Vec<LitStr>> {
if nested.parse::<Token![=]>().is_ok() {
return Ok(vec![nested.parse::<LitStr>()?]);
}
let content;
parenthesized!(content in nested);
if let AllowMultipleAlternatives::No = allow_multiple {
span_err(content.span().unwrap(), "expected exactly one string literal for `code = ...`")
.emit();
return Ok(vec![]);
}
let literals = Punctuated::<LitStr, Token![,]>::parse_terminated(&content);
Ok(match literals {
Ok(p) if p.is_empty() => {
span_err(
content.span().unwrap(),
"expected at least one string literal for `code(...)`",
)
.emit();
vec![]
}
Ok(p) => p.into_iter().collect(),
Err(_) => {
span_err(content.span().unwrap(), "`code(...)` must contain only string literals")
.emit();
vec![]
}
})
}
/// Constructs the `format!()` invocation(s) necessary for a `#[suggestion*(code = "foo")]` or
/// `#[suggestion*(code("foo", "bar"))]` attribute field
pub(super) fn build_suggestion_code(
code_field: &Ident,
nested: ParseStream<'_>,
fields: &FieldMap,
allow_multiple: AllowMultipleAlternatives,
) -> Result<TokenStream, syn::Error> {
let values = parse_suggestion_values(nested, allow_multiple)?;
Ok(if let AllowMultipleAlternatives::Yes = allow_multiple {
let formatted_strings: Vec<_> = values
.into_iter()
.map(|value| build_format(fields, &value.value(), value.span()))
.collect();
quote! { let #code_field = [#(#formatted_strings),*].into_iter(); }
} else if let [value] = values.as_slice() {
let formatted_str = build_format(fields, &value.value(), value.span());
quote! { let #code_field = #formatted_str; }
} else {
// error handled previously
quote! { let #code_field = String::new(); }
})
}
/// Possible styles for suggestion subdiagnostics.
#[derive(Clone, Copy, PartialEq)]
pub(super) enum SuggestionKind {
Normal,
Short,
Hidden,
Verbose,
ToolOnly,
}
impl FromStr for SuggestionKind {
type Err = ();
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"normal" => Ok(SuggestionKind::Normal),
"short" => Ok(SuggestionKind::Short),
"hidden" => Ok(SuggestionKind::Hidden),
"verbose" => Ok(SuggestionKind::Verbose),
"tool-only" => Ok(SuggestionKind::ToolOnly),
_ => Err(()),
}
}
}
impl fmt::Display for SuggestionKind {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
SuggestionKind::Normal => write!(f, "normal"),
SuggestionKind::Short => write!(f, "short"),
SuggestionKind::Hidden => write!(f, "hidden"),
SuggestionKind::Verbose => write!(f, "verbose"),
SuggestionKind::ToolOnly => write!(f, "tool-only"),
}
}
}
impl SuggestionKind {
pub(crate) fn to_suggestion_style(&self) -> TokenStream {
match self {
SuggestionKind::Normal => {
quote! { rustc_errors::SuggestionStyle::ShowCode }
}
SuggestionKind::Short => {
quote! { rustc_errors::SuggestionStyle::HideCodeInline }
}
SuggestionKind::Hidden => {
quote! { rustc_errors::SuggestionStyle::HideCodeAlways }
}
SuggestionKind::Verbose => {
quote! { rustc_errors::SuggestionStyle::ShowAlways }
}
SuggestionKind::ToolOnly => {
quote! { rustc_errors::SuggestionStyle::CompletelyHidden }
}
}
}
fn from_suffix(s: &str) -> Option<Self> {
match s {
"" => Some(SuggestionKind::Normal),
"_short" => Some(SuggestionKind::Short),
"_hidden" => Some(SuggestionKind::Hidden),
"_verbose" => Some(SuggestionKind::Verbose),
_ => None,
}
}
}
/// Types of subdiagnostics that can be created using attributes
#[derive(Clone)]
pub(super) enum SubdiagnosticKind {
/// `#[label(...)]`
Label,
/// `#[note(...)]`
Note,
/// `#[note_once(...)]`
NoteOnce,
/// `#[help(...)]`
Help,
/// `#[help_once(...)]`
HelpOnce,
/// `#[warning(...)]`
Warn,
/// `#[suggestion{,_short,_hidden,_verbose}]`
Suggestion {
suggestion_kind: SuggestionKind,
applicability: SpannedOption<Applicability>,
/// Identifier for variable used for formatted code, e.g. `___code_0`. Enables separation
        /// of formatting and diagnostic emission so that `arg` calls can happen in-between.
code_field: syn::Ident,
/// Initialization logic for `code_field`'s variable, e.g.
/// `let __formatted_code = /* whatever */;`
code_init: TokenStream,
},
/// `#[multipart_suggestion{,_short,_hidden,_verbose}]`
MultipartSuggestion {
suggestion_kind: SuggestionKind,
applicability: SpannedOption<Applicability>,
},
}
pub(super) struct SubdiagnosticVariant {
pub(super) kind: SubdiagnosticKind,
pub(super) message: Option<Message>,
}
impl SubdiagnosticVariant {
/// Constructs a `SubdiagnosticVariant` from a field or type attribute such as `#[note]`,
/// `#[error("add parenthesis")]` or `#[suggestion(code = "...")]`. Returns the
/// `SubdiagnosticKind` and the diagnostic message, if specified.
pub(super) fn from_attr(
attr: &Attribute,
fields: &FieldMap,
) -> Result<Option<SubdiagnosticVariant>, DiagnosticDeriveError> {
// Always allow documentation comments.
if is_doc_comment(attr) {
return Ok(None);
}
let span = attr.span().unwrap();
let name = attr.path().segments.last().unwrap().ident.to_string();
let name = name.as_str();
let mut kind = match name {
"label" => SubdiagnosticKind::Label,
"note" => SubdiagnosticKind::Note,
"note_once" => SubdiagnosticKind::NoteOnce,
"help" => SubdiagnosticKind::Help,
"help_once" => SubdiagnosticKind::HelpOnce,
"warning" => SubdiagnosticKind::Warn,
_ => {
// Recover old `#[(multipart_)suggestion_*]` syntaxes
// FIXME(#100717): remove
if let Some(suggestion_kind) =
name.strip_prefix("suggestion").and_then(SuggestionKind::from_suffix)
{
if suggestion_kind != SuggestionKind::Normal {
invalid_attr(attr)
.help(format!(
r#"Use `#[suggestion(..., style = "{suggestion_kind}")]` instead"#
))
.emit();
}
SubdiagnosticKind::Suggestion {
suggestion_kind: SuggestionKind::Normal,
applicability: None,
code_field: new_code_ident(),
code_init: TokenStream::new(),
}
} else if let Some(suggestion_kind) =
name.strip_prefix("multipart_suggestion").and_then(SuggestionKind::from_suffix)
{
if suggestion_kind != SuggestionKind::Normal {
invalid_attr(attr)
.help(format!(
r#"Use `#[multipart_suggestion(..., style = "{suggestion_kind}")]` instead"#
))
.emit();
}
SubdiagnosticKind::MultipartSuggestion {
suggestion_kind: SuggestionKind::Normal,
applicability: None,
}
} else {
throw_invalid_attr!(attr);
}
}
};
let list = match &attr.meta {
Meta::List(list) => {
// An attribute with properties, such as `#[suggestion(code = "...")]` or
// `#[error("message")]`
list
}
Meta::Path(_) => {
// An attribute without a message or other properties, such as `#[note]` - return
// without further processing.
//
// Only allow this if there are no mandatory properties, such as `code = "..."` in
// `#[suggestion(...)]`
match kind {
SubdiagnosticKind::Label
| SubdiagnosticKind::Note
| SubdiagnosticKind::NoteOnce
| SubdiagnosticKind::Help
| SubdiagnosticKind::HelpOnce
| SubdiagnosticKind::Warn
| SubdiagnosticKind::MultipartSuggestion { .. } => {
return Ok(Some(SubdiagnosticVariant { kind, message: None }));
}
SubdiagnosticKind::Suggestion { .. } => {
throw_span_err!(span, "suggestion without `code = \"...\"`")
}
}
}
_ => {
throw_invalid_attr!(attr)
}
};
let mut code = None;
let mut suggestion_kind = None;
let mut message = None;
list.parse_args_with(|input: ParseStream<'_>| {
let mut is_first = true;
while !input.is_empty() {
// Try to parse an inline diagnostic message
if input.peek(LitStr) {
let inline_message = input.parse::<LitStr>()?;
if !inline_message.suffix().is_empty() {
span_err(
inline_message.span().unwrap(),
"Inline message is not allowed to have a suffix",
).emit();
}
if !input.is_empty() { input.parse::<Token![,]>()?; }
if is_first {
message = Some(Message { attr_span: attr.span(), message_span: inline_message.span(), value: inline_message.value() });
is_first = false;
} else {
span_err(inline_message.span().unwrap(), "a diagnostic message must be the first argument to the attribute").emit();
}
continue
}
is_first = false;
// Try to parse an argument
let arg_name: Path = input.parse::<Path>()?;
let arg_name_span = arg_name.span().unwrap();
match (arg_name.require_ident()?.to_string().as_str(), &mut kind) {
("code", SubdiagnosticKind::Suggestion { code_field, .. }) => {
let code_init = build_suggestion_code(
&code_field,
&input,
fields,
AllowMultipleAlternatives::Yes,
)?;
code.set_once(code_init, arg_name_span);
}
(
"applicability",
SubdiagnosticKind::Suggestion { applicability, .. }
| SubdiagnosticKind::MultipartSuggestion { applicability, .. },
) => {
input.parse::<Token![=]>()?;
let value = input.parse::<LitStr>()?;
let value = Applicability::from_str(&value.value()).unwrap_or_else(|()| {
span_err(value.span().unwrap(), "invalid applicability").emit();
Applicability::Unspecified
});
applicability.set_once(value, span);
}
(
"style",
SubdiagnosticKind::Suggestion { .. }
| SubdiagnosticKind::MultipartSuggestion { .. },
) => {
input.parse::<Token![=]>()?;
let value = input.parse::<LitStr>()?;
let value = value.value().parse().unwrap_or_else(|()| {
span_err(value.span().unwrap(), "invalid suggestion style")
.help("valid styles are `normal`, `short`, `hidden`, `verbose` and `tool-only`")
.emit();
SuggestionKind::Normal
});
suggestion_kind.set_once(value, span);
}
// Invalid nested attribute
(_, SubdiagnosticKind::Suggestion { .. }) => {
span_err(arg_name_span, "invalid nested attribute")
.help(
"only `style`, `code` and `applicability` are valid nested attributes",
)
.emit();
// Consume the rest of the input to avoid spamming errors
let _ = input.parse::<TokenStream>();
}
(_, SubdiagnosticKind::MultipartSuggestion { .. }) => {
span_err(arg_name_span, "invalid nested attribute")
.help("only `style` and `applicability` are valid nested attributes")
.emit();
// Consume the rest of the input to avoid spamming errors
let _ = input.parse::<TokenStream>();
}
_ => {
span_err(arg_name_span, "no nested attribute expected here").emit();
// Consume the rest of the input to avoid spamming errors
let _ = input.parse::<TokenStream>();
}
}
if input.is_empty() { break }
input.parse::<Token![,]>()?;
}
Ok(())
})?;
match kind {
SubdiagnosticKind::Suggestion {
ref code_field,
ref mut code_init,
suggestion_kind: ref mut kind_field,
..
} => {
if let Some(kind) = suggestion_kind.value() {
*kind_field = kind;
}
*code_init = if let Some(init) = code.value() {
init
} else {
span_err(span, "suggestion without `code = \"...\"`").emit();
quote! { let #code_field = std::iter::empty(); }
};
}
SubdiagnosticKind::MultipartSuggestion {
suggestion_kind: ref mut kind_field, ..
} => {
if let Some(kind) = suggestion_kind.value() {
*kind_field = kind;
}
}
SubdiagnosticKind::Label
| SubdiagnosticKind::Note
| SubdiagnosticKind::NoteOnce
| SubdiagnosticKind::Help
| SubdiagnosticKind::HelpOnce
| SubdiagnosticKind::Warn => {}
}
Ok(Some(SubdiagnosticVariant { kind, message }))
}
}
impl quote::IdentFragment for SubdiagnosticKind {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
SubdiagnosticKind::Label => write!(f, "label"),
SubdiagnosticKind::Note => write!(f, "note"),
SubdiagnosticKind::NoteOnce => write!(f, "note_once"),
SubdiagnosticKind::Help => write!(f, "help"),
SubdiagnosticKind::HelpOnce => write!(f, "help_once"),
SubdiagnosticKind::Warn => write!(f, "warn"),
SubdiagnosticKind::Suggestion { .. } => write!(f, "suggestions_with_style"),
SubdiagnosticKind::MultipartSuggestion { .. } => {
write!(f, "multipart_suggestion_with_style")
}
}
}
fn span(&self) -> Option<proc_macro2::Span> {
None
}
}
/// Returns `true` if `field` should generate an `arg` call rather than any other diagnostic
/// call (like `span_label`).
pub(super) fn should_generate_arg(field: &Field) -> bool {
// Perhaps this should be an exhaustive list...
field.attrs.iter().all(|attr| is_doc_comment(attr))
}
pub(super) fn is_doc_comment(attr: &Attribute) -> bool {
attr.path().segments.last().unwrap().ident == "doc"
} | rust | github | https://github.com/rust-lang/rust | compiler/rustc_macros/src/diagnostics/utils.rs |
# -*- coding: utf-8 -*-
#
import sys, os
import codecs
import json
import random
from argparse import ArgumentParser
sys.path.insert(1, os.path.join(sys.path[0], os.path.pardir))
from json_utils import load_json_file, load_json_stream
def main():
parser = ArgumentParser()
parser.add_argument("-s", "--seed", dest="seed", metavar="INT", type=int, default=None,
help="random seed")
parser.add_argument("--cv", dest="cv", metavar="INT", type=int, default=5,
help="N-fold cross-validation")
parser.add_argument("_in", metavar="INPUT", help="input")
parser.add_argument("_out", metavar="OUTPUT", help="output")
args = parser.parse_args()
sys.stderr.write("%d-fold cross validation\n" % args.cv)
if args.seed is not None:
random.seed(args.seed)
langs = []
cvns = []
for i, lang in enumerate(load_json_stream(open(args._in))):
langs.append(lang)
cvns.append(i % args.cv)
random.shuffle(cvns)
    with codecs.getwriter("utf-8")(open(args._out, 'wb')) as f:  # 'wb': the UTF-8 StreamWriter needs a byte stream (also works on Python 3)
for lang, cvn in zip(langs, cvns):
lang["cvn"] = cvn
f.write("%s\n" % json.dumps(lang))
if __name__ == "__main__":
main() | unknown | codeparrot/codeparrot-clean | ||
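# Example invocation (illustrative; the script's real filename is not shown):
#   python split_cv.py --seed 42 --cv 5 langs.jsonl langs_cv.jsonl
# Each output record gains a "cvn" field in [0, cv): fold numbers are
# assigned round-robin and then shuffled, so folds stay balanced while the
# assignment itself is random.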
import re
from django.conf import settings
from django.contrib.auth.models import User
from django.contrib.comments import signals
from django.contrib.comments.models import Comment
from regressiontests.comment_tests.models import Article, Book
from regressiontests.comment_tests.tests import CommentTestCase
post_redirect_re = re.compile(r'^http://testserver/posted/\?c=(?P<pk>\d+$)')
class CommentViewTests(CommentTestCase):
def testPostCommentHTTPMethods(self):
a = Article.objects.get(pk=1)
data = self.getValidData(a)
response = self.client.get("/post/", data)
self.assertEqual(response.status_code, 405)
self.assertEqual(response["Allow"], "POST")
def testPostCommentMissingCtype(self):
a = Article.objects.get(pk=1)
data = self.getValidData(a)
del data["content_type"]
response = self.client.post("/post/", data)
self.assertEqual(response.status_code, 400)
def testPostCommentBadCtype(self):
a = Article.objects.get(pk=1)
data = self.getValidData(a)
data["content_type"] = "Nobody expects the Spanish Inquisition!"
response = self.client.post("/post/", data)
self.assertEqual(response.status_code, 400)
def testPostCommentMissingObjectPK(self):
a = Article.objects.get(pk=1)
data = self.getValidData(a)
del data["object_pk"]
response = self.client.post("/post/", data)
self.assertEqual(response.status_code, 400)
def testPostCommentBadObjectPK(self):
a = Article.objects.get(pk=1)
data = self.getValidData(a)
data["object_pk"] = "14"
response = self.client.post("/post/", data)
self.assertEqual(response.status_code, 400)
def testPostInvalidIntegerPK(self):
a = Article.objects.get(pk=1)
data = self.getValidData(a)
data["comment"] = "This is another comment"
data["object_pk"] = u'\ufffd'
response = self.client.post("/post/", data)
self.assertEqual(response.status_code, 400)
def testPostInvalidDecimalPK(self):
b = Book.objects.get(pk='12.34')
data = self.getValidData(b)
data["comment"] = "This is another comment"
data["object_pk"] = 'cookies'
response = self.client.post("/post/", data)
self.assertEqual(response.status_code, 400)
def testCommentPreview(self):
a = Article.objects.get(pk=1)
data = self.getValidData(a)
data["preview"] = "Preview"
response = self.client.post("/post/", data)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, "comments/preview.html")
def testHashTampering(self):
a = Article.objects.get(pk=1)
data = self.getValidData(a)
data["security_hash"] = "Nobody expects the Spanish Inquisition!"
response = self.client.post("/post/", data)
self.assertEqual(response.status_code, 400)
def testDebugCommentErrors(self):
"""The debug error template should be shown only if DEBUG is True"""
olddebug = settings.DEBUG
settings.DEBUG = True
a = Article.objects.get(pk=1)
data = self.getValidData(a)
data["security_hash"] = "Nobody expects the Spanish Inquisition!"
response = self.client.post("/post/", data)
self.assertEqual(response.status_code, 400)
self.assertTemplateUsed(response, "comments/400-debug.html")
settings.DEBUG = False
response = self.client.post("/post/", data)
self.assertEqual(response.status_code, 400)
self.assertTemplateNotUsed(response, "comments/400-debug.html")
settings.DEBUG = olddebug
def testCreateValidComment(self):
a = Article.objects.get(pk=1)
data = self.getValidData(a)
self.response = self.client.post("/post/", data, REMOTE_ADDR="1.2.3.4")
self.assertEqual(self.response.status_code, 302)
self.assertEqual(Comment.objects.count(), 1)
c = Comment.objects.all()[0]
self.assertEqual(c.ip_address, "1.2.3.4")
self.assertEqual(c.comment, "This is my comment")
def testPostAsAuthenticatedUser(self):
a = Article.objects.get(pk=1)
data = self.getValidData(a)
data['name'] = data['email'] = ''
self.client.login(username="normaluser", password="normaluser")
self.response = self.client.post("/post/", data, REMOTE_ADDR="1.2.3.4")
self.assertEqual(self.response.status_code, 302)
self.assertEqual(Comment.objects.count(), 1)
c = Comment.objects.all()[0]
self.assertEqual(c.ip_address, "1.2.3.4")
u = User.objects.get(username='normaluser')
self.assertEqual(c.user, u)
self.assertEqual(c.user_name, u.get_full_name())
self.assertEqual(c.user_email, u.email)
def testPostAsAuthenticatedUserWithoutFullname(self):
"""
Check that the user's name in the comment is populated for
authenticated users without first_name and last_name.
"""
user = User.objects.create_user(username='jane_other',
email='jane@example.com', password='jane_other')
a = Article.objects.get(pk=1)
data = self.getValidData(a)
data['name'] = data['email'] = ''
self.client.login(username="jane_other", password="jane_other")
self.response = self.client.post("/post/", data, REMOTE_ADDR="1.2.3.4")
c = Comment.objects.get(user=user)
self.assertEqual(c.ip_address, "1.2.3.4")
self.assertEqual(c.user_name, 'jane_other')
user.delete()
def testPreventDuplicateComments(self):
"""Prevent posting the exact same comment twice"""
a = Article.objects.get(pk=1)
data = self.getValidData(a)
self.client.post("/post/", data)
self.client.post("/post/", data)
self.assertEqual(Comment.objects.count(), 1)
# This should not trigger the duplicate prevention
self.client.post("/post/", dict(data, comment="My second comment."))
self.assertEqual(Comment.objects.count(), 2)
def testCommentSignals(self):
"""Test signals emitted by the comment posting view"""
# callback
def receive(sender, **kwargs):
self.assertEqual(kwargs['comment'].comment, "This is my comment")
self.assertTrue('request' in kwargs)
received_signals.append(kwargs.get('signal'))
# Connect signals and keep track of handled ones
received_signals = []
expected_signals = [
signals.comment_will_be_posted, signals.comment_was_posted
]
for signal in expected_signals:
signal.connect(receive)
# Post a comment and check the signals
self.testCreateValidComment()
self.assertEqual(received_signals, expected_signals)
for signal in expected_signals:
signal.disconnect(receive)
def testWillBePostedSignal(self):
"""
Test that the comment_will_be_posted signal can prevent the comment from
actually getting saved
"""
def receive(sender, **kwargs): return False
signals.comment_will_be_posted.connect(receive, dispatch_uid="comment-test")
a = Article.objects.get(pk=1)
data = self.getValidData(a)
response = self.client.post("/post/", data)
self.assertEqual(response.status_code, 400)
self.assertEqual(Comment.objects.count(), 0)
signals.comment_will_be_posted.disconnect(dispatch_uid="comment-test")
def testWillBePostedSignalModifyComment(self):
"""
Test that the comment_will_be_posted signal can modify a comment before
it gets posted
"""
def receive(sender, **kwargs):
# a bad but effective spam filter :)...
kwargs['comment'].is_public = False
signals.comment_will_be_posted.connect(receive)
self.testCreateValidComment()
c = Comment.objects.all()[0]
self.assertFalse(c.is_public)
def testCommentNext(self):
"""Test the different "next" actions the comment view can take"""
a = Article.objects.get(pk=1)
data = self.getValidData(a)
response = self.client.post("/post/", data)
location = response["Location"]
match = post_redirect_re.match(location)
        self.assertTrue(match is not None, "Unexpected redirect location: %s" % location)
data["next"] = "/somewhere/else/"
data["comment"] = "This is another comment"
response = self.client.post("/post/", data)
location = response["Location"]
match = re.search(r"^http://testserver/somewhere/else/\?c=\d+$", location)
        self.assertTrue(match is not None, "Unexpected redirect location: %s" % location)
def testCommentDoneView(self):
a = Article.objects.get(pk=1)
data = self.getValidData(a)
response = self.client.post("/post/", data)
location = response["Location"]
match = post_redirect_re.match(location)
        self.assertTrue(match is not None, "Unexpected redirect location: %s" % location)
pk = int(match.group('pk'))
response = self.client.get(location)
self.assertTemplateUsed(response, "comments/posted.html")
self.assertEqual(response.context[0]["comment"], Comment.objects.get(pk=pk))
def testCommentNextWithQueryString(self):
"""
The `next` key needs to handle already having a query string (#10585)
"""
a = Article.objects.get(pk=1)
data = self.getValidData(a)
data["next"] = "/somewhere/else/?foo=bar"
data["comment"] = "This is another comment"
response = self.client.post("/post/", data)
location = response["Location"]
match = re.search(r"^http://testserver/somewhere/else/\?foo=bar&c=\d+$", location)
        self.assertTrue(match is not None, "Unexpected redirect location: %s" % location)
def testCommentPostRedirectWithInvalidIntegerPK(self):
"""
Tests that attempting to retrieve the location specified in the
post redirect, after adding some invalid data to the expected
querystring it ends with, doesn't cause a server error.
"""
a = Article.objects.get(pk=1)
data = self.getValidData(a)
data["comment"] = "This is another comment"
response = self.client.post("/post/", data)
location = response["Location"]
broken_location = location + u"\ufffd"
response = self.client.get(broken_location)
self.assertEqual(response.status_code, 200)
def testCommentNextWithQueryStringAndAnchor(self):
"""
The `next` key needs to handle already having an anchor. Refs #13411.
"""
# With a query string also.
a = Article.objects.get(pk=1)
data = self.getValidData(a)
data["next"] = "/somewhere/else/?foo=bar#baz"
data["comment"] = "This is another comment"
response = self.client.post("/post/", data)
location = response["Location"]
match = re.search(r"^http://testserver/somewhere/else/\?foo=bar&c=\d+#baz$", location)
        self.assertTrue(match is not None, "Unexpected redirect location: %s" % location)
# Without a query string
a = Article.objects.get(pk=1)
data = self.getValidData(a)
data["next"] = "/somewhere/else/#baz"
data["comment"] = "This is another comment"
response = self.client.post("/post/", data)
location = response["Location"]
match = re.search(r"^http://testserver/somewhere/else/\?c=\d+#baz$", location)
        self.assertTrue(match is not None, "Unexpected redirect location: %s" % location)
from ..util import jython, pypy, defaultdict, decorator
from ..util.compat import decimal
import gc
import time
import random
import sys
import types
if jython:
def jython_gc_collect(*args):
"""aggressive gc.collect for tests."""
gc.collect()
time.sleep(0.1)
gc.collect()
gc.collect()
return 0
# "lazy" gc, for VM's that don't GC on refcount == 0
gc_collect = lazy_gc = jython_gc_collect
elif pypy:
def pypy_gc_collect(*args):
gc.collect()
gc.collect()
gc_collect = lazy_gc = pypy_gc_collect
else:
# assume CPython - straight gc.collect, lazy_gc() is a pass
gc_collect = gc.collect
def lazy_gc():
pass
def picklers():
picklers = set()
# Py2K
try:
import cPickle
picklers.add(cPickle)
except ImportError:
pass
# end Py2K
import pickle
picklers.add(pickle)
# yes, this thing needs this much testing
for pickle_ in picklers:
for protocol in -1, 0, 1, 2:
yield pickle_.loads, lambda d: pickle_.dumps(d, protocol)
def round_decimal(value, prec):
if isinstance(value, float):
return round(value, prec)
# can also use shift() here but that is 2.6 only
return (value * decimal.Decimal("1" + "0" * prec)
).to_integral(decimal.ROUND_FLOOR) / \
pow(10, prec)
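# Illustrative note (not from the original file): round_decimal floors
# Decimal inputs at the requested precision, while floats go through the
# built-in round().  For example:
#   round_decimal(decimal.Decimal("1.2345"), 2)  # -> Decimal("1.23")
#   round_decimal(1.5, 0)                        # -> 2.0, via round()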
class RandomSet(set):
def __iter__(self):
l = list(set.__iter__(self))
random.shuffle(l)
return iter(l)
def pop(self):
index = random.randint(0, len(self) - 1)
item = list(set.__iter__(self))[index]
self.remove(item)
return item
def union(self, other):
return RandomSet(set.union(self, other))
def difference(self, other):
return RandomSet(set.difference(self, other))
def intersection(self, other):
return RandomSet(set.intersection(self, other))
def copy(self):
return RandomSet(self)
def conforms_partial_ordering(tuples, sorted_elements):
"""True if the given sorting conforms to the given partial ordering."""
deps = defaultdict(set)
for parent, child in tuples:
deps[parent].add(child)
for i, node in enumerate(sorted_elements):
for n in sorted_elements[i:]:
if node in deps[n]:
return False
else:
return True
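# Worked example (illustrative, not part of the original suite): with the
# dependency tuple (parent, child) == (1, 2), a conforming sort must place
# 1 before 2:
#   conforms_partial_ordering([(1, 2)], [1, 2, 3])  # -> True
#   conforms_partial_ordering([(1, 2)], [2, 1, 3])  # -> False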
def all_partial_orderings(tuples, elements):
edges = defaultdict(set)
for parent, child in tuples:
edges[child].add(parent)
def _all_orderings(elements):
if len(elements) == 1:
yield list(elements)
else:
for elem in elements:
subset = set(elements).difference([elem])
if not subset.intersection(edges[elem]):
for sub_ordering in _all_orderings(subset):
yield [elem] + sub_ordering
return iter(_all_orderings(elements))
def function_named(fn, name):
"""Return a function with a given __name__.
Will assign to __name__ and return the original function if possible on
the Python implementation, otherwise a new function will be constructed.
This function should be phased out as much as possible
in favor of @decorator. Tests that "generate" many named tests
should be modernized.
"""
try:
fn.__name__ = name
except TypeError:
fn = types.FunctionType(fn.func_code, fn.func_globals, name,
fn.func_defaults, fn.func_closure)
return fn
def run_as_contextmanager(ctx, fn, *arg, **kw):
"""Run the given function under the given contextmanager,
simulating the behavior of 'with' to support older
Python versions.
"""
obj = ctx.__enter__()
try:
result = fn(obj, *arg, **kw)
ctx.__exit__(None, None, None)
return result
except:
exc_info = sys.exc_info()
raise_ = ctx.__exit__(*exc_info)
if raise_ is None:
raise
else:
return raise_
def rowset(results):
"""Converts the results of sql execution into a plain set of column tuples.
Useful for asserting the results of an unordered query.
"""
return set([tuple(row) for row in results])
def fail(msg):
assert False, msg
@decorator
def provide_metadata(fn, *args, **kw):
"""Provide bound MetaData for a single test, dropping afterwards."""
from . import config
from sqlalchemy import schema
metadata = schema.MetaData(config.db)
self = args[0]
prev_meta = getattr(self, 'metadata', None)
self.metadata = metadata
try:
return fn(*args, **kw)
finally:
metadata.drop_all()
self.metadata = prev_meta
class adict(dict):
"""Dict keys available as attributes. Shadows."""
def __getattribute__(self, key):
try:
return self[key]
except KeyError:
return dict.__getattribute__(self, key)
def get_all(self, *keys):
return tuple([self[key] for key in keys]) | unknown | codeparrot/codeparrot-clean | ||
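# Usage sketch (illustrative): adict lets tests spell dict lookups as
# attribute access while still behaving like a plain dict.
#   d = adict(user="alice", id=1)
#   d.user                    # -> 'alice'
#   d.get_all('user', 'id')   # -> ('alice', 1)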
# Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
"""Abstract Base Classes (ABCs) for collections, according to PEP 3119.
DON'T USE THIS MODULE DIRECTLY! The classes here should be imported
via collections; they are defined here only to alleviate certain
bootstrapping issues. Unit tests are in test_collections.
"""
#from abc import ABCMeta, abstractmethod
import sys
__all__ = ["Hashable", "Iterable", "Iterator",
"Sized", "Container", "Callable",
"Set", "MutableSet",
"Mapping", "MutableMapping",
"MappingView", "KeysView", "ItemsView", "ValuesView",
"Sequence", "MutableSequence",
"ByteString",
]
"""
### collection related types which are not exposed through builtin ###
## iterators ##
#fixme brython
#bytes_iterator = type(iter(b''))
bytes_iterator = type(iter(''))
#fixme brython
#bytearray_iterator = type(iter(bytearray()))
#callable_iterator = ???
dict_keyiterator = type(iter({}.keys()))
dict_valueiterator = type(iter({}.values()))
dict_itemiterator = type(iter({}.items()))
list_iterator = type(iter([]))
list_reverseiterator = type(iter(reversed([])))
range_iterator = type(iter(range(0)))
set_iterator = type(iter(set()))
str_iterator = type(iter(""))
tuple_iterator = type(iter(()))
zip_iterator = type(iter(zip()))
## views ##
dict_keys = type({}.keys())
dict_values = type({}.values())
dict_items = type({}.items())
## misc ##
dict_proxy = type(type.__dict__)
"""
def abstractmethod(self):
return self
### ONE-TRICK PONIES ###
#class Iterable(metaclass=ABCMeta):
class Iterable:
@abstractmethod
def __iter__(self):
while False:
yield None
@classmethod
def __subclasshook__(cls, C):
if cls is Iterable:
if any("__iter__" in B.__dict__ for B in C.__mro__):
return True
return NotImplemented
#class Sized(metaclass=ABCMeta):
class Sized:
@abstractmethod
def __len__(self):
return 0
@classmethod
def __subclasshook__(cls, C):
if cls is Sized:
if any("__len__" in B.__dict__ for B in C.__mro__):
return True
return NotImplemented
#class Container(metaclass=ABCMeta):
class Container:
@abstractmethod
def __contains__(self, x):
return False
@classmethod
def __subclasshook__(cls, C):
if cls is Container:
if any("__contains__" in B.__dict__ for B in C.__mro__):
return True
return NotImplemented
### MAPPINGS ###
class Mapping(Sized, Iterable, Container):
@abstractmethod
def __getitem__(self, key):
raise KeyError
def get(self, key, default=None):
try:
return self[key]
except KeyError:
return default
def __contains__(self, key):
try:
self[key]
except KeyError:
return False
else:
return True
def keys(self):
return KeysView(self)
def items(self):
return ItemsView(self)
def values(self):
return ValuesView(self)
def __eq__(self, other):
if not isinstance(other, Mapping):
return NotImplemented
return dict(self.items()) == dict(other.items())
def __ne__(self, other):
return not (self == other)
class MutableMapping(Mapping):
@abstractmethod
def __setitem__(self, key, value):
raise KeyError
@abstractmethod
def __delitem__(self, key):
raise KeyError
__marker = object()
def pop(self, key, default=__marker):
try:
value = self[key]
except KeyError:
if default is self.__marker:
raise
return default
else:
del self[key]
return value
def popitem(self):
try:
key = next(iter(self))
except StopIteration:
raise KeyError
value = self[key]
del self[key]
return key, value
def clear(self):
try:
while True:
self.popitem()
except KeyError:
pass
def update(*args, **kwds):
if len(args) > 2:
raise TypeError("update() takes at most 2 positional "
"arguments ({} given)".format(len(args)))
elif not args:
raise TypeError("update() takes at least 1 argument (0 given)")
self = args[0]
other = args[1] if len(args) >= 2 else ()
if isinstance(other, Mapping):
for key in other:
self[key] = other[key]
elif hasattr(other, "keys"):
for key in other.keys():
self[key] = other[key]
else:
for key, value in other:
self[key] = value
for key, value in kwds.items():
self[key] = value
def setdefault(self, key, default=None):
try:
return self[key]
except KeyError:
self[key] = default
return default
#MutableMapping.register(dict) | unknown | codeparrot/codeparrot-clean | ||
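# Minimal usage sketch (illustrative): a concrete MutableMapping only has to
# supply the five abstract methods; the mixins above derive the rest
# (get, pop, popitem, setdefault, update, clear, ...).
#
#   class DictBacked(MutableMapping):
#       def __init__(self):
#           self._data = {}
#       def __getitem__(self, key):
#           return self._data[key]
#       def __setitem__(self, key, value):
#           self._data[key] = value
#       def __delitem__(self, key):
#           del self._data[key]
#       def __iter__(self):
#           return iter(self._data)
#       def __len__(self):
#           return len(self._data)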
"""The tests for the Yandex SpeechKit speech platform."""
import asyncio
import os
import shutil
import homeassistant.components.tts as tts
from homeassistant.setup import setup_component
from homeassistant.components.media_player import (
SERVICE_PLAY_MEDIA, DOMAIN as DOMAIN_MP)
from tests.common import (
get_test_home_assistant, assert_setup_component, mock_service)
from .test_init import mutagen_mock # noqa
class TestTTSYandexPlatform:
"""Test the speech component."""
def setup_method(self):
"""Set up things to be run when tests are started."""
self.hass = get_test_home_assistant()
self._base_url = "https://tts.voicetech.yandex.net/generate?"
def teardown_method(self):
"""Stop everything that was started."""
default_tts = self.hass.config.path(tts.DEFAULT_CACHE_DIR)
if os.path.isdir(default_tts):
shutil.rmtree(default_tts)
self.hass.stop()
def test_setup_component(self):
"""Test setup component."""
config = {
tts.DOMAIN: {
'platform': 'yandextts',
'api_key': '1234567xx'
}
}
with assert_setup_component(1, tts.DOMAIN):
setup_component(self.hass, tts.DOMAIN, config)
def test_setup_component_without_api_key(self):
"""Test setup component without api key."""
config = {
tts.DOMAIN: {
'platform': 'yandextts',
}
}
with assert_setup_component(0, tts.DOMAIN):
setup_component(self.hass, tts.DOMAIN, config)
def test_service_say(self, aioclient_mock):
"""Test service call say."""
calls = mock_service(self.hass, DOMAIN_MP, SERVICE_PLAY_MEDIA)
url_param = {
'text': 'HomeAssistant',
'lang': 'en-US',
'key': '1234567xx',
'speaker': 'zahar',
'format': 'mp3',
'emotion': 'neutral',
'speed': 1
}
aioclient_mock.get(
self._base_url, status=200, content=b'test', params=url_param)
config = {
tts.DOMAIN: {
'platform': 'yandextts',
'api_key': '1234567xx'
}
}
with assert_setup_component(1, tts.DOMAIN):
setup_component(self.hass, tts.DOMAIN, config)
self.hass.services.call(tts.DOMAIN, 'yandextts_say', {
tts.ATTR_MESSAGE: "HomeAssistant",
})
self.hass.block_till_done()
assert len(aioclient_mock.mock_calls) == 1
assert len(calls) == 1
def test_service_say_russian_config(self, aioclient_mock):
"""Test service call say."""
calls = mock_service(self.hass, DOMAIN_MP, SERVICE_PLAY_MEDIA)
url_param = {
'text': 'HomeAssistant',
'lang': 'ru-RU',
'key': '1234567xx',
'speaker': 'zahar',
'format': 'mp3',
'emotion': 'neutral',
'speed': 1
}
aioclient_mock.get(
self._base_url, status=200, content=b'test', params=url_param)
config = {
tts.DOMAIN: {
'platform': 'yandextts',
'api_key': '1234567xx',
'language': 'ru-RU',
}
}
with assert_setup_component(1, tts.DOMAIN):
setup_component(self.hass, tts.DOMAIN, config)
self.hass.services.call(tts.DOMAIN, 'yandextts_say', {
tts.ATTR_MESSAGE: "HomeAssistant",
})
self.hass.block_till_done()
assert len(aioclient_mock.mock_calls) == 1
assert len(calls) == 1
def test_service_say_russian_service(self, aioclient_mock):
"""Test service call say."""
calls = mock_service(self.hass, DOMAIN_MP, SERVICE_PLAY_MEDIA)
url_param = {
'text': 'HomeAssistant',
'lang': 'ru-RU',
'key': '1234567xx',
'speaker': 'zahar',
'format': 'mp3',
'emotion': 'neutral',
'speed': 1
}
aioclient_mock.get(
self._base_url, status=200, content=b'test', params=url_param)
config = {
tts.DOMAIN: {
'platform': 'yandextts',
'api_key': '1234567xx',
}
}
with assert_setup_component(1, tts.DOMAIN):
setup_component(self.hass, tts.DOMAIN, config)
self.hass.services.call(tts.DOMAIN, 'yandextts_say', {
tts.ATTR_MESSAGE: "HomeAssistant",
tts.ATTR_LANGUAGE: "ru-RU"
})
self.hass.block_till_done()
assert len(aioclient_mock.mock_calls) == 1
assert len(calls) == 1
def test_service_say_timeout(self, aioclient_mock):
"""Test service call say."""
calls = mock_service(self.hass, DOMAIN_MP, SERVICE_PLAY_MEDIA)
url_param = {
'text': 'HomeAssistant',
'lang': 'en-US',
'key': '1234567xx',
'speaker': 'zahar',
'format': 'mp3',
'emotion': 'neutral',
'speed': 1
}
aioclient_mock.get(
self._base_url, status=200,
exc=asyncio.TimeoutError(), params=url_param)
config = {
tts.DOMAIN: {
'platform': 'yandextts',
'api_key': '1234567xx'
}
}
with assert_setup_component(1, tts.DOMAIN):
setup_component(self.hass, tts.DOMAIN, config)
self.hass.services.call(tts.DOMAIN, 'yandextts_say', {
tts.ATTR_MESSAGE: "HomeAssistant",
})
self.hass.block_till_done()
assert len(calls) == 0
assert len(aioclient_mock.mock_calls) == 1
def test_service_say_http_error(self, aioclient_mock):
"""Test service call say."""
calls = mock_service(self.hass, DOMAIN_MP, SERVICE_PLAY_MEDIA)
url_param = {
'text': 'HomeAssistant',
'lang': 'en-US',
'key': '1234567xx',
'speaker': 'zahar',
'format': 'mp3',
'emotion': 'neutral',
'speed': 1
}
aioclient_mock.get(
self._base_url, status=403, content=b'test', params=url_param)
config = {
tts.DOMAIN: {
'platform': 'yandextts',
'api_key': '1234567xx'
}
}
with assert_setup_component(1, tts.DOMAIN):
setup_component(self.hass, tts.DOMAIN, config)
self.hass.services.call(tts.DOMAIN, 'yandextts_say', {
tts.ATTR_MESSAGE: "HomeAssistant",
})
self.hass.block_till_done()
assert len(calls) == 0
def test_service_say_specified_speaker(self, aioclient_mock):
"""Test service call say."""
calls = mock_service(self.hass, DOMAIN_MP, SERVICE_PLAY_MEDIA)
url_param = {
'text': 'HomeAssistant',
'lang': 'en-US',
'key': '1234567xx',
'speaker': 'alyss',
'format': 'mp3',
'emotion': 'neutral',
'speed': 1
}
aioclient_mock.get(
self._base_url, status=200, content=b'test', params=url_param)
config = {
tts.DOMAIN: {
'platform': 'yandextts',
'api_key': '1234567xx',
'voice': 'alyss'
}
}
with assert_setup_component(1, tts.DOMAIN):
setup_component(self.hass, tts.DOMAIN, config)
self.hass.services.call(tts.DOMAIN, 'yandextts_say', {
tts.ATTR_MESSAGE: "HomeAssistant",
})
self.hass.block_till_done()
assert len(aioclient_mock.mock_calls) == 1
assert len(calls) == 1
def test_service_say_specified_emotion(self, aioclient_mock):
"""Test service call say."""
calls = mock_service(self.hass, DOMAIN_MP, SERVICE_PLAY_MEDIA)
url_param = {
'text': 'HomeAssistant',
'lang': 'en-US',
'key': '1234567xx',
'speaker': 'zahar',
'format': 'mp3',
'emotion': 'evil',
'speed': 1
}
aioclient_mock.get(
self._base_url, status=200, content=b'test', params=url_param)
config = {
tts.DOMAIN: {
'platform': 'yandextts',
'api_key': '1234567xx',
'emotion': 'evil'
}
}
with assert_setup_component(1, tts.DOMAIN):
setup_component(self.hass, tts.DOMAIN, config)
self.hass.services.call(tts.DOMAIN, 'yandextts_say', {
tts.ATTR_MESSAGE: "HomeAssistant",
})
self.hass.block_till_done()
assert len(aioclient_mock.mock_calls) == 1
assert len(calls) == 1
def test_service_say_specified_low_speed(self, aioclient_mock):
"""Test service call say."""
calls = mock_service(self.hass, DOMAIN_MP, SERVICE_PLAY_MEDIA)
url_param = {
'text': 'HomeAssistant',
'lang': 'en-US',
'key': '1234567xx',
'speaker': 'zahar',
'format': 'mp3',
'emotion': 'neutral',
'speed': '0.1'
}
aioclient_mock.get(
self._base_url, status=200, content=b'test', params=url_param)
config = {
tts.DOMAIN: {
'platform': 'yandextts',
'api_key': '1234567xx',
'speed': 0.1
}
}
with assert_setup_component(1, tts.DOMAIN):
setup_component(self.hass, tts.DOMAIN, config)
self.hass.services.call(tts.DOMAIN, 'yandextts_say', {
tts.ATTR_MESSAGE: "HomeAssistant",
})
self.hass.block_till_done()
assert len(aioclient_mock.mock_calls) == 1
assert len(calls) == 1
def test_service_say_specified_speed(self, aioclient_mock):
"""Test service call say."""
calls = mock_service(self.hass, DOMAIN_MP, SERVICE_PLAY_MEDIA)
url_param = {
'text': 'HomeAssistant',
'lang': 'en-US',
'key': '1234567xx',
'speaker': 'zahar',
'format': 'mp3',
'emotion': 'neutral',
'speed': 2
}
aioclient_mock.get(
self._base_url, status=200, content=b'test', params=url_param)
config = {
tts.DOMAIN: {
'platform': 'yandextts',
'api_key': '1234567xx',
'speed': 2
}
}
with assert_setup_component(1, tts.DOMAIN):
setup_component(self.hass, tts.DOMAIN, config)
self.hass.services.call(tts.DOMAIN, 'yandextts_say', {
tts.ATTR_MESSAGE: "HomeAssistant",
})
self.hass.block_till_done()
assert len(aioclient_mock.mock_calls) == 1
assert len(calls) == 1
def test_service_say_specified_options(self, aioclient_mock):
"""Test service call say with options."""
calls = mock_service(self.hass, DOMAIN_MP, SERVICE_PLAY_MEDIA)
url_param = {
'text': 'HomeAssistant',
'lang': 'en-US',
'key': '1234567xx',
'speaker': 'zahar',
'format': 'mp3',
'emotion': 'evil',
'speed': 2
}
aioclient_mock.get(
self._base_url, status=200, content=b'test', params=url_param)
config = {
tts.DOMAIN: {
'platform': 'yandextts',
'api_key': '1234567xx',
}
}
with assert_setup_component(1, tts.DOMAIN):
setup_component(self.hass, tts.DOMAIN, config)
self.hass.services.call(tts.DOMAIN, 'yandextts_say', {
tts.ATTR_MESSAGE: "HomeAssistant",
'options': {
'emotion': 'evil',
'speed': 2,
}
})
self.hass.block_till_done()
assert len(aioclient_mock.mock_calls) == 1
assert len(calls) == 1 | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/env python
# flake8: noqa
import warnings
import operator
from itertools import product
from distutils.version import LooseVersion
import nose
from nose.tools import assert_raises
from numpy.random import randn, rand, randint
import numpy as np
from numpy.testing import assert_allclose
from numpy.testing.decorators import slow
import pandas as pd
from pandas.core import common as com
from pandas import DataFrame, Series, Panel, date_range
from pandas.util.testing import makeCustomDataframe as mkdf
from pandas.computation import pytables
from pandas.computation.engines import _engines, NumExprClobberingError
from pandas.computation.expr import PythonExprVisitor, PandasExprVisitor
from pandas.computation.ops import (_binary_ops_dict,
_special_case_arith_ops_syms,
_arith_ops_syms, _bool_ops_syms,
_unary_math_ops, _binary_math_ops)
import pandas.computation.expr as expr
import pandas.util.testing as tm
import pandas.lib as lib
from pandas.util.testing import (assert_frame_equal, randbool,
assertRaisesRegexp, assert_numpy_array_equal,
assert_produces_warning, assert_series_equal)
from pandas.compat import PY3, u, reduce
_series_frame_incompatible = _bool_ops_syms
_scalar_skip = 'in', 'not in'
def engine_has_neg_frac(engine):
return _engines[engine].has_neg_frac
def _eval_single_bin(lhs, cmp1, rhs, engine):
c = _binary_ops_dict[cmp1]
if engine_has_neg_frac(engine):
try:
return c(lhs, rhs)
except ValueError as e:
try:
msg = e.message
except AttributeError:
msg = e
msg = u(msg)
if msg == u('negative number cannot be raised to a fractional'
' power'):
return np.nan
raise
return c(lhs, rhs)
def _series_and_2d_ndarray(lhs, rhs):
return ((isinstance(lhs, Series) and
isinstance(rhs, np.ndarray) and rhs.ndim > 1)
or (isinstance(rhs, Series) and
isinstance(lhs, np.ndarray) and lhs.ndim > 1))
def _series_and_frame(lhs, rhs):
return ((isinstance(lhs, Series) and isinstance(rhs, DataFrame))
or (isinstance(rhs, Series) and isinstance(lhs, DataFrame)))
def _bool_and_frame(lhs, rhs):
return isinstance(lhs, bool) and isinstance(rhs, pd.core.generic.NDFrame)
def _is_py3_complex_incompat(result, expected):
return (PY3 and isinstance(expected, (complex, np.complexfloating)) and
np.isnan(result))
_good_arith_ops = com.difference(_arith_ops_syms, _special_case_arith_ops_syms)
class TestEvalNumexprPandas(tm.TestCase):
@classmethod
def setUpClass(cls):
super(TestEvalNumexprPandas, cls).setUpClass()
tm.skip_if_no_ne()
import numexpr as ne
cls.ne = ne
cls.engine = 'numexpr'
cls.parser = 'pandas'
@classmethod
def tearDownClass(cls):
super(TestEvalNumexprPandas, cls).tearDownClass()
del cls.engine, cls.parser
if hasattr(cls, 'ne'):
del cls.ne
def setup_data(self):
nan_df1 = DataFrame(rand(10, 5))
nan_df1[nan_df1 > 0.5] = np.nan
nan_df2 = DataFrame(rand(10, 5))
nan_df2[nan_df2 > 0.5] = np.nan
self.pandas_lhses = (DataFrame(randn(10, 5)), Series(randn(5)),
Series([1, 2, np.nan, np.nan, 5]), nan_df1)
self.pandas_rhses = (DataFrame(randn(10, 5)), Series(randn(5)),
Series([1, 2, np.nan, np.nan, 5]), nan_df2)
        self.scalar_lhses = (randn(),)
        self.scalar_rhses = (randn(),)
self.lhses = self.pandas_lhses + self.scalar_lhses
self.rhses = self.pandas_rhses + self.scalar_rhses
def setup_ops(self):
self.cmp_ops = expr._cmp_ops_syms
self.cmp2_ops = self.cmp_ops[::-1]
self.bin_ops = expr._bool_ops_syms
self.special_case_ops = _special_case_arith_ops_syms
self.arith_ops = _good_arith_ops
self.unary_ops = '-', '~', 'not '
def setUp(self):
self.setup_ops()
self.setup_data()
        # materialize as a list: under Python 3, a bare filter object would
        # be exhausted after the first test that iterates over it
        self.current_engines = list(filter(lambda x: x != self.engine,
                                           _engines))
def tearDown(self):
del self.lhses, self.rhses, self.scalar_rhses, self.scalar_lhses
del self.pandas_rhses, self.pandas_lhses, self.current_engines
@slow
def test_complex_cmp_ops(self):
cmp_ops = ('!=', '==', '<=', '>=', '<', '>')
cmp2_ops = ('>', '<')
for lhs, cmp1, rhs, binop, cmp2 in product(self.lhses, cmp_ops,
self.rhses, self.bin_ops,
cmp2_ops):
self.check_complex_cmp_op(lhs, cmp1, rhs, binop, cmp2)
def test_simple_cmp_ops(self):
bool_lhses = (DataFrame(randbool(size=(10, 5))),
Series(randbool((5,))), randbool())
bool_rhses = (DataFrame(randbool(size=(10, 5))),
Series(randbool((5,))), randbool())
for lhs, rhs, cmp_op in product(bool_lhses, bool_rhses, self.cmp_ops):
self.check_simple_cmp_op(lhs, cmp_op, rhs)
@slow
def test_binary_arith_ops(self):
for lhs, op, rhs in product(self.lhses, self.arith_ops, self.rhses):
self.check_binary_arith_op(lhs, op, rhs)
def test_modulus(self):
for lhs, rhs in product(self.lhses, self.rhses):
self.check_modulus(lhs, '%', rhs)
def test_floor_division(self):
for lhs, rhs in product(self.lhses, self.rhses):
self.check_floor_division(lhs, '//', rhs)
def test_pow(self):
tm._skip_if_windows()
# odd failure on win32 platform, so skip
for lhs, rhs in product(self.lhses, self.rhses):
self.check_pow(lhs, '**', rhs)
@slow
def test_single_invert_op(self):
for lhs, op, rhs in product(self.lhses, self.cmp_ops, self.rhses):
self.check_single_invert_op(lhs, op, rhs)
@slow
def test_compound_invert_op(self):
for lhs, op, rhs in product(self.lhses, self.cmp_ops, self.rhses):
self.check_compound_invert_op(lhs, op, rhs)
@slow
def test_chained_cmp_op(self):
mids = self.lhses
cmp_ops = '<', '>'
for lhs, cmp1, mid, cmp2, rhs in product(self.lhses, cmp_ops,
mids, cmp_ops, self.rhses):
self.check_chained_cmp_op(lhs, cmp1, mid, cmp2, rhs)
def check_complex_cmp_op(self, lhs, cmp1, rhs, binop, cmp2):
skip_these = _scalar_skip
ex = '(lhs {cmp1} rhs) {binop} (lhs {cmp2} rhs)'.format(cmp1=cmp1,
binop=binop,
cmp2=cmp2)
scalar_with_in_notin = (lib.isscalar(rhs) and (cmp1 in skip_these or
cmp2 in skip_these))
if scalar_with_in_notin:
with tm.assertRaises(TypeError):
pd.eval(ex, engine=self.engine, parser=self.parser)
self.assertRaises(TypeError, pd.eval, ex, engine=self.engine,
parser=self.parser, local_dict={'lhs': lhs,
'rhs': rhs})
else:
lhs_new = _eval_single_bin(lhs, cmp1, rhs, self.engine)
rhs_new = _eval_single_bin(lhs, cmp2, rhs, self.engine)
if (isinstance(lhs_new, Series) and isinstance(rhs_new, DataFrame)
and binop in _series_frame_incompatible):
pass
                # TODO: the code below should be added back when left and
                # right hand side bool ops are fixed.
                # try:
                #     self.assertRaises(Exception, pd.eval, ex,
                #                       local_dict={'lhs': lhs, 'rhs': rhs},
                #                       engine=self.engine,
                #                       parser=self.parser)
                # except AssertionError:
                #     raise
else:
expected = _eval_single_bin(
lhs_new, binop, rhs_new, self.engine)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
tm.assert_numpy_array_equal(result, expected)
def check_chained_cmp_op(self, lhs, cmp1, mid, cmp2, rhs):
skip_these = _scalar_skip
def check_operands(left, right, cmp_op):
return _eval_single_bin(left, cmp_op, right, self.engine)
lhs_new = check_operands(lhs, mid, cmp1)
rhs_new = check_operands(mid, rhs, cmp2)
if lhs_new is not None and rhs_new is not None:
ex1 = 'lhs {0} mid {1} rhs'.format(cmp1, cmp2)
ex2 = 'lhs {0} mid and mid {1} rhs'.format(cmp1, cmp2)
ex3 = '(lhs {0} mid) & (mid {1} rhs)'.format(cmp1, cmp2)
expected = _eval_single_bin(lhs_new, '&', rhs_new, self.engine)
for ex in (ex1, ex2, ex3):
result = pd.eval(ex, engine=self.engine,
parser=self.parser)
tm.assert_numpy_array_equal(result, expected)
def check_simple_cmp_op(self, lhs, cmp1, rhs):
ex = 'lhs {0} rhs'.format(cmp1)
if cmp1 in ('in', 'not in') and not com.is_list_like(rhs):
self.assertRaises(TypeError, pd.eval, ex, engine=self.engine,
parser=self.parser, local_dict={'lhs': lhs,
'rhs': rhs})
else:
expected = _eval_single_bin(lhs, cmp1, rhs, self.engine)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
tm.assert_numpy_array_equal(result, expected)
def check_binary_arith_op(self, lhs, arith1, rhs):
ex = 'lhs {0} rhs'.format(arith1)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
expected = _eval_single_bin(lhs, arith1, rhs, self.engine)
tm.assert_numpy_array_equal(result, expected)
ex = 'lhs {0} rhs {0} rhs'.format(arith1)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
nlhs = _eval_single_bin(lhs, arith1, rhs,
self.engine)
self.check_alignment(result, nlhs, rhs, arith1)
def check_alignment(self, result, nlhs, ghs, op):
try:
nlhs, ghs = nlhs.align(ghs)
except (ValueError, TypeError, AttributeError):
# ValueError: series frame or frame series align
# TypeError, AttributeError: series or frame with scalar align
pass
else:
expected = self.ne.evaluate('nlhs {0} ghs'.format(op))
tm.assert_numpy_array_equal(result, expected)
# modulus, pow, and floor division require special casing
def check_modulus(self, lhs, arith1, rhs):
ex = 'lhs {0} rhs'.format(arith1)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
expected = lhs % rhs
assert_allclose(result, expected)
expected = self.ne.evaluate('expected {0} rhs'.format(arith1))
assert_allclose(result, expected)
def check_floor_division(self, lhs, arith1, rhs):
ex = 'lhs {0} rhs'.format(arith1)
if self.engine == 'python':
res = pd.eval(ex, engine=self.engine, parser=self.parser)
expected = lhs // rhs
tm.assert_numpy_array_equal(res, expected)
else:
self.assertRaises(TypeError, pd.eval, ex, local_dict={'lhs': lhs,
'rhs': rhs},
engine=self.engine, parser=self.parser)
def get_expected_pow_result(self, lhs, rhs):
try:
expected = _eval_single_bin(lhs, '**', rhs, self.engine)
except ValueError as e:
msg = 'negative number cannot be raised to a fractional power'
try:
emsg = e.message
except AttributeError:
emsg = e
emsg = u(emsg)
if emsg == msg:
if self.engine == 'python':
raise nose.SkipTest(emsg)
else:
expected = np.nan
else:
raise
return expected
def check_pow(self, lhs, arith1, rhs):
ex = 'lhs {0} rhs'.format(arith1)
expected = self.get_expected_pow_result(lhs, rhs)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
if (lib.isscalar(lhs) and lib.isscalar(rhs) and
_is_py3_complex_incompat(result, expected)):
self.assertRaises(AssertionError, tm.assert_numpy_array_equal,
result, expected)
else:
assert_allclose(result, expected)
ex = '(lhs {0} rhs) {0} rhs'.format(arith1)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
expected = self.get_expected_pow_result(
self.get_expected_pow_result(lhs, rhs), rhs)
assert_allclose(result, expected)
def check_single_invert_op(self, lhs, cmp1, rhs):
# simple
for el in (lhs, rhs):
try:
elb = el.astype(bool)
except AttributeError:
elb = np.array([bool(el)])
expected = ~elb
result = pd.eval('~elb', engine=self.engine, parser=self.parser)
tm.assert_numpy_array_equal(expected, result)
for engine in self.current_engines:
tm.skip_if_no_ne(engine)
tm.assert_numpy_array_equal(result, pd.eval('~elb', engine=engine,
parser=self.parser))
def check_compound_invert_op(self, lhs, cmp1, rhs):
skip_these = 'in', 'not in'
ex = '~(lhs {0} rhs)'.format(cmp1)
if lib.isscalar(rhs) and cmp1 in skip_these:
self.assertRaises(TypeError, pd.eval, ex, engine=self.engine,
parser=self.parser, local_dict={'lhs': lhs,
'rhs': rhs})
else:
# compound
if lib.isscalar(lhs) and lib.isscalar(rhs):
lhs, rhs = map(lambda x: np.array([x]), (lhs, rhs))
expected = _eval_single_bin(lhs, cmp1, rhs, self.engine)
if lib.isscalar(expected):
expected = not expected
else:
expected = ~expected
result = pd.eval(ex, engine=self.engine, parser=self.parser)
tm.assert_numpy_array_equal(expected, result)
            # make sure the other engines work the same as this one
            for engine in self.current_engines:
                tm.skip_if_no_ne(engine)
                ev = pd.eval(ex, engine=engine, parser=self.parser)
                tm.assert_numpy_array_equal(ev, result)
def ex(self, op, var_name='lhs'):
return '{0}{1}'.format(op, var_name)
def test_frame_invert(self):
expr = self.ex('~')
        # ~ operator
# frame
# float always raises
lhs = DataFrame(randn(5, 2))
if self.engine == 'numexpr':
with tm.assertRaises(NotImplementedError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
with tm.assertRaises(TypeError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
# int raises on numexpr
lhs = DataFrame(randint(5, size=(5, 2)))
if self.engine == 'numexpr':
with tm.assertRaises(NotImplementedError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
expect = ~lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_frame_equal(expect, result)
# bool always works
lhs = DataFrame(rand(5, 2) > 0.5)
expect = ~lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_frame_equal(expect, result)
# object raises
lhs = DataFrame({'b': ['a', 1, 2.0], 'c': rand(3) > 0.5})
if self.engine == 'numexpr':
with tm.assertRaises(ValueError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
with tm.assertRaises(TypeError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
def test_series_invert(self):
        # ~ operator
expr = self.ex('~')
# series
# float raises
lhs = Series(randn(5))
if self.engine == 'numexpr':
with tm.assertRaises(NotImplementedError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
with tm.assertRaises(TypeError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
# int raises on numexpr
lhs = Series(randint(5, size=5))
if self.engine == 'numexpr':
with tm.assertRaises(NotImplementedError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
expect = ~lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_series_equal(expect, result)
# bool
lhs = Series(rand(5) > 0.5)
expect = ~lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_series_equal(expect, result)
        # object raises
lhs = Series(['a', 1, 2.0])
if self.engine == 'numexpr':
with tm.assertRaises(ValueError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
with tm.assertRaises(TypeError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
def test_frame_negate(self):
expr = self.ex('-')
# float
lhs = DataFrame(randn(5, 2))
expect = -lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_frame_equal(expect, result)
# int
lhs = DataFrame(randint(5, size=(5, 2)))
expect = -lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_frame_equal(expect, result)
# bool doesn't work with numexpr but works elsewhere
lhs = DataFrame(rand(5, 2) > 0.5)
if self.engine == 'numexpr':
with tm.assertRaises(NotImplementedError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
expect = -lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_frame_equal(expect, result)
def test_series_negate(self):
expr = self.ex('-')
# float
lhs = Series(randn(5))
expect = -lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_series_equal(expect, result)
# int
lhs = Series(randint(5, size=5))
expect = -lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_series_equal(expect, result)
# bool doesn't work with numexpr but works elsewhere
lhs = Series(rand(5) > 0.5)
if self.engine == 'numexpr':
with tm.assertRaises(NotImplementedError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
expect = -lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_series_equal(expect, result)
def test_frame_pos(self):
expr = self.ex('+')
# float
lhs = DataFrame(randn(5, 2))
if self.engine == 'python':
with tm.assertRaises(TypeError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
expect = lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_frame_equal(expect, result)
# int
lhs = DataFrame(randint(5, size=(5, 2)))
if self.engine == 'python':
with tm.assertRaises(TypeError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
expect = lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_frame_equal(expect, result)
# bool doesn't work with numexpr but works elsewhere
lhs = DataFrame(rand(5, 2) > 0.5)
if self.engine == 'python':
with tm.assertRaises(TypeError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
expect = lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_frame_equal(expect, result)
def test_series_pos(self):
expr = self.ex('+')
# float
lhs = Series(randn(5))
if self.engine == 'python':
with tm.assertRaises(TypeError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
expect = lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_series_equal(expect, result)
# int
lhs = Series(randint(5, size=5))
if self.engine == 'python':
with tm.assertRaises(TypeError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
expect = lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_series_equal(expect, result)
# bool doesn't work with numexpr but works elsewhere
lhs = Series(rand(5) > 0.5)
if self.engine == 'python':
with tm.assertRaises(TypeError):
result = pd.eval(expr, engine=self.engine, parser=self.parser)
else:
expect = lhs
result = pd.eval(expr, engine=self.engine, parser=self.parser)
assert_series_equal(expect, result)
def test_scalar_unary(self):
with tm.assertRaises(TypeError):
pd.eval('~1.0', engine=self.engine, parser=self.parser)
self.assertEqual(
pd.eval('-1.0', parser=self.parser, engine=self.engine), -1.0)
self.assertEqual(
pd.eval('+1.0', parser=self.parser, engine=self.engine), +1.0)
self.assertEqual(
pd.eval('~1', parser=self.parser, engine=self.engine), ~1)
self.assertEqual(
pd.eval('-1', parser=self.parser, engine=self.engine), -1)
self.assertEqual(
pd.eval('+1', parser=self.parser, engine=self.engine), +1)
self.assertEqual(
pd.eval('~True', parser=self.parser, engine=self.engine), ~True)
self.assertEqual(
pd.eval('~False', parser=self.parser, engine=self.engine), ~False)
self.assertEqual(
pd.eval('-True', parser=self.parser, engine=self.engine), -True)
self.assertEqual(
pd.eval('-False', parser=self.parser, engine=self.engine), -False)
self.assertEqual(
pd.eval('+True', parser=self.parser, engine=self.engine), +True)
self.assertEqual(
pd.eval('+False', parser=self.parser, engine=self.engine), +False)
def test_unary_in_array(self):
# GH 11235
assert_numpy_array_equal(
pd.eval('[-True, True, ~True, +True,'
'-False, False, ~False, +False,'
'-37, 37, ~37, +37]'),
np.array([-True, True, ~True, +True,
-False, False, ~False, +False,
-37, 37, ~37, +37]))
def test_disallow_scalar_bool_ops(self):
exprs = '1 or 2', '1 and 2'
exprs += 'a and b', 'a or b'
exprs += '1 or 2 and (3 + 2) > 3',
exprs += '2 * x > 2 or 1 and 2',
exprs += '2 * df > 3 and 1 or a',
x, a, b, df = np.random.randn(3), 1, 2, DataFrame(randn(3, 2))
for ex in exprs:
with tm.assertRaises(NotImplementedError):
pd.eval(ex, engine=self.engine, parser=self.parser)
def test_identical(self):
# GH 10546
x = 1
result = pd.eval('x', engine=self.engine, parser=self.parser)
self.assertEqual(result, 1)
self.assertTrue(lib.isscalar(result))
x = 1.5
result = pd.eval('x', engine=self.engine, parser=self.parser)
self.assertEqual(result, 1.5)
self.assertTrue(lib.isscalar(result))
x = False
result = pd.eval('x', engine=self.engine, parser=self.parser)
self.assertEqual(result, False)
self.assertTrue(lib.isscalar(result))
x = np.array([1])
result = pd.eval('x', engine=self.engine, parser=self.parser)
tm.assert_numpy_array_equal(result, np.array([1]))
self.assertEqual(result.shape, (1, ))
x = np.array([1.5])
result = pd.eval('x', engine=self.engine, parser=self.parser)
tm.assert_numpy_array_equal(result, np.array([1.5]))
self.assertEqual(result.shape, (1, ))
x = np.array([False])
result = pd.eval('x', engine=self.engine, parser=self.parser)
tm.assert_numpy_array_equal(result, np.array([False]))
self.assertEqual(result.shape, (1, ))
def test_line_continuation(self):
# GH 11149
exp = """1 + 2 * \
5 - 1 + 2 """
result = pd.eval(exp, engine=self.engine, parser=self.parser)
self.assertEqual(result, 12)
class TestEvalNumexprPython(TestEvalNumexprPandas):
@classmethod
def setUpClass(cls):
super(TestEvalNumexprPython, cls).setUpClass()
tm.skip_if_no_ne()
import numexpr as ne
cls.ne = ne
cls.engine = 'numexpr'
cls.parser = 'python'
def setup_ops(self):
self.cmp_ops = list(filter(lambda x: x not in ('in', 'not in'),
expr._cmp_ops_syms))
self.cmp2_ops = self.cmp_ops[::-1]
self.bin_ops = [s for s in expr._bool_ops_syms
if s not in ('and', 'or')]
self.special_case_ops = _special_case_arith_ops_syms
self.arith_ops = _good_arith_ops
self.unary_ops = '+', '-', '~'
def check_chained_cmp_op(self, lhs, cmp1, mid, cmp2, rhs):
ex1 = 'lhs {0} mid {1} rhs'.format(cmp1, cmp2)
with tm.assertRaises(NotImplementedError):
pd.eval(ex1, engine=self.engine, parser=self.parser)
class TestEvalPythonPython(TestEvalNumexprPython):
@classmethod
def setUpClass(cls):
super(TestEvalPythonPython, cls).setUpClass()
cls.engine = 'python'
cls.parser = 'python'
def check_modulus(self, lhs, arith1, rhs):
ex = 'lhs {0} rhs'.format(arith1)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
expected = lhs % rhs
assert_allclose(result, expected)
expected = _eval_single_bin(expected, arith1, rhs, self.engine)
assert_allclose(result, expected)
def check_alignment(self, result, nlhs, ghs, op):
try:
nlhs, ghs = nlhs.align(ghs)
except (ValueError, TypeError, AttributeError):
# ValueError: series frame or frame series align
# TypeError, AttributeError: series or frame with scalar align
pass
else:
expected = eval('nlhs {0} ghs'.format(op))
tm.assert_numpy_array_equal(result, expected)
class TestEvalPythonPandas(TestEvalPythonPython):
@classmethod
def setUpClass(cls):
super(TestEvalPythonPandas, cls).setUpClass()
cls.engine = 'python'
cls.parser = 'pandas'
def check_chained_cmp_op(self, lhs, cmp1, mid, cmp2, rhs):
TestEvalNumexprPandas.check_chained_cmp_op(self, lhs, cmp1, mid, cmp2,
rhs)
f = lambda *args, **kwargs: np.random.randn()
ENGINES_PARSERS = list(product(_engines, expr._parsers))
#-------------------------------------
# basic and complex alignment
def _is_datetime(x):
return issubclass(x.dtype.type, np.datetime64)
def should_warn(*args):
    # A RuntimeWarning is expected only when no argument's index is
    # monotonic and exactly one argument has a datetime index.
    not_mono = not any(map(operator.attrgetter('is_monotonic'), args))
    only_one_dt = reduce(operator.xor, map(_is_datetime, args))
    return not_mono and only_one_dt
class TestAlignment(object):
index_types = 'i', 'u', 'dt'
lhs_index_types = index_types + ('s',) # 'p'
def check_align_nested_unary_op(self, engine, parser):
tm.skip_if_no_ne(engine)
s = 'df * ~2'
df = mkdf(5, 3, data_gen_f=f)
res = pd.eval(s, engine=engine, parser=parser)
assert_frame_equal(res, df * ~2)
def test_align_nested_unary_op(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_align_nested_unary_op, engine, parser
def check_basic_frame_alignment(self, engine, parser):
tm.skip_if_no_ne(engine)
args = product(self.lhs_index_types, self.index_types,
self.index_types)
with warnings.catch_warnings(record=True):
warnings.simplefilter('always', RuntimeWarning)
for lr_idx_type, rr_idx_type, c_idx_type in args:
df = mkdf(10, 10, data_gen_f=f, r_idx_type=lr_idx_type,
c_idx_type=c_idx_type)
df2 = mkdf(20, 10, data_gen_f=f, r_idx_type=rr_idx_type,
c_idx_type=c_idx_type)
# only warns if not monotonic and not sortable
if should_warn(df.index, df2.index):
with tm.assert_produces_warning(RuntimeWarning):
res = pd.eval('df + df2', engine=engine, parser=parser)
else:
res = pd.eval('df + df2', engine=engine, parser=parser)
assert_frame_equal(res, df + df2)
def test_basic_frame_alignment(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_basic_frame_alignment, engine, parser
def check_frame_comparison(self, engine, parser):
tm.skip_if_no_ne(engine)
args = product(self.lhs_index_types, repeat=2)
for r_idx_type, c_idx_type in args:
df = mkdf(10, 10, data_gen_f=f, r_idx_type=r_idx_type,
c_idx_type=c_idx_type)
res = pd.eval('df < 2', engine=engine, parser=parser)
assert_frame_equal(res, df < 2)
df3 = DataFrame(randn(*df.shape), index=df.index,
columns=df.columns)
res = pd.eval('df < df3', engine=engine, parser=parser)
assert_frame_equal(res, df < df3)
def test_frame_comparison(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_frame_comparison, engine, parser
def check_medium_complex_frame_alignment(self, engine, parser):
tm.skip_if_no_ne(engine)
args = product(self.lhs_index_types, self.index_types,
self.index_types, self.index_types)
with warnings.catch_warnings(record=True):
warnings.simplefilter('always', RuntimeWarning)
for r1, c1, r2, c2 in args:
df = mkdf(3, 2, data_gen_f=f, r_idx_type=r1, c_idx_type=c1)
df2 = mkdf(4, 2, data_gen_f=f, r_idx_type=r2, c_idx_type=c2)
df3 = mkdf(5, 2, data_gen_f=f, r_idx_type=r2, c_idx_type=c2)
if should_warn(df.index, df2.index, df3.index):
with tm.assert_produces_warning(RuntimeWarning):
res = pd.eval('df + df2 + df3', engine=engine,
parser=parser)
else:
res = pd.eval('df + df2 + df3',
engine=engine, parser=parser)
assert_frame_equal(res, df + df2 + df3)
@slow
def test_medium_complex_frame_alignment(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_medium_complex_frame_alignment, engine, parser
def check_basic_frame_series_alignment(self, engine, parser):
tm.skip_if_no_ne(engine)
def testit(r_idx_type, c_idx_type, index_name):
df = mkdf(10, 10, data_gen_f=f, r_idx_type=r_idx_type,
c_idx_type=c_idx_type)
index = getattr(df, index_name)
s = Series(np.random.randn(5), index[:5])
if should_warn(df.index, s.index):
with tm.assert_produces_warning(RuntimeWarning):
res = pd.eval('df + s', engine=engine, parser=parser)
else:
res = pd.eval('df + s', engine=engine, parser=parser)
if r_idx_type == 'dt' or c_idx_type == 'dt':
expected = df.add(s) if engine == 'numexpr' else df + s
else:
expected = df + s
assert_frame_equal(res, expected)
args = product(self.lhs_index_types, self.index_types,
('index', 'columns'))
with warnings.catch_warnings(record=True):
warnings.simplefilter('always', RuntimeWarning)
for r_idx_type, c_idx_type, index_name in args:
testit(r_idx_type, c_idx_type, index_name)
def test_basic_frame_series_alignment(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_basic_frame_series_alignment, engine, parser
def check_basic_series_frame_alignment(self, engine, parser):
tm.skip_if_no_ne(engine)
def testit(r_idx_type, c_idx_type, index_name):
df = mkdf(10, 7, data_gen_f=f, r_idx_type=r_idx_type,
c_idx_type=c_idx_type)
index = getattr(df, index_name)
s = Series(np.random.randn(5), index[:5])
if should_warn(s.index, df.index):
with tm.assert_produces_warning(RuntimeWarning):
res = pd.eval('s + df', engine=engine, parser=parser)
else:
res = pd.eval('s + df', engine=engine, parser=parser)
if r_idx_type == 'dt' or c_idx_type == 'dt':
expected = df.add(s) if engine == 'numexpr' else s + df
else:
expected = s + df
assert_frame_equal(res, expected)
# only test dt with dt, otherwise weird joins result
args = product(['i', 'u', 's'], ['i', 'u', 's'], ('index', 'columns'))
with warnings.catch_warnings(record=True):
for r_idx_type, c_idx_type, index_name in args:
testit(r_idx_type, c_idx_type, index_name)
# dt with dt
args = product(['dt'], ['dt'], ('index', 'columns'))
with warnings.catch_warnings(record=True):
for r_idx_type, c_idx_type, index_name in args:
testit(r_idx_type, c_idx_type, index_name)
def test_basic_series_frame_alignment(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_basic_series_frame_alignment, engine, parser
def check_series_frame_commutativity(self, engine, parser):
tm.skip_if_no_ne(engine)
args = product(self.lhs_index_types, self.index_types, ('+', '*'),
('index', 'columns'))
with warnings.catch_warnings(record=True):
warnings.simplefilter('always', RuntimeWarning)
for r_idx_type, c_idx_type, op, index_name in args:
df = mkdf(10, 10, data_gen_f=f, r_idx_type=r_idx_type,
c_idx_type=c_idx_type)
index = getattr(df, index_name)
s = Series(np.random.randn(5), index[:5])
lhs = 's {0} df'.format(op)
rhs = 'df {0} s'.format(op)
if should_warn(df.index, s.index):
with tm.assert_produces_warning(RuntimeWarning):
a = pd.eval(lhs, engine=engine, parser=parser)
with tm.assert_produces_warning(RuntimeWarning):
b = pd.eval(rhs, engine=engine, parser=parser)
else:
a = pd.eval(lhs, engine=engine, parser=parser)
b = pd.eval(rhs, engine=engine, parser=parser)
if r_idx_type != 'dt' and c_idx_type != 'dt':
if engine == 'numexpr':
assert_frame_equal(a, b)
def test_series_frame_commutativity(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_series_frame_commutativity, engine, parser
def check_complex_series_frame_alignment(self, engine, parser):
tm.skip_if_no_ne(engine)
import random
args = product(self.lhs_index_types, self.index_types,
self.index_types, self.index_types)
n = 3
m1 = 5
m2 = 2 * m1
with warnings.catch_warnings(record=True):
warnings.simplefilter('always', RuntimeWarning)
for r1, r2, c1, c2 in args:
index_name = random.choice(['index', 'columns'])
obj_name = random.choice(['df', 'df2'])
df = mkdf(m1, n, data_gen_f=f, r_idx_type=r1, c_idx_type=c1)
df2 = mkdf(m2, n, data_gen_f=f, r_idx_type=r2, c_idx_type=c2)
index = getattr(locals().get(obj_name), index_name)
s = Series(np.random.randn(n), index[:n])
if r2 == 'dt' or c2 == 'dt':
if engine == 'numexpr':
expected2 = df2.add(s)
else:
expected2 = df2 + s
else:
expected2 = df2 + s
if r1 == 'dt' or c1 == 'dt':
if engine == 'numexpr':
expected = expected2.add(df)
else:
expected = expected2 + df
else:
expected = expected2 + df
if should_warn(df2.index, s.index, df.index):
with tm.assert_produces_warning(RuntimeWarning):
res = pd.eval('df2 + s + df', engine=engine,
parser=parser)
else:
res = pd.eval('df2 + s + df', engine=engine, parser=parser)
tm.assert_equal(res.shape, expected.shape)
assert_frame_equal(res, expected)
@slow
def test_complex_series_frame_alignment(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_complex_series_frame_alignment, engine, parser
def check_performance_warning_for_poor_alignment(self, engine, parser):
tm.skip_if_no_ne(engine)
df = DataFrame(randn(1000, 10))
s = Series(randn(10000))
if engine == 'numexpr':
seen = pd.core.common.PerformanceWarning
else:
seen = False
with assert_produces_warning(seen):
pd.eval('df + s', engine=engine, parser=parser)
s = Series(randn(1000))
with assert_produces_warning(False):
pd.eval('df + s', engine=engine, parser=parser)
df = DataFrame(randn(10, 10000))
s = Series(randn(10000))
with assert_produces_warning(False):
pd.eval('df + s', engine=engine, parser=parser)
df = DataFrame(randn(10, 10))
s = Series(randn(10000))
is_python_engine = engine == 'python'
if not is_python_engine:
wrn = pd.core.common.PerformanceWarning
else:
wrn = False
with assert_produces_warning(wrn) as w:
pd.eval('df + s', engine=engine, parser=parser)
if not is_python_engine:
tm.assert_equal(len(w), 1)
msg = str(w[0].message)
expected = ("Alignment difference on axis {0} is larger"
" than an order of magnitude on term {1!r}, "
"by more than {2:.4g}; performance may suffer"
"".format(1, 'df', np.log10(s.size - df.shape[1])))
tm.assert_equal(msg, expected)
def test_performance_warning_for_poor_alignment(self):
for engine, parser in ENGINES_PARSERS:
yield (self.check_performance_warning_for_poor_alignment, engine,
parser)
#------------------------------------
# slightly more complex ops
class TestOperationsNumExprPandas(tm.TestCase):
@classmethod
def setUpClass(cls):
super(TestOperationsNumExprPandas, cls).setUpClass()
tm.skip_if_no_ne()
cls.engine = 'numexpr'
cls.parser = 'pandas'
cls.arith_ops = expr._arith_ops_syms + expr._cmp_ops_syms
@classmethod
def tearDownClass(cls):
super(TestOperationsNumExprPandas, cls).tearDownClass()
del cls.engine, cls.parser
def eval(self, *args, **kwargs):
kwargs['engine'] = self.engine
kwargs['parser'] = self.parser
kwargs['level'] = kwargs.pop('level', 0) + 1
return pd.eval(*args, **kwargs)
def test_simple_arith_ops(self):
ops = self.arith_ops
for op in filter(lambda x: x != '//', ops):
ex = '1 {0} 1'.format(op)
ex2 = 'x {0} 1'.format(op)
ex3 = '1 {0} (x + 1)'.format(op)
if op in ('in', 'not in'):
self.assertRaises(TypeError, pd.eval, ex,
engine=self.engine, parser=self.parser)
else:
expec = _eval_single_bin(1, op, 1, self.engine)
x = self.eval(ex, engine=self.engine, parser=self.parser)
tm.assert_equal(x, expec)
expec = _eval_single_bin(x, op, 1, self.engine)
y = self.eval(ex2, local_dict={'x': x}, engine=self.engine,
parser=self.parser)
tm.assert_equal(y, expec)
expec = _eval_single_bin(1, op, x + 1, self.engine)
y = self.eval(ex3, local_dict={'x': x},
engine=self.engine, parser=self.parser)
tm.assert_equal(y, expec)
def test_simple_bool_ops(self):
for op, lhs, rhs in product(expr._bool_ops_syms, (True, False),
(True, False)):
ex = '{0} {1} {2}'.format(lhs, op, rhs)
res = self.eval(ex)
exp = eval(ex)
self.assertEqual(res, exp)
def test_bool_ops_with_constants(self):
for op, lhs, rhs in product(expr._bool_ops_syms, ('True', 'False'),
('True', 'False')):
ex = '{0} {1} {2}'.format(lhs, op, rhs)
res = self.eval(ex)
exp = eval(ex)
self.assertEqual(res, exp)
def test_panel_fails(self):
x = Panel(randn(3, 4, 5))
y = Series(randn(10))
assert_raises(NotImplementedError, self.eval, 'x + y',
local_dict={'x': x, 'y': y})
def test_4d_ndarray_fails(self):
x = randn(3, 4, 5, 6)
y = Series(randn(10))
assert_raises(NotImplementedError, self.eval, 'x + y',
local_dict={'x': x, 'y': y})
def test_constant(self):
x = self.eval('1')
tm.assert_equal(x, 1)
def test_single_variable(self):
df = DataFrame(randn(10, 2))
df2 = self.eval('df', local_dict={'df': df})
assert_frame_equal(df, df2)
def test_truediv(self):
s = np.array([1])
ex = 's / 1'
d = {'s': s}
if PY3:
res = self.eval(ex, truediv=False)
tm.assert_numpy_array_equal(res, np.array([1.0]))
res = self.eval(ex, truediv=True)
tm.assert_numpy_array_equal(res, np.array([1.0]))
res = self.eval('1 / 2', truediv=True)
expec = 0.5
self.assertEqual(res, expec)
res = self.eval('1 / 2', truediv=False)
expec = 0.5
self.assertEqual(res, expec)
res = self.eval('s / 2', truediv=False)
expec = 0.5
self.assertEqual(res, expec)
res = self.eval('s / 2', truediv=True)
expec = 0.5
self.assertEqual(res, expec)
else:
res = self.eval(ex, truediv=False)
tm.assert_numpy_array_equal(res, np.array([1]))
res = self.eval(ex, truediv=True)
tm.assert_numpy_array_equal(res, np.array([1.0]))
res = self.eval('1 / 2', truediv=True)
expec = 0.5
self.assertEqual(res, expec)
res = self.eval('1 / 2', truediv=False)
expec = 0
self.assertEqual(res, expec)
res = self.eval('s / 2', truediv=False)
expec = 0
self.assertEqual(res, expec)
res = self.eval('s / 2', truediv=True)
expec = 0.5
self.assertEqual(res, expec)
def test_failing_subscript_with_name_error(self):
df = DataFrame(np.random.randn(5, 3))
with tm.assertRaises(NameError):
self.eval('df[x > 2] > 2')
def test_lhs_expression_subscript(self):
df = DataFrame(np.random.randn(5, 3))
result = self.eval('(df + 1)[df > 2]', local_dict={'df': df})
expected = (df + 1)[df > 2]
assert_frame_equal(result, expected)
def test_attr_expression(self):
df = DataFrame(np.random.randn(5, 3), columns=list('abc'))
expr1 = 'df.a < df.b'
expec1 = df.a < df.b
expr2 = 'df.a + df.b + df.c'
expec2 = df.a + df.b + df.c
expr3 = 'df.a + df.b + df.c[df.b < 0]'
expec3 = df.a + df.b + df.c[df.b < 0]
exprs = expr1, expr2, expr3
expecs = expec1, expec2, expec3
for e, expec in zip(exprs, expecs):
assert_series_equal(expec, self.eval(e, local_dict={'df': df}))
def test_assignment_fails(self):
df = DataFrame(np.random.randn(5, 3), columns=list('abc'))
df2 = DataFrame(np.random.randn(5, 3))
expr1 = 'df = df2'
self.assertRaises(ValueError, self.eval, expr1,
local_dict={'df': df, 'df2': df2})
def test_assignment_column(self):
tm.skip_if_no_ne('numexpr')
df = DataFrame(np.random.randn(5, 2), columns=list('ab'))
orig_df = df.copy()
# multiple assignees
self.assertRaises(SyntaxError, df.eval, 'd c = a + b')
# invalid assignees
self.assertRaises(SyntaxError, df.eval, 'd,c = a + b')
self.assertRaises(
SyntaxError, df.eval, 'Timestamp("20131001") = a + b')
# single assignment - existing variable
expected = orig_df.copy()
expected['a'] = expected['a'] + expected['b']
df = orig_df.copy()
df.eval('a = a + b', inplace=True)
assert_frame_equal(df, expected)
# single assignment - new variable
expected = orig_df.copy()
expected['c'] = expected['a'] + expected['b']
df = orig_df.copy()
df.eval('c = a + b', inplace=True)
assert_frame_equal(df, expected)
# with a local name overlap
def f():
df = orig_df.copy()
a = 1 # noqa
df.eval('a = 1 + b', inplace=True)
return df
df = f()
expected = orig_df.copy()
expected['a'] = 1 + expected['b']
assert_frame_equal(df, expected)
df = orig_df.copy()
def f():
a = 1 # noqa
old_a = df.a.copy()
df.eval('a = a + b', inplace=True)
result = old_a + df.b
assert_series_equal(result, df.a, check_names=False)
self.assertTrue(result.name is None)
f()
# multiple assignment
df = orig_df.copy()
df.eval('c = a + b', inplace=True)
self.assertRaises(SyntaxError, df.eval, 'c = a = b')
# explicit targets
df = orig_df.copy()
self.eval('c = df.a + df.b', local_dict={'df': df},
target=df, inplace=True)
expected = orig_df.copy()
expected['c'] = expected['a'] + expected['b']
assert_frame_equal(df, expected)
def test_column_in(self):
# GH 11235
df = DataFrame({'a': [11], 'b': [-32]})
result = df.eval('a in [11, -32]')
expected = Series([True])
assert_series_equal(result, expected)
def assignment_not_inplace(self):
# GH 9297
tm.skip_if_no_ne('numexpr')
df = DataFrame(np.random.randn(5, 2), columns=list('ab'))
actual = df.eval('c = a + b', inplace=False)
self.assertIsNotNone(actual)
expected = df.copy()
expected['c'] = expected['a'] + expected['b']
assert_frame_equal(df, expected)
# default for inplace will change
with tm.assert_produces_warnings(FutureWarning):
df.eval('c = a + b')
# but don't warn without assignment
with tm.assert_produces_warnings(None):
df.eval('a + b')
def test_multi_line_expression(self):
# GH 11149
tm.skip_if_no_ne('numexpr')
df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
expected = df.copy()
expected['c'] = expected['a'] + expected['b']
expected['d'] = expected['c'] + expected['b']
ans = df.eval("""
c = a + b
d = c + b""", inplace=True)
assert_frame_equal(expected, df)
self.assertIsNone(ans)
expected['a'] = expected['a'] - 1
expected['e'] = expected['a'] + 2
ans = df.eval("""
a = a - 1
e = a + 2""", inplace=True)
assert_frame_equal(expected, df)
self.assertIsNone(ans)
# multi-line not valid if not all assignments
with tm.assertRaises(ValueError):
df.eval("""
a = b + 2
b - 2""", inplace=False)
def test_multi_line_expression_not_inplace(self):
# GH 11149
tm.skip_if_no_ne('numexpr')
df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
expected = df.copy()
expected['c'] = expected['a'] + expected['b']
expected['d'] = expected['c'] + expected['b']
df = df.eval("""
c = a + b
d = c + b""", inplace=False)
assert_frame_equal(expected, df)
expected['a'] = expected['a'] - 1
expected['e'] = expected['a'] + 2
df = df.eval("""
a = a - 1
e = a + 2""", inplace=False)
assert_frame_equal(expected, df)
def test_assignment_in_query(self):
# GH 8664
df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
df_orig = df.copy()
with tm.assertRaises(ValueError):
df.query('a = 1')
assert_frame_equal(df, df_orig)
def query_inplace(self):
# GH 11149
df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
expected = df.copy()
expected = expected[expected['a'] == 2]
df.query('a == 2', inplace=True)
assert_frame_equal(expected, df)
def test_basic_period_index_boolean_expression(self):
df = mkdf(2, 2, data_gen_f=f, c_idx_type='p', r_idx_type='i')
e = df < 2
r = self.eval('df < 2', local_dict={'df': df})
x = df < 2
assert_frame_equal(r, e)
assert_frame_equal(x, e)
def test_basic_period_index_subscript_expression(self):
df = mkdf(2, 2, data_gen_f=f, c_idx_type='p', r_idx_type='i')
r = self.eval('df[df < 2 + 3]', local_dict={'df': df})
e = df[df < 2 + 3]
assert_frame_equal(r, e)
def test_nested_period_index_subscript_expression(self):
df = mkdf(2, 2, data_gen_f=f, c_idx_type='p', r_idx_type='i')
r = self.eval('df[df[df < 2] < 2] + df * 2', local_dict={'df': df})
e = df[df[df < 2] < 2] + df * 2
assert_frame_equal(r, e)
def test_date_boolean(self):
df = DataFrame(randn(5, 3))
df['dates1'] = date_range('1/1/2012', periods=5)
res = self.eval('df.dates1 < 20130101', local_dict={'df': df},
engine=self.engine, parser=self.parser)
expec = df.dates1 < '20130101'
assert_series_equal(res, expec, check_names=False)
def test_simple_in_ops(self):
if self.parser != 'python':
res = pd.eval('1 in [1, 2]', engine=self.engine,
parser=self.parser)
self.assertTrue(res)
res = pd.eval('2 in (1, 2)', engine=self.engine,
parser=self.parser)
self.assertTrue(res)
res = pd.eval('3 in (1, 2)', engine=self.engine,
parser=self.parser)
self.assertFalse(res)
res = pd.eval('3 not in (1, 2)', engine=self.engine,
parser=self.parser)
self.assertTrue(res)
res = pd.eval('[3] not in (1, 2)', engine=self.engine,
parser=self.parser)
self.assertTrue(res)
res = pd.eval('[3] in ([3], 2)', engine=self.engine,
parser=self.parser)
self.assertTrue(res)
res = pd.eval('[[3]] in [[[3]], 2]', engine=self.engine,
parser=self.parser)
self.assertTrue(res)
res = pd.eval('(3,) in [(3,), 2]', engine=self.engine,
parser=self.parser)
self.assertTrue(res)
res = pd.eval('(3,) not in [(3,), 2]', engine=self.engine,
parser=self.parser)
self.assertFalse(res)
res = pd.eval('[(3,)] in [[(3,)], 2]', engine=self.engine,
parser=self.parser)
self.assertTrue(res)
else:
with tm.assertRaises(NotImplementedError):
pd.eval('1 in [1, 2]', engine=self.engine, parser=self.parser)
with tm.assertRaises(NotImplementedError):
pd.eval('2 in (1, 2)', engine=self.engine, parser=self.parser)
with tm.assertRaises(NotImplementedError):
pd.eval('3 in (1, 2)', engine=self.engine, parser=self.parser)
with tm.assertRaises(NotImplementedError):
pd.eval('3 not in (1, 2)', engine=self.engine,
parser=self.parser)
with tm.assertRaises(NotImplementedError):
pd.eval('[(3,)] in (1, 2, [(3,)])', engine=self.engine,
parser=self.parser)
with tm.assertRaises(NotImplementedError):
pd.eval('[3] not in (1, 2, [[3]])', engine=self.engine,
parser=self.parser)
class TestOperationsNumExprPython(TestOperationsNumExprPandas):
@classmethod
def setUpClass(cls):
super(TestOperationsNumExprPython, cls).setUpClass()
cls.engine = 'numexpr'
cls.parser = 'python'
tm.skip_if_no_ne(cls.engine)
cls.arith_ops = expr._arith_ops_syms + expr._cmp_ops_syms
cls.arith_ops = filter(lambda x: x not in ('in', 'not in'),
cls.arith_ops)
def test_check_many_exprs(self):
a = 1
expr = ' * '.join('a' * 33)
expected = 1
res = pd.eval(expr, engine=self.engine, parser=self.parser)
tm.assert_equal(res, expected)
def test_fails_and(self):
df = DataFrame(np.random.randn(5, 3))
self.assertRaises(NotImplementedError, pd.eval, 'df > 2 and df > 3',
local_dict={'df': df}, parser=self.parser,
engine=self.engine)
def test_fails_or(self):
df = DataFrame(np.random.randn(5, 3))
self.assertRaises(NotImplementedError, pd.eval, 'df > 2 or df > 3',
local_dict={'df': df}, parser=self.parser,
engine=self.engine)
def test_fails_not(self):
df = DataFrame(np.random.randn(5, 3))
self.assertRaises(NotImplementedError, pd.eval, 'not df > 2',
local_dict={'df': df}, parser=self.parser,
engine=self.engine)
def test_fails_ampersand(self):
df = DataFrame(np.random.randn(5, 3))
ex = '(df + 2)[df > 1] > 0 & (df > 0)'
with tm.assertRaises(NotImplementedError):
pd.eval(ex, parser=self.parser, engine=self.engine)
def test_fails_pipe(self):
df = DataFrame(np.random.randn(5, 3))
ex = '(df + 2)[df > 1] > 0 | (df > 0)'
with tm.assertRaises(NotImplementedError):
pd.eval(ex, parser=self.parser, engine=self.engine)
def test_bool_ops_with_constants(self):
for op, lhs, rhs in product(expr._bool_ops_syms, ('True', 'False'),
('True', 'False')):
ex = '{0} {1} {2}'.format(lhs, op, rhs)
if op in ('and', 'or'):
with tm.assertRaises(NotImplementedError):
self.eval(ex)
else:
res = self.eval(ex)
exp = eval(ex)
self.assertEqual(res, exp)
def test_simple_bool_ops(self):
for op, lhs, rhs in product(expr._bool_ops_syms, (True, False),
(True, False)):
ex = 'lhs {0} rhs'.format(op)
if op in ('and', 'or'):
with tm.assertRaises(NotImplementedError):
pd.eval(ex, engine=self.engine, parser=self.parser)
else:
res = pd.eval(ex, engine=self.engine, parser=self.parser)
exp = eval(ex)
self.assertEqual(res, exp)
class TestOperationsPythonPython(TestOperationsNumExprPython):
@classmethod
def setUpClass(cls):
super(TestOperationsPythonPython, cls).setUpClass()
cls.engine = cls.parser = 'python'
cls.arith_ops = expr._arith_ops_syms + expr._cmp_ops_syms
cls.arith_ops = filter(lambda x: x not in ('in', 'not in'),
cls.arith_ops)
class TestOperationsPythonPandas(TestOperationsNumExprPandas):
@classmethod
def setUpClass(cls):
super(TestOperationsPythonPandas, cls).setUpClass()
cls.engine = 'python'
cls.parser = 'pandas'
cls.arith_ops = expr._arith_ops_syms + expr._cmp_ops_syms
class TestMathPythonPython(tm.TestCase):
@classmethod
def setUpClass(cls):
super(TestMathPythonPython, cls).setUpClass()
tm.skip_if_no_ne()
cls.engine = 'python'
cls.parser = 'pandas'
cls.unary_fns = _unary_math_ops
cls.binary_fns = _binary_math_ops
@classmethod
def tearDownClass(cls):
del cls.engine, cls.parser
def eval(self, *args, **kwargs):
kwargs['engine'] = self.engine
kwargs['parser'] = self.parser
kwargs['level'] = kwargs.pop('level', 0) + 1
return pd.eval(*args, **kwargs)
def test_unary_functions(self):
df = DataFrame({'a': np.random.randn(10)})
a = df.a
for fn in self.unary_fns:
expr = "{0}(a)".format(fn)
got = self.eval(expr)
expect = getattr(np, fn)(a)
tm.assert_series_equal(got, expect, check_names=False)
def test_binary_functions(self):
df = DataFrame({'a': np.random.randn(10),
'b': np.random.randn(10)})
a = df.a
b = df.b
for fn in self.binary_fns:
expr = "{0}(a, b)".format(fn)
got = self.eval(expr)
expect = getattr(np, fn)(a, b)
np.testing.assert_allclose(got, expect)
def test_df_use_case(self):
df = DataFrame({'a': np.random.randn(10),
'b': np.random.randn(10)})
df.eval("e = arctan2(sin(a), b)",
engine=self.engine,
parser=self.parser, inplace=True)
got = df.e
expect = np.arctan2(np.sin(df.a), df.b)
tm.assert_series_equal(got, expect, check_names=False)
def test_df_arithmetic_subexpression(self):
df = DataFrame({'a': np.random.randn(10),
'b': np.random.randn(10)})
df.eval("e = sin(a + b)",
engine=self.engine,
parser=self.parser, inplace=True)
got = df.e
expect = np.sin(df.a + df.b)
tm.assert_series_equal(got, expect, check_names=False)
def check_result_type(self, dtype, expect_dtype):
df = DataFrame({'a': np.random.randn(10).astype(dtype)})
self.assertEqual(df.a.dtype, dtype)
df.eval("b = sin(a)",
engine=self.engine,
parser=self.parser, inplace=True)
got = df.b
expect = np.sin(df.a)
self.assertEqual(expect.dtype, got.dtype)
self.assertEqual(expect_dtype, got.dtype)
tm.assert_series_equal(got, expect, check_names=False)
def test_result_types(self):
self.check_result_type(np.int32, np.float64)
self.check_result_type(np.int64, np.float64)
self.check_result_type(np.float32, np.float32)
self.check_result_type(np.float64, np.float64)
def test_result_types2(self):
# xref https://github.com/pydata/pandas/issues/12293
raise nose.SkipTest("unreliable tests on complex128")
# Did not test complex64 because DataFrame is converting it to
# complex128. Due to https://github.com/pydata/pandas/issues/10952
self.check_result_type(np.complex128, np.complex128)
def test_undefined_func(self):
df = DataFrame({'a': np.random.randn(10)})
with tm.assertRaisesRegexp(ValueError,
"\"mysin\" is not a supported function"):
df.eval("mysin(a)",
engine=self.engine,
parser=self.parser)
def test_keyword_arg(self):
df = DataFrame({'a': np.random.randn(10)})
with tm.assertRaisesRegexp(TypeError,
"Function \"sin\" does not support "
"keyword arguments"):
df.eval("sin(x=a)",
engine=self.engine,
parser=self.parser)
class TestMathPythonPandas(TestMathPythonPython):
@classmethod
def setUpClass(cls):
super(TestMathPythonPandas, cls).setUpClass()
cls.engine = 'python'
cls.parser = 'pandas'
class TestMathNumExprPandas(TestMathPythonPython):
@classmethod
def setUpClass(cls):
super(TestMathNumExprPandas, cls).setUpClass()
cls.engine = 'numexpr'
cls.parser = 'pandas'
class TestMathNumExprPython(TestMathPythonPython):
@classmethod
def setUpClass(cls):
super(TestMathNumExprPython, cls).setUpClass()
cls.engine = 'numexpr'
cls.parser = 'python'
_var_s = randn(10)
class TestScope(object):
def check_global_scope(self, e, engine, parser):
tm.skip_if_no_ne(engine)
tm.assert_numpy_array_equal(_var_s * 2, pd.eval(e, engine=engine,
parser=parser))
def test_global_scope(self):
e = '_var_s * 2'
for engine, parser in product(_engines, expr._parsers):
yield self.check_global_scope, e, engine, parser
def check_no_new_locals(self, engine, parser):
tm.skip_if_no_ne(engine)
x = 1
lcls = locals().copy()
pd.eval('x + 1', local_dict=lcls, engine=engine, parser=parser)
lcls2 = locals().copy()
lcls2.pop('lcls')
tm.assert_equal(lcls, lcls2)
def test_no_new_locals(self):
for engine, parser in product(_engines, expr._parsers):
yield self.check_no_new_locals, engine, parser
def check_no_new_globals(self, engine, parser):
tm.skip_if_no_ne(engine)
x = 1
gbls = globals().copy()
pd.eval('x + 1', engine=engine, parser=parser)
gbls2 = globals().copy()
tm.assert_equal(gbls, gbls2)
def test_no_new_globals(self):
for engine, parser in product(_engines, expr._parsers):
yield self.check_no_new_globals, engine, parser
def test_invalid_engine():
tm.skip_if_no_ne()
assertRaisesRegexp(KeyError, 'Invalid engine \'asdf\' passed',
pd.eval, 'x + y', local_dict={'x': 1, 'y': 2},
engine='asdf')
def test_invalid_parser():
tm.skip_if_no_ne()
assertRaisesRegexp(KeyError, 'Invalid parser \'asdf\' passed',
pd.eval, 'x + y', local_dict={'x': 1, 'y': 2},
parser='asdf')
_parsers = {'python': PythonExprVisitor, 'pytables': pytables.ExprVisitor,
'pandas': PandasExprVisitor}
def check_disallowed_nodes(engine, parser):
tm.skip_if_no_ne(engine)
VisitorClass = _parsers[parser]
uns_ops = VisitorClass.unsupported_nodes
inst = VisitorClass('x + 1', engine, parser)
for ops in uns_ops:
assert_raises(NotImplementedError, getattr(inst, ops))
def test_disallowed_nodes():
for engine, visitor in product(_parsers, repeat=2):
yield check_disallowed_nodes, engine, visitor
def check_syntax_error_exprs(engine, parser):
tm.skip_if_no_ne(engine)
e = 's +'
assert_raises(SyntaxError, pd.eval, e, engine=engine, parser=parser)
def test_syntax_error_exprs():
for engine, parser in ENGINES_PARSERS:
yield check_syntax_error_exprs, engine, parser
def check_name_error_exprs(engine, parser):
tm.skip_if_no_ne(engine)
e = 's + t'
with tm.assertRaises(NameError):
pd.eval(e, engine=engine, parser=parser)
def test_name_error_exprs():
for engine, parser in ENGINES_PARSERS:
yield check_name_error_exprs, engine, parser
def check_invalid_local_variable_reference(engine, parser):
tm.skip_if_no_ne(engine)
a, b = 1, 2
exprs = 'a + @b', '@a + b', '@a + @b'
for expr in exprs:
if parser != 'pandas':
with tm.assertRaisesRegexp(SyntaxError, "The '@' prefix is only"):
pd.eval(exprs, engine=engine, parser=parser)
else:
with tm.assertRaisesRegexp(SyntaxError, "The '@' prefix is not"):
pd.eval(exprs, engine=engine, parser=parser)
def test_invalid_local_variable_reference():
for engine, parser in ENGINES_PARSERS:
yield check_invalid_local_variable_reference, engine, parser
def check_numexpr_builtin_raises(engine, parser):
tm.skip_if_no_ne(engine)
sin, dotted_line = 1, 2
if engine == 'numexpr':
with tm.assertRaisesRegexp(NumExprClobberingError,
'Variables in expression .+'):
pd.eval('sin + dotted_line', engine=engine, parser=parser)
else:
res = pd.eval('sin + dotted_line', engine=engine, parser=parser)
tm.assert_equal(res, sin + dotted_line)
def test_numexpr_builtin_raises():
for engine, parser in ENGINES_PARSERS:
yield check_numexpr_builtin_raises, engine, parser
def check_bad_resolver_raises(engine, parser):
tm.skip_if_no_ne(engine)
cannot_resolve = 42, 3.0
with tm.assertRaisesRegexp(TypeError, 'Resolver of type .+'):
pd.eval('1 + 2', resolvers=cannot_resolve, engine=engine,
parser=parser)
def test_bad_resolver_raises():
for engine, parser in ENGINES_PARSERS:
yield check_bad_resolver_raises, engine, parser
def check_more_than_one_expression_raises(engine, parser):
tm.skip_if_no_ne(engine)
with tm.assertRaisesRegexp(SyntaxError,
'only a single expression is allowed'):
pd.eval('1 + 1; 2 + 2', engine=engine, parser=parser)
def test_more_than_one_expression_raises():
for engine, parser in ENGINES_PARSERS:
yield check_more_than_one_expression_raises, engine, parser
def check_bool_ops_fails_on_scalars(gen, lhs, cmp, rhs, engine, parser):
tm.skip_if_no_ne(engine)
mid = gen[type(lhs)]()
ex1 = 'lhs {0} mid {1} rhs'.format(cmp, cmp)
ex2 = 'lhs {0} mid and mid {1} rhs'.format(cmp, cmp)
ex3 = '(lhs {0} mid) & (mid {1} rhs)'.format(cmp, cmp)
for ex in (ex1, ex2, ex3):
with tm.assertRaises(NotImplementedError):
pd.eval(ex, engine=engine, parser=parser)
def test_bool_ops_fails_on_scalars():
_bool_ops_syms = 'and', 'or'
dtypes = int, float
gen = {int: lambda: np.random.randint(10), float: np.random.randn}
for engine, parser, dtype1, cmp, dtype2 in product(_engines, expr._parsers,
dtypes, _bool_ops_syms,
dtypes):
yield (check_bool_ops_fails_on_scalars, gen, gen[dtype1](), cmp,
gen[dtype2](), engine, parser)
def check_inf(engine, parser):
tm.skip_if_no_ne(engine)
s = 'inf + 1'
expected = np.inf
result = pd.eval(s, engine=engine, parser=parser)
tm.assert_equal(result, expected)
def test_inf():
for engine, parser in ENGINES_PARSERS:
yield check_inf, engine, parser
def check_negate_lt_eq_le(engine, parser):
tm.skip_if_no_ne(engine)
df = pd.DataFrame([[0, 10], [1, 20]], columns=['cat', 'count'])
expected = df[~(df.cat > 0)]
result = df.query('~(cat > 0)', engine=engine, parser=parser)
tm.assert_frame_equal(result, expected)
if parser == 'python':
with tm.assertRaises(NotImplementedError):
df.query('not (cat > 0)', engine=engine, parser=parser)
else:
result = df.query('not (cat > 0)', engine=engine, parser=parser)
tm.assert_frame_equal(result, expected)
def test_negate_lt_eq_le():
for engine, parser in product(_engines, expr._parsers):
yield check_negate_lt_eq_le, engine, parser
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False) | unknown | codeparrot/codeparrot-clean | ||
#***************************************************************************
#* *
#* Copyright (c) 2011, 2016 *
#* Jose Luis Cercos Pita <jlcercos@gmail.com> *
#* *
#* This program is free software; you can redistribute it and/or modify *
#* it under the terms of the GNU Lesser General Public License (LGPL) *
#* as published by the Free Software Foundation; either version 2 of *
#* the License, or (at your option) any later version. *
#* for detail see the LICENCE text file. *
#* *
#* This program is distributed in the hope that it will be useful, *
#* but WITHOUT ANY WARRANTY; without even the implied warranty of *
#* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
#* GNU Library General Public License for more details. *
#* *
#* You should have received a copy of the GNU Library General Public *
#* License along with this program; if not, write to the Free Software *
#* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 *
#* USA *
#* *
#***************************************************************************
import math
import random
from FreeCAD import Vector, Rotation, Matrix, Placement
import Part
import Units
import FreeCAD as App
import FreeCADGui as Gui
from PySide import QtGui, QtCore
import Instance
from shipUtils import Math
import shipUtils.Units as USys
DENS = Units.parseQuantity("1025 kg/m^3") # Salt water
COMMON_BOOLEAN_ITERATIONS = 10
def placeShipShape(shape, draft, roll, trim):
"""Move the ship shape such that the free surface matches with the plane
z=0. The transformation will be applied on the input shape, so copy it
before calling this method if it should be preserved.
Position arguments:
shape -- Ship shape
draft -- Ship draft
roll -- Roll angle
trim -- Trim angle
Returned values:
shape -- The same transformed input shape. Just for debugging purposes, you
can discard it.
base_z -- The new base z coordinate (after applying the roll angle). Useful
if you want to revert back the transformation
"""
    # Roll the ship. In order to deal with large roll angles, we proceed as
    # follows:
    # 1.- Apply the roll with respect to the base line
    # 2.- Recenter the ship in the y direction
    # 3.- Readjust the base line
shape.rotate(Vector(0.0, 0.0, 0.0), Vector(1.0, 0.0, 0.0), roll)
base_z = shape.BoundBox.ZMin
shape.translate(Vector(0.0, draft * math.sin(math.radians(roll)), -base_z))
# Trim the ship. In this case we only need to correct the x direction
shape.rotate(Vector(0.0, 0.0, 0.0), Vector(0.0, -1.0, 0.0), trim)
shape.translate(Vector(draft * math.sin(math.radians(trim)), 0.0, 0.0))
shape.translate(Vector(0.0, 0.0, -draft))
return shape, base_z
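The rotate/translate sequence above can be illustrated on a single 3D point with plain Python. This is a hedged, simplified re-implementation for illustration only (it mirrors the roll-then-trim steps above but is not the FreeCAD code; the point and angle values are made up):

```python
import math

def place_point(p, draft, roll_deg, trim_deg, base_z):
    """Apply a roll-then-trim sequence, analogous to placeShipShape, to a
    single point (x, y, z). base_z plays the role of the hull's lowest z
    coordinate after rolling."""
    x, y, z = p
    r = math.radians(roll_deg)
    # 1) roll: rotation about the +x axis through the origin
    y, z = y * math.cos(r) - z * math.sin(r), y * math.sin(r) + z * math.cos(r)
    # 2) recenter in y and move the base line back to z=0
    y += draft * math.sin(r)
    z -= base_z
    t = math.radians(trim_deg)
    # 3) trim: rotation about the -y axis, then recenter in x and sink by
    #    the draft
    x, z = x * math.cos(t) - z * math.sin(t), x * math.sin(t) + z * math.cos(t)
    x += draft * math.sin(t)
    z -= draft
    return (x, y, z)

# With zero roll and trim the point is simply lowered by the draft
print(place_point((1.0, 2.0, 3.0), draft=1.5, roll_deg=0.0, trim_deg=0.0,
                  base_z=0.0))
```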
def getUnderwaterSide(shape, force=True):
"""Get the underwater shape, simply cropping the provided shape by the z=0
free surface plane.
Position arguments:
shape -- Solid shape to be cropped
Keyword arguments:
force -- True if in case the common boolean operation fails, i.e. returns
no solids, the tool should retry it slightly moving the free surface. False
otherwise. (True by default)
Returned value:
Cropped shape. It is not modifying the input shape
"""
# Convert the shape into an active object
Part.show(shape)
orig = App.ActiveDocument.Objects[-1]
bbox = shape.BoundBox
xmin = bbox.XMin
xmax = bbox.XMax
ymin = bbox.YMin
ymax = bbox.YMax
zmin = bbox.ZMin
zmax = bbox.ZMax
# Create the "sea" box to intersect the ship
L = xmax - xmin
B = ymax - ymin
H = zmax - zmin
box = App.ActiveDocument.addObject("Part::Box","Box")
length_format = USys.getLengthFormat()
box.Placement = Placement(Vector(xmin - L, ymin - B, zmin - H),
Rotation(App.Vector(0,0,1),0))
box.Length = length_format.format(3.0 * L)
box.Width = length_format.format(3.0 * B)
box.Height = length_format.format(- zmin + H)
App.ActiveDocument.recompute()
common = App.activeDocument().addObject("Part::MultiCommon",
"UnderwaterSideHelper")
common.Shapes = [orig, box]
App.ActiveDocument.recompute()
    if force and len(common.Shape.Solids) == 0:
        # The common operation failed, so let's retry it slightly moving the
        # free surface
msg = QtGui.QApplication.translate(
"ship_console",
"Boolean operation failed when trying to get the underwater side."
" The tool is retrying such operation slightly moving the free"
" surface position",
None)
App.Console.PrintWarning(msg + '\n')
random_bounds = 0.01 * H
i = 0
while len(common.Shape.Solids) == 0 and i < COMMON_BOOLEAN_ITERATIONS:
i += 1
box.Height = length_format.format(
- zmin + H + random.uniform(-random_bounds, random_bounds))
App.ActiveDocument.recompute()
out = common.Shape
App.ActiveDocument.removeObject(common.Name)
App.ActiveDocument.removeObject(orig.Name)
App.ActiveDocument.removeObject(box.Name)
App.ActiveDocument.recompute()
return out
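The retry loop above follows a generic pattern: when a geometric boolean operation intermittently fails, perturb one of its parameters slightly and try again a bounded number of times. A standalone sketch of that pattern (the helper and the toy operation are hypothetical, invented for illustration):

```python
import random

def retry_with_jitter(op, value, bounds, max_iters=10):
    """Call op(value); if it reports failure (returns None), retry with the
    value randomly perturbed within +/- bounds, up to max_iters times."""
    result = op(value)
    i = 0
    while result is None and i < max_iters:
        i += 1
        result = op(value + random.uniform(-bounds, bounds))
    return result

# A toy operation that only fails at one exact degenerate value
def toy_op(x):
    return x * x if x != 1.0 else None

print(retry_with_jitter(toy_op, 1.0, bounds=0.01))
```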
def areas(ship, n, draft=None,
roll=Units.parseQuantity("0 deg"),
trim=Units.parseQuantity("0 deg")):
"""Compute the ship transversal areas
    Positional arguments:
ship -- Ship object (see createShip)
n -- Number of points to compute
Keyword arguments:
draft -- Ship draft (Design ship draft by default)
roll -- Roll angle (0 degrees by default)
trim -- Trim angle (0 degrees by default)
    Returned value:
    List of sections, each one containing 2 values: the longitudinal x
    coordinate and the transversal area. If n < 2, an empty list will be
    returned.
"""
if n < 2:
return []
if draft is None:
draft = ship.Draft
shape, _ = placeShipShape(ship.Shape.copy(), draft, roll, trim)
shape = getUnderwaterSide(shape)
# Sections distance computation
bbox = shape.BoundBox
xmin = bbox.XMin
xmax = bbox.XMax
dx = (xmax - xmin) / (n - 1.0)
    # Since we are computing the sections along the total length (not the
    # length between perpendiculars), we can guarantee that the starting and
    # ending sections have null area
areas = [(Units.Quantity(xmin, Units.Length),
Units.Quantity(0.0, Units.Area))]
# And since we just need to compute areas we will create boxes with its
# front face at the desired transversal area position, computing the
# common solid part, dividing it by faces, and getting only the desired
# ones.
App.Console.PrintMessage("Computing transversal areas...\n")
App.Console.PrintMessage("Some Inventor representation errors can be"
" shown, please ignore them.\n")
for i in range(1, n - 1):
App.Console.PrintMessage("{0} / {1}\n".format(i, n - 2))
x = xmin + i * dx
try:
f = Part.Face(shape.slice(Vector(1,0,0), x))
except Part.OCCError:
msg = QtGui.QApplication.translate(
"ship_console",
"Part.OCCError: Transversal area computation failed",
None)
App.Console.PrintError(msg + '\n')
areas.append((Units.Quantity(x, Units.Length),
Units.Quantity(0.0, Units.Area)))
continue
# It is a valid face, so we can add this area
areas.append((Units.Quantity(x, Units.Length),
Units.Quantity(f.Area, Units.Area)))
# Last area is equal to zero (due to the total length usage)
areas.append((Units.Quantity(xmax, Units.Length),
Units.Quantity(0.0, Units.Area)))
App.Console.PrintMessage("Done!\n")
return areas
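The section spacing logic above is just a uniform subdivision of [xmin, xmax] with zero-area end sections. A self-contained sketch, with a made-up area function standing in for the face-slicing step:

```python
def section_areas(xmin, xmax, n, area_at):
    """Return [(x, area)] for n evenly spaced sections, with the first and
    last sections forced to zero area (total-length convention)."""
    if n < 2:
        return []
    dx = (xmax - xmin) / (n - 1.0)
    out = [(xmin, 0.0)]
    for i in range(1, n - 1):
        x = xmin + i * dx
        out.append((x, area_at(x)))
    out.append((xmax, 0.0))
    return out

# Parabolic dummy area distribution, maximal at midships
areas_list = section_areas(0.0, 10.0, 5, lambda x: x * (10.0 - x))
```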
def displacement(ship, draft=None,
roll=Units.parseQuantity("0 deg"),
trim=Units.parseQuantity("0 deg")):
"""Compute the ship displacement
Position arguments:
ship -- Ship object (see createShip)
Keyword arguments:
draft -- Ship draft (Design ship draft by default)
roll -- Roll angle (0 degrees by default)
trim -- Trim angle (0 degrees by default)
Returned values:
disp -- The ship displacement (a density of the water of 1025 kg/m^3 is
assumed)
B -- Bouyance application point, i.e. Center of mass of the underwater side
Cb -- Block coefficient
The Bouyance center is referred to the original ship position.
"""
if draft is None:
draft = ship.Draft
shape, base_z = placeShipShape(ship.Shape.copy(), draft, roll, trim)
shape = getUnderwaterSide(shape)
vol = 0.0
cog = Vector()
if len(shape.Solids) > 0:
for solid in shape.Solids:
vol += solid.Volume
sCoG = solid.CenterOfMass
cog.x = cog.x + sCoG.x * solid.Volume
cog.y = cog.y + sCoG.y * solid.Volume
cog.z = cog.z + sCoG.z * solid.Volume
cog.x = cog.x / vol
cog.y = cog.y / vol
cog.z = cog.z / vol
bbox = shape.BoundBox
Vol = (bbox.XMax - bbox.XMin) * (bbox.YMax - bbox.YMin) * abs(bbox.ZMin)
    # Undo the transformations on the buoyancy point
B = Part.Point(Vector(cog.x, cog.y, cog.z))
m = Matrix()
m.move(Vector(0.0, 0.0, draft))
m.move(Vector(-draft * math.sin(trim.getValueAs("rad")), 0.0, 0.0))
m.rotateY(trim.getValueAs("rad"))
m.move(Vector(0.0,
-draft * math.sin(roll.getValueAs("rad")),
base_z))
m.rotateX(-roll.getValueAs("rad"))
B.transform(m)
try:
cb = vol / Vol
except ZeroDivisionError:
msg = QtGui.QApplication.translate(
"ship_console",
"ZeroDivisionError: Null volume found during the displacement"
" computation!",
None)
App.Console.PrintError(msg + '\n')
cb = 0.0
# Return the computed data
return (DENS * Units.Quantity(vol, Units.Volume),
Vector(B.X, B.Y, B.Z),
cb)
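The loop over solids above is a volume-weighted average of the individual centers of mass. A minimal standalone sketch of that aggregation (solids are represented here by made-up (volume, center) tuples):

```python
def weighted_center(solids):
    """Aggregate (volume, (x, y, z)) tuples into the total volume and the
    volume-weighted center of the union."""
    vol = 0.0
    cx = cy = cz = 0.0
    for v, (x, y, z) in solids:
        vol += v
        cx += x * v
        cy += y * v
        cz += z * v
    if vol > 0.0:
        cx, cy, cz = cx / vol, cy / vol, cz / vol
    return vol, (cx, cy, cz)

# Two equal boxes centered at z=-1 and z=-3 average to z=-2
vol, cog = weighted_center([(2.0, (0.0, 0.0, -1.0)), (2.0, (0.0, 0.0, -3.0))])
```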
def wettedArea(shape, draft, roll=Units.parseQuantity("0 deg"),
trim=Units.parseQuantity("0 deg")):
"""Compute the ship wetted area
    Positional arguments:
shape -- External faces of the ship hull
draft -- Ship draft
Keyword arguments:
roll -- Roll angle (0 degrees by default)
trim -- Trim angle (0 degrees by default)
Returned value:
The wetted area, i.e. The underwater side area
"""
shape, _ = placeShipShape(shape.copy(), draft, roll, trim)
shape = getUnderwaterSide(shape, force=False)
area = 0.0
for f in shape.Faces:
area = area + f.Area
return Units.Quantity(area, Units.Area)
def moment(ship, draft=None,
roll=Units.parseQuantity("0 deg"),
trim=Units.parseQuantity("0 deg")):
"""Compute the moment required to trim the ship 1cm
Position arguments:
ship -- Ship object (see createShip)
Keyword arguments:
draft -- Ship draft (Design ship draft by default)
roll -- Roll angle (0 degrees by default)
trim -- Trim angle (0 degrees by default)
Returned value:
Moment required to trim the ship 1cm. Such moment is positive if it cause a
positive trim angle. The moment is expressed as a mass by a distance, not as
a force by a distance
"""
disp_orig, B_orig, _ = displacement(ship, draft, roll, trim)
xcb_orig = Units.Quantity(B_orig.x, Units.Length)
factor = 10.0
x = 0.5 * ship.Length.getValueAs('cm').Value
y = 1.0
angle = math.atan2(y, x) * Units.Radian
trim_new = trim + factor * angle
disp_new, B_new, _ = displacement(ship, draft, roll, trim_new)
xcb_new = Units.Quantity(B_new.x, Units.Length)
mom0 = -disp_orig * xcb_orig
mom1 = -disp_new * xcb_new
return (mom1 - mom0) / factor
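The computation above is a finite difference: trim the ship by the angle that sinks one end 1 cm over half the length, and compare the trimming moments (displacement times longitudinal buoyancy arm) before and after. A plain-number sketch, with a made-up linear moment model standing in for the two displacement() calls:

```python
import math

def moment_to_trim_1cm(length_cm, moment_at_trim, trim_deg=0.0, factor=10.0):
    """Finite-difference estimate of the moment change per 1 cm of trim.
    moment_at_trim(trim_deg) stands in for -disp * xcb at a given trim."""
    # Angle whose tangent sinks one end 1 cm over half the ship length
    angle_deg = math.degrees(math.atan2(1.0, 0.5 * length_cm))
    mom0 = moment_at_trim(trim_deg)
    mom1 = moment_at_trim(trim_deg + factor * angle_deg)
    return (mom1 - mom0) / factor

# A toy ship whose trimming moment grows linearly with the trim angle
m = moment_to_trim_1cm(10000.0, lambda t: 500.0 * t)
```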
def floatingArea(ship, draft=None,
roll=Units.parseQuantity("0 deg"),
trim=Units.parseQuantity("0 deg")):
"""Compute the ship floating area
    Positional arguments:
ship -- Ship object (see createShip)
Keyword arguments:
draft -- Ship draft (Design ship draft by default)
roll -- Roll angle (0 degrees by default)
trim -- Trim angle (0 degrees by default)
Returned values:
area -- Ship floating area
cf -- Floating area coefficient
"""
if draft is None:
draft = ship.Draft
# We want to intersect the whole ship with the free surface, so in this case
# we must not use the underwater side (or the tool will fail)
shape, _ = placeShipShape(ship.Shape.copy(), draft, roll, trim)
try:
f = Part.Face(shape.slice(Vector(0,0,1), 0.0))
area = Units.Quantity(f.Area, Units.Area)
except Part.OCCError:
msg = QtGui.QApplication.translate(
"ship_console",
"Part.OCCError: Floating area cannot be computed",
None)
App.Console.PrintError(msg + '\n')
area = Units.Quantity(0.0, Units.Area)
bbox = shape.BoundBox
Area = (bbox.XMax - bbox.XMin) * (bbox.YMax - bbox.YMin)
try:
cf = area.Value / Area
except ZeroDivisionError:
msg = QtGui.QApplication.translate(
"ship_console",
"ZeroDivisionError: Null area found during the floating area"
" computation!",
None)
App.Console.PrintError(msg + '\n')
cf = 0.0
return area, cf
def BMT(ship, draft=None, trim=Units.parseQuantity("0 deg")):
"""Calculate "ship Bouyance center" - "transversal metacenter" radius
Position arguments:
ship -- Ship object (see createShip)
Keyword arguments:
draft -- Ship draft (Design ship draft by default)
trim -- Trim angle (0 degrees by default)
Returned value:
BMT radius
"""
if draft is None:
draft = ship.Draft
roll = Units.parseQuantity("0 deg")
_, B0, _ = displacement(ship, draft, roll, trim)
nRoll = 2
maxRoll = Units.parseQuantity("7 deg")
BM = 0.0
for i in range(nRoll):
roll = (maxRoll / nRoll) * (i + 1)
_, B1, _ = displacement(ship, draft, roll, trim)
# * M
# / \
# / \ BM ==|> BM = (BB/2) / sin(alpha/2)
# / \
# *-------*
# BB
BB = B1 - B0
BB.x = 0.0
# nRoll actually represents the weight function
BM += 0.5 * BB.Length / math.sin(math.radians(0.5 * roll)) / nRoll
return Units.Quantity(BM, Units.Length)
def mainFrameCoeff(ship, draft=None):
"""Compute the main frame coefficient
Position arguments:
ship -- Ship object (see createShip)
Keyword arguments:
draft -- Ship draft (Design ship draft by default)
Returned value:
Ship main frame area coefficient
"""
if draft is None:
draft = ship.Draft
shape, _ = placeShipShape(ship.Shape.copy(), draft,
Units.parseQuantity("0 deg"),
Units.parseQuantity("0 deg"))
shape = getUnderwaterSide(shape)
try:
f = Part.Face(shape.slice(Vector(1,0,0), 0.0))
area = f.Area
except Part.OCCError:
msg = QtGui.QApplication.translate(
"ship_console",
"Part.OCCError: Main frame area cannot be computed",
None)
App.Console.PrintError(msg + '\n')
area = 0.0
bbox = shape.BoundBox
Area = (bbox.YMax - bbox.YMin) * (bbox.ZMax - bbox.ZMin)
try:
cm = area / Area
except ZeroDivisionError:
msg = QtGui.QApplication.translate(
"ship_console",
"ZeroDivisionError: Null area found during the main frame area"
" coefficient computation!",
None)
App.Console.PrintError(msg + '\n')
cm = 0.0
return cm
class Point:
"""Hydrostatics point, that contains the following members:
draft -- Ship draft
trim -- Ship trim
disp -- Ship displacement
xcb -- Buoyancy center X coordinate
wet -- Wetted ship area
mom -- Trimming 1cm ship moment
farea -- Floating area
KBt -- Transversal KB height
BMt -- Transversal BM height
Cb -- Block coefficient.
Cf -- Floating coefficient.
Cm -- Main frame coefficient.
The moment to trim the ship 1 cm is positive when it results in a positive
trim angle.
"""
def __init__(self, ship, faces, draft, trim):
"""Compute all the hydrostatics.
Position argument:
ship -- Ship instance
faces -- Ship external faces
draft -- Ship draft
trim -- Trim angle
"""
disp, B, cb = displacement(ship, draft=draft, trim=trim)
if not faces:
wet = 0.0
else:
wet = wettedArea(faces, draft=draft, trim=trim)
mom = moment(ship, draft=draft, trim=trim)
farea, cf = floatingArea(ship, draft=draft, trim=trim)
bm = BMT(ship, draft=draft, trim=trim)
cm = mainFrameCoeff(ship, draft=draft)
# Store final data
self.draft = draft
self.trim = trim
self.disp = disp
self.xcb = Units.Quantity(B.x, Units.Length)
self.wet = wet
self.farea = farea
self.mom = mom
self.KBt = Units.Quantity(B.z, Units.Length)
self.BMt = bm
self.Cb = cb
self.Cf = cf
self.Cm = cm | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (C) 2008, 2009 Adriano Monteiro Marques
#
# Author: Francesco Piccinno <stack.box@gmail.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
from timed import TimedContext
from umit.pm.backend.abstract.context import register_sniff_context
class BaseSniffContext(TimedContext):
"A context to sniff on a given interface"
has_stop = True
has_resume = False
has_restart = True
def __init__(self, iface, filter=None, minsize=0, maxsize=0, capfile=None, \
scount=0, stime=0, ssize=0, real=True, scroll=True, \
resmac=True, resname=False, restransport=True, promisc=True, \
background=False, capmethod=0, audits=True, \
callback=None, udata=None):
"""
Create a BaseSniffContext object
@param iface the interface to sniff from
@param filter the BPF filter to apply
@param minsize the min size for every packet (0 no filter)
@param maxsize the max size for every packet (0 no filter)
@param capfile the file where the packets are saved (in real time)
@param scount stop after scount packets sniffed (0 no filter)
@param stime stop after stime seconds (0 no filter)
@param ssize stop after ssize bytes (0 no filter)
@param real if the view should be updated in real time
@param scroll if the view should be scrolled at every packet received
@param resmac enable MAC resolution
@param resname enable name resolution
@param restransport enable transport resolution
@param promisc set the interface to promisc mode
@param background if the sniff context should be run in the background
@param capmethod the method to use (0 for standard, 1 for virtual
interface through file, 2 for tcpdump helper, 3 for
dumpcap helper)
@param audits a bool to indicate if auditdispatcher should be fed
with captured packets.
@param callback a function to call at every packet sniffed
@param udata the user data to pass to callback
"""
TimedContext.__init__(self)
self.iface = iface
self.filter = filter
self.min_packet_size = minsize
self.max_packet_size = maxsize
self.cap_file = capfile
self.promisc = promisc
self.stop_count = scount
self.stop_time = stime
self.stop_size = ssize
self.real_time = real
self.auto_scroll = scroll
self.mac_resolution = resmac
self.name_resolution = resname
self.transport_resolution = restransport
self.capmethod = capmethod
self.audits = audits
self.background = background
self.callback = callback
self.udata = udata
self.tot_size = 0
self.tot_time = 0
self.tot_count = 0
SniffContext = register_sniff_context(BaseSniffContext) | unknown | codeparrot/codeparrot-clean | ||
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import io
import logging
import re
import zipfile
import babelfish
import bs4
import requests
from . import Provider
from .. import __version__
from ..cache import region, SHOW_EXPIRATION_TIME, EPISODE_EXPIRATION_TIME
from ..exceptions import ProviderError
from ..subtitle import Subtitle, fix_line_endings, compute_guess_properties_matches
from ..video import Episode
logger = logging.getLogger(__name__)
babelfish.language_converters.register('tvsubtitles = subliminal.converters.tvsubtitles:TVsubtitlesConverter')
class TVsubtitlesSubtitle(Subtitle):
provider_name = 'tvsubtitles'
def __init__(self, language, series, season, episode, year, id, rip, release, page_link): # @ReservedAssignment
super(TVsubtitlesSubtitle, self).__init__(language, page_link=page_link)
self.series = series
self.season = season
self.episode = episode
self.year = year
self.id = id
self.rip = rip
self.release = release
def compute_matches(self, video):
matches = set()
# series
if video.series and self.series == video.series:
matches.add('series')
# season
if video.season and self.season == video.season:
matches.add('season')
# episode
if video.episode and self.episode == video.episode:
matches.add('episode')
# year
if self.year == video.year:
matches.add('year')
# release_group
if video.release_group and self.release and video.release_group.lower() in self.release.lower():
matches.add('release_group')
"""
# video_codec
if video.video_codec and self.release and (video.video_codec in self.release.lower()
or video.video_codec == 'h264' and 'x264' in self.release.lower()):
matches.add('video_codec')
# resolution
if video.resolution and self.rip and video.resolution in self.rip.lower():
matches.add('resolution')
# format
if video.format and self.rip and video.format in self.rip.lower():
matches.add('format')
"""
# we don't have the complete filename, so we need to guess the matches separately
# guess video_codec (videoCodec in guessit)
matches |= compute_guess_properties_matches(video, self.release, 'videoCodec')
# guess resolution (screenSize in guessit)
matches |= compute_guess_properties_matches(video, self.rip, 'screenSize')
# guess format
matches |= compute_guess_properties_matches(video, self.rip, 'format')
return matches
class TVsubtitlesProvider(Provider):
languages = {babelfish.Language('por', 'BR')} | {babelfish.Language(l)
for l in ['ara', 'bul', 'ces', 'dan', 'deu', 'ell', 'eng', 'fin', 'fra', 'hun', 'ita', 'jpn', 'kor',
'nld', 'pol', 'por', 'ron', 'rus', 'spa', 'swe', 'tur', 'ukr', 'zho']}
video_types = (Episode,)
server = 'http://www.tvsubtitles.net'
episode_id_re = re.compile('^episode-\d+\.html$')
subtitle_re = re.compile('^\/subtitle-\d+\.html$')
link_re = re.compile('^(?P<series>[A-Za-z0-9 \'.]+).*\((?P<first_year>\d{4})-\d{4}\)$')
def initialize(self):
self.session = requests.Session()
self.session.headers = {'User-Agent': 'Subliminal/%s' % __version__.split('-')[0]}
def terminate(self):
self.session.close()
def request(self, url, params=None, data=None, method='GET'):
"""Make a `method` request on `url` with the given parameters
:param string url: part of the URL to reach with the leading slash
:param dict params: params of the request
:param dict data: data of the request
:param string method: method of the request
:return: the response
:rtype: :class:`bs4.BeautifulSoup`
"""
r = self.session.request(method, self.server + url, params=params, data=data, timeout=10)
if r.status_code != 200:
raise ProviderError('Request failed with status code %d' % r.status_code)
return bs4.BeautifulSoup(r.content, ['permissive'])
@region.cache_on_arguments(expiration_time=SHOW_EXPIRATION_TIME)
def find_show_id(self, series, year=None):
"""Find the show id from the `series` with optional `year`
:param string series: series of the episode in lowercase
:param year: year of the series, if any
:type year: int or None
:return: the show id, if any
:rtype: int or None
"""
data = {'q': series}
logger.debug('Searching series %r', data)
soup = self.request('/search.php', data=data, method='POST')
links = soup.select('div.left li div a[href^="/tvshow-"]')
if not links:
logger.info('Series %r not found', series)
return None
matched_links = [link for link in links if self.link_re.match(link.string)]
for link in matched_links: # first pass with exact match on series
match = self.link_re.match(link.string)
if match.group('series').lower().replace('.', ' ').strip() == series:
if year is not None and int(match.group('first_year')) != year:
continue
return int(link['href'][8:-5])
for link in matched_links: # less selective second pass
match = self.link_re.match(link.string)
if match.group('series').lower().replace('.', ' ').strip().startswith(series):
if year is not None and int(match.group('first_year')) != year:
continue
return int(link['href'][8:-5])
return None
@region.cache_on_arguments(expiration_time=EPISODE_EXPIRATION_TIME)
def find_episode_ids(self, show_id, season):
"""Find episode ids from the show id and the season
:param int show_id: show id
:param int season: season of the episode
:return: episode ids per episode number
:rtype: dict
"""
params = {'show_id': show_id, 'season': season}
logger.debug('Searching episodes %r', params)
soup = self.request('/tvshow-{show_id}-{season}.html'.format(**params))
episode_ids = {}
for row in soup.select('table#table5 tr'):
if not row('a', href=self.episode_id_re):
continue
cells = row('td')
episode_ids[int(cells[0].string.split('x')[1])] = int(cells[1].a['href'][8:-5])
return episode_ids
def query(self, series, season, episode, year=None):
show_id = self.find_show_id(series.lower(), year)
if show_id is None:
return []
episode_ids = self.find_episode_ids(show_id, season)
if episode not in episode_ids:
logger.info('Episode %d not found', episode)
return []
params = {'episode_id': episode_ids[episode]}
logger.debug('Searching episode %r', params)
link = '/episode-{episode_id}.html'.format(**params)
soup = self.request(link)
return [TVsubtitlesSubtitle(babelfish.Language.fromtvsubtitles(row.h5.img['src'][13:-4]), series, season,
episode, year if year and show_id != self.find_show_id(series.lower()) else None,
int(row['href'][10:-5]), row.find('p', title='rip').text.strip() or None,
row.find('p', title='release').text.strip() or None,
self.server + '/subtitle-%d.html' % int(row['href'][10:-5]))
for row in soup('a', href=self.subtitle_re)]
def list_subtitles(self, video, languages):
return [s for s in self.query(video.series, video.season, video.episode, video.year) if s.language in languages]
def download_subtitle(self, subtitle):
r = self.session.get(self.server + '/download-{subtitle_id}.html'.format(subtitle_id=subtitle.id),
timeout=10)
if r.status_code != 200:
raise ProviderError('Request failed with status code %d' % r.status_code)
with zipfile.ZipFile(io.BytesIO(r.content)) as zf:
if len(zf.namelist()) > 1:
raise ProviderError('More than one file to unzip')
subtitle.content = fix_line_endings(zf.read(zf.namelist()[0])) | unknown | codeparrot/codeparrot-clean | ||
// Copyright 2019-2024 Tauri Programme within The Commons Conservancy
// SPDX-License-Identifier: Apache-2.0
// SPDX-License-Identifier: MIT
window.__TAURI_ISOLATION_HOOK__ = (payload, options) => {
return payload
} | javascript | github | https://github.com/tauri-apps/tauri | examples/api/isolation-dist/index.js |
# (c) 2016 Red Hat Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.compat.tests.mock import patch
from ansible.modules.network.vyos import vyos_static_route
from units.modules.utils import set_module_args
from .vyos_module import TestVyosModule, load_fixture
class TestVyosStaticRouteModule(TestVyosModule):
module = vyos_static_route
def setUp(self):
super(TestVyosStaticRouteModule, self).setUp()
self.mock_get_config = patch('ansible.modules.network.vyos.vyos_static_route.get_config')
self.get_config = self.mock_get_config.start()
self.mock_load_config = patch('ansible.modules.network.vyos.vyos_static_route.load_config')
self.load_config = self.mock_load_config.start()
def tearDown(self):
super(TestVyosStaticRouteModule, self).tearDown()
self.mock_get_config.stop()
self.mock_load_config.stop()
def load_fixtures(self, commands=None, transport='cli'):
self.load_config.return_value = dict(diff=None, session='session')
def test_vyos_static_route_present(self):
set_module_args(dict(prefix='172.26.0.0/16', next_hop='172.26.4.1', admin_distance='1'))
result = self.execute_module(changed=True)
self.assertEqual(result['commands'],
['set protocols static route 172.26.0.0/16 next-hop 172.26.4.1 distance 1']) | unknown | codeparrot/codeparrot-clean | ||
import sys
from django.apps import apps
from django.db import models
def sql_flush(style, connection, reset_sequences=True, allow_cascade=False):
"""
Return a list of the SQL statements used to flush the database.
"""
tables = connection.introspection.django_table_names(
only_existing=True, include_views=False
)
return connection.ops.sql_flush(
style,
tables,
reset_sequences=reset_sequences,
allow_cascade=allow_cascade,
)
def emit_pre_migrate_signal(verbosity, interactive, db, **kwargs):
# Emit the pre_migrate signal for every application.
for app_config in apps.get_app_configs():
if app_config.models_module is None:
continue
if verbosity >= 2:
stdout = kwargs.get("stdout", sys.stdout)
stdout.write(
"Running pre-migrate handlers for application %s" % app_config.label
)
models.signals.pre_migrate.send(
sender=app_config,
app_config=app_config,
verbosity=verbosity,
interactive=interactive,
using=db,
**kwargs,
)
def emit_post_migrate_signal(verbosity, interactive, db, **kwargs):
# Emit the post_migrate signal for every application.
for app_config in apps.get_app_configs():
if app_config.models_module is None:
continue
if verbosity >= 2:
stdout = kwargs.get("stdout", sys.stdout)
stdout.write(
"Running post-migrate handlers for application %s" % app_config.label
)
models.signals.post_migrate.send(
sender=app_config,
app_config=app_config,
verbosity=verbosity,
interactive=interactive,
using=db,
**kwargs,
) | python | github | https://github.com/django/django | django/core/management/sql.py |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# NoCmodel basic example
#
# Author: Oscar Diaz
# Version: 0.1
# Date: 03-03-2011
#
# This code is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This code is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the
# Free Software Foundation, Inc., 59 Temple Place, Suite 330,
# Boston, MA 02111-1307 USA
#
#
# Changelog:
#
# 03-03-2011 : (OD) initial release
#
import myhdl
import logging
from nocmodel import *
from nocmodel.basicmodels import *
# Basic example model with TBM simulation
# 1. Create the model
basicnoc = noc(name="Basic 2x2 NoC example")
# 1.1 create a rectangular 2x2 NoC, make its connections and add default protocol
R11 = basicnoc.add_router("R11", with_ipcore=True, coord_x = 1, coord_y = 1)
R12 = basicnoc.add_router("R12", with_ipcore=True, coord_x = 1, coord_y = 2)
R21 = basicnoc.add_router("R21", with_ipcore=True, coord_x = 2, coord_y = 1)
R22 = basicnoc.add_router("R22", with_ipcore=True, coord_x = 2, coord_y = 2)
basicnoc.add_channel(R11,R12)
basicnoc.add_channel(R11,R21)
basicnoc.add_channel(R12,R22)
basicnoc.add_channel(R21,R22)
basicnoc.protocol_ref = basic_protocol()
for r in basicnoc.router_list():
r.update_ports_info()
r.update_routes_info()
# 2. add tbm support, and configure logging
add_tbm_basic_support(basicnoc, log_file="simulation.log", log_level=logging.DEBUG)
# 3. Declare generators to put in the TBM simulation
# set ip_cores functionality as myhdl generators
def sourcegen(din, dout, tbm_ref, mydest, data=None, startdelay=100, period=100):
# this generator only drives dout
@myhdl.instance
def putnewdata():
datacount = 0
protocol_ref = tbm_ref.ipcore_ref.get_protocol_ref()
mysrc = tbm_ref.ipcore_ref.router_ref.address
tbm_ref.debug("sourcegen: init dout is %s" % repr(dout.val))
yield myhdl.delay(startdelay)
while True:
if len(data) == datacount:
tbm_ref.debug("sourcegen: end of data. waiting for %d steps" % (period*10))
yield myhdl.delay(period*10)
raise myhdl.StopSimulation("data ended at time %d" % myhdl.now())
dout.next = protocol_ref.newpacket(False, mysrc, mydest, data[datacount])
tbm_ref.debug("sourcegen: data next element %d dout is %s datacount is %d" % (data[datacount], repr(dout.val), datacount))
yield myhdl.delay(period)
datacount += 1
return putnewdata
def checkgen(din, dout, tbm_ref, mysrc, data=None):
# this generator only respond to din
@myhdl.instance
def checkdata():
datacount = 0
protocol_ref = tbm_ref.ipcore_ref.get_protocol_ref()
mydest = tbm_ref.ipcore_ref.router_ref.address
while True:
yield din
if len(data) > datacount:
checkdata = din.val["data"]
tbm_ref.debug("checkgen: assert checkdata != data[datacount] => %d != %d [%d]" % (checkdata, data[datacount], datacount))
if checkdata != data[datacount]:
tbm_ref.error("checkgen: value != %d (%d)" % (data[datacount], checkdata))
tbm_ref.debug("checkgen: assert source address != mysrc => %d != %d " % (din.val["src"], mysrc))
if din.val["src"] != mysrc:
tbm_ref.error("checkgen: source address != %d (%d)" % (mysrc, din.val["src"]))
tbm_ref.debug("checkgen: assert destination address != mydest => %d != %d " % (din.val["dst"], mydest))
if din.val["dst"] != mydest:
tbm_ref.error("checkgen: destination address != %d (%d)" % (mydest, din.val["dst"]))
datacount += 1
return checkdata
# 4. Set test vectors
R11_testdata = [5, 12, 50, -11, 6, 9, 0, 3, 25]
R12_testdata = [x*5 for x in R11_testdata]
# 5. assign generators to ip cores (in TBM model !)
# R11 will send to R22, R12 will send to R21
R11.ipcore_ref.tbm.register_generator(sourcegen, mydest=R22.address, data=R11_testdata, startdelay=10, period=20)
R12.ipcore_ref.tbm.register_generator(sourcegen, mydest=R21.address, data=R12_testdata, startdelay=15, period=25)
R21.ipcore_ref.tbm.register_generator(checkgen, mysrc=R12.address, data=R12_testdata)
R22.ipcore_ref.tbm.register_generator(checkgen, mysrc=R11.address, data=R11_testdata)
# 6. configure simulation and run!
basicnoc.tbmsim.configure_simulation(max_time=1000)
print "Starting simulation..."
basicnoc.tbmsim.run()
print "Simulation finished. Pick the results in log files."
# 7. View graphical representation
draw_noc(basicnoc) | unknown | codeparrot/codeparrot-clean | ||
# -*- coding: utf-8 -*-
"""
***************************************************************************
QtNetwork.py
---------------------
Date : March 2016
Copyright : (C) 2016 by Juergen E. Fischer
Email : jef at norbit dot de
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Juergen E. Fischer'
__date__ = 'March 2016'
__copyright__ = '(C) 2016, Juergen E. Fischer'
from PyQt5.QtNetwork import * | unknown | codeparrot/codeparrot-clean | ||
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
import array
import random
import numpy
from deap import algorithms
from deap import base
from deap import creator
from deap import tools
class PBIL(object):
def __init__(self, ndim, learning_rate, mut_prob, mut_shift, lambda_):
self.prob_vector = [0.5] * ndim
self.learning_rate = learning_rate
self.mut_prob = mut_prob
self.mut_shift = mut_shift
self.lambda_ = lambda_
def sample(self):
return (random.random() < prob for prob in self.prob_vector)
def generate(self, ind_init):
return [ind_init(self.sample()) for _ in range(self.lambda_)]
def update(self, population):
best = max(population, key=lambda ind: ind.fitness)
for i, value in enumerate(best):
# Update the probability vector
self.prob_vector[i] *= 1.0 - self.learning_rate
self.prob_vector[i] += value * self.learning_rate
# Mutate the probability vector
if random.random() < self.mut_prob:
self.prob_vector[i] *= 1.0 - self.mut_shift
self.prob_vector[i] += random.randint(0, 1) * self.mut_shift
def evalOneMax(individual):
return sum(individual),
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", array.array, typecode='b', fitness=creator.FitnessMax)
toolbox = base.Toolbox()
toolbox.register("evaluate", evalOneMax)
def main(seed):
random.seed(seed)
NGEN = 50
#Initialize the PBIL EDA
pbil = PBIL(ndim=50, learning_rate=0.3, mut_prob=0.1,
mut_shift=0.05, lambda_=20)
toolbox.register("generate", pbil.generate, creator.Individual)
toolbox.register("update", pbil.update)
# Statistics computation
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", numpy.mean)
stats.register("std", numpy.std)
stats.register("min", numpy.min)
stats.register("max", numpy.max)
pop, logbook = algorithms.eaGenerateUpdate(toolbox, NGEN, stats=stats, verbose=True)
if __name__ == "__main__":
main(seed=None) | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/python
'''
Use this in the same way as Python's SimpleHTTPServer:
./ssi_server.py [port]
The only difference is that, for files ending in '.html', ssi_server will
inline SSI (Server Side Includes) of the form:
<!-- #include virtual="fragment.html" -->
Run ./ssi_server.py in this directory and visit localhost:8000 for an example.
'''
import os
import ssi
from SimpleHTTPServer import SimpleHTTPRequestHandler
import SimpleHTTPServer
import tempfile
class SSIRequestHandler(SimpleHTTPRequestHandler):
"""Adds minimal support for <!-- #include --> directives.
The key bit is translate_path, which intercepts requests and serves them
using a temporary file which inlines the #includes.
"""
def __init__(self, request, client_address, server):
self.temp_files = []
SimpleHTTPRequestHandler.__init__(self, request, client_address, server)
def do_GET(self):
SimpleHTTPRequestHandler.do_GET(self)
self.delete_temp_files()
def do_HEAD(self):
SimpleHTTPRequestHandler.do_HEAD(self)
self.delete_temp_files()
def translate_path(self, path):
fs_path = SimpleHTTPRequestHandler.translate_path(self, path)
if self.path.endswith('/'):
for index in "index.html", "index.htm":
index = os.path.join(fs_path, index)
if os.path.exists(index):
fs_path = index
break
if fs_path.endswith('.html'):
content = ssi.InlineIncludes(fs_path)
fs_path = self.create_temp_file(fs_path, content)
return fs_path
def delete_temp_files(self):
for temp_file in self.temp_files:
os.remove(temp_file)
def create_temp_file(self, original_path, content):
_, ext = os.path.splitext(original_path)
fd, path = tempfile.mkstemp(suffix=ext)
os.write(fd, content)
os.close(fd)
self.temp_files.append(path)
return path
if __name__ == '__main__':
SimpleHTTPServer.test(HandlerClass=SSIRequestHandler) | unknown | codeparrot/codeparrot-clean | ||
/*
* Copyright (C) 2005 Junio C Hamano
*/
#define USE_THE_REPOSITORY_VARIABLE
#define DISABLE_SIGN_COMPARE_WARNINGS
#include "git-compat-util.h"
#include "abspath.h"
#include "base85.h"
#include "config.h"
#include "convert.h"
#include "environment.h"
#include "gettext.h"
#include "tempfile.h"
#include "revision.h"
#include "quote.h"
#include "diff.h"
#include "diffcore.h"
#include "delta.h"
#include "hex.h"
#include "xdiff-interface.h"
#include "color.h"
#include "run-command.h"
#include "utf8.h"
#include "odb.h"
#include "userdiff.h"
#include "submodule.h"
#include "hashmap.h"
#include "mem-pool.h"
#include "merge-ll.h"
#include "string-list.h"
#include "strvec.h"
#include "tmp-objdir.h"
#include "graph.h"
#include "oid-array.h"
#include "packfile.h"
#include "pager.h"
#include "parse-options.h"
#include "help.h"
#include "promisor-remote.h"
#include "dir.h"
#include "object-file.h"
#include "object-name.h"
#include "read-cache-ll.h"
#include "setup.h"
#include "strmap.h"
#include "ws.h"
#ifdef NO_FAST_WORKING_DIRECTORY
#define FAST_WORKING_DIRECTORY 0
#else
#define FAST_WORKING_DIRECTORY 1
#endif
static int diff_detect_rename_default;
static int diff_indent_heuristic = 1;
static int diff_rename_limit_default = 1000;
static int diff_suppress_blank_empty;
static enum git_colorbool diff_use_color_default = GIT_COLOR_UNKNOWN;
static int diff_color_moved_default;
static int diff_color_moved_ws_default;
static int diff_context_default = 3;
static int diff_interhunk_context_default;
static char *diff_word_regex_cfg;
static struct external_diff external_diff_cfg;
static char *diff_order_file_cfg;
int diff_auto_refresh_index = 1;
static int diff_mnemonic_prefix;
static int diff_no_prefix;
static char *diff_src_prefix;
static char *diff_dst_prefix;
static int diff_relative;
static int diff_stat_name_width;
static int diff_stat_graph_width;
static int diff_dirstat_permille_default = 30;
static struct diff_options default_diff_options;
static long diff_algorithm;
static unsigned ws_error_highlight_default = WSEH_NEW;
static char diff_colors[][COLOR_MAXLEN] = {
GIT_COLOR_RESET,
GIT_COLOR_NORMAL, /* CONTEXT */
GIT_COLOR_BOLD, /* METAINFO */
GIT_COLOR_CYAN, /* FRAGINFO */
GIT_COLOR_RED, /* OLD */
GIT_COLOR_GREEN, /* NEW */
GIT_COLOR_YELLOW, /* COMMIT */
GIT_COLOR_BG_RED, /* WHITESPACE */
GIT_COLOR_NORMAL, /* FUNCINFO */
GIT_COLOR_BOLD_MAGENTA, /* OLD_MOVED */
GIT_COLOR_BOLD_BLUE, /* OLD_MOVED ALTERNATIVE */
GIT_COLOR_FAINT, /* OLD_MOVED_DIM */
GIT_COLOR_FAINT_ITALIC, /* OLD_MOVED_ALTERNATIVE_DIM */
GIT_COLOR_BOLD_CYAN, /* NEW_MOVED */
GIT_COLOR_BOLD_YELLOW, /* NEW_MOVED ALTERNATIVE */
GIT_COLOR_FAINT, /* NEW_MOVED_DIM */
GIT_COLOR_FAINT_ITALIC, /* NEW_MOVED_ALTERNATIVE_DIM */
GIT_COLOR_FAINT, /* CONTEXT_DIM */
GIT_COLOR_FAINT_RED, /* OLD_DIM */
GIT_COLOR_FAINT_GREEN, /* NEW_DIM */
GIT_COLOR_BOLD, /* CONTEXT_BOLD */
GIT_COLOR_BOLD_RED, /* OLD_BOLD */
GIT_COLOR_BOLD_GREEN, /* NEW_BOLD */
};
static const char *color_diff_slots[] = {
[DIFF_CONTEXT] = "context",
[DIFF_METAINFO] = "meta",
[DIFF_FRAGINFO] = "frag",
[DIFF_FILE_OLD] = "old",
[DIFF_FILE_NEW] = "new",
[DIFF_COMMIT] = "commit",
[DIFF_WHITESPACE] = "whitespace",
[DIFF_FUNCINFO] = "func",
[DIFF_FILE_OLD_MOVED] = "oldMoved",
[DIFF_FILE_OLD_MOVED_ALT] = "oldMovedAlternative",
[DIFF_FILE_OLD_MOVED_DIM] = "oldMovedDimmed",
[DIFF_FILE_OLD_MOVED_ALT_DIM] = "oldMovedAlternativeDimmed",
[DIFF_FILE_NEW_MOVED] = "newMoved",
[DIFF_FILE_NEW_MOVED_ALT] = "newMovedAlternative",
[DIFF_FILE_NEW_MOVED_DIM] = "newMovedDimmed",
[DIFF_FILE_NEW_MOVED_ALT_DIM] = "newMovedAlternativeDimmed",
[DIFF_CONTEXT_DIM] = "contextDimmed",
[DIFF_FILE_OLD_DIM] = "oldDimmed",
[DIFF_FILE_NEW_DIM] = "newDimmed",
[DIFF_CONTEXT_BOLD] = "contextBold",
[DIFF_FILE_OLD_BOLD] = "oldBold",
[DIFF_FILE_NEW_BOLD] = "newBold",
};
define_list_config_array_extra(color_diff_slots, {"plain"});
static int parse_diff_color_slot(const char *var)
{
if (!strcasecmp(var, "plain"))
return DIFF_CONTEXT;
return LOOKUP_CONFIG(color_diff_slots, var);
}
static int parse_dirstat_params(struct diff_options *options, const char *params_string,
struct strbuf *errmsg)
{
char *params_copy = xstrdup(params_string);
struct string_list params = STRING_LIST_INIT_NODUP;
int ret = 0;
int i;
if (*params_copy)
string_list_split_in_place(¶ms, params_copy, ",", -1);
for (i = 0; i < params.nr; i++) {
const char *p = params.items[i].string;
if (!strcmp(p, "changes")) {
options->flags.dirstat_by_line = 0;
options->flags.dirstat_by_file = 0;
} else if (!strcmp(p, "lines")) {
options->flags.dirstat_by_line = 1;
options->flags.dirstat_by_file = 0;
} else if (!strcmp(p, "files")) {
options->flags.dirstat_by_line = 0;
options->flags.dirstat_by_file = 1;
} else if (!strcmp(p, "noncumulative")) {
options->flags.dirstat_cumulative = 0;
} else if (!strcmp(p, "cumulative")) {
options->flags.dirstat_cumulative = 1;
} else if (isdigit(*p)) {
char *end;
int permille = strtoul(p, &end, 10) * 10;
if (*end == '.' && isdigit(*++end)) {
/* only use first digit */
permille += *end - '0';
/* .. and ignore any further digits */
while (isdigit(*++end))
; /* nothing */
}
if (!*end)
options->dirstat_permille = permille;
else {
strbuf_addf(errmsg, _(" Failed to parse dirstat cut-off percentage '%s'\n"),
p);
ret++;
}
} else {
strbuf_addf(errmsg, _(" Unknown dirstat parameter '%s'\n"), p);
ret++;
}
}
string_list_clear(¶ms, 0);
free(params_copy);
return ret;
}
static int parse_submodule_params(struct diff_options *options, const char *value)
{
if (!strcmp(value, "log"))
options->submodule_format = DIFF_SUBMODULE_LOG;
else if (!strcmp(value, "short"))
options->submodule_format = DIFF_SUBMODULE_SHORT;
else if (!strcmp(value, "diff"))
options->submodule_format = DIFF_SUBMODULE_INLINE_DIFF;
/*
* Please update $__git_diff_submodule_formats in
* git-completion.bash when you add new formats.
*/
else
return -1;
return 0;
}
int git_config_rename(const char *var, const char *value)
{
if (!value)
return DIFF_DETECT_RENAME;
if (!strcasecmp(value, "copies") || !strcasecmp(value, "copy"))
return DIFF_DETECT_COPY;
return git_config_bool(var, value) ? DIFF_DETECT_RENAME : 0;
}
long parse_algorithm_value(const char *value)
{
if (!value)
return -1;
else if (!strcasecmp(value, "myers") || !strcasecmp(value, "default"))
return 0;
else if (!strcasecmp(value, "minimal"))
return XDF_NEED_MINIMAL;
else if (!strcasecmp(value, "patience"))
return XDF_PATIENCE_DIFF;
else if (!strcasecmp(value, "histogram"))
return XDF_HISTOGRAM_DIFF;
/*
* Please update $__git_diff_algorithms in git-completion.bash
* when you add new algorithms.
*/
return -1;
}
static int parse_one_token(const char **arg, const char *token)
{
const char *rest;
if (skip_prefix(*arg, token, &rest) && (!*rest || *rest == ',')) {
*arg = rest;
return 1;
}
return 0;
}
static int parse_ws_error_highlight(const char *arg)
{
const char *orig_arg = arg;
unsigned val = 0;
while (*arg) {
if (parse_one_token(&arg, "none"))
val = 0;
else if (parse_one_token(&arg, "default"))
val = WSEH_NEW;
else if (parse_one_token(&arg, "all"))
val = WSEH_NEW | WSEH_OLD | WSEH_CONTEXT;
else if (parse_one_token(&arg, "new"))
val |= WSEH_NEW;
else if (parse_one_token(&arg, "old"))
val |= WSEH_OLD;
else if (parse_one_token(&arg, "context"))
val |= WSEH_CONTEXT;
else {
return -1 - (int)(arg - orig_arg);
}
if (*arg)
arg++;
}
return val;
}
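/*
 * Example of the encoding above: "old,context" returns
 * WSEH_OLD | WSEH_CONTEXT.  On a parse failure the return value
 * encodes the offset of the unrecognized token as -1 - offset,
 * e.g. "old,bogus" returns -1 - 4 = -5.
 */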
/*
* These are to give UI layer defaults.
* The core-level commands such as git-diff-files should
* never be affected by the setting of diff.renames
* the user happens to have in the configuration file.
*/
void init_diff_ui_defaults(void)
{
diff_detect_rename_default = DIFF_DETECT_RENAME;
}
int git_diff_heuristic_config(const char *var, const char *value,
void *cb UNUSED)
{
if (!strcmp(var, "diff.indentheuristic"))
diff_indent_heuristic = git_config_bool(var, value);
return 0;
}
static int parse_color_moved(const char *arg)
{
switch (git_parse_maybe_bool(arg)) {
case 0:
return COLOR_MOVED_NO;
case 1:
return COLOR_MOVED_DEFAULT;
default:
break;
}
if (!strcmp(arg, "no"))
return COLOR_MOVED_NO;
else if (!strcmp(arg, "plain"))
return COLOR_MOVED_PLAIN;
else if (!strcmp(arg, "blocks"))
return COLOR_MOVED_BLOCKS;
else if (!strcmp(arg, "zebra"))
return COLOR_MOVED_ZEBRA;
else if (!strcmp(arg, "default"))
return COLOR_MOVED_DEFAULT;
else if (!strcmp(arg, "dimmed-zebra"))
return COLOR_MOVED_ZEBRA_DIM;
else if (!strcmp(arg, "dimmed_zebra"))
return COLOR_MOVED_ZEBRA_DIM;
else
return error(_("color moved setting must be one of 'no', 'default', 'blocks', 'zebra', 'dimmed-zebra', 'plain'"));
}
static unsigned parse_color_moved_ws(const char *arg)
{
int ret = 0;
struct string_list l = STRING_LIST_INIT_DUP;
struct string_list_item *i;
string_list_split_f(&l, arg, ",", -1, STRING_LIST_SPLIT_TRIM);
for_each_string_list_item(i, &l) {
if (!strcmp(i->string, "no"))
ret = 0;
else if (!strcmp(i->string, "ignore-space-change"))
ret |= XDF_IGNORE_WHITESPACE_CHANGE;
else if (!strcmp(i->string, "ignore-space-at-eol"))
ret |= XDF_IGNORE_WHITESPACE_AT_EOL;
else if (!strcmp(i->string, "ignore-all-space"))
ret |= XDF_IGNORE_WHITESPACE;
else if (!strcmp(i->string, "allow-indentation-change"))
ret |= COLOR_MOVED_WS_ALLOW_INDENTATION_CHANGE;
else {
ret |= COLOR_MOVED_WS_ERROR;
error(_("unknown color-moved-ws mode '%s', possible values are 'ignore-space-change', 'ignore-space-at-eol', 'ignore-all-space', 'allow-indentation-change'"), i->string);
}
}
if ((ret & COLOR_MOVED_WS_ALLOW_INDENTATION_CHANGE) &&
(ret & XDF_WHITESPACE_FLAGS)) {
error(_("color-moved-ws: allow-indentation-change cannot be combined with other whitespace modes"));
ret |= COLOR_MOVED_WS_ERROR;
}
string_list_clear(&l, 0);
return ret;
}
int git_diff_ui_config(const char *var, const char *value,
const struct config_context *ctx, void *cb)
{
if (!strcmp(var, "diff.color") || !strcmp(var, "color.diff")) {
diff_use_color_default = git_config_colorbool(var, value);
return 0;
}
if (!strcmp(var, "diff.colormoved")) {
int cm = parse_color_moved(value);
if (cm < 0)
return -1;
diff_color_moved_default = cm;
return 0;
}
if (!strcmp(var, "diff.colormovedws")) {
unsigned cm;
if (!value)
return config_error_nonbool(var);
cm = parse_color_moved_ws(value);
if (cm & COLOR_MOVED_WS_ERROR)
return -1;
diff_color_moved_ws_default = cm;
return 0;
}
if (!strcmp(var, "diff.context")) {
diff_context_default = git_config_int(var, value, ctx->kvi);
if (diff_context_default < 0)
return -1;
return 0;
}
if (!strcmp(var, "diff.interhunkcontext")) {
diff_interhunk_context_default = git_config_int(var, value,
ctx->kvi);
if (diff_interhunk_context_default < 0)
return -1;
return 0;
}
if (!strcmp(var, "diff.renames")) {
diff_detect_rename_default = git_config_rename(var, value);
return 0;
}
if (!strcmp(var, "diff.autorefreshindex")) {
diff_auto_refresh_index = git_config_bool(var, value);
return 0;
}
if (!strcmp(var, "diff.mnemonicprefix")) {
diff_mnemonic_prefix = git_config_bool(var, value);
return 0;
}
if (!strcmp(var, "diff.noprefix")) {
diff_no_prefix = git_config_bool(var, value);
return 0;
}
if (!strcmp(var, "diff.srcprefix")) {
FREE_AND_NULL(diff_src_prefix);
return git_config_string(&diff_src_prefix, var, value);
}
if (!strcmp(var, "diff.dstprefix")) {
FREE_AND_NULL(diff_dst_prefix);
return git_config_string(&diff_dst_prefix, var, value);
}
if (!strcmp(var, "diff.relative")) {
diff_relative = git_config_bool(var, value);
return 0;
}
if (!strcmp(var, "diff.statnamewidth")) {
diff_stat_name_width = git_config_int(var, value, ctx->kvi);
return 0;
}
if (!strcmp(var, "diff.statgraphwidth")) {
diff_stat_graph_width = git_config_int(var, value, ctx->kvi);
return 0;
}
if (!strcmp(var, "diff.external"))
return git_config_string(&external_diff_cfg.cmd, var, value);
if (!strcmp(var, "diff.trustexitcode")) {
external_diff_cfg.trust_exit_code = git_config_bool(var, value);
return 0;
}
if (!strcmp(var, "diff.wordregex"))
return git_config_string(&diff_word_regex_cfg, var, value);
if (!strcmp(var, "diff.orderfile")) {
FREE_AND_NULL(diff_order_file_cfg);
return git_config_pathname(&diff_order_file_cfg, var, value);
}
if (!strcmp(var, "diff.ignoresubmodules")) {
if (!value)
return config_error_nonbool(var);
handle_ignore_submodules_arg(&default_diff_options, value);
}
if (!strcmp(var, "diff.submodule")) {
if (!value)
return config_error_nonbool(var);
if (parse_submodule_params(&default_diff_options, value))
warning(_("Unknown value for 'diff.submodule' config variable: '%s'"),
value);
return 0;
}
if (!strcmp(var, "diff.algorithm")) {
if (!value)
return config_error_nonbool(var);
diff_algorithm = parse_algorithm_value(value);
if (diff_algorithm < 0)
return error(_("unknown value for config '%s': %s"),
var, value);
return 0;
}
if (git_color_config(var, value, cb) < 0)
return -1;
return git_diff_basic_config(var, value, ctx, cb);
}
int git_diff_basic_config(const char *var, const char *value,
const struct config_context *ctx, void *cb)
{
const char *name;
if (!strcmp(var, "diff.renamelimit")) {
diff_rename_limit_default = git_config_int(var, value, ctx->kvi);
return 0;
}
if (userdiff_config(var, value) < 0)
return -1;
if (skip_prefix(var, "diff.color.", &name) ||
skip_prefix(var, "color.diff.", &name)) {
int slot = parse_diff_color_slot(name);
if (slot < 0)
return 0;
if (!value)
return config_error_nonbool(var);
return color_parse(value, diff_colors[slot]);
}
if (!strcmp(var, "diff.wserrorhighlight")) {
int val;
if (!value)
return config_error_nonbool(var);
val = parse_ws_error_highlight(value);
if (val < 0)
return error(_("unknown value for config '%s': %s"),
var, value);
ws_error_highlight_default = val;
return 0;
}
/* like GNU diff's --suppress-blank-empty option */
if (!strcmp(var, "diff.suppressblankempty") ||
/* for backwards compatibility */
!strcmp(var, "diff.suppress-blank-empty")) {
diff_suppress_blank_empty = git_config_bool(var, value);
return 0;
}
if (!strcmp(var, "diff.dirstat")) {
struct strbuf errmsg = STRBUF_INIT;
if (!value)
return config_error_nonbool(var);
default_diff_options.dirstat_permille = diff_dirstat_permille_default;
if (parse_dirstat_params(&default_diff_options, value, &errmsg))
warning(_("Found errors in 'diff.dirstat' config variable:\n%s"),
errmsg.buf);
strbuf_release(&errmsg);
diff_dirstat_permille_default = default_diff_options.dirstat_permille;
return 0;
}
if (git_diff_heuristic_config(var, value, cb) < 0)
return -1;
return git_default_config(var, value, ctx, cb);
}
static char *quote_two(const char *one, const char *two)
{
int need_one = quote_c_style(one, NULL, NULL, CQUOTE_NODQ);
int need_two = quote_c_style(two, NULL, NULL, CQUOTE_NODQ);
struct strbuf res = STRBUF_INIT;
if (need_one + need_two) {
strbuf_addch(&res, '"');
quote_c_style(one, &res, NULL, CQUOTE_NODQ);
quote_c_style(two, &res, NULL, CQUOTE_NODQ);
strbuf_addch(&res, '"');
} else {
strbuf_addstr(&res, one);
strbuf_addstr(&res, two);
}
return strbuf_detach(&res, NULL);
}
static const struct external_diff *external_diff(void)
{
static struct external_diff external_diff_env, *external_diff_ptr;
static int done_preparing = 0;
if (done_preparing)
return external_diff_ptr;
external_diff_env.cmd = xstrdup_or_null(getenv("GIT_EXTERNAL_DIFF"));
if (git_env_bool("GIT_EXTERNAL_DIFF_TRUST_EXIT_CODE", 0))
external_diff_env.trust_exit_code = 1;
if (external_diff_env.cmd)
external_diff_ptr = &external_diff_env;
else if (external_diff_cfg.cmd)
external_diff_ptr = &external_diff_cfg;
done_preparing = 1;
return external_diff_ptr;
}
/*
* Keep track of files used for diffing. Sometimes such an entry
* refers to a temporary file, sometimes to an existing file, and
* sometimes to "/dev/null".
*/
static struct diff_tempfile {
/*
* filename external diff should read from, or NULL if this
* entry is currently not in use:
*/
const char *name;
char hex[GIT_MAX_HEXSZ + 1];
char mode[10];
/*
* If this diff_tempfile instance refers to a temporary file,
* this tempfile object is used to manage its lifetime.
*/
struct tempfile *tempfile;
} diff_temp[2];
struct emit_callback {
int color_diff;
unsigned ws_rule;
int blank_at_eof_in_preimage;
int blank_at_eof_in_postimage;
int lno_in_preimage;
int lno_in_postimage;
int last_line_kind;
const char **label_path;
struct diff_words_data *diff_words;
struct diff_options *opt;
struct strbuf *header;
};
static int count_lines(const char *data, int size)
{
int count, ch, completely_empty = 1, nl_just_seen = 0;
count = 0;
while (0 < size--) {
ch = *data++;
if (ch == '\n') {
count++;
nl_just_seen = 1;
completely_empty = 0;
}
else {
nl_just_seen = 0;
completely_empty = 0;
}
}
if (completely_empty)
return 0;
if (!nl_just_seen)
count++; /* no trailing newline */
return count;
}
static int fill_mmfile(struct repository *r, mmfile_t *mf,
struct diff_filespec *one)
{
if (!DIFF_FILE_VALID(one)) {
mf->ptr = (char *)""; /* does not matter */
mf->size = 0;
return 0;
}
else if (diff_populate_filespec(r, one, NULL))
return -1;
mf->ptr = one->data;
mf->size = one->size;
return 0;
}
/* like fill_mmfile, but only for size, so we can avoid retrieving blob */
static unsigned long diff_filespec_size(struct repository *r,
struct diff_filespec *one)
{
struct diff_populate_filespec_options dpf_options = {
.check_size_only = 1,
};
if (!DIFF_FILE_VALID(one))
return 0;
diff_populate_filespec(r, one, &dpf_options);
return one->size;
}
static int count_trailing_blank(mmfile_t *mf)
{
char *ptr = mf->ptr;
long size = mf->size;
int cnt = 0;
if (!size)
return cnt;
ptr += size - 1; /* pointing at the very end */
if (*ptr != '\n')
; /* incomplete line */
else
ptr--; /* skip the last LF */
while (mf->ptr < ptr) {
char *prev_eol;
for (prev_eol = ptr; mf->ptr <= prev_eol; prev_eol--)
if (*prev_eol == '\n')
break;
if (!ws_blank_line(prev_eol + 1, ptr - prev_eol))
break;
cnt++;
ptr = prev_eol - 1;
}
return cnt;
}
static void check_blank_at_eof(mmfile_t *mf1, mmfile_t *mf2,
struct emit_callback *ecbdata)
{
int l1, l2, at;
l1 = count_trailing_blank(mf1);
l2 = count_trailing_blank(mf2);
if (l2 <= l1) {
ecbdata->blank_at_eof_in_preimage = 0;
ecbdata->blank_at_eof_in_postimage = 0;
return;
}
at = count_lines(mf1->ptr, mf1->size);
ecbdata->blank_at_eof_in_preimage = (at - l1) + 1;
at = count_lines(mf2->ptr, mf2->size);
ecbdata->blank_at_eof_in_postimage = (at - l2) + 1;
}
static void emit_line_0(struct diff_options *o,
const char *set_sign, const char *set, unsigned reverse, const char *reset,
int first, const char *line, int len)
{
int has_trailing_newline, has_trailing_carriage_return;
int needs_reset = 0; /* at the end of the line */
FILE *file = o->file;
fputs(diff_line_prefix(o), file);
has_trailing_newline = (len > 0 && line[len-1] == '\n');
if (has_trailing_newline)
len--;
has_trailing_carriage_return = (len > 0 && line[len-1] == '\r');
if (has_trailing_carriage_return)
len--;
if (!len && !first)
goto end_of_line;
if (reverse && want_color(o->use_color)) {
fputs(GIT_COLOR_REVERSE, file);
needs_reset = 1;
}
if (set_sign) {
fputs(set_sign, file);
needs_reset = 1;
}
if (first)
fputc(first, file);
if (!len)
goto end_of_line;
if (set) {
if (set_sign && set != set_sign)
fputs(reset, file);
fputs(set, file);
needs_reset = 1;
}
fwrite(line, len, 1, file);
needs_reset = 1; /* 'line' may contain color codes. */
end_of_line:
if (needs_reset)
fputs(reset, file);
if (has_trailing_carriage_return)
fputc('\r', file);
if (has_trailing_newline)
fputc('\n', file);
}
static void emit_line(struct diff_options *o, const char *set, const char *reset,
const char *line, int len)
{
emit_line_0(o, set, NULL, 0, reset, 0, line, len);
}
enum diff_symbol {
DIFF_SYMBOL_BINARY_DIFF_HEADER,
DIFF_SYMBOL_BINARY_DIFF_HEADER_DELTA,
DIFF_SYMBOL_BINARY_DIFF_HEADER_LITERAL,
DIFF_SYMBOL_BINARY_DIFF_BODY,
DIFF_SYMBOL_BINARY_DIFF_FOOTER,
DIFF_SYMBOL_STATS_SUMMARY_NO_FILES,
DIFF_SYMBOL_STATS_SUMMARY_ABBREV,
DIFF_SYMBOL_STATS_SUMMARY_INSERTS_DELETES,
DIFF_SYMBOL_STATS_LINE,
DIFF_SYMBOL_WORD_DIFF,
DIFF_SYMBOL_STAT_SEP,
DIFF_SYMBOL_SUMMARY,
DIFF_SYMBOL_SUBMODULE_ADD,
DIFF_SYMBOL_SUBMODULE_DEL,
DIFF_SYMBOL_SUBMODULE_UNTRACKED,
DIFF_SYMBOL_SUBMODULE_MODIFIED,
DIFF_SYMBOL_SUBMODULE_HEADER,
DIFF_SYMBOL_SUBMODULE_ERROR,
DIFF_SYMBOL_SUBMODULE_PIPETHROUGH,
DIFF_SYMBOL_REWRITE_DIFF,
DIFF_SYMBOL_BINARY_FILES,
DIFF_SYMBOL_HEADER,
DIFF_SYMBOL_FILEPAIR_PLUS,
DIFF_SYMBOL_FILEPAIR_MINUS,
DIFF_SYMBOL_WORDS_PORCELAIN,
DIFF_SYMBOL_WORDS,
DIFF_SYMBOL_CONTEXT,
DIFF_SYMBOL_CONTEXT_INCOMPLETE,
DIFF_SYMBOL_PLUS,
DIFF_SYMBOL_MINUS,
DIFF_SYMBOL_CONTEXT_FRAGINFO,
DIFF_SYMBOL_CONTEXT_MARKER,
DIFF_SYMBOL_SEPARATOR
};
/*
* Flags for content lines:
* 0..15 are whitespace rules (see ws.h)
* 16..18 are WSEH_NEW | WSEH_CONTEXT | WSEH_OLD
* 19 is marking if the line is blank at EOF
* 20..22 are used for color-moved.
*/
#define DIFF_SYMBOL_CONTENT_BLANK_LINE_EOF (1<<19)
#define DIFF_SYMBOL_MOVED_LINE (1<<20)
#define DIFF_SYMBOL_MOVED_LINE_ALT (1<<21)
#define DIFF_SYMBOL_MOVED_LINE_UNINTERESTING (1<<22)
#define DIFF_SYMBOL_CONTENT_WS_MASK (WSEH_NEW | WSEH_OLD | WSEH_CONTEXT | WS_RULE_MASK)
/*
* This struct is used when we need to buffer the output of the diff output.
*
* NEEDSWORK: Instead of storing a copy of the line, add an offset pointer
* into the pre/post image file. This pointer could be a union with the
* line pointer. By storing an offset into the file instead of the literal line,
* we can decrease the memory footprint for the buffered output. At first we
* may want to only have indirection for the content lines, but we could also
* enhance the state for emitting prefabricated lines, e.g. the similarity
* score line or hunk/file headers would only need to store a number or path
* and then the output can be constructed later on depending on state.
*/
struct emitted_diff_symbol {
const char *line;
int len;
int flags;
int indent_off; /* Offset to first non-whitespace character */
int indent_width; /* The visual width of the indentation */
unsigned id;
enum diff_symbol s;
};
#define EMITTED_DIFF_SYMBOL_INIT { 0 }
struct emitted_diff_symbols {
struct emitted_diff_symbol *buf;
int nr, alloc;
};
#define EMITTED_DIFF_SYMBOLS_INIT { 0 }
static void append_emitted_diff_symbol(struct diff_options *o,
struct emitted_diff_symbol *e)
{
struct emitted_diff_symbol *f;
ALLOC_GROW(o->emitted_symbols->buf,
o->emitted_symbols->nr + 1,
o->emitted_symbols->alloc);
f = &o->emitted_symbols->buf[o->emitted_symbols->nr++];
memcpy(f, e, sizeof(struct emitted_diff_symbol));
f->line = e->line ? xmemdupz(e->line, e->len) : NULL;
}
static void free_emitted_diff_symbols(struct emitted_diff_symbols *e)
{
if (!e)
return;
free(e->buf);
free(e);
}
struct moved_entry {
const struct emitted_diff_symbol *es;
struct moved_entry *next_line;
struct moved_entry *next_match;
};
struct moved_block {
struct moved_entry *match;
int wsd; /* The whitespace delta of this block */
};
#define INDENT_BLANKLINE INT_MIN
static void fill_es_indent_data(struct emitted_diff_symbol *es)
{
unsigned int off = 0, i;
int width = 0, tab_width = es->flags & WS_TAB_WIDTH_MASK;
const char *s = es->line;
const int len = es->len;
/* skip any \v \f \r at start of indentation */
while (s[off] == '\f' || s[off] == '\v' ||
(off < len - 1 && s[off] == '\r'))
off++;
/* calculate the visual width of indentation */
while (1) {
if (s[off] == ' ') {
width++;
off++;
} else if (s[off] == '\t') {
width += tab_width - (width % tab_width);
while (s[++off] == '\t')
width += tab_width;
} else {
break;
}
}
/* check if this line is blank */
for (i = off; i < len; i++)
if (!isspace(s[i]))
break;
if (i == len) {
es->indent_width = INDENT_BLANKLINE;
es->indent_off = len;
} else {
es->indent_off = off;
es->indent_width = width;
}
}
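/*
 * Example of the width computation above: with the default tab width
 * of 8, a line indented with "\t  " gets indent_off = 3 and
 * indent_width = 10 (8 for the tab, plus one for each space).
 */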
static int compute_ws_delta(const struct emitted_diff_symbol *a,
const struct emitted_diff_symbol *b)
{
int a_width = a->indent_width,
b_width = b->indent_width;
if (a_width == INDENT_BLANKLINE && b_width == INDENT_BLANKLINE)
return INDENT_BLANKLINE;
return a_width - b_width;
}
static int cmp_in_block_with_wsd(const struct moved_entry *cur,
const struct emitted_diff_symbol *l,
struct moved_block *pmb)
{
int a_width = cur->es->indent_width, b_width = l->indent_width;
int delta;
/* The text of each line must match */
if (cur->es->id != l->id)
return 1;
/*
* If 'l' and 'cur' are both blank then we don't need to check the
* indent. We only need to check cur as we know the strings match.
*/
if (a_width == INDENT_BLANKLINE)
return 0;
/*
* The indent changes of the block are known and stored in pmb->wsd;
* however we need to check if the indent changes of the current line
* match those of the current block.
*/
delta = b_width - a_width;
/*
* If the previous lines of this block were all blank then set its
* whitespace delta.
*/
if (pmb->wsd == INDENT_BLANKLINE)
pmb->wsd = delta;
return delta != pmb->wsd;
}
struct interned_diff_symbol {
struct hashmap_entry ent;
struct emitted_diff_symbol *es;
};
static int interned_diff_symbol_cmp(const void *hashmap_cmp_fn_data,
const struct hashmap_entry *eptr,
const struct hashmap_entry *entry_or_key,
const void *keydata UNUSED)
{
const struct diff_options *diffopt = hashmap_cmp_fn_data;
const struct emitted_diff_symbol *a, *b;
unsigned flags = diffopt->color_moved_ws_handling
& XDF_WHITESPACE_FLAGS;
a = container_of(eptr, const struct interned_diff_symbol, ent)->es;
b = container_of(entry_or_key, const struct interned_diff_symbol, ent)->es;
return !xdiff_compare_lines(a->line + a->indent_off,
a->len - a->indent_off,
b->line + b->indent_off,
b->len - b->indent_off, flags);
}
static void prepare_entry(struct diff_options *o, struct emitted_diff_symbol *l,
struct interned_diff_symbol *s)
{
unsigned flags = o->color_moved_ws_handling & XDF_WHITESPACE_FLAGS;
unsigned int hash = xdiff_hash_string(l->line + l->indent_off,
l->len - l->indent_off, flags);
hashmap_entry_init(&s->ent, hash);
s->es = l;
}
struct moved_entry_list {
struct moved_entry *add, *del;
};
static struct moved_entry_list *add_lines_to_move_detection(struct diff_options *o,
struct mem_pool *entry_mem_pool)
{
struct moved_entry *prev_line = NULL;
struct mem_pool interned_pool;
struct hashmap interned_map;
struct moved_entry_list *entry_list = NULL;
size_t entry_list_alloc = 0;
unsigned id = 0;
int n;
hashmap_init(&interned_map, interned_diff_symbol_cmp, o, 8096);
mem_pool_init(&interned_pool, 1024 * 1024);
for (n = 0; n < o->emitted_symbols->nr; n++) {
struct interned_diff_symbol key;
struct emitted_diff_symbol *l = &o->emitted_symbols->buf[n];
struct interned_diff_symbol *s;
struct moved_entry *entry;
if (l->s != DIFF_SYMBOL_PLUS && l->s != DIFF_SYMBOL_MINUS) {
prev_line = NULL;
continue;
}
if (o->color_moved_ws_handling &
COLOR_MOVED_WS_ALLOW_INDENTATION_CHANGE)
fill_es_indent_data(l);
prepare_entry(o, l, &key);
s = hashmap_get_entry(&interned_map, &key, ent, &key.ent);
if (s) {
l->id = s->es->id;
} else {
l->id = id;
ALLOC_GROW_BY(entry_list, id, 1, entry_list_alloc);
hashmap_add(&interned_map,
memcpy(mem_pool_alloc(&interned_pool,
sizeof(key)),
&key, sizeof(key)));
}
entry = mem_pool_alloc(entry_mem_pool, sizeof(*entry));
entry->es = l;
entry->next_line = NULL;
if (prev_line && prev_line->es->s == l->s)
prev_line->next_line = entry;
prev_line = entry;
if (l->s == DIFF_SYMBOL_PLUS) {
entry->next_match = entry_list[l->id].add;
entry_list[l->id].add = entry;
} else {
entry->next_match = entry_list[l->id].del;
entry_list[l->id].del = entry;
}
}
hashmap_clear(&interned_map);
mem_pool_discard(&interned_pool, 0);
return entry_list;
}
static void pmb_advance_or_null(struct diff_options *o,
struct emitted_diff_symbol *l,
struct moved_block *pmb,
int *pmb_nr)
{
int i, j;
for (i = 0, j = 0; i < *pmb_nr; i++) {
int match;
struct moved_entry *prev = pmb[i].match;
struct moved_entry *cur = (prev && prev->next_line) ?
prev->next_line : NULL;
if (o->color_moved_ws_handling &
COLOR_MOVED_WS_ALLOW_INDENTATION_CHANGE)
match = cur &&
!cmp_in_block_with_wsd(cur, l, &pmb[i]);
else
match = cur && cur->es->id == l->id;
if (match) {
pmb[j] = pmb[i];
pmb[j++].match = cur;
}
}
*pmb_nr = j;
}
static void fill_potential_moved_blocks(struct diff_options *o,
struct moved_entry *match,
struct emitted_diff_symbol *l,
struct moved_block **pmb_p,
int *pmb_alloc_p, int *pmb_nr_p)
{
struct moved_block *pmb = *pmb_p;
int pmb_alloc = *pmb_alloc_p, pmb_nr = *pmb_nr_p;
/*
* The current line is the start of a new block.
* Setup the set of potential blocks.
*/
for (; match; match = match->next_match) {
ALLOC_GROW(pmb, pmb_nr + 1, pmb_alloc);
if (o->color_moved_ws_handling &
COLOR_MOVED_WS_ALLOW_INDENTATION_CHANGE)
pmb[pmb_nr].wsd = compute_ws_delta(l, match->es);
else
pmb[pmb_nr].wsd = 0;
pmb[pmb_nr++].match = match;
}
*pmb_p = pmb;
*pmb_alloc_p = pmb_alloc;
*pmb_nr_p = pmb_nr;
}
/*
* If o->color_moved is COLOR_MOVED_PLAIN, this function does nothing.
*
* Otherwise, if the last block has fewer alphanumeric characters than
* COLOR_MOVED_MIN_ALNUM_COUNT, unset DIFF_SYMBOL_MOVED_LINE on all lines in
* that block.
*
* The last block consists of the (n - block_length)'th line up to but not
* including the nth line.
*
* Returns 0 if the last block is empty or is unset by this function, non zero
* otherwise.
*
* NEEDSWORK: This uses the same heuristic as blame_entry_score() in blame.c.
* Think of a way to unify them.
*/
#define DIFF_SYMBOL_MOVED_LINE_ZEBRA_MASK \
(DIFF_SYMBOL_MOVED_LINE | DIFF_SYMBOL_MOVED_LINE_ALT)
static int adjust_last_block(struct diff_options *o, int n, int block_length)
{
int i, alnum_count = 0;
if (o->color_moved == COLOR_MOVED_PLAIN)
return block_length;
for (i = 1; i < block_length + 1; i++) {
const char *c = o->emitted_symbols->buf[n - i].line;
for (; *c; c++) {
if (!isalnum(*c))
continue;
alnum_count++;
if (alnum_count >= COLOR_MOVED_MIN_ALNUM_COUNT)
return 1;
}
}
for (i = 1; i < block_length + 1; i++)
o->emitted_symbols->buf[n - i].flags &= ~DIFF_SYMBOL_MOVED_LINE_ZEBRA_MASK;
return 0;
}
/* Find blocks of moved code, delegate actual coloring decision to helper */
static void mark_color_as_moved(struct diff_options *o,
struct moved_entry_list *entry_list)
{
struct moved_block *pmb = NULL; /* potentially moved blocks */
int pmb_nr = 0, pmb_alloc = 0;
int n, flipped_block = 0, block_length = 0;
enum diff_symbol moved_symbol = DIFF_SYMBOL_BINARY_DIFF_HEADER;
for (n = 0; n < o->emitted_symbols->nr; n++) {
struct moved_entry *match = NULL;
struct emitted_diff_symbol *l = &o->emitted_symbols->buf[n];
switch (l->s) {
case DIFF_SYMBOL_PLUS:
match = entry_list[l->id].del;
break;
case DIFF_SYMBOL_MINUS:
match = entry_list[l->id].add;
break;
default:
flipped_block = 0;
}
if (pmb_nr && (!match || l->s != moved_symbol)) {
if (!adjust_last_block(o, n, block_length) &&
block_length > 1) {
/*
* Rewind in case there is another match
* starting at the second line of the block
*/
match = NULL;
n -= block_length;
}
pmb_nr = 0;
block_length = 0;
flipped_block = 0;
}
if (!match) {
moved_symbol = DIFF_SYMBOL_BINARY_DIFF_HEADER;
continue;
}
if (o->color_moved == COLOR_MOVED_PLAIN) {
l->flags |= DIFF_SYMBOL_MOVED_LINE;
continue;
}
pmb_advance_or_null(o, l, pmb, &pmb_nr);
if (pmb_nr == 0) {
int contiguous = adjust_last_block(o, n, block_length);
if (!contiguous && block_length > 1)
/*
* Rewind in case there is another match
* starting at the second line of the block
*/
n -= block_length;
else
fill_potential_moved_blocks(o, match, l,
&pmb, &pmb_alloc,
&pmb_nr);
if (contiguous && pmb_nr && moved_symbol == l->s)
flipped_block = (flipped_block + 1) % 2;
else
flipped_block = 0;
if (pmb_nr)
moved_symbol = l->s;
else
moved_symbol = DIFF_SYMBOL_BINARY_DIFF_HEADER;
block_length = 0;
}
if (pmb_nr) {
block_length++;
l->flags |= DIFF_SYMBOL_MOVED_LINE;
if (flipped_block && o->color_moved != COLOR_MOVED_BLOCKS)
l->flags |= DIFF_SYMBOL_MOVED_LINE_ALT;
}
}
adjust_last_block(o, n, block_length);
free(pmb);
}
static void dim_moved_lines(struct diff_options *o)
{
int n;
for (n = 0; n < o->emitted_symbols->nr; n++) {
struct emitted_diff_symbol *prev = (n != 0) ?
&o->emitted_symbols->buf[n - 1] : NULL;
struct emitted_diff_symbol *l = &o->emitted_symbols->buf[n];
struct emitted_diff_symbol *next =
(n < o->emitted_symbols->nr - 1) ?
&o->emitted_symbols->buf[n + 1] : NULL;
/* Not a plus or minus line? */
if (l->s != DIFF_SYMBOL_PLUS && l->s != DIFF_SYMBOL_MINUS)
continue;
/* Not a moved line? */
if (!(l->flags & DIFF_SYMBOL_MOVED_LINE))
continue;
/*
* If prev or next are not a plus or minus line,
* pretend they don't exist
*/
if (prev && prev->s != DIFF_SYMBOL_PLUS &&
prev->s != DIFF_SYMBOL_MINUS)
prev = NULL;
if (next && next->s != DIFF_SYMBOL_PLUS &&
next->s != DIFF_SYMBOL_MINUS)
next = NULL;
/* Inside a block? */
if ((prev &&
(prev->flags & DIFF_SYMBOL_MOVED_LINE_ZEBRA_MASK) ==
(l->flags & DIFF_SYMBOL_MOVED_LINE_ZEBRA_MASK)) &&
(next &&
(next->flags & DIFF_SYMBOL_MOVED_LINE_ZEBRA_MASK) ==
(l->flags & DIFF_SYMBOL_MOVED_LINE_ZEBRA_MASK))) {
l->flags |= DIFF_SYMBOL_MOVED_LINE_UNINTERESTING;
continue;
}
/* Check if we are at an interesting bound: */
if (prev && (prev->flags & DIFF_SYMBOL_MOVED_LINE) &&
(prev->flags & DIFF_SYMBOL_MOVED_LINE_ALT) !=
(l->flags & DIFF_SYMBOL_MOVED_LINE_ALT))
continue;
if (next && (next->flags & DIFF_SYMBOL_MOVED_LINE) &&
(next->flags & DIFF_SYMBOL_MOVED_LINE_ALT) !=
(l->flags & DIFF_SYMBOL_MOVED_LINE_ALT))
continue;
/*
* The boundary to prev and next are not interesting,
* so this line is not interesting as a whole
*/
l->flags |= DIFF_SYMBOL_MOVED_LINE_UNINTERESTING;
}
}
static void emit_line_ws_markup(struct diff_options *o,
const char *set_sign, const char *set,
const char *reset,
int sign_index, const char *line, int len,
unsigned ws_rule, int blank_at_eof)
{
const char *ws = NULL;
int sign = o->output_indicators[sign_index];
if (diff_suppress_blank_empty &&
sign_index == OUTPUT_INDICATOR_CONTEXT &&
len == 1 && line[0] == '\n')
sign = 0;
if (o->ws_error_highlight & ws_rule) {
ws = diff_get_color_opt(o, DIFF_WHITESPACE);
if (!*ws)
ws = NULL;
}
if (!ws && !set_sign) {
emit_line_0(o, set, NULL, 0, reset, sign, line, len);
} else if (!ws) {
emit_line_0(o, set_sign, set, !!set_sign, reset, sign, line, len);
} else if (blank_at_eof) {
/* Blank line at EOF - paint '+' as well */
emit_line_0(o, ws, NULL, 0, reset, sign, line, len);
} else {
/* Emit just the prefix, then the rest. */
emit_line_0(o, set_sign ? set_sign : set, NULL, !!set_sign, reset,
sign, "", 0);
ws_check_emit(line, len, ws_rule,
o->file, set, reset, ws);
}
}
static void emit_diff_symbol_from_struct(struct diff_options *o,
struct emitted_diff_symbol *eds)
{
const char *context, *reset, *set, *set_sign, *meta, *fraginfo;
enum diff_symbol s = eds->s;
const char *line = eds->line;
int len = eds->len;
unsigned flags = eds->flags;
if (!o->file)
return;
switch (s) {
case DIFF_SYMBOL_SUBMODULE_HEADER:
case DIFF_SYMBOL_SUBMODULE_ERROR:
case DIFF_SYMBOL_SUBMODULE_PIPETHROUGH:
case DIFF_SYMBOL_STATS_SUMMARY_INSERTS_DELETES:
case DIFF_SYMBOL_SUMMARY:
case DIFF_SYMBOL_STATS_LINE:
case DIFF_SYMBOL_BINARY_DIFF_BODY:
case DIFF_SYMBOL_CONTEXT_FRAGINFO:
emit_line(o, "", "", line, len);
break;
case DIFF_SYMBOL_CONTEXT_INCOMPLETE:
if ((flags & WS_INCOMPLETE_LINE) &&
(flags & o->ws_error_highlight))
set = diff_get_color_opt(o, DIFF_WHITESPACE);
else
set = diff_get_color_opt(o, DIFF_CONTEXT);
reset = diff_get_color_opt(o, DIFF_RESET);
emit_line(o, set, reset, line, len);
break;
case DIFF_SYMBOL_CONTEXT_MARKER:
context = diff_get_color_opt(o, DIFF_CONTEXT);
reset = diff_get_color_opt(o, DIFF_RESET);
emit_line(o, context, reset, line, len);
break;
case DIFF_SYMBOL_SEPARATOR:
fprintf(o->file, "%s%c",
diff_line_prefix(o),
o->line_termination);
break;
case DIFF_SYMBOL_CONTEXT:
set = diff_get_color_opt(o, DIFF_CONTEXT);
reset = diff_get_color_opt(o, DIFF_RESET);
set_sign = NULL;
if (o->flags.dual_color_diffed_diffs) {
char c = !len ? 0 : line[0];
if (c == '+')
set = diff_get_color_opt(o, DIFF_FILE_NEW);
else if (c == '@')
set = diff_get_color_opt(o, DIFF_FRAGINFO);
else if (c == '-')
set = diff_get_color_opt(o, DIFF_FILE_OLD);
}
emit_line_ws_markup(o, set_sign, set, reset,
OUTPUT_INDICATOR_CONTEXT, line, len,
flags & (DIFF_SYMBOL_CONTENT_WS_MASK), 0);
break;
case DIFF_SYMBOL_PLUS:
switch (flags & (DIFF_SYMBOL_MOVED_LINE |
DIFF_SYMBOL_MOVED_LINE_ALT |
DIFF_SYMBOL_MOVED_LINE_UNINTERESTING)) {
case DIFF_SYMBOL_MOVED_LINE |
DIFF_SYMBOL_MOVED_LINE_ALT |
DIFF_SYMBOL_MOVED_LINE_UNINTERESTING:
set = diff_get_color_opt(o, DIFF_FILE_NEW_MOVED_ALT_DIM);
break;
case DIFF_SYMBOL_MOVED_LINE |
DIFF_SYMBOL_MOVED_LINE_ALT:
set = diff_get_color_opt(o, DIFF_FILE_NEW_MOVED_ALT);
break;
case DIFF_SYMBOL_MOVED_LINE |
DIFF_SYMBOL_MOVED_LINE_UNINTERESTING:
set = diff_get_color_opt(o, DIFF_FILE_NEW_MOVED_DIM);
break;
case DIFF_SYMBOL_MOVED_LINE:
set = diff_get_color_opt(o, DIFF_FILE_NEW_MOVED);
break;
default:
set = diff_get_color_opt(o, DIFF_FILE_NEW);
}
reset = diff_get_color_opt(o, DIFF_RESET);
if (!o->flags.dual_color_diffed_diffs)
set_sign = NULL;
else {
char c = !len ? 0 : line[0];
set_sign = set;
if (c == '-')
set = diff_get_color_opt(o, DIFF_FILE_OLD_BOLD);
else if (c == '@')
set = diff_get_color_opt(o, DIFF_FRAGINFO);
else if (c == '+')
set = diff_get_color_opt(o, DIFF_FILE_NEW_BOLD);
else
set = diff_get_color_opt(o, DIFF_CONTEXT_BOLD);
flags &= ~DIFF_SYMBOL_CONTENT_WS_MASK;
}
emit_line_ws_markup(o, set_sign, set, reset,
OUTPUT_INDICATOR_NEW, line, len,
flags & DIFF_SYMBOL_CONTENT_WS_MASK,
flags & DIFF_SYMBOL_CONTENT_BLANK_LINE_EOF);
break;
case DIFF_SYMBOL_MINUS:
switch (flags & (DIFF_SYMBOL_MOVED_LINE |
DIFF_SYMBOL_MOVED_LINE_ALT |
DIFF_SYMBOL_MOVED_LINE_UNINTERESTING)) {
case DIFF_SYMBOL_MOVED_LINE |
DIFF_SYMBOL_MOVED_LINE_ALT |
DIFF_SYMBOL_MOVED_LINE_UNINTERESTING:
set = diff_get_color_opt(o, DIFF_FILE_OLD_MOVED_ALT_DIM);
break;
case DIFF_SYMBOL_MOVED_LINE |
DIFF_SYMBOL_MOVED_LINE_ALT:
set = diff_get_color_opt(o, DIFF_FILE_OLD_MOVED_ALT);
break;
case DIFF_SYMBOL_MOVED_LINE |
DIFF_SYMBOL_MOVED_LINE_UNINTERESTING:
set = diff_get_color_opt(o, DIFF_FILE_OLD_MOVED_DIM);
break;
case DIFF_SYMBOL_MOVED_LINE:
set = diff_get_color_opt(o, DIFF_FILE_OLD_MOVED);
break;
default:
set = diff_get_color_opt(o, DIFF_FILE_OLD);
}
reset = diff_get_color_opt(o, DIFF_RESET);
if (!o->flags.dual_color_diffed_diffs)
set_sign = NULL;
else {
char c = !len ? 0 : line[0];
set_sign = set;
if (c == '+')
set = diff_get_color_opt(o, DIFF_FILE_NEW_DIM);
else if (c == '@')
set = diff_get_color_opt(o, DIFF_FRAGINFO);
else if (c == '-')
set = diff_get_color_opt(o, DIFF_FILE_OLD_DIM);
else
set = diff_get_color_opt(o, DIFF_CONTEXT_DIM);
}
emit_line_ws_markup(o, set_sign, set, reset,
OUTPUT_INDICATOR_OLD, line, len,
flags & DIFF_SYMBOL_CONTENT_WS_MASK, 0);
break;
case DIFF_SYMBOL_WORDS_PORCELAIN:
context = diff_get_color_opt(o, DIFF_CONTEXT);
reset = diff_get_color_opt(o, DIFF_RESET);
emit_line(o, context, reset, line, len);
fputs("~\n", o->file);
break;
case DIFF_SYMBOL_WORDS:
context = diff_get_color_opt(o, DIFF_CONTEXT);
reset = diff_get_color_opt(o, DIFF_RESET);
/* Skip the prefix character */
line++; len--;
emit_line(o, context, reset, line, len);
break;
case DIFF_SYMBOL_FILEPAIR_PLUS:
meta = diff_get_color_opt(o, DIFF_METAINFO);
reset = diff_get_color_opt(o, DIFF_RESET);
fprintf(o->file, "%s%s+++ %s%s%s\n", diff_line_prefix(o), meta,
line, reset,
strchr(line, ' ') ? "\t" : "");
break;
case DIFF_SYMBOL_FILEPAIR_MINUS:
meta = diff_get_color_opt(o, DIFF_METAINFO);
reset = diff_get_color_opt(o, DIFF_RESET);
fprintf(o->file, "%s%s--- %s%s%s\n", diff_line_prefix(o), meta,
line, reset,
strchr(line, ' ') ? "\t" : "");
break;
case DIFF_SYMBOL_BINARY_FILES:
case DIFF_SYMBOL_HEADER:
fprintf(o->file, "%s", line);
break;
case DIFF_SYMBOL_BINARY_DIFF_HEADER:
fprintf(o->file, "%sGIT binary patch\n", diff_line_prefix(o));
break;
case DIFF_SYMBOL_BINARY_DIFF_HEADER_DELTA:
fprintf(o->file, "%sdelta %s\n", diff_line_prefix(o), line);
break;
case DIFF_SYMBOL_BINARY_DIFF_HEADER_LITERAL:
fprintf(o->file, "%sliteral %s\n", diff_line_prefix(o), line);
break;
case DIFF_SYMBOL_BINARY_DIFF_FOOTER:
fputs(diff_line_prefix(o), o->file);
fputc('\n', o->file);
break;
case DIFF_SYMBOL_REWRITE_DIFF:
fraginfo = diff_get_color(o->use_color, DIFF_FRAGINFO);
reset = diff_get_color_opt(o, DIFF_RESET);
emit_line(o, fraginfo, reset, line, len);
break;
case DIFF_SYMBOL_SUBMODULE_ADD:
set = diff_get_color_opt(o, DIFF_FILE_NEW);
reset = diff_get_color_opt(o, DIFF_RESET);
emit_line(o, set, reset, line, len);
break;
case DIFF_SYMBOL_SUBMODULE_DEL:
set = diff_get_color_opt(o, DIFF_FILE_OLD);
reset = diff_get_color_opt(o, DIFF_RESET);
emit_line(o, set, reset, line, len);
break;
case DIFF_SYMBOL_SUBMODULE_UNTRACKED:
fprintf(o->file, "%sSubmodule %s contains untracked content\n",
diff_line_prefix(o), line);
break;
case DIFF_SYMBOL_SUBMODULE_MODIFIED:
fprintf(o->file, "%sSubmodule %s contains modified content\n",
diff_line_prefix(o), line);
break;
case DIFF_SYMBOL_STATS_SUMMARY_NO_FILES:
emit_line(o, "", "", " 0 files changed\n",
strlen(" 0 files changed\n"));
break;
case DIFF_SYMBOL_STATS_SUMMARY_ABBREV:
emit_line(o, "", "", " ...\n", strlen(" ...\n"));
break;
case DIFF_SYMBOL_WORD_DIFF:
fprintf(o->file, "%.*s", len, line);
break;
case DIFF_SYMBOL_STAT_SEP:
fputs(o->stat_sep, o->file);
break;
default:
BUG("unknown diff symbol");
}
}
static void emit_diff_symbol(struct diff_options *o, enum diff_symbol s,
const char *line, int len, unsigned flags)
{
struct emitted_diff_symbol e = {
.line = line, .len = len, .flags = flags, .s = s
};
if (o->emitted_symbols)
append_emitted_diff_symbol(o, &e);
else
emit_diff_symbol_from_struct(o, &e);
}
void diff_emit_submodule_del(struct diff_options *o, const char *line)
{
emit_diff_symbol(o, DIFF_SYMBOL_SUBMODULE_DEL, line, strlen(line), 0);
}
void diff_emit_submodule_add(struct diff_options *o, const char *line)
{
emit_diff_symbol(o, DIFF_SYMBOL_SUBMODULE_ADD, line, strlen(line), 0);
}
void diff_emit_submodule_untracked(struct diff_options *o, const char *path)
{
emit_diff_symbol(o, DIFF_SYMBOL_SUBMODULE_UNTRACKED,
path, strlen(path), 0);
}
void diff_emit_submodule_modified(struct diff_options *o, const char *path)
{
emit_diff_symbol(o, DIFF_SYMBOL_SUBMODULE_MODIFIED,
path, strlen(path), 0);
}
void diff_emit_submodule_header(struct diff_options *o, const char *header)
{
emit_diff_symbol(o, DIFF_SYMBOL_SUBMODULE_HEADER,
header, strlen(header), 0);
}
void diff_emit_submodule_error(struct diff_options *o, const char *err)
{
emit_diff_symbol(o, DIFF_SYMBOL_SUBMODULE_ERROR, err, strlen(err), 0);
}
void diff_emit_submodule_pipethrough(struct diff_options *o,
const char *line, int len)
{
emit_diff_symbol(o, DIFF_SYMBOL_SUBMODULE_PIPETHROUGH, line, len, 0);
}
static int new_blank_line_at_eof(struct emit_callback *ecbdata, const char *line, int len)
{
if (!((ecbdata->ws_rule & WS_BLANK_AT_EOF) &&
ecbdata->blank_at_eof_in_preimage &&
ecbdata->blank_at_eof_in_postimage &&
ecbdata->blank_at_eof_in_preimage <= ecbdata->lno_in_preimage &&
ecbdata->blank_at_eof_in_postimage <= ecbdata->lno_in_postimage))
return 0;
return ws_blank_line(line, len);
}
static void emit_add_line(struct emit_callback *ecbdata,
const char *line, int len)
{
unsigned flags = WSEH_NEW | ecbdata->ws_rule;
if (new_blank_line_at_eof(ecbdata, line, len))
flags |= DIFF_SYMBOL_CONTENT_BLANK_LINE_EOF;
emit_diff_symbol(ecbdata->opt, DIFF_SYMBOL_PLUS, line, len, flags);
}
static void emit_del_line(struct emit_callback *ecbdata,
const char *line, int len)
{
unsigned flags = WSEH_OLD | ecbdata->ws_rule;
emit_diff_symbol(ecbdata->opt, DIFF_SYMBOL_MINUS, line, len, flags);
}
static void emit_context_line(struct emit_callback *ecbdata,
const char *line, int len)
{
unsigned flags = WSEH_CONTEXT | ecbdata->ws_rule;
emit_diff_symbol(ecbdata->opt, DIFF_SYMBOL_CONTEXT, line, len, flags);
}
static void emit_incomplete_line_marker(struct emit_callback *ecbdata,
const char *line, int len)
{
int last_line_kind = ecbdata->last_line_kind;
unsigned flags = (last_line_kind == '+'
? WSEH_NEW
: last_line_kind == '-'
? WSEH_OLD
: WSEH_CONTEXT) | ecbdata->ws_rule;
emit_diff_symbol(ecbdata->opt, DIFF_SYMBOL_CONTEXT_INCOMPLETE,
line, len, flags);
}
static void emit_hunk_header(struct emit_callback *ecbdata,
const char *line, int len)
{
const char *context = diff_get_color(ecbdata->color_diff, DIFF_CONTEXT);
const char *frag = diff_get_color(ecbdata->color_diff, DIFF_FRAGINFO);
const char *func = diff_get_color(ecbdata->color_diff, DIFF_FUNCINFO);
const char *reset = diff_get_color(ecbdata->color_diff, DIFF_RESET);
const char *reverse = want_color(ecbdata->color_diff) ? GIT_COLOR_REVERSE : "";
static const char atat[2] = { '@', '@' };
const char *cp, *ep;
struct strbuf msgbuf = STRBUF_INIT;
int org_len = len;
int i = 1;
/*
* As a hunk header must begin with "@@ -<old>, +<new> @@",
* it always is at least 10 bytes long.
*/
if (len < 10 ||
memcmp(line, atat, 2) ||
!(ep = memmem(line + 2, len - 2, atat, 2))) {
emit_diff_symbol(ecbdata->opt,
DIFF_SYMBOL_CONTEXT_MARKER, line, len, 0);
return;
}
ep += 2; /* skip over @@ */
/* The hunk header in fraginfo color */
if (ecbdata->opt->flags.dual_color_diffed_diffs)
strbuf_addstr(&msgbuf, reverse);
strbuf_addstr(&msgbuf, frag);
if (ecbdata->opt->flags.suppress_hunk_header_line_count)
strbuf_add(&msgbuf, atat, sizeof(atat));
else
strbuf_add(&msgbuf, line, ep - line);
strbuf_addstr(&msgbuf, reset);
/*
* trailing "\r\n"
*/
for ( ; i < 3; i++)
if (line[len - i] == '\r' || line[len - i] == '\n')
len--;
/* blank before the func header */
for (cp = ep; ep - line < len; ep++)
if (*ep != ' ' && *ep != '\t')
break;
if (ep != cp) {
strbuf_addstr(&msgbuf, context);
strbuf_add(&msgbuf, cp, ep - cp);
strbuf_addstr(&msgbuf, reset);
}
if (ep < line + len) {
strbuf_addstr(&msgbuf, func);
strbuf_add(&msgbuf, ep, line + len - ep);
strbuf_addstr(&msgbuf, reset);
}
strbuf_add(&msgbuf, line + len, org_len - len);
strbuf_complete_line(&msgbuf);
emit_diff_symbol(ecbdata->opt,
DIFF_SYMBOL_CONTEXT_FRAGINFO, msgbuf.buf, msgbuf.len, 0);
strbuf_release(&msgbuf);
}
static struct diff_tempfile *claim_diff_tempfile(void)
{
int i;
for (i = 0; i < ARRAY_SIZE(diff_temp); i++)
if (!diff_temp[i].name)
return diff_temp + i;
BUG("diff is failing to clean up its tempfiles");
}
static void remove_tempfile(void)
{
int i;
for (i = 0; i < ARRAY_SIZE(diff_temp); i++) {
if (is_tempfile_active(diff_temp[i].tempfile))
delete_tempfile(&diff_temp[i].tempfile);
diff_temp[i].name = NULL;
}
}
static void add_line_count(struct strbuf *out, int count)
{
switch (count) {
case 0:
strbuf_addstr(out, "0,0");
break;
case 1:
strbuf_addstr(out, "1");
break;
default:
strbuf_addf(out, "1,%d", count);
break;
}
}
static void emit_rewrite_lines(struct emit_callback *ecbdata,
int prefix, const char *data, int size)
{
const char *endp = NULL;
while (0 < size) {
int len, plen;
char *pdata = NULL;
endp = memchr(data, '\n', size);
if (endp) {
len = endp - data + 1;
plen = len;
} else {
len = size;
plen = len + 1;
pdata = xmalloc(plen + 2);
memcpy(pdata, data, len);
pdata[len] = '\n';
pdata[len + 1] = '\0';
}
if (prefix != '+') {
ecbdata->lno_in_preimage++;
emit_del_line(ecbdata, pdata ? pdata : data, plen);
} else {
ecbdata->lno_in_postimage++;
emit_add_line(ecbdata, pdata ? pdata : data, plen);
}
free(pdata);
size -= len;
data += len;
}
if (!endp) {
static const char nneof[] = "\\ No newline at end of file\n";
ecbdata->last_line_kind = prefix;
emit_incomplete_line_marker(ecbdata, nneof, sizeof(nneof) - 1);
}
}
static void emit_rewrite_diff(const char *name_a,
const char *name_b,
struct diff_filespec *one,
struct diff_filespec *two,
struct userdiff_driver *textconv_one,
struct userdiff_driver *textconv_two,
struct diff_options *o)
{
int lc_a, lc_b;
static struct strbuf a_name = STRBUF_INIT, b_name = STRBUF_INIT;
const char *a_prefix, *b_prefix;
char *data_one, *data_two;
size_t size_one, size_two;
struct emit_callback ecbdata;
struct strbuf out = STRBUF_INIT;
if (diff_mnemonic_prefix && o->flags.reverse_diff) {
a_prefix = o->b_prefix;
b_prefix = o->a_prefix;
} else {
a_prefix = o->a_prefix;
b_prefix = o->b_prefix;
}
name_a += (*name_a == '/');
name_b += (*name_b == '/');
strbuf_reset(&a_name);
strbuf_reset(&b_name);
quote_two_c_style(&a_name, a_prefix, name_a, 0);
quote_two_c_style(&b_name, b_prefix, name_b, 0);
size_one = fill_textconv(o->repo, textconv_one, one, &data_one);
size_two = fill_textconv(o->repo, textconv_two, two, &data_two);
memset(&ecbdata, 0, sizeof(ecbdata));
ecbdata.color_diff = o->use_color;
ecbdata.ws_rule = whitespace_rule(o->repo->index, name_b);
ecbdata.opt = o;
if (ecbdata.ws_rule & WS_BLANK_AT_EOF) {
mmfile_t mf1, mf2;
mf1.ptr = (char *)data_one;
mf2.ptr = (char *)data_two;
mf1.size = size_one;
mf2.size = size_two;
check_blank_at_eof(&mf1, &mf2, &ecbdata);
}
ecbdata.lno_in_preimage = 1;
ecbdata.lno_in_postimage = 1;
lc_a = count_lines(data_one, size_one);
lc_b = count_lines(data_two, size_two);
emit_diff_symbol(o, DIFF_SYMBOL_FILEPAIR_MINUS,
a_name.buf, a_name.len, 0);
emit_diff_symbol(o, DIFF_SYMBOL_FILEPAIR_PLUS,
b_name.buf, b_name.len, 0);
strbuf_addstr(&out, "@@ -");
if (!o->irreversible_delete)
add_line_count(&out, lc_a);
else
strbuf_addstr(&out, "?,?");
strbuf_addstr(&out, " +");
add_line_count(&out, lc_b);
strbuf_addstr(&out, " @@\n");
emit_diff_symbol(o, DIFF_SYMBOL_REWRITE_DIFF, out.buf, out.len, 0);
strbuf_release(&out);
if (lc_a && !o->irreversible_delete)
emit_rewrite_lines(&ecbdata, '-', data_one, size_one);
if (lc_b)
emit_rewrite_lines(&ecbdata, '+', data_two, size_two);
if (textconv_one)
free((char *)data_one);
if (textconv_two)
free((char *)data_two);
}
struct diff_words_buffer {
mmfile_t text;
unsigned long alloc;
struct diff_words_orig {
const char *begin, *end;
} *orig;
int orig_nr, orig_alloc;
};
static void diff_words_append(char *line, unsigned long len,
struct diff_words_buffer *buffer)
{
ALLOC_GROW(buffer->text.ptr, buffer->text.size + len, buffer->alloc);
line++;
len--;
memcpy(buffer->text.ptr + buffer->text.size, line, len);
buffer->text.size += len;
buffer->text.ptr[buffer->text.size] = '\0';
}
struct diff_words_style_elem {
const char *prefix;
const char *suffix;
const char *color; /* NULL; filled in by the setup code if
* color is enabled */
};
struct diff_words_style {
enum diff_words_type type;
struct diff_words_style_elem new_word, old_word, ctx;
const char *newline;
};
static struct diff_words_style diff_words_styles[] = {
{ DIFF_WORDS_PORCELAIN, {"+", "\n"}, {"-", "\n"}, {" ", "\n"}, "~\n" },
{ DIFF_WORDS_PLAIN, {"{+", "+}"}, {"[-", "-]"}, {"", ""}, "\n" },
{ DIFF_WORDS_COLOR, {"", ""}, {"", ""}, {"", ""}, "\n" }
};
struct diff_words_data {
struct diff_words_buffer minus, plus;
const char *current_plus;
int last_minus;
struct diff_options *opt;
regex_t *word_regex;
enum diff_words_type type;
struct diff_words_style *style;
};
static int fn_out_diff_words_write_helper(struct diff_options *o,
struct diff_words_style_elem *st_el,
const char *newline,
size_t count, const char *buf)
{
int print = 0;
struct strbuf sb = STRBUF_INIT;
while (count) {
char *p = memchr(buf, '\n', count);
if (print)
strbuf_addstr(&sb, diff_line_prefix(o));
if (p != buf) {
const char *reset = st_el->color && *st_el->color ?
GIT_COLOR_RESET : NULL;
if (st_el->color && *st_el->color)
strbuf_addstr(&sb, st_el->color);
strbuf_addstr(&sb, st_el->prefix);
strbuf_add(&sb, buf, p ? p - buf : count);
strbuf_addstr(&sb, st_el->suffix);
if (reset)
strbuf_addstr(&sb, reset);
}
if (!p)
goto out;
strbuf_addstr(&sb, newline);
count -= p + 1 - buf;
buf = p + 1;
print = 1;
if (count) {
emit_diff_symbol(o, DIFF_SYMBOL_WORD_DIFF,
sb.buf, sb.len, 0);
strbuf_reset(&sb);
}
}
out:
if (sb.len)
emit_diff_symbol(o, DIFF_SYMBOL_WORD_DIFF,
sb.buf, sb.len, 0);
strbuf_release(&sb);
return 0;
}
/*
* '--color-words' algorithm can be described as:
*
* 1. collect the minus/plus lines of a diff hunk, divided into
* minus-lines and plus-lines;
*
* 2. break both minus-lines and plus-lines into words and
* place them into two mmfile_t with one word for each line;
*
 * 3. use xdiff to run diff on the two mmfile_t to get the word-level diff;
 *
 * For the common parts of both files, we output the plus side text.
 * diff_words->current_plus tracks how far into the plus file we have
 * printed. diff_words->last_minus tracks the last minus word printed.
*
* For '--graph' to work with '--color-words', we need to output the graph prefix
* on each line of color words output. Generally, there are two conditions on
* which we should output the prefix.
*
* 1. diff_words->last_minus == 0 &&
* diff_words->current_plus == diff_words->plus.text.ptr
*
* that is: the plus text must start as a new line, and if there is no minus
* word printed, a graph prefix must be printed.
*
* 2. diff_words->current_plus > diff_words->plus.text.ptr &&
* *(diff_words->current_plus - 1) == '\n'
*
* that is: a graph prefix must be printed following a '\n'
*/
static int color_words_output_graph_prefix(struct diff_words_data *diff_words)
{
if ((diff_words->last_minus == 0 &&
diff_words->current_plus == diff_words->plus.text.ptr) ||
(diff_words->current_plus > diff_words->plus.text.ptr &&
*(diff_words->current_plus - 1) == '\n')) {
return 1;
} else {
return 0;
}
}
static void fn_out_diff_words_aux(void *priv,
long minus_first, long minus_len,
long plus_first, long plus_len,
const char *func UNUSED, long funclen UNUSED)
{
struct diff_words_data *diff_words = priv;
struct diff_words_style *style = diff_words->style;
const char *minus_begin, *minus_end, *plus_begin, *plus_end;
struct diff_options *opt = diff_words->opt;
const char *line_prefix;
assert(opt);
line_prefix = diff_line_prefix(opt);
/* POSIX requires that first be decremented by one if len == 0... */
if (minus_len) {
minus_begin = diff_words->minus.orig[minus_first].begin;
minus_end =
diff_words->minus.orig[minus_first + minus_len - 1].end;
} else
minus_begin = minus_end =
diff_words->minus.orig[minus_first].end;
if (plus_len) {
plus_begin = diff_words->plus.orig[plus_first].begin;
plus_end = diff_words->plus.orig[plus_first + plus_len - 1].end;
} else
plus_begin = plus_end = diff_words->plus.orig[plus_first].end;
if (color_words_output_graph_prefix(diff_words)) {
fputs(line_prefix, diff_words->opt->file);
}
if (diff_words->current_plus != plus_begin) {
fn_out_diff_words_write_helper(diff_words->opt,
&style->ctx, style->newline,
plus_begin - diff_words->current_plus,
diff_words->current_plus);
}
if (minus_begin != minus_end) {
fn_out_diff_words_write_helper(diff_words->opt,
&style->old_word, style->newline,
minus_end - minus_begin, minus_begin);
}
if (plus_begin != plus_end) {
fn_out_diff_words_write_helper(diff_words->opt,
&style->new_word, style->newline,
plus_end - plus_begin, plus_begin);
}
diff_words->current_plus = plus_end;
diff_words->last_minus = minus_first;
}
/* This function starts looking at *begin, and returns 0 iff a word was found. */
static int find_word_boundaries(mmfile_t *buffer, regex_t *word_regex,
int *begin, int *end)
{
while (word_regex && *begin < buffer->size) {
regmatch_t match[1];
if (!regexec_buf(word_regex, buffer->ptr + *begin,
buffer->size - *begin, 1, match, 0)) {
char *p = memchr(buffer->ptr + *begin + match[0].rm_so,
'\n', match[0].rm_eo - match[0].rm_so);
*end = p ? p - buffer->ptr : match[0].rm_eo + *begin;
*begin += match[0].rm_so;
if (*begin == *end)
(*begin)++;
else
return *begin > *end;
} else {
return -1;
}
}
/* find the next word */
while (*begin < buffer->size && isspace(buffer->ptr[*begin]))
(*begin)++;
if (*begin >= buffer->size)
return -1;
/* find the end of the word */
*end = *begin + 1;
while (*end < buffer->size && !isspace(buffer->ptr[*end]))
(*end)++;
return 0;
}
/*
* This function splits the words in buffer->text, stores the list with
* newline separator into out, and saves the offsets of the original words
* in buffer->orig.
*/
static void diff_words_fill(struct diff_words_buffer *buffer, mmfile_t *out,
regex_t *word_regex)
{
int i, j;
long alloc = 0;
out->size = 0;
out->ptr = NULL;
/* fake an empty "0th" word */
ALLOC_GROW(buffer->orig, 1, buffer->orig_alloc);
buffer->orig[0].begin = buffer->orig[0].end = buffer->text.ptr;
buffer->orig_nr = 1;
for (i = 0; i < buffer->text.size; i++) {
if (find_word_boundaries(&buffer->text, word_regex, &i, &j))
return;
/* store original boundaries */
ALLOC_GROW(buffer->orig, buffer->orig_nr + 1,
buffer->orig_alloc);
buffer->orig[buffer->orig_nr].begin = buffer->text.ptr + i;
buffer->orig[buffer->orig_nr].end = buffer->text.ptr + j;
buffer->orig_nr++;
/* store one word */
ALLOC_GROW(out->ptr, out->size + j - i + 1, alloc);
memcpy(out->ptr + out->size, buffer->text.ptr + i, j - i);
out->ptr[out->size + j - i] = '\n';
out->size += j - i + 1;
i = j - 1;
}
}
/* this executes the word diff on the accumulated buffers */
static void diff_words_show(struct diff_words_data *diff_words)
{
xpparam_t xpp;
xdemitconf_t xecfg;
mmfile_t minus, plus;
struct diff_words_style *style = diff_words->style;
struct diff_options *opt = diff_words->opt;
const char *line_prefix;
assert(opt);
line_prefix = diff_line_prefix(opt);
/* special case: only removal */
if (!diff_words->plus.text.size) {
emit_diff_symbol(diff_words->opt, DIFF_SYMBOL_WORD_DIFF,
line_prefix, strlen(line_prefix), 0);
fn_out_diff_words_write_helper(diff_words->opt,
&style->old_word, style->newline,
diff_words->minus.text.size,
diff_words->minus.text.ptr);
diff_words->minus.text.size = 0;
return;
}
diff_words->current_plus = diff_words->plus.text.ptr;
diff_words->last_minus = 0;
memset(&xpp, 0, sizeof(xpp));
memset(&xecfg, 0, sizeof(xecfg));
diff_words_fill(&diff_words->minus, &minus, diff_words->word_regex);
diff_words_fill(&diff_words->plus, &plus, diff_words->word_regex);
xpp.flags = 0;
/* as only the hunk header will be parsed, we need a 0-context */
xecfg.ctxlen = 0;
if (xdi_diff_outf(&minus, &plus, fn_out_diff_words_aux, NULL,
diff_words, &xpp, &xecfg))
die("unable to generate word diff");
free(minus.ptr);
free(plus.ptr);
if (diff_words->current_plus != diff_words->plus.text.ptr +
diff_words->plus.text.size) {
if (color_words_output_graph_prefix(diff_words))
emit_diff_symbol(diff_words->opt, DIFF_SYMBOL_WORD_DIFF,
line_prefix, strlen(line_prefix), 0);
fn_out_diff_words_write_helper(diff_words->opt,
&style->ctx, style->newline,
diff_words->plus.text.ptr + diff_words->plus.text.size
- diff_words->current_plus, diff_words->current_plus);
}
diff_words->minus.text.size = diff_words->plus.text.size = 0;
}
/* In "color-words" mode, show word-diff of words accumulated in the buffer */
static void diff_words_flush(struct emit_callback *ecbdata)
{
struct diff_options *wo = ecbdata->diff_words->opt;
if (ecbdata->diff_words->minus.text.size ||
ecbdata->diff_words->plus.text.size)
diff_words_show(ecbdata->diff_words);
if (wo->emitted_symbols) {
struct diff_options *o = ecbdata->opt;
struct emitted_diff_symbols *wol = wo->emitted_symbols;
int i;
/*
* NEEDSWORK:
* Instead of appending each, concat all words to a line?
*/
for (i = 0; i < wol->nr; i++)
append_emitted_diff_symbol(o, &wol->buf[i]);
for (i = 0; i < wol->nr; i++)
free((void *)wol->buf[i].line);
wol->nr = 0;
}
}
static void diff_filespec_load_driver(struct diff_filespec *one,
struct index_state *istate)
{
/* Use already-loaded driver */
if (one->driver)
return;
if (S_ISREG(one->mode))
one->driver = userdiff_find_by_path(istate, one->path);
/* Fallback to default settings */
if (!one->driver)
one->driver = userdiff_find_by_name("default");
}
static const char *userdiff_word_regex(struct diff_filespec *one,
struct index_state *istate)
{
diff_filespec_load_driver(one, istate);
return one->driver->word_regex;
}
static void init_diff_words_data(struct emit_callback *ecbdata,
struct diff_options *orig_opts,
struct diff_filespec *one,
struct diff_filespec *two)
{
int i;
struct diff_options *o = xmalloc(sizeof(struct diff_options));
memcpy(o, orig_opts, sizeof(struct diff_options));
CALLOC_ARRAY(ecbdata->diff_words, 1);
ecbdata->diff_words->type = o->word_diff;
ecbdata->diff_words->opt = o;
if (orig_opts->emitted_symbols)
CALLOC_ARRAY(o->emitted_symbols, 1);
if (!o->word_regex)
o->word_regex = userdiff_word_regex(one, o->repo->index);
if (!o->word_regex)
o->word_regex = userdiff_word_regex(two, o->repo->index);
if (!o->word_regex)
o->word_regex = diff_word_regex_cfg;
if (o->word_regex) {
ecbdata->diff_words->word_regex = (regex_t *)
xmalloc(sizeof(regex_t));
if (regcomp(ecbdata->diff_words->word_regex,
o->word_regex,
REG_EXTENDED | REG_NEWLINE))
die("invalid regular expression: %s",
o->word_regex);
}
for (i = 0; i < ARRAY_SIZE(diff_words_styles); i++) {
if (o->word_diff == diff_words_styles[i].type) {
ecbdata->diff_words->style =
&diff_words_styles[i];
break;
}
}
if (want_color(o->use_color)) {
struct diff_words_style *st = ecbdata->diff_words->style;
st->old_word.color = diff_get_color_opt(o, DIFF_FILE_OLD);
st->new_word.color = diff_get_color_opt(o, DIFF_FILE_NEW);
st->ctx.color = diff_get_color_opt(o, DIFF_CONTEXT);
}
}
static void free_diff_words_data(struct emit_callback *ecbdata)
{
if (ecbdata->diff_words) {
diff_words_flush(ecbdata);
free_emitted_diff_symbols(ecbdata->diff_words->opt->emitted_symbols);
		free(ecbdata->diff_words->opt);
		free(ecbdata->diff_words->minus.text.ptr);
		free(ecbdata->diff_words->minus.orig);
		free(ecbdata->diff_words->plus.text.ptr);
		free(ecbdata->diff_words->plus.orig);
if (ecbdata->diff_words->word_regex) {
regfree(ecbdata->diff_words->word_regex);
free(ecbdata->diff_words->word_regex);
}
FREE_AND_NULL(ecbdata->diff_words);
}
}
const char *diff_get_color(enum git_colorbool diff_use_color, enum color_diff ix)
{
if (want_color(diff_use_color))
return diff_colors[ix];
return "";
}
const char *diff_line_prefix(struct diff_options *opt)
{
return opt->output_prefix ?
opt->output_prefix(opt, opt->output_prefix_data) :
"";
}
static unsigned long sane_truncate_line(char *line, unsigned long len)
{
const char *cp;
unsigned long allot;
size_t l = len;
cp = line;
allot = l;
while (0 < l) {
(void) utf8_width(&cp, &l);
if (!cp)
break; /* truncated in the middle? */
}
return allot - l;
}
static void find_lno(const char *line, struct emit_callback *ecbdata)
{
const char *p;
ecbdata->lno_in_preimage = 0;
ecbdata->lno_in_postimage = 0;
p = strchr(line, '-');
if (!p)
return; /* cannot happen */
ecbdata->lno_in_preimage = strtol(p + 1, NULL, 10);
p = strchr(p, '+');
if (!p)
return; /* cannot happen */
ecbdata->lno_in_postimage = strtol(p + 1, NULL, 10);
}
static int fn_out_consume(void *priv, char *line, unsigned long len)
{
struct emit_callback *ecbdata = priv;
struct diff_options *o = ecbdata->opt;
o->found_changes = 1;
if (ecbdata->header) {
emit_diff_symbol(o, DIFF_SYMBOL_HEADER,
ecbdata->header->buf, ecbdata->header->len, 0);
strbuf_reset(ecbdata->header);
ecbdata->header = NULL;
}
if (ecbdata->label_path[0]) {
emit_diff_symbol(o, DIFF_SYMBOL_FILEPAIR_MINUS,
ecbdata->label_path[0],
strlen(ecbdata->label_path[0]), 0);
emit_diff_symbol(o, DIFF_SYMBOL_FILEPAIR_PLUS,
ecbdata->label_path[1],
strlen(ecbdata->label_path[1]), 0);
ecbdata->label_path[0] = ecbdata->label_path[1] = NULL;
}
if (line[0] == '@') {
if (ecbdata->diff_words)
diff_words_flush(ecbdata);
len = sane_truncate_line(line, len);
find_lno(line, ecbdata);
emit_hunk_header(ecbdata, line, len);
return 0;
}
if (ecbdata->diff_words) {
enum diff_symbol s =
ecbdata->diff_words->type == DIFF_WORDS_PORCELAIN ?
DIFF_SYMBOL_WORDS_PORCELAIN : DIFF_SYMBOL_WORDS;
if (line[0] == '-') {
diff_words_append(line, len,
&ecbdata->diff_words->minus);
return 0;
} else if (line[0] == '+') {
diff_words_append(line, len,
&ecbdata->diff_words->plus);
return 0;
} else if (starts_with(line, "\\ ")) {
/*
* Eat the "no newline at eof" marker as if we
* saw a "+" or "-" line with nothing on it,
* and return without diff_words_flush() to
* defer processing. If this is the end of
* preimage, more "+" lines may come after it.
*/
return 0;
}
diff_words_flush(ecbdata);
emit_diff_symbol(o, s, line, len, 0);
return 0;
}
switch (line[0]) {
case '+':
ecbdata->lno_in_postimage++;
emit_add_line(ecbdata, line + 1, len - 1);
break;
case '-':
ecbdata->lno_in_preimage++;
emit_del_line(ecbdata, line + 1, len - 1);
break;
case ' ':
ecbdata->lno_in_postimage++;
ecbdata->lno_in_preimage++;
emit_context_line(ecbdata, line + 1, len - 1);
break;
case '\\':
/* incomplete line at the end */
switch (ecbdata->last_line_kind) {
case '+':
case '-':
case ' ':
break;
default:
BUG("fn_out_consume: '\\No newline' after unknown line (%c)",
ecbdata->last_line_kind);
}
ecbdata->lno_in_preimage++;
emit_incomplete_line_marker(ecbdata, line, len);
break;
default:
BUG("fn_out_consume: unknown line '%s'", line);
}
ecbdata->last_line_kind = line[0];
return 0;
}
static int quick_consume(void *priv, char *line UNUSED, unsigned long len UNUSED)
{
struct emit_callback *ecbdata = priv;
struct diff_options *o = ecbdata->opt;
o->found_changes = 1;
return 1;
}
static void pprint_rename(struct strbuf *name, const char *a, const char *b)
{
const char *old_name = a;
const char *new_name = b;
int pfx_length, sfx_length;
int pfx_adjust_for_slash;
int len_a = strlen(a);
int len_b = strlen(b);
int a_midlen, b_midlen;
int qlen_a = quote_c_style(a, NULL, NULL, 0);
int qlen_b = quote_c_style(b, NULL, NULL, 0);
if (qlen_a || qlen_b) {
quote_c_style(a, name, NULL, 0);
strbuf_addstr(name, " => ");
quote_c_style(b, name, NULL, 0);
return;
}
/* Find common prefix */
pfx_length = 0;
while (*old_name && *new_name && *old_name == *new_name) {
if (*old_name == '/')
pfx_length = old_name - a + 1;
old_name++;
new_name++;
}
/* Find common suffix */
old_name = a + len_a;
new_name = b + len_b;
sfx_length = 0;
/*
* If there is a common prefix, it must end in a slash. In
* that case we let this loop run 1 into the prefix to see the
* same slash.
*
* If there is no common prefix, we cannot do this as it would
* underrun the input strings.
*/
pfx_adjust_for_slash = (pfx_length ? 1 : 0);
while (a + pfx_length - pfx_adjust_for_slash <= old_name &&
b + pfx_length - pfx_adjust_for_slash <= new_name &&
*old_name == *new_name) {
if (*old_name == '/')
sfx_length = len_a - (old_name - a);
old_name--;
new_name--;
}
/*
* pfx{mid-a => mid-b}sfx
* {pfx-a => pfx-b}sfx
* pfx{sfx-a => sfx-b}
* name-a => name-b
*/
a_midlen = len_a - pfx_length - sfx_length;
b_midlen = len_b - pfx_length - sfx_length;
if (a_midlen < 0)
a_midlen = 0;
if (b_midlen < 0)
b_midlen = 0;
strbuf_grow(name, pfx_length + a_midlen + b_midlen + sfx_length + 7);
if (pfx_length + sfx_length) {
strbuf_add(name, a, pfx_length);
strbuf_addch(name, '{');
}
strbuf_add(name, a + pfx_length, a_midlen);
strbuf_addstr(name, " => ");
strbuf_add(name, b + pfx_length, b_midlen);
if (pfx_length + sfx_length) {
strbuf_addch(name, '}');
strbuf_add(name, a + len_a - sfx_length, sfx_length);
}
}
static struct diffstat_file *diffstat_add(struct diffstat_t *diffstat,
const char *name_a,
const char *name_b)
{
struct diffstat_file *x;
CALLOC_ARRAY(x, 1);
ALLOC_GROW(diffstat->files, diffstat->nr + 1, diffstat->alloc);
diffstat->files[diffstat->nr++] = x;
if (name_b) {
x->from_name = xstrdup(name_a);
x->name = xstrdup(name_b);
x->is_renamed = 1;
}
else {
x->from_name = NULL;
x->name = xstrdup(name_a);
}
return x;
}
static int diffstat_consume(void *priv, char *line, unsigned long len)
{
struct diffstat_t *diffstat = priv;
struct diffstat_file *x = diffstat->files[diffstat->nr - 1];
if (!len)
BUG("xdiff fed us an empty line");
if (line[0] == '+')
x->added++;
else if (line[0] == '-')
x->deleted++;
return 0;
}
const char mime_boundary_leader[] = "------------";
static int scale_linear(int it, int width, int max_change)
{
if (!it)
return 0;
/*
* make sure that at least one '-' or '+' is printed if
* there is any change to this path. The easiest way is to
* scale linearly as if the allotted width is one column shorter
* than it is, and then add 1 to the result.
*/
return 1 + (it * (width - 1) / max_change);
}
static void show_graph(struct strbuf *out, char ch, int cnt,
const char *set, const char *reset)
{
if (cnt <= 0)
return;
strbuf_addstr(out, set);
strbuf_addchars(out, ch, cnt);
strbuf_addstr(out, reset);
}
static void fill_print_name(struct diffstat_file *file)
{
struct strbuf pname = STRBUF_INIT;
if (file->print_name)
return;
if (file->is_renamed)
pprint_rename(&pname, file->from_name, file->name);
else
quote_c_style(file->name, &pname, NULL, 0);
if (file->comments)
strbuf_addf(&pname, " (%s)", file->comments);
file->print_name = strbuf_detach(&pname, NULL);
}
static void print_stat_summary_inserts_deletes(struct diff_options *options,
int files, int insertions, int deletions)
{
struct strbuf sb = STRBUF_INIT;
if (!files) {
assert(insertions == 0 && deletions == 0);
emit_diff_symbol(options, DIFF_SYMBOL_STATS_SUMMARY_NO_FILES,
NULL, 0, 0);
return;
}
strbuf_addf(&sb,
(files == 1) ? " %d file changed" : " %d files changed",
files);
/*
* For binary diff, the caller may want to print "x files
* changed" with insertions == 0 && deletions == 0.
*
* Not omitting "0 insertions(+), 0 deletions(-)" in this case
	 * is probably less confusing (i.e. skip over "2 files changed
	 * but nothing about added/removed lines? Is this a bug in Git?").
*/
if (insertions || deletions == 0) {
strbuf_addf(&sb,
(insertions == 1) ? ", %d insertion(+)" : ", %d insertions(+)",
insertions);
}
if (deletions || insertions == 0) {
strbuf_addf(&sb,
(deletions == 1) ? ", %d deletion(-)" : ", %d deletions(-)",
deletions);
}
strbuf_addch(&sb, '\n');
emit_diff_symbol(options, DIFF_SYMBOL_STATS_SUMMARY_INSERTS_DELETES,
sb.buf, sb.len, 0);
strbuf_release(&sb);
}
void print_stat_summary(FILE *fp, int files,
int insertions, int deletions)
{
struct diff_options o;
memset(&o, 0, sizeof(o));
o.file = fp;
print_stat_summary_inserts_deletes(&o, files, insertions, deletions);
}
static void show_stats(struct diffstat_t *data, struct diff_options *options)
{
int i, len, add, del, adds = 0, dels = 0;
uintmax_t max_change = 0, max_len = 0;
int total_files = data->nr, count;
int width, name_width, graph_width, number_width = 0, bin_width = 0;
const char *reset, *add_c, *del_c;
int extra_shown = 0;
const char *line_prefix = diff_line_prefix(options);
struct strbuf out = STRBUF_INIT;
if (data->nr == 0)
return;
count = options->stat_count ? options->stat_count : data->nr;
reset = diff_get_color_opt(options, DIFF_RESET);
add_c = diff_get_color_opt(options, DIFF_FILE_NEW);
del_c = diff_get_color_opt(options, DIFF_FILE_OLD);
/*
* Find the longest filename and max number of changes
*/
for (i = 0; (i < count) && (i < data->nr); i++) {
struct diffstat_file *file = data->files[i];
uintmax_t change = file->added + file->deleted;
if (!file->is_interesting && (change == 0)) {
count++; /* not shown == room for one more */
continue;
}
fill_print_name(file);
len = utf8_strwidth(file->print_name);
if (max_len < len)
max_len = len;
if (file->is_unmerged) {
/* "Unmerged" is 8 characters */
bin_width = bin_width < 8 ? 8 : bin_width;
continue;
}
if (file->is_binary) {
/* "Bin XXX -> YYY bytes" */
int w = 14 + decimal_width(file->added)
+ decimal_width(file->deleted);
bin_width = bin_width < w ? w : bin_width;
/* Display change counts aligned with "Bin" */
number_width = 3;
continue;
}
if (max_change < change)
max_change = change;
}
count = i; /* where we can stop scanning in data->files[] */
/*
* We have width = stat_width or term_columns() columns total.
* We want a maximum of min(max_len, stat_name_width) for the name part.
* We want a maximum of min(max_change, stat_graph_width) for the +- part.
* We also need 1 for " " and 4 + decimal_width(max_change)
	 * for " | NNNN " and one for the empty column at the end, altogether
* 6 + decimal_width(max_change).
*
* If there's not enough space, we will use the smaller of
* stat_name_width (if set) and 5/8*width for the filename,
* and the rest for constant elements + graph part, but no more
* than stat_graph_width for the graph part.
* (5/8 gives 50 for filename and 30 for the constant parts + graph
* for the standard terminal size).
*
* In other words: stat_width limits the maximum width, and
* stat_name_width fixes the maximum width of the filename,
* and is also used to divide available columns if there
* aren't enough.
*
* Binary files are displayed with "Bin XXX -> YYY bytes"
* instead of the change count and graph. This part is treated
* similarly to the graph part, except that it is not
* "scaled". If total width is too small to accommodate the
* guaranteed minimum width of the filename part and the
* separators and this message, this message will "overflow"
* making the line longer than the maximum width.
*/
/*
* NEEDSWORK: line_prefix is often used for "log --graph" output
* and contains ANSI-colored string. utf8_strnwidth() should be
* used to correctly count the display width instead of strlen().
*/
if (options->stat_width == -1)
width = term_columns() - strlen(line_prefix);
else
width = options->stat_width ? options->stat_width : 80;
number_width = decimal_width(max_change) > number_width ?
decimal_width(max_change) : number_width;
if (options->stat_name_width == -1)
options->stat_name_width = diff_stat_name_width;
if (options->stat_graph_width == -1)
options->stat_graph_width = diff_stat_graph_width;
/*
* Guarantee 3/8*16 == 6 for the graph part
* and 5/8*16 == 10 for the filename part
*/
if (width < 16 + 6 + number_width)
width = 16 + 6 + number_width;
/*
* First assign sizes that are wanted, ignoring available width.
* strlen("Bin XXX -> YYY bytes") == bin_width, and the part
* starting from "XXX" should fit in graph_width.
*/
graph_width = max_change + 4 > bin_width ? max_change : bin_width - 4;
if (options->stat_graph_width &&
options->stat_graph_width < graph_width)
graph_width = options->stat_graph_width;
name_width = (options->stat_name_width > 0 &&
options->stat_name_width < max_len) ?
options->stat_name_width : max_len;
/*
* Adjust adjustable widths not to exceed maximum width
*/
if (name_width + number_width + 6 + graph_width > width) {
if (graph_width > width * 3/8 - number_width - 6) {
graph_width = width * 3/8 - number_width - 6;
if (graph_width < 6)
graph_width = 6;
}
if (options->stat_graph_width &&
graph_width > options->stat_graph_width)
graph_width = options->stat_graph_width;
if (name_width > width - number_width - 6 - graph_width)
name_width = width - number_width - 6 - graph_width;
else
graph_width = width - number_width - 6 - name_width;
}
/*
* From here name_width is the width of the name area,
* and graph_width is the width of the graph area.
* max_change is used to scale graph properly.
*/
for (i = 0; i < count; i++) {
const char *prefix = "";
struct diffstat_file *file = data->files[i];
char *name = file->print_name;
uintmax_t added = file->added;
uintmax_t deleted = file->deleted;
int name_len, padding;
if (!file->is_interesting && (added + deleted == 0))
continue;
/*
* "scale" the filename
*/
len = name_width;
name_len = utf8_strwidth(name);
if (name_width < name_len) {
char *slash;
prefix = "...";
len -= 3;
if (len < 0)
len = 0;
while (name_len > len)
name_len -= utf8_width((const char**)&name, NULL);
slash = strchr(name, '/');
if (slash)
name = slash;
}
padding = len - utf8_strwidth(name);
if (padding < 0)
padding = 0;
if (file->is_binary) {
strbuf_addf(&out, " %s%s%*s | %*s",
prefix, name, padding, "",
number_width, "Bin");
if (!added && !deleted) {
strbuf_addch(&out, '\n');
emit_diff_symbol(options, DIFF_SYMBOL_STATS_LINE,
out.buf, out.len, 0);
strbuf_reset(&out);
continue;
}
strbuf_addf(&out, " %s%"PRIuMAX"%s",
del_c, deleted, reset);
strbuf_addstr(&out, " -> ");
strbuf_addf(&out, "%s%"PRIuMAX"%s",
add_c, added, reset);
strbuf_addstr(&out, " bytes\n");
emit_diff_symbol(options, DIFF_SYMBOL_STATS_LINE,
out.buf, out.len, 0);
strbuf_reset(&out);
continue;
}
else if (file->is_unmerged) {
strbuf_addf(&out, " %s%s%*s | %*s",
prefix, name, padding, "",
number_width, "Unmerged\n");
emit_diff_symbol(options, DIFF_SYMBOL_STATS_LINE,
out.buf, out.len, 0);
strbuf_reset(&out);
continue;
}
/*
* scale the add/delete
*/
add = added;
del = deleted;
if (graph_width <= max_change) {
int total = scale_linear(add + del, graph_width, max_change);
if (total < 2 && add && del)
/* width >= 2 due to the sanity check */
total = 2;
if (add < del) {
add = scale_linear(add, graph_width, max_change);
del = total - add;
} else {
del = scale_linear(del, graph_width, max_change);
add = total - del;
}
}
strbuf_addf(&out, " %s%s%*s | %*"PRIuMAX"%s",
prefix, name, padding, "",
number_width, added + deleted,
added + deleted ? " " : "");
show_graph(&out, '+', add, add_c, reset);
show_graph(&out, '-', del, del_c, reset);
strbuf_addch(&out, '\n');
emit_diff_symbol(options, DIFF_SYMBOL_STATS_LINE,
out.buf, out.len, 0);
strbuf_reset(&out);
}
for (i = 0; i < data->nr; i++) {
struct diffstat_file *file = data->files[i];
uintmax_t added = file->added;
uintmax_t deleted = file->deleted;
if (file->is_unmerged ||
(!file->is_interesting && (added + deleted == 0))) {
total_files--;
continue;
}
if (!file->is_binary) {
adds += added;
dels += deleted;
}
if (i < count)
continue;
if (!extra_shown)
emit_diff_symbol(options,
DIFF_SYMBOL_STATS_SUMMARY_ABBREV,
NULL, 0, 0);
extra_shown = 1;
}
print_stat_summary_inserts_deletes(options, total_files, adds, dels);
strbuf_release(&out);
}
static void show_shortstats(struct diffstat_t *data, struct diff_options *options)
{
int i, adds = 0, dels = 0, total_files = data->nr;
if (data->nr == 0)
return;
for (i = 0; i < data->nr; i++) {
int added = data->files[i]->added;
int deleted = data->files[i]->deleted;
if (data->files[i]->is_unmerged ||
(!data->files[i]->is_interesting && (added + deleted == 0))) {
total_files--;
} else if (!data->files[i]->is_binary) { /* don't count bytes */
adds += added;
dels += deleted;
}
}
print_stat_summary_inserts_deletes(options, total_files, adds, dels);
}
static void show_numstat(struct diffstat_t *data, struct diff_options *options)
{
int i;
if (data->nr == 0)
return;
for (i = 0; i < data->nr; i++) {
struct diffstat_file *file = data->files[i];
fprintf(options->file, "%s", diff_line_prefix(options));
if (file->is_binary)
fprintf(options->file, "-\t-\t");
else
fprintf(options->file,
"%"PRIuMAX"\t%"PRIuMAX"\t",
file->added, file->deleted);
if (options->line_termination) {
fill_print_name(file);
if (!file->is_renamed)
write_name_quoted(file->name, options->file,
options->line_termination);
else {
fputs(file->print_name, options->file);
putc(options->line_termination, options->file);
}
} else {
if (file->is_renamed) {
putc('\0', options->file);
write_name_quoted(file->from_name, options->file, '\0');
}
write_name_quoted(file->name, options->file, '\0');
}
}
}
struct dirstat_file {
const char *name;
unsigned long changed;
};
struct dirstat_dir {
struct dirstat_file *files;
int alloc, nr, permille, cumulative;
};
static long gather_dirstat(struct diff_options *opt, struct dirstat_dir *dir,
unsigned long changed, const char *base, int baselen)
{
unsigned long sum_changes = 0;
unsigned int sources = 0;
const char *line_prefix = diff_line_prefix(opt);
while (dir->nr) {
struct dirstat_file *f = dir->files;
int namelen = strlen(f->name);
unsigned long changes;
char *slash;
if (namelen < baselen)
break;
if (memcmp(f->name, base, baselen))
break;
slash = strchr(f->name + baselen, '/');
if (slash) {
int newbaselen = slash + 1 - f->name;
changes = gather_dirstat(opt, dir, changed, f->name, newbaselen);
sources++;
} else {
changes = f->changed;
dir->files++;
dir->nr--;
sources += 2;
}
sum_changes += changes;
}
/*
	 * We don't report dirstats for
* - the top level
* - or cases where everything came from a single directory
* under this directory (sources == 1).
*/
if (baselen && sources != 1) {
if (sum_changes) {
int permille = sum_changes * 1000 / changed;
if (permille >= dir->permille) {
fprintf(opt->file, "%s%4d.%01d%% %.*s\n", line_prefix,
permille / 10, permille % 10, baselen, base);
if (!dir->cumulative)
return 0;
}
}
}
return sum_changes;
}
static int dirstat_compare(const void *_a, const void *_b)
{
const struct dirstat_file *a = _a;
const struct dirstat_file *b = _b;
return strcmp(a->name, b->name);
}
static void conclude_dirstat(struct diff_options *options,
struct dirstat_dir *dir,
unsigned long changed)
{
struct dirstat_file *to_free = dir->files;
if (!changed) {
		/* This can happen even with many files, if everything was renamed */
;
} else {
/* Show all directories with more than x% of the changes */
QSORT(dir->files, dir->nr, dirstat_compare);
gather_dirstat(options, dir, changed, "", 0);
}
free(to_free);
}
static void show_dirstat(struct diff_options *options)
{
int i;
unsigned long changed;
struct dirstat_dir dir;
struct diff_queue_struct *q = &diff_queued_diff;
dir.files = NULL;
dir.alloc = 0;
dir.nr = 0;
dir.permille = options->dirstat_permille;
dir.cumulative = options->flags.dirstat_cumulative;
changed = 0;
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
const char *name;
unsigned long copied, added, damage;
struct diff_populate_filespec_options dpf_options = {
.check_size_only = 1,
};
name = p->two->path ? p->two->path : p->one->path;
if (p->one->oid_valid && p->two->oid_valid &&
oideq(&p->one->oid, &p->two->oid)) {
/*
* The SHA1 has not changed, so pre-/post-content is
* identical. We can therefore skip looking at the
* file contents altogether.
*/
damage = 0;
goto found_damage;
}
if (options->flags.dirstat_by_file) {
/*
* In --dirstat-by-file mode, we don't really need to
* look at the actual file contents at all.
* The fact that the SHA1 changed is enough for us to
* add this file to the list of results
* (with each file contributing equal damage).
*/
damage = 1;
goto found_damage;
}
if (DIFF_FILE_VALID(p->one) && DIFF_FILE_VALID(p->two)) {
diff_populate_filespec(options->repo, p->one, NULL);
diff_populate_filespec(options->repo, p->two, NULL);
diffcore_count_changes(options->repo,
p->one, p->two, NULL, NULL,
&copied, &added);
diff_free_filespec_data(p->one);
diff_free_filespec_data(p->two);
} else if (DIFF_FILE_VALID(p->one)) {
diff_populate_filespec(options->repo, p->one, &dpf_options);
copied = added = 0;
diff_free_filespec_data(p->one);
} else if (DIFF_FILE_VALID(p->two)) {
diff_populate_filespec(options->repo, p->two, &dpf_options);
copied = 0;
added = p->two->size;
diff_free_filespec_data(p->two);
} else
continue;
/*
* Original minus copied is the removed material,
* added is the new material. They are both damages
* made to the preimage.
* If the resulting damage is zero, we know that
* diffcore_count_changes() considers the two entries to
* be identical, but since the oid changed, we
* know that there must have been _some_ kind of change,
* so we force all entries to have damage > 0.
*/
damage = (p->one->size - copied) + added;
if (!damage)
damage = 1;
found_damage:
ALLOC_GROW(dir.files, dir.nr + 1, dir.alloc);
dir.files[dir.nr].name = name;
dir.files[dir.nr].changed = damage;
changed += damage;
dir.nr++;
}
conclude_dirstat(options, &dir, changed);
}
static void show_dirstat_by_line(struct diffstat_t *data, struct diff_options *options)
{
int i;
unsigned long changed;
struct dirstat_dir dir;
if (data->nr == 0)
return;
dir.files = NULL;
dir.alloc = 0;
dir.nr = 0;
dir.permille = options->dirstat_permille;
dir.cumulative = options->flags.dirstat_cumulative;
changed = 0;
for (i = 0; i < data->nr; i++) {
struct diffstat_file *file = data->files[i];
unsigned long damage = file->added + file->deleted;
if (file->is_binary)
/*
			 * binary files count bytes, not lines. Must find some
* way to normalize binary bytes vs. textual lines.
* The following heuristic assumes that there are 64
* bytes per "line".
* This is stupid and ugly, but very cheap...
*/
damage = DIV_ROUND_UP(damage, 64);
ALLOC_GROW(dir.files, dir.nr + 1, dir.alloc);
dir.files[dir.nr].name = file->name;
dir.files[dir.nr].changed = damage;
changed += damage;
dir.nr++;
}
conclude_dirstat(options, &dir, changed);
}
static void free_diffstat_file(struct diffstat_file *f)
{
free(f->print_name);
free(f->name);
free(f->from_name);
free(f);
}
void free_diffstat_info(struct diffstat_t *diffstat)
{
int i;
for (i = 0; i < diffstat->nr; i++)
free_diffstat_file(diffstat->files[i]);
free(diffstat->files);
}
struct checkdiff_t {
const char *filename;
int lineno;
int conflict_marker_size;
struct diff_options *o;
unsigned ws_rule;
unsigned status;
int last_line_kind;
};
static int is_conflict_marker(const char *line, int marker_size, unsigned long len)
{
char firstchar;
int cnt;
if (len < marker_size + 1)
return 0;
firstchar = line[0];
switch (firstchar) {
case '=': case '>': case '<': case '|':
break;
default:
return 0;
}
for (cnt = 1; cnt < marker_size; cnt++)
if (line[cnt] != firstchar)
return 0;
/* line[1] through line[marker_size-1] are same as firstchar */
if (len < marker_size + 1 || !isspace(line[marker_size]))
return 0;
return 1;
}
static void checkdiff_consume_hunk(void *priv,
long ob UNUSED, long on UNUSED,
long nb, long nn UNUSED,
const char *func UNUSED, long funclen UNUSED)
{
struct checkdiff_t *data = priv;
data->lineno = nb - 1;
}
static int checkdiff_consume(void *priv, char *line, unsigned long len)
{
struct checkdiff_t *data = priv;
int last_line_kind;
int marker_size = data->conflict_marker_size;
const char *ws = diff_get_color(data->o->use_color, DIFF_WHITESPACE);
const char *reset = diff_get_color(data->o->use_color, DIFF_RESET);
const char *set = diff_get_color(data->o->use_color, DIFF_FILE_NEW);
char *err;
const char *line_prefix;
assert(data->o);
line_prefix = diff_line_prefix(data->o);
last_line_kind = data->last_line_kind;
data->last_line_kind = line[0];
if (line[0] == '+') {
unsigned bad;
data->lineno++;
if (is_conflict_marker(line + 1, marker_size, len - 1)) {
data->status |= 1;
fprintf(data->o->file,
"%s%s:%d: leftover conflict marker\n",
line_prefix, data->filename, data->lineno);
}
bad = ws_check(line + 1, len - 1, data->ws_rule);
if (!bad)
return 0;
data->status |= bad;
err = whitespace_error_string(bad);
fprintf(data->o->file, "%s%s:%d: %s.\n",
line_prefix, data->filename, data->lineno, err);
free(err);
emit_line(data->o, set, reset, line, 1);
ws_check_emit(line + 1, len - 1, data->ws_rule,
data->o->file, set, reset, ws);
} else if (line[0] == ' ') {
data->lineno++;
} else if (line[0] == '\\') {
/* no newline at the end of the line */
if ((data->ws_rule & WS_INCOMPLETE_LINE) &&
(last_line_kind == '+')) {
unsigned bad = WS_INCOMPLETE_LINE;
data->status |= bad;
err = whitespace_error_string(bad);
fprintf(data->o->file, "%s%s:%d: %s.\n",
line_prefix, data->filename, data->lineno, err);
free(err);
}
}
return 0;
}
static unsigned char *deflate_it(char *data,
unsigned long size,
unsigned long *result_size)
{
int bound;
unsigned char *deflated;
git_zstream stream;
git_deflate_init(&stream, zlib_compression_level);
bound = git_deflate_bound(&stream, size);
deflated = xmalloc(bound);
stream.next_out = deflated;
stream.avail_out = bound;
stream.next_in = (unsigned char *)data;
stream.avail_in = size;
while (git_deflate(&stream, Z_FINISH) == Z_OK)
; /* nothing */
git_deflate_end(&stream);
*result_size = stream.total_out;
return deflated;
}
static void emit_binary_diff_body(struct diff_options *o,
mmfile_t *one, mmfile_t *two)
{
void *cp;
void *delta;
void *deflated;
void *data;
unsigned long orig_size;
unsigned long delta_size;
unsigned long deflate_size;
unsigned long data_size;
/* We could do deflated delta, or we could do just deflated two,
* whichever is smaller.
*/
delta = NULL;
deflated = deflate_it(two->ptr, two->size, &deflate_size);
if (one->size && two->size) {
delta = diff_delta(one->ptr, one->size,
two->ptr, two->size,
&delta_size, deflate_size);
if (delta) {
void *to_free = delta;
orig_size = delta_size;
delta = deflate_it(delta, delta_size, &delta_size);
free(to_free);
}
}
if (delta && delta_size < deflate_size) {
		char *s = xstrfmt("%"PRIuMAX, (uintmax_t)orig_size);
emit_diff_symbol(o, DIFF_SYMBOL_BINARY_DIFF_HEADER_DELTA,
s, strlen(s), 0);
free(s);
free(deflated);
data = delta;
data_size = delta_size;
} else {
char *s = xstrfmt("%lu", two->size);
emit_diff_symbol(o, DIFF_SYMBOL_BINARY_DIFF_HEADER_LITERAL,
s, strlen(s), 0);
free(s);
free(delta);
data = deflated;
data_size = deflate_size;
}
/* emit data encoded in base85 */
cp = data;
while (data_size) {
int len;
int bytes = (52 < data_size) ? 52 : data_size;
char line[71];
data_size -= bytes;
if (bytes <= 26)
line[0] = bytes + 'A' - 1;
else
line[0] = bytes - 26 + 'a' - 1;
encode_85(line + 1, cp, bytes);
cp = (char *) cp + bytes;
len = strlen(line);
line[len++] = '\n';
line[len] = '\0';
emit_diff_symbol(o, DIFF_SYMBOL_BINARY_DIFF_BODY,
line, len, 0);
}
emit_diff_symbol(o, DIFF_SYMBOL_BINARY_DIFF_FOOTER, NULL, 0, 0);
free(data);
}
static void emit_binary_diff(struct diff_options *o,
mmfile_t *one, mmfile_t *two)
{
emit_diff_symbol(o, DIFF_SYMBOL_BINARY_DIFF_HEADER, NULL, 0, 0);
emit_binary_diff_body(o, one, two);
emit_binary_diff_body(o, two, one);
}
int diff_filespec_is_binary(struct repository *r,
struct diff_filespec *one)
{
struct diff_populate_filespec_options dpf_options = {
.check_binary = 1,
};
if (one->is_binary == -1) {
diff_filespec_load_driver(one, r->index);
if (one->driver->binary != -1)
one->is_binary = one->driver->binary;
else {
if (!one->data && DIFF_FILE_VALID(one))
diff_populate_filespec(r, one, &dpf_options);
if (one->is_binary == -1 && one->data)
one->is_binary = buffer_is_binary(one->data,
one->size);
if (one->is_binary == -1)
one->is_binary = 0;
}
}
return one->is_binary;
}
static const struct userdiff_funcname *
diff_funcname_pattern(struct diff_options *o, struct diff_filespec *one)
{
diff_filespec_load_driver(one, o->repo->index);
return one->driver->funcname.pattern ? &one->driver->funcname : NULL;
}
void diff_set_mnemonic_prefix(struct diff_options *options, const char *a, const char *b)
{
if (!options->a_prefix)
options->a_prefix = a;
if (!options->b_prefix)
options->b_prefix = b;
}
void diff_set_noprefix(struct diff_options *options)
{
options->a_prefix = options->b_prefix = "";
}
void diff_set_default_prefix(struct diff_options *options)
{
options->a_prefix = diff_src_prefix ? diff_src_prefix : "a/";
options->b_prefix = diff_dst_prefix ? diff_dst_prefix : "b/";
}
struct userdiff_driver *get_textconv(struct repository *r,
struct diff_filespec *one)
{
if (!DIFF_FILE_VALID(one))
return NULL;
diff_filespec_load_driver(one, r->index);
return userdiff_get_textconv(r, one->driver);
}
static struct string_list *additional_headers(struct diff_options *o,
const char *path)
{
if (!o->additional_path_headers)
return NULL;
return strmap_get(o->additional_path_headers, path);
}
static void add_formatted_header(struct strbuf *msg,
const char *header,
const char *line_prefix,
const char *meta,
const char *reset)
{
const char *next, *newline;
for (next = header; *next; next = newline) {
newline = strchrnul(next, '\n');
strbuf_addf(msg, "%s%s%.*s%s\n", line_prefix, meta,
(int)(newline - next), next, reset);
if (*newline)
newline++;
}
}
static void add_formatted_headers(struct strbuf *msg,
struct string_list *more_headers,
const char *line_prefix,
const char *meta,
const char *reset)
{
int i;
for (i = 0; i < more_headers->nr; i++)
add_formatted_header(msg, more_headers->items[i].string,
line_prefix, meta, reset);
}
static int diff_filepair_is_phoney(struct diff_filespec *one,
struct diff_filespec *two)
{
/*
* This function specifically looks for pairs injected by
* create_filepairs_for_header_only_notifications(). Such
* pairs are "phoney" in that they do not represent any
* content or even mode difference, but were inserted because
* diff_queued_diff previously had no pair associated with
* that path but we needed some pair to avoid losing the
* "remerge CONFLICT" header associated with the path.
*/
return !DIFF_FILE_VALID(one) && !DIFF_FILE_VALID(two);
}
static int set_diff_algorithm(struct diff_options *opts,
const char *alg)
{
long value = parse_algorithm_value(alg);
if (value < 0)
return -1;
/* clear out previous settings */
opts->xdl_opts &= ~XDF_DIFF_ALGORITHM_MASK;
opts->xdl_opts |= value;
return 0;
}
static void builtin_diff(const char *name_a,
const char *name_b,
struct diff_filespec *one,
struct diff_filespec *two,
const char *xfrm_msg,
int must_show_header,
struct diff_options *o,
int complete_rewrite)
{
mmfile_t mf1, mf2;
const char *lbl[2];
char *a_one, *b_two;
const char *meta = diff_get_color_opt(o, DIFF_METAINFO);
const char *reset = diff_get_color_opt(o, DIFF_RESET);
const char *a_prefix, *b_prefix;
struct userdiff_driver *textconv_one = NULL;
struct userdiff_driver *textconv_two = NULL;
struct strbuf header = STRBUF_INIT;
const char *line_prefix = diff_line_prefix(o);
diff_set_mnemonic_prefix(o, "a/", "b/");
if (o->flags.reverse_diff) {
a_prefix = o->b_prefix;
b_prefix = o->a_prefix;
} else {
a_prefix = o->a_prefix;
b_prefix = o->b_prefix;
}
if (o->submodule_format == DIFF_SUBMODULE_LOG &&
(!one->mode || S_ISGITLINK(one->mode)) &&
(!two->mode || S_ISGITLINK(two->mode)) &&
(!diff_filepair_is_phoney(one, two))) {
show_submodule_diff_summary(o, one->path ? one->path : two->path,
&one->oid, &two->oid,
two->dirty_submodule);
o->found_changes = 1;
return;
} else if (o->submodule_format == DIFF_SUBMODULE_INLINE_DIFF &&
(!one->mode || S_ISGITLINK(one->mode)) &&
(!two->mode || S_ISGITLINK(two->mode)) &&
(!diff_filepair_is_phoney(one, two))) {
show_submodule_inline_diff(o, one->path ? one->path : two->path,
&one->oid, &two->oid,
two->dirty_submodule);
o->found_changes = 1;
return;
}
if (o->flags.allow_textconv) {
textconv_one = get_textconv(o->repo, one);
textconv_two = get_textconv(o->repo, two);
}
/* Never use a non-valid filename anywhere if at all possible */
name_a = DIFF_FILE_VALID(one) ? name_a : name_b;
name_b = DIFF_FILE_VALID(two) ? name_b : name_a;
a_one = quote_two(a_prefix, name_a + (*name_a == '/'));
b_two = quote_two(b_prefix, name_b + (*name_b == '/'));
lbl[0] = DIFF_FILE_VALID(one) ? a_one : "/dev/null";
lbl[1] = DIFF_FILE_VALID(two) ? b_two : "/dev/null";
if (diff_filepair_is_phoney(one, two)) {
/*
* We should only reach this point for pairs generated from
* create_filepairs_for_header_only_notifications(). For
* these, we want to avoid the "/dev/null" special casing
* above, because we do not want such pairs shown as either
* "new file" or "deleted file" below.
*/
lbl[0] = a_one;
lbl[1] = b_two;
}
strbuf_addf(&header, "%s%sdiff --git %s %s%s\n", line_prefix, meta, a_one, b_two, reset);
if (lbl[0][0] == '/') {
/* /dev/null */
strbuf_addf(&header, "%s%snew file mode %06o%s\n", line_prefix, meta, two->mode, reset);
if (xfrm_msg)
strbuf_addstr(&header, xfrm_msg);
o->found_changes = 1;
must_show_header = 1;
}
else if (lbl[1][0] == '/') {
strbuf_addf(&header, "%s%sdeleted file mode %06o%s\n", line_prefix, meta, one->mode, reset);
if (xfrm_msg)
strbuf_addstr(&header, xfrm_msg);
o->found_changes = 1;
must_show_header = 1;
}
else {
if (one->mode != two->mode) {
strbuf_addf(&header, "%s%sold mode %06o%s\n", line_prefix, meta, one->mode, reset);
strbuf_addf(&header, "%s%snew mode %06o%s\n", line_prefix, meta, two->mode, reset);
o->found_changes = 1;
must_show_header = 1;
}
if (xfrm_msg)
strbuf_addstr(&header, xfrm_msg);
/*
* we do not run diff between different kind
* of objects.
*/
if ((one->mode ^ two->mode) & S_IFMT)
goto free_ab_and_return;
if (complete_rewrite &&
(textconv_one || !diff_filespec_is_binary(o->repo, one)) &&
(textconv_two || !diff_filespec_is_binary(o->repo, two))) {
emit_diff_symbol(o, DIFF_SYMBOL_HEADER,
header.buf, header.len, 0);
strbuf_reset(&header);
emit_rewrite_diff(name_a, name_b, one, two,
textconv_one, textconv_two, o);
o->found_changes = 1;
goto free_ab_and_return;
}
}
if (o->irreversible_delete && lbl[1][0] == '/') {
emit_diff_symbol(o, DIFF_SYMBOL_HEADER, header.buf,
header.len, 0);
strbuf_reset(&header);
goto free_ab_and_return;
} else if (!o->flags.text &&
( (!textconv_one && diff_filespec_is_binary(o->repo, one)) ||
(!textconv_two && diff_filespec_is_binary(o->repo, two)) )) {
struct strbuf sb = STRBUF_INIT;
if (!one->data && !two->data &&
S_ISREG(one->mode) && S_ISREG(two->mode) &&
!o->flags.binary) {
if (oideq(&one->oid, &two->oid)) {
if (must_show_header)
emit_diff_symbol(o, DIFF_SYMBOL_HEADER,
header.buf, header.len,
0);
goto free_ab_and_return;
}
emit_diff_symbol(o, DIFF_SYMBOL_HEADER,
header.buf, header.len, 0);
strbuf_addf(&sb, "%sBinary files %s and %s differ\n",
diff_line_prefix(o), lbl[0], lbl[1]);
emit_diff_symbol(o, DIFF_SYMBOL_BINARY_FILES,
sb.buf, sb.len, 0);
strbuf_release(&sb);
o->found_changes = 1;
goto free_ab_and_return;
}
if (fill_mmfile(o->repo, &mf1, one) < 0 ||
fill_mmfile(o->repo, &mf2, two) < 0)
die("unable to read files to diff");
/* Quite common confusing case */
if (mf1.size == mf2.size &&
!memcmp(mf1.ptr, mf2.ptr, mf1.size)) {
if (must_show_header)
emit_diff_symbol(o, DIFF_SYMBOL_HEADER,
header.buf, header.len, 0);
goto free_ab_and_return;
}
emit_diff_symbol(o, DIFF_SYMBOL_HEADER, header.buf, header.len, 0);
strbuf_reset(&header);
if (o->flags.binary)
emit_binary_diff(o, &mf1, &mf2);
else {
strbuf_addf(&sb, "%sBinary files %s and %s differ\n",
diff_line_prefix(o), lbl[0], lbl[1]);
emit_diff_symbol(o, DIFF_SYMBOL_BINARY_FILES,
sb.buf, sb.len, 0);
strbuf_release(&sb);
}
o->found_changes = 1;
} else {
/* Crazy xdl interfaces.. */
const char *diffopts;
const char *v;
xpparam_t xpp;
xdemitconf_t xecfg;
struct emit_callback ecbdata;
const struct userdiff_funcname *pe;
if (must_show_header) {
emit_diff_symbol(o, DIFF_SYMBOL_HEADER,
header.buf, header.len, 0);
strbuf_reset(&header);
}
mf1.size = fill_textconv(o->repo, textconv_one, one, &mf1.ptr);
mf2.size = fill_textconv(o->repo, textconv_two, two, &mf2.ptr);
pe = diff_funcname_pattern(o, one);
if (!pe)
pe = diff_funcname_pattern(o, two);
memset(&xpp, 0, sizeof(xpp));
memset(&xecfg, 0, sizeof(xecfg));
memset(&ecbdata, 0, sizeof(ecbdata));
if (o->flags.suppress_diff_headers)
lbl[0] = NULL;
ecbdata.label_path = lbl;
ecbdata.color_diff = o->use_color;
ecbdata.ws_rule = whitespace_rule(o->repo->index, name_b);
if (ecbdata.ws_rule & WS_BLANK_AT_EOF)
check_blank_at_eof(&mf1, &mf2, &ecbdata);
ecbdata.opt = o;
if (header.len && !o->flags.suppress_diff_headers)
ecbdata.header = &header;
xpp.flags = o->xdl_opts;
xpp.ignore_regex = o->ignore_regex;
xpp.ignore_regex_nr = o->ignore_regex_nr;
xpp.anchors = o->anchors;
xpp.anchors_nr = o->anchors_nr;
xecfg.ctxlen = o->context;
xecfg.interhunkctxlen = o->interhunkcontext;
xecfg.flags = XDL_EMIT_FUNCNAMES;
if (o->flags.funccontext)
xecfg.flags |= XDL_EMIT_FUNCCONTEXT;
if (pe)
xdiff_set_find_func(&xecfg, pe->pattern, pe->cflags);
diffopts = getenv("GIT_DIFF_OPTS");
if (!diffopts)
;
else if (skip_prefix(diffopts, "--unified=", &v))
xecfg.ctxlen = strtoul(v, NULL, 10);
else if (skip_prefix(diffopts, "-u", &v))
xecfg.ctxlen = strtoul(v, NULL, 10);
if (o->word_diff)
init_diff_words_data(&ecbdata, o, one, two);
if (!o->file) {
/*
* Unlike the normal output case, we need to ignore the
* return value from xdi_diff_outf() here, because
* xdi_diff_outf() takes non-zero return from its
* callback function as a sign of error and returns
* early (which is why we return non-zero from our
* callback, quick_consume()). Unfortunately,
* xdi_diff_outf() signals an error by returning
* non-zero.
*/
xdi_diff_outf(&mf1, &mf2, NULL, quick_consume,
&ecbdata, &xpp, &xecfg);
} else if (xdi_diff_outf(&mf1, &mf2, NULL, fn_out_consume,
&ecbdata, &xpp, &xecfg))
die("unable to generate diff for %s", one->path);
if (o->word_diff)
free_diff_words_data(&ecbdata);
if (textconv_one)
free(mf1.ptr);
if (textconv_two)
free(mf2.ptr);
xdiff_clear_find_func(&xecfg);
}
free_ab_and_return:
strbuf_release(&header);
diff_free_filespec_data(one);
diff_free_filespec_data(two);
free(a_one);
free(b_two);
return;
}
static const char *get_compact_summary(const struct diff_filepair *p, int is_renamed)
{
if (!is_renamed) {
if (p->status == DIFF_STATUS_ADDED) {
if (S_ISLNK(p->two->mode))
return "new +l";
else if ((p->two->mode & 0777) == 0755)
return "new +x";
else
return "new";
} else if (p->status == DIFF_STATUS_DELETED)
return "gone";
}
if (S_ISLNK(p->one->mode) && !S_ISLNK(p->two->mode))
return "mode -l";
else if (!S_ISLNK(p->one->mode) && S_ISLNK(p->two->mode))
return "mode +l";
else if ((p->one->mode & 0777) == 0644 &&
(p->two->mode & 0777) == 0755)
return "mode +x";
else if ((p->one->mode & 0777) == 0755 &&
(p->two->mode & 0777) == 0644)
return "mode -x";
return NULL;
}
static void builtin_diffstat(const char *name_a, const char *name_b,
struct diff_filespec *one,
struct diff_filespec *two,
struct diffstat_t *diffstat,
struct diff_options *o,
struct diff_filepair *p)
{
mmfile_t mf1, mf2;
struct diffstat_file *data;
int may_differ;
int complete_rewrite = 0;
if (!DIFF_PAIR_UNMERGED(p)) {
if (p->status == DIFF_STATUS_MODIFIED && p->score)
complete_rewrite = 1;
}
data = diffstat_add(diffstat, name_a, name_b);
data->is_interesting = p->status != DIFF_STATUS_UNKNOWN;
if (o->flags.stat_with_summary)
data->comments = get_compact_summary(p, data->is_renamed);
if (!one || !two) {
data->is_unmerged = 1;
return;
}
/* saves some reads if true, not a guarantee of diff outcome */
may_differ = !(one->oid_valid && two->oid_valid &&
oideq(&one->oid, &two->oid));
if (diff_filespec_is_binary(o->repo, one) ||
diff_filespec_is_binary(o->repo, two)) {
data->is_binary = 1;
if (!may_differ) {
data->added = 0;
data->deleted = 0;
} else {
data->added = diff_filespec_size(o->repo, two);
data->deleted = diff_filespec_size(o->repo, one);
}
}
else if (complete_rewrite) {
diff_populate_filespec(o->repo, one, NULL);
diff_populate_filespec(o->repo, two, NULL);
data->deleted = count_lines(one->data, one->size);
data->added = count_lines(two->data, two->size);
}
else if (may_differ) {
/* Crazy xdl interfaces.. */
xpparam_t xpp;
xdemitconf_t xecfg;
if (fill_mmfile(o->repo, &mf1, one) < 0 ||
fill_mmfile(o->repo, &mf2, two) < 0)
die("unable to read files to diff");
memset(&xpp, 0, sizeof(xpp));
memset(&xecfg, 0, sizeof(xecfg));
xpp.flags = o->xdl_opts;
xpp.ignore_regex = o->ignore_regex;
xpp.ignore_regex_nr = o->ignore_regex_nr;
xpp.anchors = o->anchors;
xpp.anchors_nr = o->anchors_nr;
xecfg.ctxlen = o->context;
xecfg.interhunkctxlen = o->interhunkcontext;
xecfg.flags = XDL_EMIT_NO_HUNK_HDR;
if (xdi_diff_outf(&mf1, &mf2, NULL,
diffstat_consume, diffstat, &xpp, &xecfg))
die("unable to generate diffstat for %s", one->path);
if (DIFF_FILE_VALID(one) && DIFF_FILE_VALID(two)) {
struct diffstat_file *file =
diffstat->files[diffstat->nr - 1];
/*
* Omit diffstats of modified files where nothing changed.
* Even if may_differ, this might be the case due to
* ignoring whitespace changes, etc.
*
			 * But note that we special-case additions, deletions,
			 * renames, and mode changes, as adding an empty file,
			 * for example, is still of interest.
*/
if ((p->status == DIFF_STATUS_MODIFIED)
&& !file->added
&& !file->deleted
&& one->mode == two->mode) {
free_diffstat_file(file);
diffstat->nr--;
}
}
}
diff_free_filespec_data(one);
diff_free_filespec_data(two);
}
static void builtin_checkdiff(const char *name_a, const char *name_b,
const char *attr_path,
struct diff_filespec *one,
struct diff_filespec *two,
struct diff_options *o)
{
mmfile_t mf1, mf2;
struct checkdiff_t data;
if (!two)
return;
memset(&data, 0, sizeof(data));
data.filename = name_b ? name_b : name_a;
data.lineno = 0;
data.o = o;
data.ws_rule = whitespace_rule(o->repo->index, attr_path);
data.conflict_marker_size = ll_merge_marker_size(o->repo->index, attr_path);
if (fill_mmfile(o->repo, &mf1, one) < 0 ||
fill_mmfile(o->repo, &mf2, two) < 0)
die("unable to read files to diff");
/*
* All the other codepaths check both sides, but not checking
* the "old" side here is deliberate. We are checking the newly
* introduced changes, and as long as the "new" side is text, we
* can and should check what it introduces.
*/
if (diff_filespec_is_binary(o->repo, two))
goto free_and_return;
else {
/* Crazy xdl interfaces.. */
xpparam_t xpp;
xdemitconf_t xecfg;
memset(&xpp, 0, sizeof(xpp));
memset(&xecfg, 0, sizeof(xecfg));
xecfg.ctxlen = 1; /* at least one context line */
xpp.flags = 0;
if (xdi_diff_outf(&mf1, &mf2, checkdiff_consume_hunk,
checkdiff_consume, &data,
&xpp, &xecfg))
die("unable to generate checkdiff for %s", one->path);
if (data.ws_rule & WS_BLANK_AT_EOF) {
struct emit_callback ecbdata;
int blank_at_eof;
ecbdata.ws_rule = data.ws_rule;
check_blank_at_eof(&mf1, &mf2, &ecbdata);
blank_at_eof = ecbdata.blank_at_eof_in_postimage;
if (blank_at_eof) {
static char *err;
if (!err)
err = whitespace_error_string(WS_BLANK_AT_EOF);
fprintf(o->file, "%s:%d: %s.\n",
data.filename, blank_at_eof, err);
data.status = 1; /* report errors */
}
}
}
free_and_return:
diff_free_filespec_data(one);
diff_free_filespec_data(two);
if (data.status)
o->flags.check_failed = 1;
}
struct diff_filespec *alloc_filespec(const char *path)
{
struct diff_filespec *spec;
FLEXPTR_ALLOC_STR(spec, path, path);
spec->count = 1;
spec->is_binary = -1;
return spec;
}
void free_filespec(struct diff_filespec *spec)
{
if (!--spec->count) {
diff_free_filespec_data(spec);
free(spec);
}
}
void fill_filespec(struct diff_filespec *spec, const struct object_id *oid,
int oid_valid, unsigned short mode)
{
if (mode) {
spec->mode = canon_mode(mode);
oidcpy(&spec->oid, oid);
spec->oid_valid = oid_valid;
}
}
/*
* Given a name and sha1 pair, if the index tells us the file in
* the work tree has that object contents, return true, so that
* prepare_temp_file() does not have to inflate and extract.
*/
static int reuse_worktree_file(struct index_state *istate,
const char *name,
const struct object_id *oid,
int want_file)
{
const struct cache_entry *ce;
struct stat st;
int pos, len;
/*
* We do not read the cache ourselves here, because the
	 * benchmark with my previous version that always reads the cache
* shows that it makes things worse for diff-tree comparing
* two linux-2.6 kernel trees in an already checked out work
* tree. This is because most diff-tree comparisons deal with
* only a small number of files, while reading the cache is
* expensive for a large project, and its cost outweighs the
* savings we get by not inflating the object to a temporary
* file. Practically, this code only helps when we are used
* by diff-cache --cached, which does read the cache before
* calling us.
*/
if (!istate->cache)
return 0;
	/* We want to avoid the working directory when our caller
	 * doesn't need the data in a normal file: this system
	 * is rather slow with its stat/open/mmap/close syscalls,
	 * and the object is contained in a pack file.  The pack
	 * is probably already open and will be faster to obtain
	 * the data through than the working directory.  Loose
	 * objects, however, would tend to be slower as they need
	 * to be individually opened and inflated.
	 */
if (!FAST_WORKING_DIRECTORY && !want_file &&
has_object_pack(istate->repo, oid))
return 0;
/*
* Similarly, if we'd have to convert the file contents anyway, that
* makes the optimization not worthwhile.
*/
if (!want_file && would_convert_to_git(istate, name))
return 0;
/*
* If this path does not match our sparse-checkout definition,
* then the file will not be in the working directory.
*/
if (!path_in_sparse_checkout(name, istate))
return 0;
len = strlen(name);
pos = index_name_pos(istate, name, len);
if (pos < 0)
return 0;
ce = istate->cache[pos];
/*
* This is not the sha1 we are looking for, or
* unreusable because it is not a regular file.
*/
if (!oideq(oid, &ce->oid) || !S_ISREG(ce->ce_mode))
return 0;
/*
* If ce is marked as "assume unchanged", there is no
* guarantee that work tree matches what we are looking for.
*/
if ((ce->ce_flags & CE_VALID) || ce_skip_worktree(ce))
return 0;
/*
* If ce matches the file in the work tree, we can reuse it.
*/
if (ce_uptodate(ce) ||
(!lstat(name, &st) && !ie_match_stat(istate, ce, &st, 0)))
return 1;
return 0;
}
static int diff_populate_gitlink(struct diff_filespec *s, int size_only)
{
struct strbuf buf = STRBUF_INIT;
const char *dirty = "";
/* Are we looking at the work tree? */
if (s->dirty_submodule)
dirty = "-dirty";
strbuf_addf(&buf, "Subproject commit %s%s\n",
oid_to_hex(&s->oid), dirty);
s->size = buf.len;
if (size_only) {
s->data = NULL;
strbuf_release(&buf);
} else {
s->data = strbuf_detach(&buf, NULL);
s->should_free = 1;
}
return 0;
}
/*
* While doing rename detection and pickaxe operation, we may need to
* grab the data for the blob (or file) for our own in-core comparison.
* diff_filespec has data and size fields for this purpose.
*/
int diff_populate_filespec(struct repository *r,
struct diff_filespec *s,
const struct diff_populate_filespec_options *options)
{
int size_only = options ? options->check_size_only : 0;
int check_binary = options ? options->check_binary : 0;
int err = 0;
int conv_flags = global_conv_flags_eol;
/*
* demote FAIL to WARN to allow inspecting the situation
* instead of refusing.
*/
if (conv_flags & CONV_EOL_RNDTRP_DIE)
conv_flags = CONV_EOL_RNDTRP_WARN;
if (!DIFF_FILE_VALID(s))
die("internal error: asking to populate invalid file.");
if (S_ISDIR(s->mode))
return -1;
if (s->data)
return 0;
if (size_only && 0 < s->size)
return 0;
if (S_ISGITLINK(s->mode))
return diff_populate_gitlink(s, size_only);
if (!s->oid_valid ||
reuse_worktree_file(r->index, s->path, &s->oid, 0)) {
struct strbuf buf = STRBUF_INIT;
struct stat st;
int fd;
if (lstat(s->path, &st) < 0) {
err_empty:
err = -1;
empty:
s->data = (char *)"";
s->size = 0;
return err;
}
s->size = xsize_t(st.st_size);
if (!s->size)
goto empty;
if (S_ISLNK(st.st_mode)) {
struct strbuf sb = STRBUF_INIT;
if (strbuf_readlink(&sb, s->path, s->size))
goto err_empty;
s->size = sb.len;
s->data = strbuf_detach(&sb, NULL);
s->should_free = 1;
return 0;
}
/*
* Even if the caller would be happy with getting
* only the size, we cannot return early at this
* point if the path requires us to run the content
* conversion.
*/
if (size_only && !would_convert_to_git(r->index, s->path))
return 0;
		/*
		 * Note: this check uses xsize_t(st.st_size), which may
		 * not be the true size of the blob after it goes
		 * through convert_to_git(). This may not strictly be
		 * correct, but since the whole point of big_file_threshold
		 * and the is_binary check is to avoid opening the file
		 * and inspecting the contents, this is probably fine.
		 */
if (check_binary &&
s->size > repo_settings_get_big_file_threshold(the_repository) &&
s->is_binary == -1) {
s->is_binary = 1;
return 0;
}
fd = open(s->path, O_RDONLY);
if (fd < 0)
goto err_empty;
s->data = xmmap(NULL, s->size, PROT_READ, MAP_PRIVATE, fd, 0);
close(fd);
s->should_munmap = 1;
/*
* Convert from working tree format to canonical git format
*/
if (convert_to_git(r->index, s->path, s->data, s->size, &buf, conv_flags)) {
size_t size = 0;
munmap(s->data, s->size);
s->should_munmap = 0;
s->data = strbuf_detach(&buf, &size);
s->size = size;
s->should_free = 1;
}
}
else {
struct object_info info = {
.sizep = &s->size
};
if (!(size_only || check_binary))
/*
* Set contentp, since there is no chance that merely
* the size is sufficient.
*/
info.contentp = &s->data;
if (options && options->missing_object_cb) {
if (!odb_read_object_info_extended(r->objects, &s->oid, &info,
OBJECT_INFO_LOOKUP_REPLACE |
OBJECT_INFO_SKIP_FETCH_OBJECT))
goto object_read;
options->missing_object_cb(options->missing_object_data);
}
if (odb_read_object_info_extended(r->objects, &s->oid, &info,
OBJECT_INFO_LOOKUP_REPLACE))
die("unable to read %s", oid_to_hex(&s->oid));
object_read:
if (size_only || check_binary) {
if (size_only)
return 0;
if (s->size > repo_settings_get_big_file_threshold(the_repository) &&
s->is_binary == -1) {
s->is_binary = 1;
return 0;
}
}
if (!info.contentp) {
info.contentp = &s->data;
if (odb_read_object_info_extended(r->objects, &s->oid, &info,
OBJECT_INFO_LOOKUP_REPLACE))
die("unable to read %s", oid_to_hex(&s->oid));
}
s->should_free = 1;
}
return 0;
}
void diff_free_filespec_blob(struct diff_filespec *s)
{
if (s->should_free)
free(s->data);
else if (s->should_munmap)
munmap(s->data, s->size);
if (s->should_free || s->should_munmap) {
s->should_free = s->should_munmap = 0;
s->data = NULL;
}
}
void diff_free_filespec_data(struct diff_filespec *s)
{
if (!s)
return;
diff_free_filespec_blob(s);
FREE_AND_NULL(s->cnt_data);
}
static void prep_temp_blob(struct index_state *istate,
const char *path, struct diff_tempfile *temp,
void *blob,
unsigned long size,
const struct object_id *oid,
int mode)
{
struct strbuf buf = STRBUF_INIT;
char *path_dup = xstrdup(path);
const char *base = basename(path_dup);
struct checkout_metadata meta;
init_checkout_metadata(&meta, NULL, NULL, oid);
temp->tempfile = mks_tempfile_dt("git-blob-XXXXXX", base);
if (!temp->tempfile)
die_errno("unable to create temp-file");
if (convert_to_working_tree(istate, path,
(const char *)blob, (size_t)size, &buf, &meta)) {
blob = buf.buf;
size = buf.len;
}
if (write_in_full(temp->tempfile->fd, blob, size) < 0 ||
close_tempfile_gently(temp->tempfile))
die_errno("unable to write temp-file");
temp->name = get_tempfile_path(temp->tempfile);
oid_to_hex_r(temp->hex, oid);
xsnprintf(temp->mode, sizeof(temp->mode), "%06o", mode);
strbuf_release(&buf);
free(path_dup);
}
static struct diff_tempfile *prepare_temp_file(struct repository *r,
struct diff_filespec *one)
{
struct diff_tempfile *temp = claim_diff_tempfile();
if (!DIFF_FILE_VALID(one)) {
not_a_valid_file:
/* A '-' entry produces this for file-2, and
* a '+' entry produces this for file-1.
*/
temp->name = "/dev/null";
xsnprintf(temp->hex, sizeof(temp->hex), ".");
xsnprintf(temp->mode, sizeof(temp->mode), ".");
return temp;
}
if (!S_ISGITLINK(one->mode) &&
(!one->oid_valid ||
reuse_worktree_file(r->index, one->path, &one->oid, 1))) {
struct stat st;
if (lstat(one->path, &st) < 0) {
if (errno == ENOENT)
goto not_a_valid_file;
die_errno("stat(%s)", one->path);
}
if (S_ISLNK(st.st_mode)) {
struct strbuf sb = STRBUF_INIT;
if (strbuf_readlink(&sb, one->path, st.st_size) < 0)
die_errno("readlink(%s)", one->path);
prep_temp_blob(r->index, one->path, temp, sb.buf, sb.len,
(one->oid_valid ?
&one->oid : null_oid(the_hash_algo)),
(one->oid_valid ?
one->mode : S_IFLNK));
strbuf_release(&sb);
}
else {
/* we can borrow from the file in the work tree */
temp->name = one->path;
if (!one->oid_valid)
oid_to_hex_r(temp->hex, null_oid(the_hash_algo));
else
oid_to_hex_r(temp->hex, &one->oid);
/* Even though we may sometimes borrow the
* contents from the work tree, we always want
* one->mode. mode is trustworthy even when
* !(one->oid_valid), as long as
* DIFF_FILE_VALID(one).
*/
xsnprintf(temp->mode, sizeof(temp->mode), "%06o", one->mode);
}
return temp;
}
else {
if (diff_populate_filespec(r, one, NULL))
die("cannot read data blob for %s", one->path);
prep_temp_blob(r->index, one->path, temp,
one->data, one->size,
&one->oid, one->mode);
}
return temp;
}
static void add_external_diff_name(struct repository *r,
struct strvec *argv,
struct diff_filespec *df)
{
struct diff_tempfile *temp = prepare_temp_file(r, df);
strvec_push(argv, temp->name);
strvec_push(argv, temp->hex);
strvec_push(argv, temp->mode);
}
/* An external diff command takes:
*
* diff-cmd name infile1 infile1-sha1 infile1-mode \
* infile2 infile2-sha1 infile2-mode [ rename-to ]
*
*/
static void run_external_diff(const struct external_diff *pgm,
const char *name,
const char *other,
struct diff_filespec *one,
struct diff_filespec *two,
const char *xfrm_msg,
struct diff_options *o)
{
struct child_process cmd = CHILD_PROCESS_INIT;
struct diff_queue_struct *q = &diff_queued_diff;
int rc;
	/*
	 * Trivial equality is handled by diff_unmodified_pair() before
	 * we get here. If we don't need to show the diff and the
	 * external diff program lacks the ability to tell us whether
	 * it's empty, then we consider it non-empty without even asking.
	 */
if (!pgm->trust_exit_code && !o->file) {
o->found_changes = 1;
return;
}
strvec_push(&cmd.args, pgm->cmd);
strvec_push(&cmd.args, name);
if (one && two) {
add_external_diff_name(o->repo, &cmd.args, one);
add_external_diff_name(o->repo, &cmd.args, two);
if (other) {
strvec_push(&cmd.args, other);
if (xfrm_msg)
strvec_push(&cmd.args, xfrm_msg);
}
}
strvec_pushf(&cmd.env, "GIT_DIFF_PATH_COUNTER=%d",
++o->diff_path_counter);
strvec_pushf(&cmd.env, "GIT_DIFF_PATH_TOTAL=%d", q->nr);
diff_free_filespec_data(one);
diff_free_filespec_data(two);
cmd.use_shell = 1;
if (!o->file)
cmd.no_stdout = 1;
else if (o->file != stdout)
cmd.out = xdup(fileno(o->file));
rc = run_command(&cmd);
if (!pgm->trust_exit_code && rc == 0)
o->found_changes = 1;
else if (pgm->trust_exit_code && rc == 0)
; /* nothing */
else if (pgm->trust_exit_code && rc == 1)
o->found_changes = 1;
else
die(_("external diff died, stopping at %s"), name);
remove_tempfile();
}
static int similarity_index(struct diff_filepair *p)
{
return p->score * 100 / MAX_SCORE;
}
static const char *diff_abbrev_oid(const struct object_id *oid, int abbrev)
{
if (startup_info->have_repository)
return repo_find_unique_abbrev(the_repository, oid, abbrev);
else {
char *hex = oid_to_hex(oid);
if (abbrev < 0)
abbrev = FALLBACK_DEFAULT_ABBREV;
if (abbrev > the_hash_algo->hexsz)
BUG("oid abbreviation out of range: %d", abbrev);
if (abbrev)
hex[abbrev] = '\0';
return hex;
}
}
static void fill_metainfo(struct strbuf *msg,
const char *name,
const char *other,
struct diff_filespec *one,
struct diff_filespec *two,
struct diff_options *o,
struct diff_filepair *p,
int *must_show_header,
enum git_colorbool use_color)
{
const char *set = diff_get_color(use_color, DIFF_METAINFO);
const char *reset = diff_get_color(use_color, DIFF_RESET);
const char *line_prefix = diff_line_prefix(o);
struct string_list *more_headers = NULL;
*must_show_header = 1;
strbuf_init(msg, PATH_MAX * 2 + 300);
switch (p->status) {
case DIFF_STATUS_COPIED:
strbuf_addf(msg, "%s%ssimilarity index %d%%",
line_prefix, set, similarity_index(p));
strbuf_addf(msg, "%s\n%s%scopy from ",
reset, line_prefix, set);
quote_c_style(name, msg, NULL, 0);
strbuf_addf(msg, "%s\n%s%scopy to ", reset, line_prefix, set);
quote_c_style(other, msg, NULL, 0);
strbuf_addf(msg, "%s\n", reset);
break;
case DIFF_STATUS_RENAMED:
strbuf_addf(msg, "%s%ssimilarity index %d%%",
line_prefix, set, similarity_index(p));
strbuf_addf(msg, "%s\n%s%srename from ",
reset, line_prefix, set);
quote_c_style(name, msg, NULL, 0);
strbuf_addf(msg, "%s\n%s%srename to ",
reset, line_prefix, set);
quote_c_style(other, msg, NULL, 0);
strbuf_addf(msg, "%s\n", reset);
break;
case DIFF_STATUS_MODIFIED:
if (p->score) {
strbuf_addf(msg, "%s%sdissimilarity index %d%%%s\n",
line_prefix,
set, similarity_index(p), reset);
break;
}
/* fallthru */
default:
*must_show_header = 0;
}
if ((more_headers = additional_headers(o, name))) {
add_formatted_headers(msg, more_headers,
line_prefix, set, reset);
*must_show_header = 1;
}
if (one && two && !oideq(&one->oid, &two->oid)) {
const unsigned hexsz = the_hash_algo->hexsz;
int abbrev = o->abbrev ? o->abbrev : DEFAULT_ABBREV;
if (o->flags.full_index)
abbrev = hexsz;
if (o->flags.binary) {
mmfile_t mf;
if ((!fill_mmfile(o->repo, &mf, one) &&
diff_filespec_is_binary(o->repo, one)) ||
(!fill_mmfile(o->repo, &mf, two) &&
diff_filespec_is_binary(o->repo, two)))
abbrev = hexsz;
}
strbuf_addf(msg, "%s%sindex %s..%s", line_prefix, set,
diff_abbrev_oid(&one->oid, abbrev),
diff_abbrev_oid(&two->oid, abbrev));
if (one->mode == two->mode)
strbuf_addf(msg, " %06o", one->mode);
strbuf_addf(msg, "%s\n", reset);
}
}
static void run_diff_cmd(const struct external_diff *pgm,
const char *name,
const char *other,
const char *attr_path,
struct diff_filespec *one,
struct diff_filespec *two,
struct strbuf *msg,
struct diff_options *o,
struct diff_filepair *p)
{
const char *xfrm_msg = NULL;
int complete_rewrite = (p->status == DIFF_STATUS_MODIFIED) && p->score;
int must_show_header = 0;
struct userdiff_driver *drv = NULL;
if (o->flags.allow_external || !o->ignore_driver_algorithm)
drv = userdiff_find_by_path(o->repo->index, attr_path);
if (o->flags.allow_external && drv && drv->external.cmd)
pgm = &drv->external;
if (msg) {
/*
* don't use colors when the header is intended for an
* external diff driver
*/
fill_metainfo(msg, name, other, one, two, o, p,
&must_show_header,
pgm ? GIT_COLOR_NEVER : o->use_color);
xfrm_msg = msg->len ? msg->buf : NULL;
}
if (pgm) {
run_external_diff(pgm, name, other, one, two, xfrm_msg, o);
return;
}
if (one && two) {
if (!o->ignore_driver_algorithm && drv && drv->algorithm)
set_diff_algorithm(o, drv->algorithm);
builtin_diff(name, other ? other : name,
one, two, xfrm_msg, must_show_header,
o, complete_rewrite);
if (p->status == DIFF_STATUS_COPIED ||
p->status == DIFF_STATUS_RENAMED)
o->found_changes = 1;
} else {
if (o->file)
fprintf(o->file, "* Unmerged path %s\n", name);
o->found_changes = 1;
}
}
static void diff_fill_oid_info(struct diff_filespec *one, struct index_state *istate)
{
if (DIFF_FILE_VALID(one)) {
if (!one->oid_valid) {
struct stat st;
if (one->is_stdin) {
oidclr(&one->oid, the_repository->hash_algo);
return;
}
if (lstat(one->path, &st) < 0)
die_errno("stat '%s'", one->path);
if (index_path(istate, &one->oid, one->path, &st, 0))
die("cannot hash %s", one->path);
}
}
else
oidclr(&one->oid, the_repository->hash_algo);
}
static void strip_prefix(int prefix_length, const char **namep, const char **otherp)
{
/* Strip the prefix but do not molest /dev/null and absolute paths */
if (*namep && !is_absolute_path(*namep)) {
*namep += prefix_length;
if (**namep == '/')
++*namep;
}
if (*otherp && !is_absolute_path(*otherp)) {
*otherp += prefix_length;
if (**otherp == '/')
++*otherp;
}
}
static void run_diff(struct diff_filepair *p, struct diff_options *o)
{
const struct external_diff *pgm = external_diff();
struct strbuf msg;
struct diff_filespec *one = p->one;
struct diff_filespec *two = p->two;
const char *name;
const char *other;
const char *attr_path;
name = one->path;
other = (strcmp(name, two->path) ? two->path : NULL);
attr_path = name;
if (o->prefix_length)
strip_prefix(o->prefix_length, &name, &other);
if (!o->flags.allow_external)
pgm = NULL;
if (DIFF_PAIR_UNMERGED(p)) {
run_diff_cmd(pgm, name, NULL, attr_path,
NULL, NULL, NULL, o, p);
return;
}
diff_fill_oid_info(one, o->repo->index);
diff_fill_oid_info(two, o->repo->index);
if (!pgm &&
DIFF_FILE_VALID(one) && DIFF_FILE_VALID(two) &&
(S_IFMT & one->mode) != (S_IFMT & two->mode)) {
/*
* a filepair that changes between file and symlink
* needs to be split into deletion and creation.
*/
struct diff_filespec *null = alloc_filespec(two->path);
run_diff_cmd(NULL, name, other, attr_path,
one, null, &msg,
o, p);
free(null);
strbuf_release(&msg);
null = alloc_filespec(one->path);
run_diff_cmd(NULL, name, other, attr_path,
null, two, &msg, o, p);
free(null);
}
else
run_diff_cmd(pgm, name, other, attr_path,
one, two, &msg, o, p);
strbuf_release(&msg);
}
static void run_diffstat(struct diff_filepair *p, struct diff_options *o,
struct diffstat_t *diffstat)
{
const char *name;
const char *other;
if (!o->ignore_driver_algorithm) {
struct userdiff_driver *drv = userdiff_find_by_path(o->repo->index,
p->one->path);
if (drv && drv->algorithm)
set_diff_algorithm(o, drv->algorithm);
}
if (DIFF_PAIR_UNMERGED(p)) {
/* unmerged */
builtin_diffstat(p->one->path, NULL, NULL, NULL,
diffstat, o, p);
return;
}
name = p->one->path;
other = (strcmp(name, p->two->path) ? p->two->path : NULL);
if (o->prefix_length)
strip_prefix(o->prefix_length, &name, &other);
diff_fill_oid_info(p->one, o->repo->index);
diff_fill_oid_info(p->two, o->repo->index);
builtin_diffstat(name, other, p->one, p->two,
diffstat, o, p);
}
static void run_checkdiff(struct diff_filepair *p, struct diff_options *o)
{
const char *name;
const char *other;
const char *attr_path;
if (DIFF_PAIR_UNMERGED(p)) {
/* unmerged */
return;
}
name = p->one->path;
other = (strcmp(name, p->two->path) ? p->two->path : NULL);
attr_path = other ? other : name;
if (o->prefix_length)
strip_prefix(o->prefix_length, &name, &other);
diff_fill_oid_info(p->one, o->repo->index);
diff_fill_oid_info(p->two, o->repo->index);
builtin_checkdiff(name, other, attr_path, p->one, p->two, o);
}
void repo_diff_setup(struct repository *r, struct diff_options *options)
{
memcpy(options, &default_diff_options, sizeof(*options));
options->file = stdout;
options->repo = r;
options->output_indicators[OUTPUT_INDICATOR_NEW] = '+';
options->output_indicators[OUTPUT_INDICATOR_OLD] = '-';
options->output_indicators[OUTPUT_INDICATOR_CONTEXT] = ' ';
options->abbrev = DEFAULT_ABBREV;
options->line_termination = '\n';
options->break_opt = -1;
options->rename_limit = -1;
options->dirstat_permille = diff_dirstat_permille_default;
options->context = diff_context_default;
options->interhunkcontext = diff_interhunk_context_default;
options->ws_error_highlight = ws_error_highlight_default;
options->flags.rename_empty = 1;
options->flags.relative_name = diff_relative;
options->objfind = NULL;
	/* pathchange left = NULL by default */
options->change = diff_change;
options->add_remove = diff_addremove;
options->use_color = diff_use_color_default;
options->detect_rename = diff_detect_rename_default;
options->xdl_opts |= diff_algorithm;
if (diff_indent_heuristic)
DIFF_XDL_SET(options, INDENT_HEURISTIC);
options->orderfile = xstrdup_or_null(diff_order_file_cfg);
if (!options->flags.ignore_submodule_set)
options->flags.ignore_untracked_in_submodules = 1;
if (diff_no_prefix) {
diff_set_noprefix(options);
} else if (!diff_mnemonic_prefix) {
diff_set_default_prefix(options);
}
options->color_moved = diff_color_moved_default;
options->color_moved_ws_handling = diff_color_moved_ws_default;
}
static const char diff_status_letters[] = {
DIFF_STATUS_ADDED,
DIFF_STATUS_COPIED,
DIFF_STATUS_DELETED,
DIFF_STATUS_MODIFIED,
DIFF_STATUS_RENAMED,
DIFF_STATUS_TYPE_CHANGED,
DIFF_STATUS_UNKNOWN,
DIFF_STATUS_UNMERGED,
DIFF_STATUS_FILTER_AON,
DIFF_STATUS_FILTER_BROKEN,
'\0',
};
static unsigned int filter_bit['Z' + 1];
static void prepare_filter_bits(void)
{
int i;
if (!filter_bit[DIFF_STATUS_ADDED]) {
for (i = 0; diff_status_letters[i]; i++)
filter_bit[(int) diff_status_letters[i]] = (1 << i);
}
}
static unsigned filter_bit_tst(char status, const struct diff_options *opt)
{
return opt->filter & filter_bit[(int) status];
}
unsigned diff_filter_bit(char status)
{
prepare_filter_bits();
return filter_bit[(int) status];
}
int diff_check_follow_pathspec(struct pathspec *ps, int die_on_error)
{
unsigned forbidden_magic;
if (ps->nr != 1) {
if (die_on_error)
die(_("--follow requires exactly one pathspec"));
return 0;
}
forbidden_magic = ps->items[0].magic & ~(PATHSPEC_FROMTOP |
PATHSPEC_LITERAL);
if (forbidden_magic) {
if (die_on_error) {
struct strbuf sb = STRBUF_INIT;
pathspec_magic_names(forbidden_magic, &sb);
die(_("pathspec magic not supported by --follow: %s"),
sb.buf);
}
return 0;
}
return 1;
}
void diff_setup_done(struct diff_options *options)
{
unsigned check_mask = DIFF_FORMAT_NAME |
DIFF_FORMAT_NAME_STATUS |
DIFF_FORMAT_CHECKDIFF |
DIFF_FORMAT_NO_OUTPUT;
/*
* This must be signed because we're comparing against a potentially
* negative value.
*/
const int hexsz = the_hash_algo->hexsz;
if (options->set_default)
options->set_default(options);
if (HAS_MULTI_BITS(options->output_format & check_mask))
die(_("options '%s', '%s', '%s', and '%s' cannot be used together"),
"--name-only", "--name-status", "--check", "-s");
if (HAS_MULTI_BITS(options->pickaxe_opts & DIFF_PICKAXE_KINDS_MASK))
die(_("options '%s', '%s', and '%s' cannot be used together"),
"-G", "-S", "--find-object");
if (HAS_MULTI_BITS(options->pickaxe_opts & DIFF_PICKAXE_KINDS_G_REGEX_MASK))
die(_("options '%s' and '%s' cannot be used together, use '%s' with '%s'"),
"-G", "--pickaxe-regex", "--pickaxe-regex", "-S");
if (HAS_MULTI_BITS(options->pickaxe_opts & DIFF_PICKAXE_KINDS_ALL_OBJFIND_MASK))
die(_("options '%s' and '%s' cannot be used together, use '%s' with '%s' and '%s'"),
"--pickaxe-all", "--find-object", "--pickaxe-all", "-G", "-S");
/*
* Most of the time we can say "there are changes"
* only by checking if there are changed paths, but
* --ignore-whitespace* options force us to look
* inside contents.
*/
if ((options->xdl_opts & XDF_WHITESPACE_FLAGS) ||
options->ignore_regex_nr)
options->flags.diff_from_contents = 1;
else
options->flags.diff_from_contents = 0;
if (options->flags.find_copies_harder)
options->detect_rename = DIFF_DETECT_COPY;
if (!options->flags.relative_name)
options->prefix = NULL;
if (options->prefix)
options->prefix_length = strlen(options->prefix);
else
options->prefix_length = 0;
	/*
	 * --name-only, --name-status, --checkdiff, and -s
	 * turn other output formats off.
	 */
if (options->output_format & (DIFF_FORMAT_NAME |
DIFF_FORMAT_NAME_STATUS |
DIFF_FORMAT_CHECKDIFF |
DIFF_FORMAT_NO_OUTPUT))
options->output_format &= ~(DIFF_FORMAT_RAW |
DIFF_FORMAT_NUMSTAT |
DIFF_FORMAT_DIFFSTAT |
DIFF_FORMAT_SHORTSTAT |
DIFF_FORMAT_DIRSTAT |
DIFF_FORMAT_SUMMARY |
DIFF_FORMAT_PATCH);
/*
* These cases always need recursive; we do not drop caller-supplied
* recursive bits for other formats here.
*/
if (options->output_format & (DIFF_FORMAT_PATCH |
DIFF_FORMAT_NUMSTAT |
DIFF_FORMAT_DIFFSTAT |
DIFF_FORMAT_SHORTSTAT |
DIFF_FORMAT_DIRSTAT |
DIFF_FORMAT_SUMMARY |
DIFF_FORMAT_CHECKDIFF))
options->flags.recursive = 1;
	/*
	 * Also, pickaxe would not work very well if you do not say recursive.
	 */
if (options->pickaxe_opts & DIFF_PICKAXE_KINDS_MASK)
options->flags.recursive = 1;
	/*
	 * When patches are generated, submodules diffed against the work tree
	 * must be checked for dirtiness too, so that the dirtiness can be
	 * shown in the output.
	 */
if (options->output_format & DIFF_FORMAT_PATCH)
options->flags.dirty_submodules = 1;
if (options->detect_rename && options->rename_limit < 0)
options->rename_limit = diff_rename_limit_default;
if (hexsz < options->abbrev)
options->abbrev = hexsz; /* full */
	/*
	 * With --quick we only want to know whether there are changes
	 * at all, so it does not make sense to show the first hit we
	 * happened to have found, and it does not make sense not to
	 * return with an exit code in such a case either.
	 */
if (options->flags.quick) {
options->output_format = DIFF_FORMAT_NO_OUTPUT;
options->flags.exit_with_status = 1;
options->detect_rename = 0;
options->flags.find_copies_harder = 0;
}
/*
* External diffs could declare non-identical contents equal
* (think diff --ignore-space-change).
*/
if (options->flags.allow_external && options->flags.exit_with_status)
options->flags.diff_from_contents = 1;
options->diff_path_counter = 0;
if (options->flags.follow_renames)
diff_check_follow_pathspec(&options->pathspec, 1);
if (options->flags.allow_external && external_diff())
options->color_moved = 0;
if (options->filter_not) {
if (!options->filter)
options->filter = ~filter_bit[DIFF_STATUS_FILTER_AON];
options->filter &= ~options->filter_not;
}
if (options->pathspec.has_wildcard && options->max_depth_valid)
die("max-depth cannot be used with wildcard pathspecs");
}
int parse_long_opt(const char *opt, const char **argv,
const char **optarg)
{
const char *arg = argv[0];
if (!skip_prefix(arg, "--", &arg))
return 0;
if (!skip_prefix(arg, opt, &arg))
return 0;
if (*arg == '=') { /* stuck form: --option=value */
*optarg = arg + 1;
return 1;
}
if (*arg != '\0')
return 0;
/* separate form: --option value */
if (!argv[1])
die("Option '--%s' requires a value", opt);
*optarg = argv[1];
return 2;
}
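/*
 * Illustrative sketch of the two calling forms handled above (the
 * example values are hypothetical, derived only from the logic of
 * parse_long_opt() itself): with opt = "stat-width",
 *
 *   argv = { "--stat-width=80" }     -> returns 1, *optarg == "80"
 *   argv = { "--stat-width", "80" }  -> returns 2, *optarg == "80"
 *   argv = { "--other" }             -> returns 0, *optarg untouched
 *
 * The return value tells the caller how many argv entries were consumed.
 */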
static int diff_opt_stat(const struct option *opt, const char *value, int unset)
{
struct diff_options *options = opt->value;
int width = options->stat_width;
int name_width = options->stat_name_width;
int graph_width = options->stat_graph_width;
int count = options->stat_count;
char *end;
BUG_ON_OPT_NEG(unset);
if (!strcmp(opt->long_name, "stat")) {
if (value) {
width = strtoul(value, &end, 10);
if (*end == ',')
name_width = strtoul(end+1, &end, 10);
if (*end == ',')
count = strtoul(end+1, &end, 10);
if (*end)
return error(_("invalid --stat value: %s"), value);
}
} else if (!strcmp(opt->long_name, "stat-width")) {
width = strtoul(value, &end, 10);
if (*end)
return error(_("%s expects a numerical value"),
opt->long_name);
} else if (!strcmp(opt->long_name, "stat-name-width")) {
name_width = strtoul(value, &end, 10);
if (*end)
return error(_("%s expects a numerical value"),
opt->long_name);
} else if (!strcmp(opt->long_name, "stat-graph-width")) {
graph_width = strtoul(value, &end, 10);
if (*end)
return error(_("%s expects a numerical value"),
opt->long_name);
} else if (!strcmp(opt->long_name, "stat-count")) {
count = strtoul(value, &end, 10);
if (*end)
return error(_("%s expects a numerical value"),
opt->long_name);
} else
BUG("%s should not get here", opt->long_name);
options->output_format &= ~DIFF_FORMAT_NO_OUTPUT;
options->output_format |= DIFF_FORMAT_DIFFSTAT;
options->stat_name_width = name_width;
options->stat_graph_width = graph_width;
options->stat_width = width;
options->stat_count = count;
return 0;
}
static int parse_dirstat_opt(struct diff_options *options, const char *params)
{
struct strbuf errmsg = STRBUF_INIT;
if (parse_dirstat_params(options, params, &errmsg))
die(_("Failed to parse --dirstat/-X option parameter:\n%s"),
errmsg.buf);
strbuf_release(&errmsg);
/*
* The caller knows a dirstat-related option is given from the command
* line; allow it to say "return this_function();"
*/
options->output_format &= ~DIFF_FORMAT_NO_OUTPUT;
options->output_format |= DIFF_FORMAT_DIRSTAT;
return 1;
}
static int diff_opt_diff_filter(const struct option *option,
const char *optarg, int unset)
{
struct diff_options *opt = option->value;
int i, optch;
BUG_ON_OPT_NEG(unset);
prepare_filter_bits();
for (i = 0; (optch = optarg[i]) != '\0'; i++) {
unsigned int bit;
int negate;
if ('a' <= optch && optch <= 'z') {
negate = 1;
optch = toupper(optch);
} else {
negate = 0;
}
bit = (0 <= optch && optch <= 'Z') ? filter_bit[optch] : 0;
if (!bit)
return error(_("unknown change class '%c' in --diff-filter=%s"),
optarg[i], optarg);
if (negate)
opt->filter_not |= bit;
else
opt->filter |= bit;
}
return 0;
}
static void enable_patch_output(int *fmt)
{
*fmt &= ~DIFF_FORMAT_NO_OUTPUT;
*fmt |= DIFF_FORMAT_PATCH;
}
static int diff_opt_ws_error_highlight(const struct option *option,
const char *arg, int unset)
{
struct diff_options *opt = option->value;
int val = parse_ws_error_highlight(arg);
BUG_ON_OPT_NEG(unset);
if (val < 0)
return error(_("unknown value after ws-error-highlight=%.*s"),
-1 - val, arg);
opt->ws_error_highlight = val;
return 0;
}
static int diff_opt_find_object(const struct option *option,
const char *arg, int unset)
{
struct diff_options *opt = option->value;
struct object_id oid;
BUG_ON_OPT_NEG(unset);
if (repo_get_oid(the_repository, arg, &oid))
return error(_("unable to resolve '%s'"), arg);
if (!opt->objfind)
CALLOC_ARRAY(opt->objfind, 1);
opt->pickaxe_opts |= DIFF_PICKAXE_KIND_OBJFIND;
opt->flags.recursive = 1;
opt->flags.tree_in_recursive = 1;
oidset_insert(opt->objfind, &oid);
return 0;
}
static int diff_opt_anchored(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
options->xdl_opts = DIFF_WITH_ALG(options, PATIENCE_DIFF);
ALLOC_GROW(options->anchors, options->anchors_nr + 1,
options->anchors_alloc);
options->anchors[options->anchors_nr++] = xstrdup(arg);
return 0;
}
static int diff_opt_binary(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
BUG_ON_OPT_ARG(arg);
enable_patch_output(&options->output_format);
options->flags.binary = 1;
return 0;
}
static int diff_opt_break_rewrites(const struct option *opt,
const char *arg, int unset)
{
int *break_opt = opt->value;
int opt1, opt2;
BUG_ON_OPT_NEG(unset);
if (!arg)
arg = "";
opt1 = parse_rename_score(&arg);
if (*arg == 0)
opt2 = 0;
else if (*arg != '/')
return error(_("%s expects <n>/<m> form"), opt->long_name);
else {
arg++;
opt2 = parse_rename_score(&arg);
}
if (*arg != 0)
return error(_("%s expects <n>/<m> form"), opt->long_name);
*break_opt = opt1 | (opt2 << 16);
return 0;
}
static int diff_opt_char(const struct option *opt,
const char *arg, int unset)
{
char *value = opt->value;
BUG_ON_OPT_NEG(unset);
if (arg[1])
return error(_("%s expects a character, got '%s'"),
opt->long_name, arg);
*value = arg[0];
return 0;
}
static int diff_opt_color_moved(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
if (unset) {
options->color_moved = COLOR_MOVED_NO;
} else if (!arg) {
if (diff_color_moved_default)
options->color_moved = diff_color_moved_default;
if (options->color_moved == COLOR_MOVED_NO)
options->color_moved = COLOR_MOVED_DEFAULT;
} else {
int cm = parse_color_moved(arg);
if (cm < 0)
return error(_("bad --color-moved argument: %s"), arg);
options->color_moved = cm;
}
return 0;
}
static int diff_opt_color_moved_ws(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
unsigned cm;
if (unset) {
options->color_moved_ws_handling = 0;
return 0;
}
cm = parse_color_moved_ws(arg);
if (cm & COLOR_MOVED_WS_ERROR)
return error(_("invalid mode '%s' in --color-moved-ws"), arg);
options->color_moved_ws_handling = cm;
return 0;
}
static int diff_opt_color_words(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
options->use_color = GIT_COLOR_ALWAYS;
options->word_diff = DIFF_WORDS_COLOR;
options->word_regex = arg;
return 0;
}
static int diff_opt_compact_summary(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_ARG(arg);
if (unset) {
options->flags.stat_with_summary = 0;
} else {
options->flags.stat_with_summary = 1;
options->output_format &= ~DIFF_FORMAT_NO_OUTPUT;
options->output_format |= DIFF_FORMAT_DIFFSTAT;
}
return 0;
}
static int diff_opt_diff_algorithm(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
if (set_diff_algorithm(options, arg))
return error(_("option diff-algorithm accepts \"myers\", "
"\"minimal\", \"patience\" and \"histogram\""));
options->ignore_driver_algorithm = 1;
return 0;
}
static int diff_opt_diff_algorithm_no_arg(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
BUG_ON_OPT_ARG(arg);
if (set_diff_algorithm(options, opt->long_name))
BUG("available diff algorithms include \"myers\", "
"\"minimal\", \"patience\" and \"histogram\"");
options->ignore_driver_algorithm = 1;
return 0;
}
static int diff_opt_dirstat(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
if (!strcmp(opt->long_name, "cumulative")) {
if (arg)
			BUG("how come --cumulative takes a value?");
arg = "cumulative";
} else if (!strcmp(opt->long_name, "dirstat-by-file"))
parse_dirstat_opt(options, "files");
parse_dirstat_opt(options, arg ? arg : "");
return 0;
}
static int diff_opt_find_copies(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
if (!arg)
arg = "";
options->rename_score = parse_rename_score(&arg);
if (*arg != 0)
return error(_("invalid argument to %s"), opt->long_name);
if (options->detect_rename == DIFF_DETECT_COPY)
options->flags.find_copies_harder = 1;
else
options->detect_rename = DIFF_DETECT_COPY;
return 0;
}
static int diff_opt_find_renames(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
if (!arg)
arg = "";
options->rename_score = parse_rename_score(&arg);
if (*arg != 0)
return error(_("invalid argument to %s"), opt->long_name);
options->detect_rename = DIFF_DETECT_RENAME;
return 0;
}
static int diff_opt_follow(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_ARG(arg);
if (unset) {
options->flags.follow_renames = 0;
options->flags.default_follow_renames = 0;
} else {
options->flags.follow_renames = 1;
}
return 0;
}
static int diff_opt_ignore_submodules(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
if (!arg)
arg = "all";
options->flags.override_submodule_config = 1;
handle_ignore_submodules_arg(options, arg);
return 0;
}
static int diff_opt_line_prefix(const struct option *opt,
const char *optarg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
options->line_prefix = optarg;
graph_setup_line_prefix(options);
return 0;
}
static int diff_opt_no_prefix(const struct option *opt,
const char *optarg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
BUG_ON_OPT_ARG(optarg);
diff_set_noprefix(options);
return 0;
}
static int diff_opt_default_prefix(const struct option *opt,
const char *optarg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
BUG_ON_OPT_ARG(optarg);
FREE_AND_NULL(diff_src_prefix);
FREE_AND_NULL(diff_dst_prefix);
diff_set_default_prefix(options);
return 0;
}
static enum parse_opt_result diff_opt_output(struct parse_opt_ctx_t *ctx,
const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
char *path;
BUG_ON_OPT_NEG(unset);
path = prefix_filename(ctx->prefix, arg);
options->file = xfopen(path, "w");
options->close_file = 1;
if (options->use_color != GIT_COLOR_ALWAYS)
options->use_color = GIT_COLOR_NEVER;
free(path);
return 0;
}
static int diff_opt_patience(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
int i;
BUG_ON_OPT_NEG(unset);
BUG_ON_OPT_ARG(arg);
/*
* Both --patience and --anchored use PATIENCE_DIFF
* internally, so remove any anchors previously
* specified.
*/
for (i = 0; i < options->anchors_nr; i++)
free(options->anchors[i]);
options->anchors_nr = 0;
options->ignore_driver_algorithm = 1;
return set_diff_algorithm(options, "patience");
}
static int diff_opt_ignore_regex(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
regex_t *regex;
BUG_ON_OPT_NEG(unset);
regex = xmalloc(sizeof(*regex));
if (regcomp(regex, arg, REG_EXTENDED | REG_NEWLINE)) {
free(regex);
return error(_("invalid regex given to -I: '%s'"), arg);
}
ALLOC_GROW(options->ignore_regex, options->ignore_regex_nr + 1,
options->ignore_regex_alloc);
options->ignore_regex[options->ignore_regex_nr++] = regex;
return 0;
}
static int diff_opt_pickaxe_regex(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
options->pickaxe = arg;
options->pickaxe_opts |= DIFF_PICKAXE_KIND_G;
if (arg && !*arg)
return error(_("-G requires a non-empty argument"));
return 0;
}
static int diff_opt_pickaxe_string(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
options->pickaxe = arg;
options->pickaxe_opts |= DIFF_PICKAXE_KIND_S;
if (arg && !*arg)
return error(_("-S requires a non-empty argument"));
return 0;
}
static int diff_opt_relative(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
options->flags.relative_name = !unset;
if (arg)
options->prefix = arg;
return 0;
}
static int diff_opt_submodule(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
if (!arg)
arg = "log";
if (parse_submodule_params(options, arg))
return error(_("failed to parse --submodule option parameter: '%s'"),
arg);
return 0;
}
static int diff_opt_textconv(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_ARG(arg);
if (unset) {
options->flags.allow_textconv = 0;
} else {
options->flags.allow_textconv = 1;
options->flags.textconv_set_via_cmdline = 1;
}
return 0;
}
static int diff_opt_unified(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
char *s;
BUG_ON_OPT_NEG(unset);
if (arg) {
options->context = strtol(arg, &s, 10);
if (*s)
return error(_("%s expects a numerical value"), "--unified");
}
enable_patch_output(&options->output_format);
return 0;
}
static int diff_opt_word_diff(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
if (arg) {
if (!strcmp(arg, "plain"))
options->word_diff = DIFF_WORDS_PLAIN;
else if (!strcmp(arg, "color")) {
options->use_color = GIT_COLOR_ALWAYS;
options->word_diff = DIFF_WORDS_COLOR;
}
else if (!strcmp(arg, "porcelain"))
options->word_diff = DIFF_WORDS_PORCELAIN;
else if (!strcmp(arg, "none"))
options->word_diff = DIFF_WORDS_NONE;
else
return error(_("bad --word-diff argument: %s"), arg);
} else {
if (options->word_diff == DIFF_WORDS_NONE)
options->word_diff = DIFF_WORDS_PLAIN;
}
return 0;
}
static int diff_opt_word_diff_regex(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
if (options->word_diff == DIFF_WORDS_NONE)
options->word_diff = DIFF_WORDS_PLAIN;
options->word_regex = arg;
return 0;
}
static int diff_opt_rotate_to(const struct option *opt, const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
if (!strcmp(opt->long_name, "skip-to"))
options->skip_instead_of_rotate = 1;
else
options->skip_instead_of_rotate = 0;
options->rotate_to = arg;
return 0;
}
static int diff_opt_max_depth(const struct option *opt,
const char *arg, int unset)
{
struct diff_options *options = opt->value;
BUG_ON_OPT_NEG(unset);
if (!git_parse_int(arg, &options->max_depth))
return error(_("invalid value for '%s': '%s'"),
"--max-depth", arg);
options->flags.recursive = 1;
options->max_depth_valid = options->max_depth >= 0;
return 0;
}
/*
* Consider adding new flags to __git_diff_common_options
* in contrib/completion/git-completion.bash
*/
struct option *add_diff_options(const struct option *opts,
struct diff_options *options)
{
struct option parseopts[] = {
OPT_GROUP(N_("Diff output format options")),
OPT_BITOP('p', "patch", &options->output_format,
N_("generate patch"),
DIFF_FORMAT_PATCH, DIFF_FORMAT_NO_OUTPUT),
OPT_SET_INT('s', "no-patch", &options->output_format,
N_("suppress diff output"), DIFF_FORMAT_NO_OUTPUT),
OPT_BITOP('u', NULL, &options->output_format,
N_("generate patch"),
DIFF_FORMAT_PATCH, DIFF_FORMAT_NO_OUTPUT),
OPT_CALLBACK_F('U', "unified", options, N_("<n>"),
N_("generate diffs with <n> lines context"),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG, diff_opt_unified),
OPT_BOOL('W', "function-context", &options->flags.funccontext,
		N_("show whole function as context lines for each change")),
OPT_BITOP(0, "raw", &options->output_format,
N_("generate the diff in raw format"),
DIFF_FORMAT_RAW, DIFF_FORMAT_NO_OUTPUT),
OPT_BITOP(0, "patch-with-raw", &options->output_format,
N_("synonym for '-p --raw'"),
DIFF_FORMAT_PATCH | DIFF_FORMAT_RAW,
DIFF_FORMAT_NO_OUTPUT),
OPT_BITOP(0, "patch-with-stat", &options->output_format,
N_("synonym for '-p --stat'"),
DIFF_FORMAT_PATCH | DIFF_FORMAT_DIFFSTAT,
DIFF_FORMAT_NO_OUTPUT),
OPT_BITOP(0, "numstat", &options->output_format,
N_("machine friendly --stat"),
DIFF_FORMAT_NUMSTAT, DIFF_FORMAT_NO_OUTPUT),
OPT_BITOP(0, "shortstat", &options->output_format,
N_("output only the last line of --stat"),
DIFF_FORMAT_SHORTSTAT, DIFF_FORMAT_NO_OUTPUT),
OPT_CALLBACK_F('X', "dirstat", options, N_("<param1>,<param2>..."),
N_("output the distribution of relative amount of changes for each sub-directory"),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG,
diff_opt_dirstat),
OPT_CALLBACK_F(0, "cumulative", options, NULL,
N_("synonym for --dirstat=cumulative"),
PARSE_OPT_NONEG | PARSE_OPT_NOARG,
diff_opt_dirstat),
OPT_CALLBACK_F(0, "dirstat-by-file", options, N_("<param1>,<param2>..."),
N_("synonym for --dirstat=files,<param1>,<param2>..."),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG,
diff_opt_dirstat),
OPT_BIT_F(0, "check", &options->output_format,
N_("warn if changes introduce conflict markers or whitespace errors"),
DIFF_FORMAT_CHECKDIFF, PARSE_OPT_NONEG),
OPT_BITOP(0, "summary", &options->output_format,
N_("condensed summary such as creations, renames and mode changes"),
DIFF_FORMAT_SUMMARY, DIFF_FORMAT_NO_OUTPUT),
OPT_BIT_F(0, "name-only", &options->output_format,
N_("show only names of changed files"),
DIFF_FORMAT_NAME, PARSE_OPT_NONEG),
OPT_BIT_F(0, "name-status", &options->output_format,
N_("show only names and status of changed files"),
DIFF_FORMAT_NAME_STATUS, PARSE_OPT_NONEG),
OPT_CALLBACK_F(0, "stat", options, N_("<width>[,<name-width>[,<count>]]"),
N_("generate diffstat"),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG, diff_opt_stat),
OPT_CALLBACK_F(0, "stat-width", options, N_("<width>"),
N_("generate diffstat with a given width"),
PARSE_OPT_NONEG, diff_opt_stat),
OPT_CALLBACK_F(0, "stat-name-width", options, N_("<width>"),
N_("generate diffstat with a given name width"),
PARSE_OPT_NONEG, diff_opt_stat),
OPT_CALLBACK_F(0, "stat-graph-width", options, N_("<width>"),
N_("generate diffstat with a given graph width"),
PARSE_OPT_NONEG, diff_opt_stat),
OPT_CALLBACK_F(0, "stat-count", options, N_("<count>"),
N_("generate diffstat with limited lines"),
PARSE_OPT_NONEG, diff_opt_stat),
OPT_CALLBACK_F(0, "compact-summary", options, NULL,
N_("generate compact summary in diffstat"),
PARSE_OPT_NOARG, diff_opt_compact_summary),
OPT_CALLBACK_F(0, "binary", options, NULL,
N_("output a binary diff that can be applied"),
PARSE_OPT_NONEG | PARSE_OPT_NOARG, diff_opt_binary),
OPT_BOOL(0, "full-index", &options->flags.full_index,
N_("show full pre- and post-image object names on the \"index\" lines")),
OPT_COLOR_FLAG(0, "color", &options->use_color,
N_("show colored diff")),
OPT_CALLBACK_F(0, "ws-error-highlight", options, N_("<kind>"),
N_("highlight whitespace errors in the 'context', 'old' or 'new' lines in the diff"),
PARSE_OPT_NONEG, diff_opt_ws_error_highlight),
OPT_SET_INT('z', NULL, &options->line_termination,
N_("do not munge pathnames and use NULs as output field terminators in --raw or --numstat"),
0),
OPT__ABBREV(&options->abbrev),
OPT_STRING_F(0, "src-prefix", &options->a_prefix, N_("<prefix>"),
N_("show the given source prefix instead of \"a/\""),
PARSE_OPT_NONEG),
OPT_STRING_F(0, "dst-prefix", &options->b_prefix, N_("<prefix>"),
N_("show the given destination prefix instead of \"b/\""),
PARSE_OPT_NONEG),
OPT_CALLBACK_F(0, "line-prefix", options, N_("<prefix>"),
N_("prepend an additional prefix to every line of output"),
PARSE_OPT_NONEG, diff_opt_line_prefix),
OPT_CALLBACK_F(0, "no-prefix", options, NULL,
N_("do not show any source or destination prefix"),
PARSE_OPT_NONEG | PARSE_OPT_NOARG, diff_opt_no_prefix),
OPT_CALLBACK_F(0, "default-prefix", options, NULL,
N_("use default prefixes a/ and b/"),
PARSE_OPT_NONEG | PARSE_OPT_NOARG, diff_opt_default_prefix),
OPT_INTEGER_F(0, "inter-hunk-context", &options->interhunkcontext,
N_("show context between diff hunks up to the specified number of lines"),
PARSE_OPT_NONEG),
OPT_CALLBACK_F(0, "output-indicator-new",
&options->output_indicators[OUTPUT_INDICATOR_NEW],
N_("<char>"),
N_("specify the character to indicate a new line instead of '+'"),
PARSE_OPT_NONEG, diff_opt_char),
OPT_CALLBACK_F(0, "output-indicator-old",
&options->output_indicators[OUTPUT_INDICATOR_OLD],
N_("<char>"),
N_("specify the character to indicate an old line instead of '-'"),
PARSE_OPT_NONEG, diff_opt_char),
OPT_CALLBACK_F(0, "output-indicator-context",
&options->output_indicators[OUTPUT_INDICATOR_CONTEXT],
N_("<char>"),
N_("specify the character to indicate a context instead of ' '"),
PARSE_OPT_NONEG, diff_opt_char),
OPT_GROUP(N_("Diff rename options")),
OPT_CALLBACK_F('B', "break-rewrites", &options->break_opt, N_("<n>[/<m>]"),
N_("break complete rewrite changes into pairs of delete and create"),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG,
diff_opt_break_rewrites),
OPT_CALLBACK_F('M', "find-renames", options, N_("<n>"),
N_("detect renames"),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG,
diff_opt_find_renames),
OPT_SET_INT_F('D', "irreversible-delete", &options->irreversible_delete,
N_("omit the preimage for deletes"),
1, PARSE_OPT_NONEG),
OPT_CALLBACK_F('C', "find-copies", options, N_("<n>"),
N_("detect copies"),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG,
diff_opt_find_copies),
OPT_BOOL(0, "find-copies-harder", &options->flags.find_copies_harder,
N_("use unmodified files as source to find copies")),
OPT_SET_INT_F(0, "no-renames", &options->detect_rename,
N_("disable rename detection"),
0, PARSE_OPT_NONEG),
OPT_BOOL(0, "rename-empty", &options->flags.rename_empty,
N_("use empty blobs as rename source")),
OPT_CALLBACK_F(0, "follow", options, NULL,
N_("continue listing the history of a file beyond renames"),
PARSE_OPT_NOARG, diff_opt_follow),
OPT_INTEGER('l', NULL, &options->rename_limit,
N_("prevent rename/copy detection if the number of rename/copy targets exceeds given limit")),
OPT_GROUP(N_("Diff algorithm options")),
OPT_CALLBACK_F(0, "minimal", options, NULL,
N_("produce the smallest possible diff"),
PARSE_OPT_NONEG | PARSE_OPT_NOARG,
diff_opt_diff_algorithm_no_arg),
OPT_BIT_F('w', "ignore-all-space", &options->xdl_opts,
N_("ignore whitespace when comparing lines"),
XDF_IGNORE_WHITESPACE, PARSE_OPT_NONEG),
OPT_BIT_F('b', "ignore-space-change", &options->xdl_opts,
N_("ignore changes in amount of whitespace"),
XDF_IGNORE_WHITESPACE_CHANGE, PARSE_OPT_NONEG),
OPT_BIT_F(0, "ignore-space-at-eol", &options->xdl_opts,
N_("ignore changes in whitespace at EOL"),
XDF_IGNORE_WHITESPACE_AT_EOL, PARSE_OPT_NONEG),
OPT_BIT_F(0, "ignore-cr-at-eol", &options->xdl_opts,
			N_("ignore carriage-return at the end of line"),
XDF_IGNORE_CR_AT_EOL, PARSE_OPT_NONEG),
OPT_BIT_F(0, "ignore-blank-lines", &options->xdl_opts,
N_("ignore changes whose lines are all blank"),
XDF_IGNORE_BLANK_LINES, PARSE_OPT_NONEG),
OPT_CALLBACK_F('I', "ignore-matching-lines", options, N_("<regex>"),
			N_("ignore changes all of whose lines match <regex>"),
0, diff_opt_ignore_regex),
OPT_BIT(0, "indent-heuristic", &options->xdl_opts,
N_("heuristic to shift diff hunk boundaries for easy reading"),
XDF_INDENT_HEURISTIC),
OPT_CALLBACK_F(0, "patience", options, NULL,
N_("generate diff using the \"patience diff\" algorithm"),
PARSE_OPT_NONEG | PARSE_OPT_NOARG,
diff_opt_patience),
OPT_CALLBACK_F(0, "histogram", options, NULL,
N_("generate diff using the \"histogram diff\" algorithm"),
PARSE_OPT_NONEG | PARSE_OPT_NOARG,
diff_opt_diff_algorithm_no_arg),
OPT_CALLBACK_F(0, "diff-algorithm", options, N_("<algorithm>"),
N_("choose a diff algorithm"),
PARSE_OPT_NONEG, diff_opt_diff_algorithm),
OPT_CALLBACK_F(0, "anchored", options, N_("<text>"),
N_("generate diff using the \"anchored diff\" algorithm"),
PARSE_OPT_NONEG, diff_opt_anchored),
OPT_CALLBACK_F(0, "word-diff", options, N_("<mode>"),
N_("show word diff, using <mode> to delimit changed words"),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG, diff_opt_word_diff),
OPT_CALLBACK_F(0, "word-diff-regex", options, N_("<regex>"),
N_("use <regex> to decide what a word is"),
PARSE_OPT_NONEG, diff_opt_word_diff_regex),
OPT_CALLBACK_F(0, "color-words", options, N_("<regex>"),
N_("equivalent to --word-diff=color --word-diff-regex=<regex>"),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG, diff_opt_color_words),
OPT_CALLBACK_F(0, "color-moved", options, N_("<mode>"),
N_("moved lines of code are colored differently"),
PARSE_OPT_OPTARG, diff_opt_color_moved),
OPT_CALLBACK_F(0, "color-moved-ws", options, N_("<mode>"),
			N_("how whitespace is ignored in --color-moved"),
0, diff_opt_color_moved_ws),
OPT_GROUP(N_("Other diff options")),
OPT_CALLBACK_F(0, "relative", options, N_("<prefix>"),
N_("when run from subdir, exclude changes outside and show relative paths"),
PARSE_OPT_OPTARG,
diff_opt_relative),
OPT_BOOL('a', "text", &options->flags.text,
N_("treat all files as text")),
OPT_BOOL('R', NULL, &options->flags.reverse_diff,
N_("swap two inputs, reverse the diff")),
OPT_BOOL(0, "exit-code", &options->flags.exit_with_status,
N_("exit with 1 if there were differences, 0 otherwise")),
OPT_BOOL(0, "quiet", &options->flags.quick,
N_("disable all output of the program")),
OPT_BOOL(0, "ext-diff", &options->flags.allow_external,
N_("allow an external diff helper to be executed")),
OPT_CALLBACK_F(0, "textconv", options, NULL,
N_("run external text conversion filters when comparing binary files"),
PARSE_OPT_NOARG, diff_opt_textconv),
OPT_CALLBACK_F(0, "ignore-submodules", options, N_("<when>"),
N_("ignore changes to submodules in the diff generation"),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG,
diff_opt_ignore_submodules),
OPT_CALLBACK_F(0, "submodule", options, N_("<format>"),
N_("specify how differences in submodules are shown"),
PARSE_OPT_NONEG | PARSE_OPT_OPTARG,
diff_opt_submodule),
OPT_SET_INT_F(0, "ita-invisible-in-index", &options->ita_invisible_in_index,
N_("hide 'git add -N' entries from the index"),
1, PARSE_OPT_NONEG),
OPT_SET_INT_F(0, "ita-visible-in-index", &options->ita_invisible_in_index,
N_("treat 'git add -N' entries as real in the index"),
0, PARSE_OPT_NONEG),
OPT_CALLBACK_F('S', NULL, options, N_("<string>"),
N_("look for differences that change the number of occurrences of the specified string"),
0, diff_opt_pickaxe_string),
OPT_CALLBACK_F('G', NULL, options, N_("<regex>"),
N_("look for differences that change the number of occurrences of the specified regex"),
0, diff_opt_pickaxe_regex),
OPT_BIT_F(0, "pickaxe-all", &options->pickaxe_opts,
N_("show all changes in the changeset with -S or -G"),
DIFF_PICKAXE_ALL, PARSE_OPT_NONEG),
OPT_BIT_F(0, "pickaxe-regex", &options->pickaxe_opts,
N_("treat <string> in -S as extended POSIX regular expression"),
DIFF_PICKAXE_REGEX, PARSE_OPT_NONEG),
OPT_FILENAME('O', NULL, &options->orderfile,
N_("control the order in which files appear in the output")),
OPT_CALLBACK_F(0, "rotate-to", options, N_("<path>"),
N_("show the change in the specified path first"),
PARSE_OPT_NONEG, diff_opt_rotate_to),
OPT_CALLBACK_F(0, "skip-to", options, N_("<path>"),
N_("skip the output to the specified path"),
PARSE_OPT_NONEG, diff_opt_rotate_to),
OPT_CALLBACK_F(0, "find-object", options, N_("<object-id>"),
N_("look for differences that change the number of occurrences of the specified object"),
PARSE_OPT_NONEG, diff_opt_find_object),
OPT_CALLBACK_F(0, "diff-filter", options, N_("[(A|C|D|M|R|T|U|X|B)...[*]]"),
N_("select files by diff type"),
PARSE_OPT_NONEG, diff_opt_diff_filter),
OPT_CALLBACK_F(0, "max-depth", options, N_("<depth>"),
N_("maximum tree depth to recurse"),
PARSE_OPT_NONEG, diff_opt_max_depth),
{
.type = OPTION_CALLBACK,
.long_name = "output",
.value = options,
.argh = N_("<file>"),
.help = N_("output to a specific file"),
.flags = PARSE_OPT_NONEG,
.ll_callback = diff_opt_output,
},
OPT_END()
};
return parse_options_concat(opts, parseopts);
}
int diff_opt_parse(struct diff_options *options,
const char **av, int ac, const char *prefix)
{
struct option no_options[] = { OPT_END() };
struct option *parseopts = add_diff_options(no_options, options);
if (!prefix)
prefix = "";
ac = parse_options(ac, av, prefix, parseopts, NULL,
PARSE_OPT_KEEP_DASHDASH |
PARSE_OPT_KEEP_UNKNOWN_OPT |
PARSE_OPT_NO_INTERNAL_HELP |
PARSE_OPT_ONE_SHOT |
PARSE_OPT_STOP_AT_NON_OPTION);
free(parseopts);
return ac;
}
int parse_rename_score(const char **cp_p)
{
unsigned long num, scale;
int ch, dot;
const char *cp = *cp_p;
num = 0;
scale = 1;
dot = 0;
for (;;) {
ch = *cp;
if ( !dot && ch == '.' ) {
scale = 1;
dot = 1;
} else if ( ch == '%' ) {
scale = dot ? scale*100 : 100;
cp++; /* % is always at the end */
break;
} else if ( ch >= '0' && ch <= '9' ) {
if ( scale < 100000 ) {
scale *= 10;
num = (num*10) + (ch-'0');
}
} else {
break;
}
cp++;
}
*cp_p = cp;
	/* The user says num divided by scale; internally we represent
	 * that as MAX_SCORE * num / scale.
	 */
return (int)((num >= scale) ? MAX_SCORE : (MAX_SCORE * num / scale));
}
struct diff_queue_struct diff_queued_diff;
void diff_q(struct diff_queue_struct *queue, struct diff_filepair *dp)
{
ALLOC_GROW(queue->queue, queue->nr + 1, queue->alloc);
queue->queue[queue->nr++] = dp;
}
struct diff_filepair *diff_queue(struct diff_queue_struct *queue,
struct diff_filespec *one,
struct diff_filespec *two)
{
struct diff_filepair *dp = xcalloc(1, sizeof(*dp));
dp->one = one;
dp->two = two;
if (queue)
diff_q(queue, dp);
return dp;
}
void diff_free_filepair(struct diff_filepair *p)
{
free_filespec(p->one);
free_filespec(p->two);
free(p);
}
void diff_queue_init(struct diff_queue_struct *q)
{
struct diff_queue_struct blank = DIFF_QUEUE_INIT;
memcpy(q, &blank, sizeof(*q));
}
void diff_queue_clear(struct diff_queue_struct *q)
{
for (int i = 0; i < q->nr; i++)
diff_free_filepair(q->queue[i]);
free(q->queue);
diff_queue_init(q);
}
const char *diff_aligned_abbrev(const struct object_id *oid, int len)
{
int abblen;
const char *abbrev;
/* Do we want all 40 hex characters? */
if (len == the_hash_algo->hexsz)
return oid_to_hex(oid);
/* An abbreviated value is fine, possibly followed by an ellipsis. */
abbrev = diff_abbrev_oid(oid, len);
if (!print_sha1_ellipsis())
return abbrev;
abblen = strlen(abbrev);
/*
* In well-behaved cases, where the abbreviated result is the
* same as the requested length, append three dots after the
	 * abbreviation (hence the whole logic is limited to the case
	 * where abblen < hexsz - 3); when the actual abbreviated result is a
* bit longer than the requested length, we reduce the number
* of dots so that they match the well-behaved ones. However,
* if the actual abbreviation is longer than the requested
* length by more than three, we give up on aligning, and add
* three dots anyway, to indicate that the output is not the
* full object name. Yes, this may be suboptimal, but this
* appears only in "diff --raw --abbrev" output and it is not
	 * worth the effort to change it now. Note that this would
	 * likely work fine when the automatic sizing of the default
* abbreviation length is used--we would be fed -1 in "len" in
* that case, and will end up always appending three-dots, but
* the automatic sizing is supposed to give abblen that ensures
* uniqueness across all objects (statistically speaking).
*/
if (abblen < the_hash_algo->hexsz - 3) {
static char hex[GIT_MAX_HEXSZ + 1];
if (len < abblen && abblen <= len + 2)
xsnprintf(hex, sizeof(hex), "%s%.*s", abbrev, len+3-abblen, "..");
else
xsnprintf(hex, sizeof(hex), "%s...", abbrev);
return hex;
}
return oid_to_hex(oid);
}
static void diff_flush_raw(struct diff_filepair *p, struct diff_options *opt)
{
int line_termination = opt->line_termination;
int inter_name_termination = line_termination ? '\t' : '\0';
fprintf(opt->file, "%s", diff_line_prefix(opt));
if (!(opt->output_format & DIFF_FORMAT_NAME_STATUS)) {
fprintf(opt->file, ":%06o %06o %s ", p->one->mode, p->two->mode,
diff_aligned_abbrev(&p->one->oid, opt->abbrev));
fprintf(opt->file, "%s ",
diff_aligned_abbrev(&p->two->oid, opt->abbrev));
}
if (p->score) {
fprintf(opt->file, "%c%03d%c", p->status, similarity_index(p),
inter_name_termination);
} else {
fprintf(opt->file, "%c%c", p->status, inter_name_termination);
}
if (p->status == DIFF_STATUS_COPIED ||
p->status == DIFF_STATUS_RENAMED) {
const char *name_a, *name_b;
name_a = p->one->path;
name_b = p->two->path;
strip_prefix(opt->prefix_length, &name_a, &name_b);
write_name_quoted(name_a, opt->file, inter_name_termination);
write_name_quoted(name_b, opt->file, line_termination);
} else {
const char *name_a, *name_b;
name_a = p->one->mode ? p->one->path : p->two->path;
name_b = NULL;
strip_prefix(opt->prefix_length, &name_a, &name_b);
write_name_quoted(name_a, opt->file, line_termination);
}
}
int diff_unmodified_pair(struct diff_filepair *p)
{
	/* This function is written more strictly than necessary to support
	 * the currently implemented transformers, but the idea is to
	 * let transformers produce diff_filepairs any way they want,
	 * and filter and clean them up here before producing the output.
	 */
struct diff_filespec *one = p->one, *two = p->two;
if (DIFF_PAIR_UNMERGED(p))
return 0; /* unmerged is interesting */
/* deletion, addition, mode or type change
* and rename are all interesting.
*/
if (DIFF_FILE_VALID(one) != DIFF_FILE_VALID(two) ||
DIFF_PAIR_MODE_CHANGED(p) ||
strcmp(one->path, two->path))
return 0;
/* both are valid and point at the same path. that is, we are
* dealing with a change.
*/
if (one->oid_valid && two->oid_valid &&
oideq(&one->oid, &two->oid) &&
!one->dirty_submodule && !two->dirty_submodule)
return 1; /* no change */
if (!one->oid_valid && !two->oid_valid)
return 1; /* both look at the same file on the filesystem. */
return 0;
}
static void diff_flush_patch(struct diff_filepair *p, struct diff_options *o)
{
int include_conflict_headers =
(additional_headers(o, p->one->path) &&
!o->pickaxe_opts &&
(!o->filter || filter_bit_tst(DIFF_STATUS_UNMERGED, o)));
/*
* Check if we can return early without showing a diff. Note that
* diff_filepair only stores {oid, path, mode, is_valid}
* information for each path, and thus diff_unmodified_pair() only
* considers those bits of info. However, we do not want pairs
* created by create_filepairs_for_header_only_notifications()
* (which always look like unmodified pairs) to be ignored, so
* return early if both p is unmodified AND we don't want to
* include_conflict_headers.
*/
if (diff_unmodified_pair(p) && !include_conflict_headers)
return;
/* Actually, we can also return early to avoid showing tree diffs */
if ((DIFF_FILE_VALID(p->one) && S_ISDIR(p->one->mode)) ||
(DIFF_FILE_VALID(p->two) && S_ISDIR(p->two->mode)))
return;
run_diff(p, o);
}
/* return 1 if any change is found; otherwise, return 0 */
static int diff_flush_patch_quietly(struct diff_filepair *p, struct diff_options *o)
{
FILE *saved_file = o->file;
int saved_found_changes = o->found_changes;
int ret;
o->file = NULL;
o->found_changes = 0;
diff_flush_patch(p, o);
ret = o->found_changes;
o->file = saved_file;
o->found_changes |= saved_found_changes;
return ret;
}
static void diff_flush_stat(struct diff_filepair *p, struct diff_options *o,
struct diffstat_t *diffstat)
{
if (diff_unmodified_pair(p))
return;
if ((DIFF_FILE_VALID(p->one) && S_ISDIR(p->one->mode)) ||
(DIFF_FILE_VALID(p->two) && S_ISDIR(p->two->mode)))
return; /* no useful stat for tree diffs */
run_diffstat(p, o, diffstat);
}
static void diff_flush_checkdiff(struct diff_filepair *p,
struct diff_options *o)
{
if (diff_unmodified_pair(p))
return;
if ((DIFF_FILE_VALID(p->one) && S_ISDIR(p->one->mode)) ||
(DIFF_FILE_VALID(p->two) && S_ISDIR(p->two->mode)))
return; /* nothing to check in tree diffs */
run_checkdiff(p, o);
}
int diff_queue_is_empty(struct diff_options *o)
{
struct diff_queue_struct *q = &diff_queued_diff;
int i;
int include_conflict_headers =
(o->additional_path_headers &&
strmap_get_size(o->additional_path_headers) &&
!o->pickaxe_opts &&
(!o->filter || filter_bit_tst(DIFF_STATUS_UNMERGED, o)));
if (include_conflict_headers)
return 0;
for (i = 0; i < q->nr; i++)
if (!diff_unmodified_pair(q->queue[i]))
return 0;
return 1;
}
#if DIFF_DEBUG
void diff_debug_filespec(struct diff_filespec *s, int x, const char *one)
{
fprintf(stderr, "queue[%d] %s (%s) %s %06o %s\n",
x, one ? one : "",
s->path,
DIFF_FILE_VALID(s) ? "valid" : "invalid",
s->mode,
s->oid_valid ? oid_to_hex(&s->oid) : "");
fprintf(stderr, "queue[%d] %s size %lu\n",
x, one ? one : "",
s->size);
}
void diff_debug_filepair(const struct diff_filepair *p, int i)
{
diff_debug_filespec(p->one, i, "one");
diff_debug_filespec(p->two, i, "two");
fprintf(stderr, "score %d, status %c rename_used %d broken %d\n",
p->score, p->status ? p->status : '?',
p->one->rename_used, p->broken_pair);
}
void diff_debug_queue(const char *msg, struct diff_queue_struct *q)
{
int i;
if (msg)
fprintf(stderr, "%s\n", msg);
fprintf(stderr, "q->nr = %d\n", q->nr);
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
diff_debug_filepair(p, i);
}
}
#endif
static void diff_resolve_rename_copy(void)
{
int i;
struct diff_filepair *p;
struct diff_queue_struct *q = &diff_queued_diff;
diff_debug_queue("resolve-rename-copy", q);
for (i = 0; i < q->nr; i++) {
p = q->queue[i];
p->status = 0; /* undecided */
if (DIFF_PAIR_UNMERGED(p))
p->status = DIFF_STATUS_UNMERGED;
else if (!DIFF_FILE_VALID(p->one))
p->status = DIFF_STATUS_ADDED;
else if (!DIFF_FILE_VALID(p->two))
p->status = DIFF_STATUS_DELETED;
else if (DIFF_PAIR_TYPE_CHANGED(p))
p->status = DIFF_STATUS_TYPE_CHANGED;
/* from this point on, we are dealing with a pair
	 * both of whose sides are valid and of the same type, i.e.
* either in-place edit or rename/copy edit.
*/
else if (DIFF_PAIR_RENAME(p)) {
/*
* A rename might have re-connected a broken
* pair up, causing the pathnames to be the
* same again. If so, that's not a rename at
			 * all, just a modification.
*
* Otherwise, see if this source was used for
* multiple renames, in which case we decrement
* the count, and call it a copy.
*/
if (!strcmp(p->one->path, p->two->path))
p->status = DIFF_STATUS_MODIFIED;
else if (--p->one->rename_used > 0)
p->status = DIFF_STATUS_COPIED;
else
p->status = DIFF_STATUS_RENAMED;
}
else if (!oideq(&p->one->oid, &p->two->oid) ||
p->one->mode != p->two->mode ||
p->one->dirty_submodule ||
p->two->dirty_submodule ||
is_null_oid(&p->one->oid))
p->status = DIFF_STATUS_MODIFIED;
else {
/* This is a "no-change" entry and should not
* happen anymore, but prepare for broken callers.
*/
error("feeding unmodified %s to diffcore",
p->one->path);
p->status = DIFF_STATUS_UNKNOWN;
}
}
diff_debug_queue("resolve-rename-copy done", q);
}
static int check_pair_status(struct diff_filepair *p)
{
switch (p->status) {
case DIFF_STATUS_UNKNOWN:
return 0;
case 0:
die("internal error in diff-resolve-rename-copy");
default:
return 1;
}
}
static void flush_one_pair(struct diff_filepair *p, struct diff_options *opt)
{
int fmt = opt->output_format;
if (fmt & DIFF_FORMAT_CHECKDIFF)
diff_flush_checkdiff(p, opt);
else if (fmt & (DIFF_FORMAT_RAW | DIFF_FORMAT_NAME_STATUS))
diff_flush_raw(p, opt);
else if (fmt & DIFF_FORMAT_NAME) {
const char *name_a, *name_b;
name_a = p->two->path;
name_b = NULL;
strip_prefix(opt->prefix_length, &name_a, &name_b);
fprintf(opt->file, "%s", diff_line_prefix(opt));
write_name_quoted(name_a, opt->file, opt->line_termination);
}
opt->found_changes = 1;
}
static void show_file_mode_name(struct diff_options *opt, const char *newdelete, struct diff_filespec *fs)
{
struct strbuf sb = STRBUF_INIT;
if (fs->mode)
strbuf_addf(&sb, " %s mode %06o ", newdelete, fs->mode);
else
strbuf_addf(&sb, " %s ", newdelete);
quote_c_style(fs->path, &sb, NULL, 0);
strbuf_addch(&sb, '\n');
emit_diff_symbol(opt, DIFF_SYMBOL_SUMMARY,
sb.buf, sb.len, 0);
strbuf_release(&sb);
}
static void show_mode_change(struct diff_options *opt, struct diff_filepair *p,
int show_name)
{
if (p->one->mode && p->two->mode && p->one->mode != p->two->mode) {
struct strbuf sb = STRBUF_INIT;
strbuf_addf(&sb, " mode change %06o => %06o",
p->one->mode, p->two->mode);
if (show_name) {
strbuf_addch(&sb, ' ');
quote_c_style(p->two->path, &sb, NULL, 0);
}
strbuf_addch(&sb, '\n');
emit_diff_symbol(opt, DIFF_SYMBOL_SUMMARY,
sb.buf, sb.len, 0);
strbuf_release(&sb);
}
}
static void show_rename_copy(struct diff_options *opt, const char *renamecopy,
struct diff_filepair *p)
{
struct strbuf sb = STRBUF_INIT;
struct strbuf names = STRBUF_INIT;
pprint_rename(&names, p->one->path, p->two->path);
strbuf_addf(&sb, " %s %s (%d%%)\n",
renamecopy, names.buf, similarity_index(p));
strbuf_release(&names);
emit_diff_symbol(opt, DIFF_SYMBOL_SUMMARY,
sb.buf, sb.len, 0);
show_mode_change(opt, p, 0);
strbuf_release(&sb);
}
static void diff_summary(struct diff_options *opt, struct diff_filepair *p)
{
switch(p->status) {
case DIFF_STATUS_DELETED:
show_file_mode_name(opt, "delete", p->one);
break;
case DIFF_STATUS_ADDED:
show_file_mode_name(opt, "create", p->two);
break;
case DIFF_STATUS_COPIED:
show_rename_copy(opt, "copy", p);
break;
case DIFF_STATUS_RENAMED:
show_rename_copy(opt, "rename", p);
break;
default:
if (p->score) {
struct strbuf sb = STRBUF_INIT;
strbuf_addstr(&sb, " rewrite ");
quote_c_style(p->two->path, &sb, NULL, 0);
strbuf_addf(&sb, " (%d%%)\n", similarity_index(p));
emit_diff_symbol(opt, DIFF_SYMBOL_SUMMARY,
sb.buf, sb.len, 0);
strbuf_release(&sb);
}
show_mode_change(opt, p, !p->score);
break;
}
}
struct patch_id_t {
struct git_hash_ctx *ctx;
int patchlen;
};
static int remove_space(char *line, int len)
{
int i;
char *dst = line;
unsigned char c;
for (i = 0; i < len; i++)
if (!isspace((c = line[i])))
*dst++ = c;
return dst - line;
}
void flush_one_hunk(struct object_id *result, struct git_hash_ctx *ctx)
{
unsigned char hash[GIT_MAX_RAWSZ];
unsigned short carry = 0;
int i;
git_hash_final(hash, ctx);
the_hash_algo->init_fn(ctx);
	/* byte-wise sum over the hash length, with carry */
for (i = 0; i < the_hash_algo->rawsz; ++i) {
carry += result->hash[i] + hash[i];
result->hash[i] = carry;
carry >>= 8;
}
}
static int patch_id_consume(void *priv, char *line, unsigned long len)
{
struct patch_id_t *data = priv;
int new_len;
if (len > 12 && starts_with(line, "\\ "))
return 0;
new_len = remove_space(line, len);
git_hash_update(data->ctx, line, new_len);
data->patchlen += new_len;
return 0;
}
static void patch_id_add_string(struct git_hash_ctx *ctx, const char *str)
{
git_hash_update(ctx, str, strlen(str));
}
static void patch_id_add_mode(struct git_hash_ctx *ctx, unsigned mode)
{
/* large enough for 2^32 in octal */
char buf[12];
int len = xsnprintf(buf, sizeof(buf), "%06o", mode);
git_hash_update(ctx, buf, len);
}
/* returns 0 upon success, and writes result into oid */
static int diff_get_patch_id(struct diff_options *options, struct object_id *oid, int diff_header_only)
{
struct diff_queue_struct *q = &diff_queued_diff;
int i;
struct git_hash_ctx ctx;
struct patch_id_t data;
the_hash_algo->init_fn(&ctx);
memset(&data, 0, sizeof(struct patch_id_t));
data.ctx = &ctx;
oidclr(oid, the_repository->hash_algo);
for (i = 0; i < q->nr; i++) {
xpparam_t xpp;
xdemitconf_t xecfg;
mmfile_t mf1, mf2;
struct diff_filepair *p = q->queue[i];
int len1, len2;
memset(&xpp, 0, sizeof(xpp));
memset(&xecfg, 0, sizeof(xecfg));
if (p->status == 0)
return error("internal diff status error");
if (p->status == DIFF_STATUS_UNKNOWN)
continue;
if (diff_unmodified_pair(p))
continue;
if ((DIFF_FILE_VALID(p->one) && S_ISDIR(p->one->mode)) ||
(DIFF_FILE_VALID(p->two) && S_ISDIR(p->two->mode)))
continue;
if (DIFF_PAIR_UNMERGED(p))
continue;
diff_fill_oid_info(p->one, options->repo->index);
diff_fill_oid_info(p->two, options->repo->index);
len1 = remove_space(p->one->path, strlen(p->one->path));
len2 = remove_space(p->two->path, strlen(p->two->path));
patch_id_add_string(&ctx, "diff--git");
patch_id_add_string(&ctx, "a/");
git_hash_update(&ctx, p->one->path, len1);
patch_id_add_string(&ctx, "b/");
git_hash_update(&ctx, p->two->path, len2);
if (p->one->mode == 0) {
patch_id_add_string(&ctx, "newfilemode");
patch_id_add_mode(&ctx, p->two->mode);
} else if (p->two->mode == 0) {
patch_id_add_string(&ctx, "deletedfilemode");
patch_id_add_mode(&ctx, p->one->mode);
} else if (p->one->mode != p->two->mode) {
patch_id_add_string(&ctx, "oldmode");
patch_id_add_mode(&ctx, p->one->mode);
patch_id_add_string(&ctx, "newmode");
patch_id_add_mode(&ctx, p->two->mode);
}
if (diff_header_only) {
/* don't do anything since we're only populating header info */
} else if (diff_filespec_is_binary(options->repo, p->one) ||
diff_filespec_is_binary(options->repo, p->two)) {
git_hash_update(&ctx, oid_to_hex(&p->one->oid),
the_hash_algo->hexsz);
git_hash_update(&ctx, oid_to_hex(&p->two->oid),
the_hash_algo->hexsz);
} else {
if (p->one->mode == 0) {
patch_id_add_string(&ctx, "---/dev/null");
patch_id_add_string(&ctx, "+++b/");
git_hash_update(&ctx, p->two->path, len2);
} else if (p->two->mode == 0) {
patch_id_add_string(&ctx, "---a/");
git_hash_update(&ctx, p->one->path, len1);
patch_id_add_string(&ctx, "+++/dev/null");
} else {
patch_id_add_string(&ctx, "---a/");
git_hash_update(&ctx, p->one->path, len1);
patch_id_add_string(&ctx, "+++b/");
git_hash_update(&ctx, p->two->path, len2);
}
if (fill_mmfile(options->repo, &mf1, p->one) < 0 ||
fill_mmfile(options->repo, &mf2, p->two) < 0)
return error("unable to read files to diff");
xpp.flags = 0;
xecfg.ctxlen = 3;
xecfg.flags = XDL_EMIT_NO_HUNK_HDR;
if (xdi_diff_outf(&mf1, &mf2, NULL,
patch_id_consume, &data, &xpp, &xecfg))
return error("unable to generate patch-id diff for %s",
p->one->path);
}
flush_one_hunk(oid, &ctx);
}
return 0;
}
int diff_flush_patch_id(struct diff_options *options, struct object_id *oid, int diff_header_only)
{
struct diff_queue_struct *q = &diff_queued_diff;
int result = diff_get_patch_id(options, oid, diff_header_only);
diff_queue_clear(q);
return result;
}
static int is_summary_empty(const struct diff_queue_struct *q)
{
int i;
for (i = 0; i < q->nr; i++) {
const struct diff_filepair *p = q->queue[i];
switch (p->status) {
case DIFF_STATUS_DELETED:
case DIFF_STATUS_ADDED:
case DIFF_STATUS_COPIED:
case DIFF_STATUS_RENAMED:
return 0;
default:
if (p->score)
return 0;
if (p->one->mode && p->two->mode &&
p->one->mode != p->two->mode)
return 0;
break;
}
}
return 1;
}
static const char rename_limit_warning[] =
N_("exhaustive rename detection was skipped due to too many files.");
static const char degrade_cc_to_c_warning[] =
N_("only found copies from modified paths due to too many files.");
static const char rename_limit_advice[] =
N_("you may want to set your %s variable to at least "
"%d and retry the command.");
void diff_warn_rename_limit(const char *varname, int needed, int degraded_cc)
{
fflush(stdout);
if (degraded_cc)
warning(_(degrade_cc_to_c_warning));
else if (needed)
warning(_(rename_limit_warning));
else
return;
if (0 < needed)
warning(_(rename_limit_advice), varname, needed);
}
static void create_filepairs_for_header_only_notifications(struct diff_options *o)
{
struct strset present;
struct diff_queue_struct *q = &diff_queued_diff;
struct hashmap_iter iter;
struct strmap_entry *e;
int i;
strset_init_with_options(&present, /*pool*/ NULL, /*strdup*/ 0);
/*
* Find out which paths exist in diff_queued_diff, preferring
* one->path for any pair that has multiple paths.
*/
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
char *path = p->one->path ? p->one->path : p->two->path;
if (strmap_contains(o->additional_path_headers, path))
strset_add(&present, path);
}
/*
* Loop over paths in additional_path_headers; for each NOT already
* in diff_queued_diff, create a synthetic filepair and insert that
* into diff_queued_diff.
*/
strmap_for_each_entry(o->additional_path_headers, &iter, e) {
if (!strset_contains(&present, e->key)) {
struct diff_filespec *one, *two;
struct diff_filepair *p;
one = alloc_filespec(e->key);
two = alloc_filespec(e->key);
fill_filespec(one, null_oid(the_hash_algo), 0, 0);
fill_filespec(two, null_oid(the_hash_algo), 0, 0);
p = diff_queue(q, one, two);
p->status = DIFF_STATUS_MODIFIED;
}
}
/* Re-sort the filepairs */
diffcore_fix_diff_index();
/* Cleanup */
strset_clear(&present);
}
static void diff_flush_patch_all_file_pairs(struct diff_options *o)
{
int i;
static struct emitted_diff_symbols esm = EMITTED_DIFF_SYMBOLS_INIT;
struct diff_queue_struct *q = &diff_queued_diff;
if (WSEH_NEW & WS_RULE_MASK)
BUG("WS rules bit mask overlaps with diff symbol flags");
if (o->color_moved && want_color(o->use_color))
o->emitted_symbols = &esm;
if (o->additional_path_headers)
create_filepairs_for_header_only_notifications(o);
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
if (check_pair_status(p))
diff_flush_patch(p, o);
}
if (o->emitted_symbols) {
struct mem_pool entry_pool;
struct moved_entry_list *entry_list;
mem_pool_init(&entry_pool, 1024 * 1024);
entry_list = add_lines_to_move_detection(o, &entry_pool);
mark_color_as_moved(o, entry_list);
if (o->color_moved == COLOR_MOVED_ZEBRA_DIM)
dim_moved_lines(o);
mem_pool_discard(&entry_pool, 0);
free(entry_list);
for (i = 0; i < esm.nr; i++)
emit_diff_symbol_from_struct(o, &esm.buf[i]);
for (i = 0; i < esm.nr; i++)
free((void *)esm.buf[i].line);
esm.nr = 0;
o->emitted_symbols = NULL;
}
}
static void diff_free_file(struct diff_options *options)
{
if (options->close_file && options->file) {
fclose(options->file);
options->file = NULL;
}
}
static void diff_free_ignore_regex(struct diff_options *options)
{
int i;
for (i = 0; i < options->ignore_regex_nr; i++) {
regfree(options->ignore_regex[i]);
free(options->ignore_regex[i]);
}
FREE_AND_NULL(options->ignore_regex);
options->ignore_regex_nr = 0;
}
void diff_free(struct diff_options *options)
{
if (options->no_free)
return;
if (options->objfind) {
oidset_clear(options->objfind);
FREE_AND_NULL(options->objfind);
}
FREE_AND_NULL(options->orderfile);
for (size_t i = 0; i < options->anchors_nr; i++)
free(options->anchors[i]);
FREE_AND_NULL(options->anchors);
options->anchors_nr = options->anchors_alloc = 0;
diff_free_file(options);
diff_free_ignore_regex(options);
clear_pathspec(&options->pathspec);
}
void diff_flush(struct diff_options *options)
{
struct diff_queue_struct *q = &diff_queued_diff;
int i, output_format = options->output_format;
int separator = 0;
int dirstat_by_line = 0;
/*
* Order: raw, stat, summary, patch
* or: name/name-status/checkdiff (other bits clear)
*/
if (!q->nr && !options->additional_path_headers)
goto free_queue;
if (output_format & (DIFF_FORMAT_RAW |
DIFF_FORMAT_NAME |
DIFF_FORMAT_NAME_STATUS |
DIFF_FORMAT_CHECKDIFF)) {
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
if (!check_pair_status(p))
continue;
if (options->flags.diff_from_contents &&
!diff_flush_patch_quietly(p, options))
continue;
flush_one_pair(p, options);
}
separator++;
}
if (output_format & DIFF_FORMAT_DIRSTAT && options->flags.dirstat_by_line)
dirstat_by_line = 1;
if (output_format & (DIFF_FORMAT_DIFFSTAT|DIFF_FORMAT_SHORTSTAT|DIFF_FORMAT_NUMSTAT) ||
dirstat_by_line) {
struct diffstat_t diffstat;
compute_diffstat(options, &diffstat, q);
if (output_format & DIFF_FORMAT_NUMSTAT)
show_numstat(&diffstat, options);
if (output_format & DIFF_FORMAT_DIFFSTAT)
show_stats(&diffstat, options);
if (output_format & DIFF_FORMAT_SHORTSTAT)
show_shortstats(&diffstat, options);
if (output_format & DIFF_FORMAT_DIRSTAT && dirstat_by_line)
show_dirstat_by_line(&diffstat, options);
free_diffstat_info(&diffstat);
separator++;
}
if ((output_format & DIFF_FORMAT_DIRSTAT) && !dirstat_by_line)
show_dirstat(options);
if (output_format & DIFF_FORMAT_SUMMARY && !is_summary_empty(q)) {
for (i = 0; i < q->nr; i++) {
diff_summary(options, q->queue[i]);
}
separator++;
}
if (output_format & DIFF_FORMAT_PATCH) {
if (separator) {
emit_diff_symbol(options, DIFF_SYMBOL_SEPARATOR, NULL, 0, 0);
if (options->stat_sep)
/* attach patch instead of inline */
emit_diff_symbol(options, DIFF_SYMBOL_STAT_SEP,
NULL, 0, 0);
}
diff_flush_patch_all_file_pairs(options);
}
if (output_format & DIFF_FORMAT_CALLBACK)
options->format_callback(q, options, options->format_callback_data);
if (output_format & DIFF_FORMAT_NO_OUTPUT &&
options->flags.exit_with_status &&
options->flags.diff_from_contents) {
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
if (check_pair_status(p))
diff_flush_patch_quietly(p, options);
if (options->found_changes)
break;
}
}
free_queue:
diff_queue_clear(q);
diff_free(options);
/*
* Report the content-level differences with HAS_CHANGES;
* diff_addremove/diff_change does not set the bit when
* DIFF_FROM_CONTENTS is in effect (e.g. with -w).
*/
if (options->flags.diff_from_contents) {
if (options->found_changes)
options->flags.has_changes = 1;
else
options->flags.has_changes = 0;
}
}
static int match_filter(const struct diff_options *options, const struct diff_filepair *p)
{
return (((p->status == DIFF_STATUS_MODIFIED) &&
((p->score &&
filter_bit_tst(DIFF_STATUS_FILTER_BROKEN, options)) ||
(!p->score &&
filter_bit_tst(DIFF_STATUS_MODIFIED, options)))) ||
((p->status != DIFF_STATUS_MODIFIED) &&
filter_bit_tst(p->status, options)));
}
static void diffcore_apply_filter(struct diff_options *options)
{
int i;
struct diff_queue_struct *q = &diff_queued_diff;
struct diff_queue_struct outq = DIFF_QUEUE_INIT;
if (!options->filter)
return;
if (filter_bit_tst(DIFF_STATUS_FILTER_AON, options)) {
int found;
for (i = found = 0; !found && i < q->nr; i++) {
if (match_filter(options, q->queue[i]))
found++;
}
if (found)
return;
/* otherwise we will clear the whole queue
* by copying the empty outq at the end of this
* function, but first clear the current entries
* in the queue.
*/
for (i = 0; i < q->nr; i++)
diff_free_filepair(q->queue[i]);
}
else {
/* Only the matching ones */
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
if (match_filter(options, p))
diff_q(&outq, p);
else
diff_free_filepair(p);
}
}
free(q->queue);
*q = outq;
}
/* Check whether two filespecs with the same mode and size are identical */
static int diff_filespec_is_identical(struct repository *r,
struct diff_filespec *one,
struct diff_filespec *two)
{
if (S_ISGITLINK(one->mode))
return 0;
if (diff_populate_filespec(r, one, NULL))
return 0;
if (diff_populate_filespec(r, two, NULL))
return 0;
return !memcmp(one->data, two->data, one->size);
}
static int diff_filespec_check_stat_unmatch(struct repository *r,
struct diff_filepair *p)
{
struct diff_populate_filespec_options dpf_options = {
.check_size_only = 1,
.missing_object_cb = diff_queued_diff_prefetch,
.missing_object_data = r,
};
if (p->done_skip_stat_unmatch)
return p->skip_stat_unmatch_result;
p->done_skip_stat_unmatch = 1;
p->skip_stat_unmatch_result = 0;
/*
* 1. Entries that come from stat info dirtiness
* always have both sides (iow, not create/delete),
* one side of the object name is unknown, with
* the same mode and size. Keep the ones that
* do not match these criteria. They have real
* differences.
*
* 2. At this point, the file is known to be modified,
* with the same mode and size, and the object
* name of one side is unknown. Need to inspect
* the identical contents.
*/
if (!DIFF_FILE_VALID(p->one) || /* (1) */
!DIFF_FILE_VALID(p->two) ||
(p->one->oid_valid && p->two->oid_valid) ||
(p->one->mode != p->two->mode) ||
diff_populate_filespec(r, p->one, &dpf_options) ||
diff_populate_filespec(r, p->two, &dpf_options) ||
(p->one->size != p->two->size) ||
!diff_filespec_is_identical(r, p->one, p->two)) /* (2) */
p->skip_stat_unmatch_result = 1;
return p->skip_stat_unmatch_result;
}
static void diffcore_skip_stat_unmatch(struct diff_options *diffopt)
{
int i;
struct diff_queue_struct *q = &diff_queued_diff;
struct diff_queue_struct outq = DIFF_QUEUE_INIT;
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
if (diff_filespec_check_stat_unmatch(diffopt->repo, p))
diff_q(&outq, p);
else {
/*
* The caller can subtract 1 from skip_stat_unmatch
* to determine how many paths were dirty only
* due to stat info mismatch.
*/
if (!diffopt->flags.no_index)
diffopt->skip_stat_unmatch++;
diff_free_filepair(p);
q->queue[i] = NULL;
}
}
free(q->queue);
*q = outq;
}
static int diffnamecmp(const void *a_, const void *b_)
{
const struct diff_filepair *a = *((const struct diff_filepair **)a_);
const struct diff_filepair *b = *((const struct diff_filepair **)b_);
const char *name_a, *name_b;
name_a = a->one ? a->one->path : a->two->path;
name_b = b->one ? b->one->path : b->two->path;
return strcmp(name_a, name_b);
}
void diffcore_fix_diff_index(void)
{
struct diff_queue_struct *q = &diff_queued_diff;
QSORT(q->queue, q->nr, diffnamecmp);
}
void diff_add_if_missing(struct repository *r,
struct oid_array *to_fetch,
const struct diff_filespec *filespec)
{
if (filespec && filespec->oid_valid &&
!S_ISGITLINK(filespec->mode) &&
odb_read_object_info_extended(r->objects, &filespec->oid, NULL,
OBJECT_INFO_FOR_PREFETCH))
oid_array_append(to_fetch, &filespec->oid);
}
void diff_queued_diff_prefetch(void *repository)
{
struct repository *repo = repository;
int i;
struct diff_queue_struct *q = &diff_queued_diff;
struct oid_array to_fetch = OID_ARRAY_INIT;
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
if (!p)
continue;
diff_add_if_missing(repo, &to_fetch, p->one);
diff_add_if_missing(repo, &to_fetch, p->two);
}
/*
* NEEDSWORK: Consider deduplicating the OIDs sent.
*/
promisor_remote_get_direct(repo, to_fetch.oid, to_fetch.nr);
oid_array_clear(&to_fetch);
}
void init_diffstat_widths(struct diff_options *options)
{
options->stat_width = -1; /* use full terminal width */
options->stat_name_width = -1; /* respect diff.statNameWidth config */
options->stat_graph_width = -1; /* respect diff.statGraphWidth config */
}
void diffcore_std(struct diff_options *options)
{
int output_formats_to_prefetch = DIFF_FORMAT_DIFFSTAT |
DIFF_FORMAT_NUMSTAT |
DIFF_FORMAT_PATCH |
DIFF_FORMAT_SHORTSTAT |
DIFF_FORMAT_DIRSTAT;
/*
* Check if the user requested a blob-data-requiring diff output and/or
* break-rewrite detection (which requires blob data). If yes, prefetch
* the diff pairs.
*
* If no prefetching occurs, diffcore_rename() will prefetch if it
* decides that it needs inexact rename detection.
*/
if (options->repo == the_repository && repo_has_promisor_remote(the_repository) &&
(options->output_format & output_formats_to_prefetch ||
options->pickaxe_opts & DIFF_PICKAXE_KINDS_MASK))
diff_queued_diff_prefetch(options->repo);
/* NOTE please keep the following in sync with diff_tree_combined() */
if (options->skip_stat_unmatch)
diffcore_skip_stat_unmatch(options);
if (!options->found_follow) {
/* See try_to_follow_renames() in tree-diff.c */
if (options->break_opt != -1)
diffcore_break(options->repo,
options->break_opt);
if (options->detect_rename)
diffcore_rename(options);
if (options->break_opt != -1)
diffcore_merge_broken();
}
if (options->pickaxe_opts & DIFF_PICKAXE_KINDS_MASK)
diffcore_pickaxe(options);
if (options->orderfile)
diffcore_order(options->orderfile);
if (options->rotate_to)
diffcore_rotate(options);
if (!options->found_follow && !options->skip_resolving_statuses)
/* See try_to_follow_renames() in tree-diff.c */
diff_resolve_rename_copy();
diffcore_apply_filter(options);
if (diff_queued_diff.nr && !options->flags.diff_from_contents)
options->flags.has_changes = 1;
else
options->flags.has_changes = 0;
options->found_follow = 0;
}
int diff_result_code(struct rev_info *revs)
{
struct diff_options *opt = &revs->diffopt;
int result = 0;
if (revs->remerge_diff) {
tmp_objdir_destroy(revs->remerge_objdir);
revs->remerge_objdir = NULL;
}
diff_warn_rename_limit("diff.renameLimit",
opt->needed_rename_limit,
opt->degraded_cc_to_c);
if (opt->flags.exit_with_status &&
opt->flags.has_changes)
result |= 01;
if ((opt->output_format & DIFF_FORMAT_CHECKDIFF) &&
opt->flags.check_failed)
result |= 02;
return result;
}
int diff_can_quit_early(struct diff_options *opt)
{
return (opt->flags.quick &&
!opt->filter &&
opt->flags.has_changes);
}
/*
* Shall changes to this submodule be ignored?
*
* Submodule changes can be configured to be ignored separately for each path,
* but that configuration can be overridden from the command line.
*/
static int is_submodule_ignored(const char *path, struct diff_options *options)
{
int ignored = 0;
struct diff_flags orig_flags = options->flags;
if (!options->flags.override_submodule_config)
set_diffopt_flags_from_submodule_config(options, path);
if (options->flags.ignore_submodules)
ignored = 1;
options->flags = orig_flags;
return ignored;
}
void compute_diffstat(struct diff_options *options,
struct diffstat_t *diffstat,
struct diff_queue_struct *q)
{
int i;
memset(diffstat, 0, sizeof(struct diffstat_t));
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
if (check_pair_status(p))
diff_flush_stat(p, options, diffstat);
}
options->found_changes = !!diffstat->nr;
}
struct diff_filepair *diff_queue_addremove(struct diff_queue_struct *queue,
struct diff_options *options,
int addremove, unsigned mode,
const struct object_id *oid,
int oid_valid,
const char *concatpath,
unsigned dirty_submodule)
{
struct diff_filespec *one, *two;
struct diff_filepair *pair;
if (S_ISGITLINK(mode) && is_submodule_ignored(concatpath, options))
return NULL;
/* This may look odd, but it is a preparation for
* feeding "there are unchanged files which should
* not produce diffs, but when you are doing copy
* detection you would need them, so here they are"
* entries to the diff-core. They will be prefixed
* with something like '=' or '*' (I haven't decided
* which but should not make any difference).
* Feeding the same new and old to diff_change()
* also has the same effect.
* Before the final output happens, they are pruned after
* merged into rename/copy pairs as appropriate.
*/
if (options->flags.reverse_diff)
addremove = (addremove == '+' ? '-' :
addremove == '-' ? '+' : addremove);
if (options->prefix &&
strncmp(concatpath, options->prefix, options->prefix_length))
return NULL;
one = alloc_filespec(concatpath);
two = alloc_filespec(concatpath);
if (addremove != '+')
fill_filespec(one, oid, oid_valid, mode);
if (addremove != '-') {
fill_filespec(two, oid, oid_valid, mode);
two->dirty_submodule = dirty_submodule;
}
pair = diff_queue(queue, one, two);
if (!options->flags.diff_from_contents)
options->flags.has_changes = 1;
return pair;
}
struct diff_filepair *diff_queue_change(struct diff_queue_struct *queue,
struct diff_options *options,
unsigned old_mode, unsigned new_mode,
const struct object_id *old_oid,
const struct object_id *new_oid,
int old_oid_valid, int new_oid_valid,
const char *concatpath,
unsigned old_dirty_submodule,
unsigned new_dirty_submodule)
{
struct diff_filespec *one, *two;
struct diff_filepair *p;
if (S_ISGITLINK(old_mode) && S_ISGITLINK(new_mode) &&
is_submodule_ignored(concatpath, options))
return NULL;
if (options->flags.reverse_diff) {
SWAP(old_mode, new_mode);
SWAP(old_oid, new_oid);
SWAP(old_oid_valid, new_oid_valid);
SWAP(old_dirty_submodule, new_dirty_submodule);
}
if (options->prefix &&
strncmp(concatpath, options->prefix, options->prefix_length))
return NULL;
one = alloc_filespec(concatpath);
two = alloc_filespec(concatpath);
fill_filespec(one, old_oid, old_oid_valid, old_mode);
fill_filespec(two, new_oid, new_oid_valid, new_mode);
one->dirty_submodule = old_dirty_submodule;
two->dirty_submodule = new_dirty_submodule;
p = diff_queue(queue, one, two);
if (options->flags.diff_from_contents)
return p;
if (options->flags.quick && options->skip_stat_unmatch &&
!diff_filespec_check_stat_unmatch(options->repo, p)) {
diff_free_filespec_data(p->one);
diff_free_filespec_data(p->two);
return p;
}
options->flags.has_changes = 1;
return p;
}
void diff_addremove(struct diff_options *options, int addremove, unsigned mode,
const struct object_id *oid, int oid_valid,
const char *concatpath, unsigned dirty_submodule)
{
diff_queue_addremove(&diff_queued_diff, options, addremove, mode, oid,
oid_valid, concatpath, dirty_submodule);
}
void diff_change(struct diff_options *options,
unsigned old_mode, unsigned new_mode,
const struct object_id *old_oid,
const struct object_id *new_oid,
int old_oid_valid, int new_oid_valid,
const char *concatpath,
unsigned old_dirty_submodule, unsigned new_dirty_submodule)
{
diff_queue_change(&diff_queued_diff, options, old_mode, new_mode,
old_oid, new_oid, old_oid_valid, new_oid_valid,
concatpath, old_dirty_submodule, new_dirty_submodule);
}
void diff_same(struct diff_options *options,
unsigned mode,
const struct object_id *oid,
const char *concatpath)
{
struct diff_filespec *one;
if (S_ISGITLINK(mode) && is_submodule_ignored(concatpath, options))
return;
if (options->prefix &&
strncmp(concatpath, options->prefix, options->prefix_length))
return;
one = alloc_filespec(concatpath);
fill_filespec(one, oid, 1, mode);
one->count++;
diff_queue(&diff_queued_diff, one, one);
}
struct diff_filepair *diff_unmerge(struct diff_options *options, const char *path)
{
struct diff_filepair *pair;
struct diff_filespec *one, *two;
if (options->prefix &&
strncmp(path, options->prefix, options->prefix_length))
return NULL;
one = alloc_filespec(path);
two = alloc_filespec(path);
pair = diff_queue(&diff_queued_diff, one, two);
pair->is_unmerged = 1;
return pair;
}
static char *run_textconv(struct repository *r,
const char *pgm,
struct diff_filespec *spec,
size_t *outsize)
{
struct diff_tempfile *temp;
struct child_process child = CHILD_PROCESS_INIT;
struct strbuf buf = STRBUF_INIT;
int err = 0;
temp = prepare_temp_file(r, spec);
strvec_push(&child.args, pgm);
strvec_push(&child.args, temp->name);
child.use_shell = 1;
child.out = -1;
if (start_command(&child)) {
remove_tempfile();
return NULL;
}
if (strbuf_read(&buf, child.out, 0) < 0)
err = error("error reading from textconv command '%s'", pgm);
close(child.out);
if (finish_command(&child) || err) {
strbuf_release(&buf);
remove_tempfile();
return NULL;
}
remove_tempfile();
return strbuf_detach(&buf, outsize);
}
size_t fill_textconv(struct repository *r,
struct userdiff_driver *driver,
struct diff_filespec *df,
char **outbuf)
{
size_t size;
if (!driver) {
if (!DIFF_FILE_VALID(df)) {
*outbuf = (char *) "";
return 0;
}
if (diff_populate_filespec(r, df, NULL))
die("unable to read files to diff");
*outbuf = df->data;
return df->size;
}
if (!driver->textconv)
BUG("fill_textconv called with non-textconv driver");
if (driver->textconv_cache && df->oid_valid) {
*outbuf = notes_cache_get(driver->textconv_cache,
&df->oid,
&size);
if (*outbuf)
return size;
}
*outbuf = run_textconv(r, driver->textconv, df, &size);
if (!*outbuf)
die("unable to read files to diff");
if (driver->textconv_cache && df->oid_valid) {
/* ignore errors, as we might be in a readonly repository */
notes_cache_put(driver->textconv_cache, &df->oid, *outbuf,
size);
/*
* we could save up changes and flush them all at the end,
* but we would need an extra call after all diffing is done.
* Since generating a cache entry is the slow path anyway,
* this extra overhead probably isn't a big deal.
*/
notes_cache_write(driver->textconv_cache);
}
return size;
}
int textconv_object(struct repository *r,
const char *path,
unsigned mode,
const struct object_id *oid,
int oid_valid,
char **buf,
unsigned long *buf_size)
{
struct diff_filespec *df;
struct userdiff_driver *textconv;
df = alloc_filespec(path);
fill_filespec(df, oid, oid_valid, mode);
textconv = get_textconv(r, df);
if (!textconv) {
free_filespec(df);
return 0;
}
*buf_size = fill_textconv(r, textconv, df, buf);
free_filespec(df);
return 1;
}
void setup_diff_pager(struct diff_options *opt)
{
/*
* If the user asked for our exit code, then either they want --quiet
* or --exit-code. We should definitely not bother with a pager in the
* former case, as we will generate no output. Since we still properly
* report our exit code even when a pager is run, we _could_ run a
* pager with --exit-code. But since we have not done so historically,
	 * and because it is easy to find people online advising "git diff
* --exit-code" in hooks and other scripts, we do not do so.
*/
if (!opt->flags.exit_with_status &&
check_pager_config(the_repository, "diff") != 0)
setup_pager(the_repository);
} | c | github | https://github.com/git/git | diff.c |
"""
Module with location helpers.
detect_location_info and elevation are mocked by default during tests.
"""
import collections
import math
from typing import Any, Optional, Tuple, Dict
import requests
ELEVATION_URL = 'http://maps.googleapis.com/maps/api/elevation/json'
FREEGEO_API = 'https://freegeoip.io/json/'
IP_API = 'http://ip-api.com/json'
# Constants from https://github.com/maurycyp/vincenty
# Earth ellipsoid according to WGS 84
# Axis a of the ellipsoid (Radius of the earth in meters)
AXIS_A = 6378137
# Flattening f = (a-b) / a
FLATTENING = 1 / 298.257223563
# Axis b of the ellipsoid in meters.
AXIS_B = 6356752.314245
MILES_PER_KILOMETER = 0.621371
MAX_ITERATIONS = 200
CONVERGENCE_THRESHOLD = 1e-12
LocationInfo = collections.namedtuple(
"LocationInfo",
['ip', 'country_code', 'country_name', 'region_code', 'region_name',
'city', 'zip_code', 'time_zone', 'latitude', 'longitude',
'use_metric'])
def detect_location_info():
"""Detect location information."""
data = _get_freegeoip()
if data is None:
data = _get_ip_api()
if data is None:
return None
data['use_metric'] = data['country_code'] not in (
'US', 'MM', 'LR')
return LocationInfo(**data)
def distance(lat1, lon1, lat2, lon2):
"""Calculate the distance in meters between two points.
Async friendly.
"""
return vincenty((lat1, lon1), (lat2, lon2)) * 1000
def elevation(latitude, longitude):
"""Return elevation for given latitude and longitude."""
try:
req = requests.get(
ELEVATION_URL,
params={
'locations': '{},{}'.format(latitude, longitude),
'sensor': 'false',
},
timeout=10)
except requests.RequestException:
return 0
if req.status_code != 200:
return 0
try:
return int(float(req.json()['results'][0]['elevation']))
except (ValueError, KeyError, IndexError):
return 0
# Author: https://github.com/maurycyp
# Source: https://github.com/maurycyp/vincenty
# License: https://github.com/maurycyp/vincenty/blob/master/LICENSE
# pylint: disable=invalid-name, unused-variable, invalid-sequence-index
def vincenty(point1: Tuple[float, float], point2: Tuple[float, float],
             miles: bool = False) -> Optional[float]:
"""
Vincenty formula (inverse method) to calculate the distance.
Result in kilometers or miles between two points on the surface of a
spheroid.
Async friendly.
"""
# short-circuit coincident points
if point1[0] == point2[0] and point1[1] == point2[1]:
return 0.0
U1 = math.atan((1 - FLATTENING) * math.tan(math.radians(point1[0])))
U2 = math.atan((1 - FLATTENING) * math.tan(math.radians(point2[0])))
L = math.radians(point2[1] - point1[1])
Lambda = L
sinU1 = math.sin(U1)
cosU1 = math.cos(U1)
sinU2 = math.sin(U2)
cosU2 = math.cos(U2)
for iteration in range(MAX_ITERATIONS):
sinLambda = math.sin(Lambda)
cosLambda = math.cos(Lambda)
sinSigma = math.sqrt((cosU2 * sinLambda) ** 2 +
(cosU1 * sinU2 - sinU1 * cosU2 * cosLambda) ** 2)
if sinSigma == 0:
return 0.0 # coincident points
cosSigma = sinU1 * sinU2 + cosU1 * cosU2 * cosLambda
sigma = math.atan2(sinSigma, cosSigma)
sinAlpha = cosU1 * cosU2 * sinLambda / sinSigma
cosSqAlpha = 1 - sinAlpha ** 2
try:
cos2SigmaM = cosSigma - 2 * sinU1 * sinU2 / cosSqAlpha
except ZeroDivisionError:
cos2SigmaM = 0
C = FLATTENING / 16 * cosSqAlpha * (4 + FLATTENING * (4 - 3 *
cosSqAlpha))
LambdaPrev = Lambda
Lambda = L + (1 - C) * FLATTENING * sinAlpha * (sigma + C * sinSigma *
(cos2SigmaM + C *
cosSigma *
(-1 + 2 *
cos2SigmaM ** 2)))
if abs(Lambda - LambdaPrev) < CONVERGENCE_THRESHOLD:
break # successful convergence
else:
return None # failure to converge
uSq = cosSqAlpha * (AXIS_A ** 2 - AXIS_B ** 2) / (AXIS_B ** 2)
A = 1 + uSq / 16384 * (4096 + uSq * (-768 + uSq * (320 - 175 * uSq)))
B = uSq / 1024 * (256 + uSq * (-128 + uSq * (74 - 47 * uSq)))
deltaSigma = B * sinSigma * (cos2SigmaM +
B / 4 * (cosSigma * (-1 + 2 *
cos2SigmaM ** 2) -
B / 6 * cos2SigmaM *
(-3 + 4 * sinSigma ** 2) *
(-3 + 4 * cos2SigmaM ** 2)))
s = AXIS_B * A * (sigma - deltaSigma)
    s /= 1000  # Conversion of meters to kilometers
if miles:
s *= MILES_PER_KILOMETER # kilometers to miles
return round(s, 6)
def _get_freegeoip() -> Optional[Dict[str, Any]]:
"""Query freegeoip.io for location data."""
try:
raw_info = requests.get(FREEGEO_API, timeout=5).json()
except (requests.RequestException, ValueError):
return None
return {
'ip': raw_info.get('ip'),
'country_code': raw_info.get('country_code'),
'country_name': raw_info.get('country_name'),
'region_code': raw_info.get('region_code'),
'region_name': raw_info.get('region_name'),
'city': raw_info.get('city'),
'zip_code': raw_info.get('zip_code'),
'time_zone': raw_info.get('time_zone'),
'latitude': raw_info.get('latitude'),
'longitude': raw_info.get('longitude'),
}
def _get_ip_api() -> Optional[Dict[str, Any]]:
"""Query ip-api.com for location data."""
try:
raw_info = requests.get(IP_API, timeout=5).json()
except (requests.RequestException, ValueError):
return None
return {
'ip': raw_info.get('query'),
'country_code': raw_info.get('countryCode'),
'country_name': raw_info.get('country'),
'region_code': raw_info.get('region'),
'region_name': raw_info.get('regionName'),
'city': raw_info.get('city'),
'zip_code': raw_info.get('zip'),
'time_zone': raw_info.get('timezone'),
'latitude': raw_info.get('lat'),
'longitude': raw_info.get('lon'),
} | unknown | codeparrot/codeparrot-clean | ||
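A note on the Vincenty helper above: it is easy to exercise standalone. The sketch below copies the WGS 84 constants and condenses the iteration from the module into a kilometers-only function (the `miles` flag is dropped for brevity), so it is assumed to agree with the module's `vincenty()` rather than being a new algorithm.

```python
import math

# Constants copied from the module above (WGS 84 ellipsoid).
AXIS_A = 6378137
FLATTENING = 1 / 298.257223563
AXIS_B = 6356752.314245
MAX_ITERATIONS = 200
CONVERGENCE_THRESHOLD = 1e-12


def vincenty(point1, point2):
    """Distance in kilometers between two (lat, lon) points, or None."""
    if point1[0] == point2[0] and point1[1] == point2[1]:
        return 0.0  # short-circuit coincident points
    U1 = math.atan((1 - FLATTENING) * math.tan(math.radians(point1[0])))
    U2 = math.atan((1 - FLATTENING) * math.tan(math.radians(point2[0])))
    L = math.radians(point2[1] - point1[1])
    Lambda = L
    sinU1, cosU1 = math.sin(U1), math.cos(U1)
    sinU2, cosU2 = math.sin(U2), math.cos(U2)
    for _ in range(MAX_ITERATIONS):
        sinLambda, cosLambda = math.sin(Lambda), math.cos(Lambda)
        sinSigma = math.sqrt((cosU2 * sinLambda) ** 2 +
                             (cosU1 * sinU2 - sinU1 * cosU2 * cosLambda) ** 2)
        if sinSigma == 0:
            return 0.0
        cosSigma = sinU1 * sinU2 + cosU1 * cosU2 * cosLambda
        sigma = math.atan2(sinSigma, cosSigma)
        sinAlpha = cosU1 * cosU2 * sinLambda / sinSigma
        cosSqAlpha = 1 - sinAlpha ** 2
        try:
            cos2SigmaM = cosSigma - 2 * sinU1 * sinU2 / cosSqAlpha
        except ZeroDivisionError:
            cos2SigmaM = 0
        C = FLATTENING / 16 * cosSqAlpha * (4 + FLATTENING *
                                            (4 - 3 * cosSqAlpha))
        LambdaPrev = Lambda
        Lambda = L + (1 - C) * FLATTENING * sinAlpha * (
            sigma + C * sinSigma * (cos2SigmaM + C * cosSigma *
                                    (-1 + 2 * cos2SigmaM ** 2)))
        if abs(Lambda - LambdaPrev) < CONVERGENCE_THRESHOLD:
            break  # successful convergence
    else:
        return None  # failure to converge
    uSq = cosSqAlpha * (AXIS_A ** 2 - AXIS_B ** 2) / (AXIS_B ** 2)
    A = 1 + uSq / 16384 * (4096 + uSq * (-768 + uSq * (320 - 175 * uSq)))
    B = uSq / 1024 * (256 + uSq * (-128 + uSq * (74 - 47 * uSq)))
    deltaSigma = B * sinSigma * (
        cos2SigmaM + B / 4 * (cosSigma * (-1 + 2 * cos2SigmaM ** 2) -
                              B / 6 * cos2SigmaM * (-3 + 4 * sinSigma ** 2) *
                              (-3 + 4 * cos2SigmaM ** 2)))
    s = AXIS_B * A * (sigma - deltaSigma) / 1000  # meters -> kilometers
    return round(s, 6)
```

Because the formula is symmetric in its two endpoints, swapping the points should yield the same distance up to rounding.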
# Copyright (c) 2015 Blizzard Entertainment
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from heroprotocol.decoders import *
# Decoding instructions for each protocol type.
typeinfos = [
('_int',[(0,7)]), #0
('_int',[(0,4)]), #1
('_int',[(0,5)]), #2
('_int',[(0,6)]), #3
('_int',[(0,14)]), #4
('_int',[(0,22)]), #5
('_int',[(0,32)]), #6
('_choice',[(0,2),{0:('m_uint6',3),1:('m_uint14',4),2:('m_uint22',5),3:('m_uint32',6)}]), #7
('_struct',[[('m_userId',2,-1)]]), #8
('_blob',[(0,8)]), #9
('_int',[(0,8)]), #10
('_struct',[[('m_flags',10,0),('m_major',10,1),('m_minor',10,2),('m_revision',10,3),('m_build',6,4),('m_baseBuild',6,5)]]), #11
('_int',[(0,3)]), #12
('_bool',[]), #13
('_array',[(16,0),10]), #14
('_optional',[14]), #15
('_blob',[(16,0)]), #16
('_struct',[[('m_dataDeprecated',15,0),('m_data',16,1)]]), #17
('_struct',[[('m_signature',9,0),('m_version',11,1),('m_type',12,2),('m_elapsedGameLoops',6,3),('m_useScaledTime',13,4),('m_ngdpRootKey',17,5),('m_dataBuildNum',6,6),('m_fixedFileHash',17,7)]]), #18
('_fourcc',[]), #19
('_blob',[(0,7)]), #20
('_int',[(0,64)]), #21
('_struct',[[('m_region',10,0),('m_programId',19,1),('m_realm',6,2),('m_name',20,3),('m_id',21,4)]]), #22
('_struct',[[('m_a',10,0),('m_r',10,1),('m_g',10,2),('m_b',10,3)]]), #23
('_int',[(0,2)]), #24
('_optional',[10]), #25
('_struct',[[('m_name',9,0),('m_toon',22,1),('m_race',9,2),('m_color',23,3),('m_control',10,4),('m_teamId',1,5),('m_handicap',0,6),('m_observe',24,7),('m_result',24,8),('m_workingSetSlotId',25,9),('m_hero',9,10)]]), #26
('_array',[(0,5),26]), #27
('_optional',[27]), #28
('_blob',[(0,10)]), #29
('_blob',[(0,11)]), #30
('_struct',[[('m_file',30,0)]]), #31
('_optional',[13]), #32
('_int',[(-9223372036854775808,64)]), #33
('_blob',[(0,12)]), #34
('_blob',[(40,0)]), #35
('_array',[(0,6),35]), #36
('_optional',[36]), #37
('_array',[(0,6),30]), #38
('_optional',[38]), #39
('_struct',[[('m_playerList',28,0),('m_title',29,1),('m_difficulty',9,2),('m_thumbnail',31,3),('m_isBlizzardMap',13,4),('m_restartAsTransitionMap',32,16),('m_timeUTC',33,5),('m_timeLocalOffset',33,6),('m_description',34,7),('m_imageFilePath',30,8),('m_campaignIndex',10,15),('m_mapFileName',30,9),('m_cacheHandles',37,10),('m_miniSave',13,11),('m_gameSpeed',12,12),('m_defaultDifficulty',3,13),('m_modPaths',39,14)]]), #40
('_optional',[9]), #41
('_optional',[35]), #42
('_optional',[6]), #43
('_struct',[[('m_race',25,-1)]]), #44
('_struct',[[('m_team',25,-1)]]), #45
('_blob',[(0,9)]), #46
('_struct',[[('m_name',9,-18),('m_clanTag',41,-17),('m_clanLogo',42,-16),('m_highestLeague',25,-15),('m_combinedRaceLevels',43,-14),('m_randomSeed',6,-13),('m_racePreference',44,-12),('m_teamPreference',45,-11),('m_testMap',13,-10),('m_testAuto',13,-9),('m_examine',13,-8),('m_customInterface',13,-7),('m_testType',6,-6),('m_observe',24,-5),('m_hero',46,-4),('m_skin',46,-3),('m_mount',46,-2),('m_toonHandle',20,-1)]]), #47
('_array',[(0,5),47]), #48
('_struct',[[('m_lockTeams',13,-16),('m_teamsTogether',13,-15),('m_advancedSharedControl',13,-14),('m_randomRaces',13,-13),('m_battleNet',13,-12),('m_amm',13,-11),('m_competitive',13,-10),('m_practice',13,-9),('m_cooperative',13,-8),('m_noVictoryOrDefeat',13,-7),('m_heroDuplicatesAllowed',13,-6),('m_fog',24,-5),('m_observers',24,-4),('m_userDifficulty',24,-3),('m_clientDebugFlags',21,-2),('m_ammId',43,-1)]]), #49
('_int',[(1,4)]), #50
('_int',[(1,8)]), #51
('_bitarray',[(0,6)]), #52
('_bitarray',[(0,8)]), #53
('_bitarray',[(0,2)]), #54
('_bitarray',[(0,7)]), #55
('_struct',[[('m_allowedColors',52,-6),('m_allowedRaces',53,-5),('m_allowedDifficulty',52,-4),('m_allowedControls',53,-3),('m_allowedObserveTypes',54,-2),('m_allowedAIBuilds',55,-1)]]), #56
('_array',[(0,5),56]), #57
('_struct',[[('m_randomValue',6,-26),('m_gameCacheName',29,-25),('m_gameOptions',49,-24),('m_gameSpeed',12,-23),('m_gameType',12,-22),('m_maxUsers',2,-21),('m_maxObservers',2,-20),('m_maxPlayers',2,-19),('m_maxTeams',50,-18),('m_maxColors',3,-17),('m_maxRaces',51,-16),('m_maxControls',10,-15),('m_mapSizeX',10,-14),('m_mapSizeY',10,-13),('m_mapFileSyncChecksum',6,-12),('m_mapFileName',30,-11),('m_mapAuthorName',9,-10),('m_modFileSyncChecksum',6,-9),('m_slotDescriptions',57,-8),('m_defaultDifficulty',3,-7),('m_defaultAIBuild',0,-6),('m_cacheHandles',36,-5),('m_hasExtensionMod',13,-4),('m_isBlizzardMap',13,-3),('m_isPremadeFFA',13,-2),('m_isCoopMode',13,-1)]]), #58
('_optional',[1]), #59
('_optional',[2]), #60
('_struct',[[('m_color',60,-1)]]), #61
('_array',[(0,4),46]), #62
('_array',[(0,17),6]), #63
('_array',[(0,9),6]), #64
('_struct',[[('m_control',10,-20),('m_userId',59,-19),('m_teamId',1,-18),('m_colorPref',61,-17),('m_racePref',44,-16),('m_difficulty',3,-15),('m_aiBuild',0,-14),('m_handicap',0,-13),('m_observe',24,-12),('m_logoIndex',6,-11),('m_hero',46,-10),('m_skin',46,-9),('m_mount',46,-8),('m_artifacts',62,-7),('m_workingSetSlotId',25,-6),('m_rewards',63,-5),('m_toonHandle',20,-4),('m_licenses',64,-3),('m_tandemLeaderUserId',59,-2),('m_hasSilencePenalty',13,-1)]]), #65
('_array',[(0,5),65]), #66
('_struct',[[('m_phase',12,-11),('m_maxUsers',2,-10),('m_maxObservers',2,-9),('m_slots',66,-8),('m_randomSeed',6,-7),('m_hostUserId',59,-6),('m_isSinglePlayer',13,-5),('m_pickedMapTag',10,-4),('m_gameDuration',6,-3),('m_defaultDifficulty',3,-2),('m_defaultAIBuild',0,-1)]]), #67
('_struct',[[('m_userInitialData',48,-3),('m_gameDescription',58,-2),('m_lobbyState',67,-1)]]), #68
('_struct',[[('m_syncLobbyState',68,-1)]]), #69
('_struct',[[('m_name',20,-1)]]), #70
('_blob',[(0,6)]), #71
('_struct',[[('m_name',71,-1)]]), #72
('_struct',[[('m_name',71,-3),('m_type',6,-2),('m_data',20,-1)]]), #73
('_struct',[[('m_type',6,-3),('m_name',71,-2),('m_data',34,-1)]]), #74
('_array',[(0,5),10]), #75
('_struct',[[('m_signature',75,-2),('m_toonHandle',20,-1)]]), #76
('_struct',[[('m_gameFullyDownloaded',13,-14),('m_developmentCheatsEnabled',13,-13),('m_testCheatsEnabled',13,-12),('m_multiplayerCheatsEnabled',13,-11),('m_syncChecksummingEnabled',13,-10),('m_isMapToMapTransition',13,-9),('m_debugPauseEnabled',13,-8),('m_useGalaxyAsserts',13,-7),('m_platformMac',13,-6),('m_cameraFollow',13,-5),('m_baseBuildNum',6,-4),('m_buildNum',6,-3),('m_versionFlags',6,-2),('m_hotkeyProfile',46,-1)]]), #77
('_struct',[[]]), #78
('_int',[(0,16)]), #79
('_struct',[[('x',79,-2),('y',79,-1)]]), #80
('_struct',[[('m_which',12,-2),('m_target',80,-1)]]), #81
('_struct',[[('m_fileName',30,-5),('m_automatic',13,-4),('m_overwrite',13,-3),('m_name',9,-2),('m_description',29,-1)]]), #82
('_int',[(1,32)]), #83
('_struct',[[('m_sequence',83,-1)]]), #84
('_null',[]), #85
('_int',[(0,20)]), #86
('_int',[(-2147483648,32)]), #87
('_struct',[[('x',86,-3),('y',86,-2),('z',87,-1)]]), #88
('_struct',[[('m_targetUnitFlags',79,-7),('m_timer',10,-6),('m_tag',6,-5),('m_snapshotUnitLink',79,-4),('m_snapshotControlPlayerId',59,-3),('m_snapshotUpkeepPlayerId',59,-2),('m_snapshotPoint',88,-1)]]), #89
('_choice',[(0,2),{0:('None',85),1:('TargetPoint',88),2:('TargetUnit',89)}]), #90
('_struct',[[('m_target',90,-4),('m_time',87,-3),('m_verb',29,-2),('m_arguments',29,-1)]]), #91
('_struct',[[('m_data',91,-1)]]), #92
('_int',[(0,26)]), #93
('_struct',[[('m_abilLink',79,-3),('m_abilCmdIndex',2,-2),('m_abilCmdData',25,-1)]]), #94
('_optional',[94]), #95
('_choice',[(0,2),{0:('None',85),1:('TargetPoint',88),2:('TargetUnit',89),3:('Data',6)}]), #96
('_optional',[88]), #97
('_struct',[[('m_cmdFlags',93,-7),('m_abil',95,-6),('m_data',96,-5),('m_vector',97,-4),('m_sequence',83,-3),('m_otherUnit',43,-2),('m_unitGroup',43,-1)]]), #98
('_int',[(0,9)]), #99
('_bitarray',[(0,9)]), #100
('_array',[(0,9),99]), #101
('_choice',[(0,2),{0:('None',85),1:('Mask',100),2:('OneIndices',101),3:('ZeroIndices',101)}]), #102
('_struct',[[('m_unitLink',79,-4),('m_subgroupPriority',10,-3),('m_intraSubgroupPriority',10,-2),('m_count',99,-1)]]), #103
('_array',[(0,9),103]), #104
('_struct',[[('m_subgroupIndex',99,-4),('m_removeMask',102,-3),('m_addSubgroups',104,-2),('m_addUnitTags',64,-1)]]), #105
('_struct',[[('m_controlGroupId',1,-2),('m_delta',105,-1)]]), #106
('_struct',[[('m_controlGroupIndex',1,-3),('m_controlGroupUpdate',12,-2),('m_mask',102,-1)]]), #107
('_struct',[[('m_count',99,-6),('m_subgroupCount',99,-5),('m_activeSubgroupIndex',99,-4),('m_unitTagsChecksum',6,-3),('m_subgroupIndicesChecksum',6,-2),('m_subgroupsChecksum',6,-1)]]), #108
('_struct',[[('m_controlGroupId',1,-2),('m_selectionSyncData',108,-1)]]), #109
('_struct',[[('m_chatMessage',29,-1)]]), #110
('_struct',[[('m_speed',12,-1)]]), #111
('_int',[(-128,8)]), #112
('_struct',[[('m_delta',112,-1)]]), #113
('_struct',[[('x',87,-2),('y',87,-1)]]), #114
('_struct',[[('m_point',114,-4),('m_unit',6,-3),('m_pingedMinimap',13,-2),('m_option',87,-1)]]), #115
('_struct',[[('m_verb',29,-2),('m_arguments',29,-1)]]), #116
('_struct',[[('m_alliance',6,-2),('m_control',6,-1)]]), #117
('_struct',[[('m_unitTag',6,-1)]]), #118
('_struct',[[('m_unitTag',6,-2),('m_flags',10,-1)]]), #119
('_struct',[[('m_conversationId',87,-2),('m_replyId',87,-1)]]), #120
('_optional',[20]), #121
('_struct',[[('m_gameUserId',1,-6),('m_observe',24,-5),('m_name',9,-4),('m_toonHandle',121,-3),('m_clanTag',41,-2),('m_clanLogo',42,-1)]]), #122
('_array',[(0,5),122]), #123
('_int',[(0,1)]), #124
('_struct',[[('m_userInfos',123,-2),('m_method',124,-1)]]), #125
('_choice',[(0,3),{0:('None',85),1:('Checked',13),2:('ValueChanged',6),3:('SelectionChanged',87),4:('TextChanged',30),5:('MouseButton',6)}]), #126
('_struct',[[('m_controlId',87,-3),('m_eventType',87,-2),('m_eventData',126,-1)]]), #127
('_struct',[[('m_soundHash',6,-2),('m_length',6,-1)]]), #128
('_array',[(0,7),6]), #129
('_struct',[[('m_soundHash',129,-2),('m_length',129,-1)]]), #130
('_struct',[[('m_syncInfo',130,-1)]]), #131
('_struct',[[('m_queryId',79,-3),('m_lengthMs',6,-2),('m_finishGameLoop',6,-1)]]), #132
('_struct',[[('m_queryId',79,-2),('m_lengthMs',6,-1)]]), #133
('_struct',[[('m_animWaitQueryId',79,-1)]]), #134
('_struct',[[('m_sound',6,-1)]]), #135
('_struct',[[('m_transmissionId',87,-2),('m_thread',6,-1)]]), #136
('_struct',[[('m_transmissionId',87,-1)]]), #137
('_optional',[80]), #138
('_optional',[79]), #139
('_optional',[112]), #140
('_struct',[[('m_target',138,-6),('m_distance',139,-5),('m_pitch',139,-4),('m_yaw',139,-3),('m_reason',140,-2),('m_follow',13,-1)]]), #141
('_struct',[[('m_skipType',124,-1)]]), #142
('_int',[(0,11)]), #143
('_struct',[[('x',143,-2),('y',143,-1)]]), #144
('_struct',[[('m_button',6,-5),('m_down',13,-4),('m_posUI',144,-3),('m_posWorld',88,-2),('m_flags',112,-1)]]), #145
('_struct',[[('m_posUI',144,-3),('m_posWorld',88,-2),('m_flags',112,-1)]]), #146
('_struct',[[('m_achievementLink',79,-1)]]), #147
('_struct',[[('m_hotkey',6,-2),('m_down',13,-1)]]), #148
('_struct',[[('m_abilLink',79,-3),('m_abilCmdIndex',2,-2),('m_state',112,-1)]]), #149
('_struct',[[('m_soundtrack',6,-1)]]), #150
('_struct',[[('m_key',112,-2),('m_flags',112,-1)]]), #151
('_struct',[[('m_error',87,-2),('m_abil',95,-1)]]), #152
('_int',[(0,19)]), #153
('_struct',[[('m_decrementMs',153,-1)]]), #154
('_struct',[[('m_portraitId',87,-1)]]), #155
('_struct',[[('m_functionName',20,-1)]]), #156
('_struct',[[('m_result',87,-1)]]), #157
('_struct',[[('m_gameMenuItemIndex',87,-1)]]), #158
('_int',[(-32768,16)]), #159
('_struct',[[('m_wheelSpin',159,-2),('m_flags',112,-1)]]), #160
('_struct',[[('m_button',79,-1)]]), #161
('_struct',[[('m_cutsceneId',87,-2),('m_bookmarkName',20,-1)]]), #162
('_struct',[[('m_cutsceneId',87,-1)]]), #163
('_struct',[[('m_cutsceneId',87,-3),('m_conversationLine',20,-2),('m_altConversationLine',20,-1)]]), #164
('_struct',[[('m_cutsceneId',87,-2),('m_conversationLine',20,-1)]]), #165
('_struct',[[('m_leaveReason',1,-1)]]), #166
('_struct',[[('m_observe',24,-7),('m_name',9,-6),('m_toonHandle',121,-5),('m_clanTag',41,-4),('m_clanLogo',42,-3),('m_hijack',13,-2),('m_hijackCloneGameUserId',59,-1)]]), #167
('_optional',[83]), #168
('_struct',[[('m_state',24,-2),('m_sequence',168,-1)]]), #169
('_struct',[[('m_sequence',168,-2),('m_target',88,-1)]]), #170
('_struct',[[('m_sequence',168,-2),('m_target',89,-1)]]), #171
('_struct',[[('m_catalog',10,-4),('m_entry',79,-3),('m_field',9,-2),('m_value',9,-1)]]), #172
('_struct',[[('m_index',6,-1)]]), #173
('_struct',[[('m_shown',13,-1)]]), #174
('_struct',[[('m_recipient',12,-2),('m_string',30,-1)]]), #175
('_struct',[[('m_recipient',12,-2),('m_point',114,-1)]]), #176
('_struct',[[('m_progress',87,-1)]]), #177
('_struct',[[('m_status',24,-1)]]), #178
('_struct',[[('m_abilLink',79,-3),('m_abilCmdIndex',2,-2),('m_buttonLink',79,-1)]]), #179
('_struct',[[('m_behaviorLink',79,-2),('m_buttonLink',79,-1)]]), #180
('_choice',[(0,2),{0:('None',85),1:('Ability',179),2:('Behavior',180),3:('Vitals',159)}]), #181
('_struct',[[('m_announcement',181,-3),('m_otherUnitTag',6,-2),('m_unitTag',6,-1)]]), #182
('_struct',[[('m_unitTagIndex',6,0),('m_unitTagRecycle',6,1),('m_unitTypeName',29,2),('m_controlPlayerId',1,3),('m_upkeepPlayerId',1,4),('m_x',10,5),('m_y',10,6)]]), #183
('_struct',[[('m_unitTagIndex',6,0),('m_unitTagRecycle',6,1),('m_x',10,2),('m_y',10,3)]]), #184
('_struct',[[('m_unitTagIndex',6,0),('m_unitTagRecycle',6,1),('m_killerPlayerId',59,2),('m_x',10,3),('m_y',10,4),('m_killerUnitTagIndex',43,5),('m_killerUnitTagRecycle',43,6)]]), #185
('_struct',[[('m_unitTagIndex',6,0),('m_unitTagRecycle',6,1),('m_controlPlayerId',1,2),('m_upkeepPlayerId',1,3)]]), #186
('_struct',[[('m_unitTagIndex',6,0),('m_unitTagRecycle',6,1),('m_unitTypeName',29,2)]]), #187
('_struct',[[('m_playerId',1,0),('m_upgradeTypeName',29,1),('m_count',87,2)]]), #188
('_struct',[[('m_unitTagIndex',6,0),('m_unitTagRecycle',6,1)]]), #189
('_array',[(0,10),87]), #190
('_struct',[[('m_firstUnitIndex',6,0),('m_items',190,1)]]), #191
('_struct',[[('m_playerId',1,0),('m_type',6,1),('m_userId',43,2),('m_slotId',43,3)]]), #192
('_struct',[[('m_key',29,0)]]), #193
('_struct',[[('__parent',193,0),('m_value',29,1)]]), #194
('_array',[(0,6),194]), #195
('_optional',[195]), #196
('_struct',[[('__parent',193,0),('m_value',87,1)]]), #197
('_array',[(0,6),197]), #198
('_optional',[198]), #199
('_struct',[[('m_eventName',29,0),('m_stringData',196,1),('m_intData',199,2),('m_fixedData',199,3)]]), #200
('_struct',[[('m_value',6,0),('m_time',6,1)]]), #201
('_array',[(0,6),201]), #202
('_array',[(0,5),202]), #203
('_struct',[[('m_name',29,0),('m_values',203,1)]]), #204
('_array',[(0,21),204]), #205
('_struct',[[('m_instanceList',205,0)]]), #206
]
# Map from protocol NNet.Game.*Event eventid to (typeid, name)
game_event_types = {
5: (78, 'NNet.Game.SUserFinishedLoadingSyncEvent'),
7: (77, 'NNet.Game.SUserOptionsEvent'),
9: (70, 'NNet.Game.SBankFileEvent'),
10: (72, 'NNet.Game.SBankSectionEvent'),
11: (73, 'NNet.Game.SBankKeyEvent'),
12: (74, 'NNet.Game.SBankValueEvent'),
13: (76, 'NNet.Game.SBankSignatureEvent'),
14: (81, 'NNet.Game.SCameraSaveEvent'),
21: (82, 'NNet.Game.SSaveGameEvent'),
22: (78, 'NNet.Game.SSaveGameDoneEvent'),
23: (78, 'NNet.Game.SLoadGameDoneEvent'),
25: (84, 'NNet.Game.SCommandManagerResetEvent'),
26: (92, 'NNet.Game.SGameCheatEvent'),
27: (98, 'NNet.Game.SCmdEvent'),
28: (106, 'NNet.Game.SSelectionDeltaEvent'),
29: (107, 'NNet.Game.SControlGroupUpdateEvent'),
30: (109, 'NNet.Game.SSelectionSyncCheckEvent'),
32: (110, 'NNet.Game.STriggerChatMessageEvent'),
34: (111, 'NNet.Game.SSetAbsoluteGameSpeedEvent'),
35: (113, 'NNet.Game.SAddAbsoluteGameSpeedEvent'),
36: (115, 'NNet.Game.STriggerPingEvent'),
37: (116, 'NNet.Game.SBroadcastCheatEvent'),
38: (117, 'NNet.Game.SAllianceEvent'),
39: (118, 'NNet.Game.SUnitClickEvent'),
40: (119, 'NNet.Game.SUnitHighlightEvent'),
41: (120, 'NNet.Game.STriggerReplySelectedEvent'),
43: (125, 'NNet.Game.SHijackReplayGameEvent'),
44: (78, 'NNet.Game.STriggerSkippedEvent'),
45: (128, 'NNet.Game.STriggerSoundLengthQueryEvent'),
46: (135, 'NNet.Game.STriggerSoundOffsetEvent'),
47: (136, 'NNet.Game.STriggerTransmissionOffsetEvent'),
48: (137, 'NNet.Game.STriggerTransmissionCompleteEvent'),
49: (141, 'NNet.Game.SCameraUpdateEvent'),
50: (78, 'NNet.Game.STriggerAbortMissionEvent'),
55: (127, 'NNet.Game.STriggerDialogControlEvent'),
56: (131, 'NNet.Game.STriggerSoundLengthSyncEvent'),
57: (142, 'NNet.Game.STriggerConversationSkippedEvent'),
58: (145, 'NNet.Game.STriggerMouseClickedEvent'),
59: (146, 'NNet.Game.STriggerMouseMovedEvent'),
60: (147, 'NNet.Game.SAchievementAwardedEvent'),
61: (148, 'NNet.Game.STriggerHotkeyPressedEvent'),
62: (149, 'NNet.Game.STriggerTargetModeUpdateEvent'),
64: (150, 'NNet.Game.STriggerSoundtrackDoneEvent'),
66: (151, 'NNet.Game.STriggerKeyPressedEvent'),
67: (156, 'NNet.Game.STriggerMovieFunctionEvent'),
76: (152, 'NNet.Game.STriggerCommandErrorEvent'),
86: (78, 'NNet.Game.STriggerMovieStartedEvent'),
87: (78, 'NNet.Game.STriggerMovieFinishedEvent'),
88: (154, 'NNet.Game.SDecrementGameTimeRemainingEvent'),
89: (155, 'NNet.Game.STriggerPortraitLoadedEvent'),
90: (157, 'NNet.Game.STriggerCustomDialogDismissedEvent'),
91: (158, 'NNet.Game.STriggerGameMenuItemSelectedEvent'),
92: (160, 'NNet.Game.STriggerMouseWheelEvent'),
95: (161, 'NNet.Game.STriggerButtonPressedEvent'),
96: (78, 'NNet.Game.STriggerGameCreditsFinishedEvent'),
97: (162, 'NNet.Game.STriggerCutsceneBookmarkFiredEvent'),
98: (163, 'NNet.Game.STriggerCutsceneEndSceneFiredEvent'),
99: (164, 'NNet.Game.STriggerCutsceneConversationLineEvent'),
100: (165, 'NNet.Game.STriggerCutsceneConversationLineMissingEvent'),
101: (166, 'NNet.Game.SGameUserLeaveEvent'),
102: (167, 'NNet.Game.SGameUserJoinEvent'),
103: (169, 'NNet.Game.SCommandManagerStateEvent'),
104: (170, 'NNet.Game.SCmdUpdateTargetPointEvent'),
105: (171, 'NNet.Game.SCmdUpdateTargetUnitEvent'),
106: (132, 'NNet.Game.STriggerAnimLengthQueryByNameEvent'),
107: (133, 'NNet.Game.STriggerAnimLengthQueryByPropsEvent'),
108: (134, 'NNet.Game.STriggerAnimOffsetEvent'),
109: (172, 'NNet.Game.SCatalogModifyEvent'),
110: (173, 'NNet.Game.SHeroTalentTreeSelectedEvent'),
111: (78, 'NNet.Game.STriggerProfilerLoggingFinishedEvent'),
112: (174, 'NNet.Game.SHeroTalentTreeSelectionPanelToggledEvent'),
}
# The typeid of the NNet.Game.EEventId enum.
game_eventid_typeid = 0
# Map from protocol NNet.Game.*Message eventid to (typeid, name)
message_event_types = {
0: (175, 'NNet.Game.SChatMessage'),
1: (176, 'NNet.Game.SPingMessage'),
2: (177, 'NNet.Game.SLoadingProgressMessage'),
3: (78, 'NNet.Game.SServerPingMessage'),
4: (178, 'NNet.Game.SReconnectNotifyMessage'),
5: (182, 'NNet.Game.SPlayerAnnounceMessage'),
}
# The typeid of the NNet.Game.EMessageId enum.
message_eventid_typeid = 1
# Map from protocol NNet.Replay.Tracker.*Event eventid to (typeid, name)
tracker_event_types = {
1: (183, 'NNet.Replay.Tracker.SUnitBornEvent'),
2: (185, 'NNet.Replay.Tracker.SUnitDiedEvent'),
3: (186, 'NNet.Replay.Tracker.SUnitOwnerChangeEvent'),
4: (187, 'NNet.Replay.Tracker.SUnitTypeChangeEvent'),
5: (188, 'NNet.Replay.Tracker.SUpgradeEvent'),
6: (183, 'NNet.Replay.Tracker.SUnitInitEvent'),
7: (189, 'NNet.Replay.Tracker.SUnitDoneEvent'),
8: (191, 'NNet.Replay.Tracker.SUnitPositionsEvent'),
9: (192, 'NNet.Replay.Tracker.SPlayerSetupEvent'),
10: (200, 'NNet.Replay.Tracker.SStatGameEvent'),
11: (206, 'NNet.Replay.Tracker.SScoreResultEvent'),
12: (184, 'NNet.Replay.Tracker.SUnitRevivedEvent'),
}
# The typeid of the NNet.Replay.Tracker.EEventId enum.
tracker_eventid_typeid = 2
# The typeid of NNet.SVarUint32 (the type used to encode gameloop deltas).
svaruint32_typeid = 7
# The typeid of NNet.Replay.SGameUserId (the type used to encode player ids).
replay_userid_typeid = 8
# The typeid of NNet.Replay.SHeader (the type used to store replay game version and length).
replay_header_typeid = 18
# The typeid of NNet.Game.SDetails (the type used to store overall replay details).
game_details_typeid = 40
# The typeid of NNet.Replay.SInitData (the type used to store the initial lobby).
replay_initdata_typeid = 69
def _varuint32_value(value):
# Returns the numeric value from a SVarUint32 instance.
for k,v in value.iteritems():
return v
return 0
def _decode_event_stream(decoder, eventid_typeid, event_types, decode_user_id):
# Decodes events prefixed with a gameloop and possibly userid
gameloop = 0
while not decoder.done():
start_bits = decoder.used_bits()
# decode the gameloop delta before each event
delta = _varuint32_value(decoder.instance(svaruint32_typeid))
gameloop += delta
# decode the userid before each event
if decode_user_id:
userid = decoder.instance(replay_userid_typeid)
# decode the event id
eventid = decoder.instance(eventid_typeid)
typeid, typename = event_types.get(eventid, (None, None))
if typeid is None:
raise CorruptedError('eventid(%d) at %s' % (eventid, decoder))
# decode the event struct instance
event = decoder.instance(typeid)
event['_event'] = typename
event['_eventid'] = eventid
# insert gameloop and userid
event['_gameloop'] = gameloop
if decode_user_id:
event['_userid'] = userid
# the next event is byte aligned
decoder.byte_align()
# insert bits used in stream
event['_bits'] = decoder.used_bits() - start_bits
yield event
def decode_replay_game_events(contents):
"""Decodes and yields each game event from the contents byte string."""
decoder = BitPackedDecoder(contents, typeinfos)
for event in _decode_event_stream(decoder,
game_eventid_typeid,
game_event_types,
decode_user_id=True):
yield event
def decode_replay_message_events(contents):
"""Decodes and yields each message event from the contents byte string."""
decoder = BitPackedDecoder(contents, typeinfos)
for event in _decode_event_stream(decoder,
message_eventid_typeid,
message_event_types,
decode_user_id=True):
yield event
def decode_replay_tracker_events(contents):
"""Decodes and yields each tracker event from the contents byte string."""
decoder = VersionedDecoder(contents, typeinfos)
for event in _decode_event_stream(decoder,
tracker_eventid_typeid,
tracker_event_types,
decode_user_id=False):
yield event
def decode_replay_header(contents):
    """Decodes and returns the replay header from the contents byte string."""
decoder = VersionedDecoder(contents, typeinfos)
return decoder.instance(replay_header_typeid)
def decode_replay_details(contents):
"""Decodes and returns the game details from the contents byte string."""
decoder = VersionedDecoder(contents, typeinfos)
return decoder.instance(game_details_typeid)
def decode_replay_initdata(contents):
    """Decodes and returns the replay init data from the contents byte string."""
decoder = BitPackedDecoder(contents, typeinfos)
return decoder.instance(replay_initdata_typeid)
def decode_replay_attributes_events(contents):
"""Decodes and yields each attribute from the contents byte string."""
buffer = BitPackedBuffer(contents, 'little')
attributes = {}
if not buffer.done():
attributes['source'] = buffer.read_bits(8)
attributes['mapNamespace'] = buffer.read_bits(32)
count = buffer.read_bits(32)
attributes['scopes'] = {}
while not buffer.done():
value = {}
value['namespace'] = buffer.read_bits(32)
value['attrid'] = attrid = buffer.read_bits(32)
scope = buffer.read_bits(8)
value['value'] = buffer.read_aligned_bytes(4)[::-1].strip('\x00')
        if scope not in attributes['scopes']:
            attributes['scopes'][scope] = {}
        if attrid not in attributes['scopes'][scope]:
            attributes['scopes'][scope][attrid] = []
attributes['scopes'][scope][attrid].append(value)
return attributes
def unit_tag(unitTagIndex, unitTagRecycle):
return (unitTagIndex << 18) + unitTagRecycle
def unit_tag_index(unitTag):
return (unitTag >> 18) & 0x00003fff
def unit_tag_recycle(unitTag):
return (unitTag) & 0x0003ffff | unknown | codeparrot/codeparrot-clean | ||
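The three tag helpers that close the module implement a small bit-packing scheme: a 14-bit unit index lives in the high bits and an 18-bit recycle counter in the low 18 bits. A self-contained round-trip sketch, with the behavior copied from the functions above and parameters renamed only to avoid shadowing:

```python
def unit_tag(index, recycle):
    # Pack: the index occupies bits 18..31, the recycle counter bits 0..17.
    return (index << 18) + recycle


def unit_tag_index(tag):
    # Unpack the 14-bit index field.
    return (tag >> 18) & 0x00003FFF


def unit_tag_recycle(tag):
    # Unpack the 18-bit recycle field.
    return tag & 0x0003FFFF
```

Because the two fields do not overlap, packing then unpacking is lossless for any index below 2**14 and any recycle counter below 2**18.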
from direct.directnotify import DirectNotifyGlobal
from direct.fsm import StateData
import CogHQLoader, MintInterior
from toontown.toonbase import ToontownGlobals
from direct.gui import DirectGui
from toontown.toonbase import TTLocalizer
from toontown.toon import Toon
from direct.fsm import State
import CashbotHQExterior
import CashbotHQBossBattle
from pandac.PandaModules import DecalEffect
class CashbotCogHQLoader(CogHQLoader.CogHQLoader):
notify = DirectNotifyGlobal.directNotify.newCategory('CashbotCogHQLoader')
def __init__(self, hood, parentFSMState, doneEvent):
CogHQLoader.CogHQLoader.__init__(self, hood, parentFSMState, doneEvent)
self.fsm.addState(State.State('mintInterior', self.enterMintInterior, self.exitMintInterior, ['quietZone', 'cogHQExterior']))
for stateName in ['start', 'cogHQExterior', 'quietZone']:
state = self.fsm.getStateNamed(stateName)
state.addTransition('mintInterior')
self.musicFile = 'phase_9/audio/bgm/encntr_suit_HQ_nbrhood.ogg'
self.cogHQExteriorModelPath = 'phase_10/models/cogHQ/CashBotShippingStation'
self.cogHQLobbyModelPath = 'phase_10/models/cogHQ/VaultLobby'
self.geom = None
return
def load(self, zoneId):
CogHQLoader.CogHQLoader.load(self, zoneId)
Toon.loadCashbotHQAnims()
def unloadPlaceGeom(self):
if self.geom:
self.geom.removeNode()
self.geom = None
CogHQLoader.CogHQLoader.unloadPlaceGeom(self)
return
def loadPlaceGeom(self, zoneId):
self.notify.info('loadPlaceGeom: %s' % zoneId)
zoneId = zoneId - zoneId % 100
if zoneId == ToontownGlobals.CashbotHQ:
self.geom = loader.loadModel(self.cogHQExteriorModelPath)
ddLinkTunnel = self.geom.find('**/LinkTunnel1')
ddLinkTunnel.setName('linktunnel_dl_9252_DNARoot')
locator = self.geom.find('**/sign_origin')
backgroundGeom = self.geom.find('**/EntranceFrameFront')
backgroundGeom.node().setEffect(DecalEffect.make())
signText = DirectGui.OnscreenText(text=TTLocalizer.DonaldsDreamland[-1], font=ToontownGlobals.getSuitFont(), scale=3, fg=(0.87, 0.87, 0.87, 1), mayChange=False, parent=backgroundGeom)
signText.setPosHpr(locator, 0, 0, 0, 0, 0, 0)
signText.setDepthWrite(0)
elif zoneId == ToontownGlobals.CashbotLobby:
if base.config.GetBool('want-qa-regression', 0):
self.notify.info('QA-REGRESSION: COGHQ: Visit CashbotLobby')
self.geom = loader.loadModel(self.cogHQLobbyModelPath)
else:
self.notify.warning('loadPlaceGeom: unclassified zone %s' % zoneId)
CogHQLoader.CogHQLoader.loadPlaceGeom(self, zoneId)
def unload(self):
CogHQLoader.CogHQLoader.unload(self)
Toon.unloadCashbotHQAnims()
def enterMintInterior(self, requestStatus):
self.placeClass = MintInterior.MintInterior
self.mintId = requestStatus['mintId']
self.enterPlace(requestStatus)
def exitMintInterior(self):
self.exitPlace()
self.placeClass = None
del self.mintId
return
def getExteriorPlaceClass(self):
return CashbotHQExterior.CashbotHQExterior
def getBossPlaceClass(self):
return CashbotHQBossBattle.CashbotHQBossBattle | unknown | codeparrot/codeparrot-clean | ||
"""
The Netio switch component.
For more details about this platform, please refer to the documentation at
https://home-assistant.io/components/switch.netio/
"""
import logging
from collections import namedtuple
from datetime import timedelta
import voluptuous as vol
from homeassistant.core import callback
from homeassistant import util
from homeassistant.components.http import HomeAssistantView
from homeassistant.const import (
CONF_HOST, CONF_PORT, CONF_USERNAME, CONF_PASSWORD,
EVENT_HOMEASSISTANT_STOP, STATE_ON)
from homeassistant.components.switch import (SwitchDevice, PLATFORM_SCHEMA)
import homeassistant.helpers.config_validation as cv
REQUIREMENTS = ['pynetio==0.1.6']
_LOGGER = logging.getLogger(__name__)
ATTR_CURRENT_POWER_MWH = 'current_power_mwh'
ATTR_CURRENT_POWER_W = 'current_power_w'
ATTR_START_DATE = 'start_date'
ATTR_TODAY_MWH = 'today_mwh'
ATTR_TOTAL_CONSUMPTION_KWH = 'total_energy_kwh'
CONF_OUTLETS = 'outlets'
DEFAULT_PORT = 1234
DEFAULT_USERNAME = 'admin'
DEPENDENCIES = ['http']
Device = namedtuple('device', ['netio', 'entities'])
DEVICES = {}
MIN_TIME_BETWEEN_SCANS = timedelta(seconds=10)
REQ_CONF = [CONF_HOST, CONF_OUTLETS]
URL_API_NETIO_EP = '/api/netio/{host}'
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend({
vol.Required(CONF_HOST): cv.string,
vol.Required(CONF_PORT, default=DEFAULT_PORT): cv.port,
vol.Required(CONF_USERNAME, default=DEFAULT_USERNAME): cv.string,
vol.Required(CONF_PASSWORD): cv.string,
vol.Optional(CONF_OUTLETS): {cv.string: cv.string},
})
def setup_platform(hass, config, add_devices, discovery_info=None):
"""Configure the Netio platform."""
from pynetio import Netio
host = config.get(CONF_HOST)
username = config.get(CONF_USERNAME)
password = config.get(CONF_PASSWORD)
port = config.get(CONF_PORT)
if len(DEVICES) == 0:
hass.http.register_view(NetioApiView)
dev = Netio(host, port, username, password)
DEVICES[host] = Device(dev, [])
# Throttle the update for all NetioSwitches of one Netio
dev.update = util.Throttle(MIN_TIME_BETWEEN_SCANS)(dev.update)
for key in config[CONF_OUTLETS]:
switch = NetioSwitch(
DEVICES[host].netio, key, config[CONF_OUTLETS][key])
DEVICES[host].entities.append(switch)
add_devices(DEVICES[host].entities)
hass.bus.listen_once(EVENT_HOMEASSISTANT_STOP, dispose)
return True
def dispose(event):
"""Close connections to Netio Devices."""
for _, value in DEVICES.items():
value.netio.stop()
class NetioApiView(HomeAssistantView):
"""WSGI handler class."""
url = URL_API_NETIO_EP
name = 'api:netio'
@callback
def get(self, request, host):
"""Request handler."""
hass = request.app['hass']
data = request.GET
states, consumptions, cumulated_consumptions, start_dates = \
[], [], [], []
for i in range(1, 5):
out = 'output%d' % i
states.append(data.get('%s_state' % out) == STATE_ON)
consumptions.append(float(data.get('%s_consumption' % out, 0)))
cumulated_consumptions.append(
float(data.get('%s_cumulatedConsumption' % out, 0)) / 1000)
start_dates.append(data.get('%s_consumptionStart' % out, ""))
_LOGGER.debug('%s: %s, %s, %s since %s', host, states,
consumptions, cumulated_consumptions, start_dates)
ndev = DEVICES[host].netio
ndev.consumptions = consumptions
ndev.cumulated_consumptions = cumulated_consumptions
ndev.states = states
ndev.start_dates = start_dates
for dev in DEVICES[host].entities:
hass.async_add_job(dev.async_update_ha_state())
return self.json(True)
class NetioSwitch(SwitchDevice):
"""Provide a netio linked switch."""
def __init__(self, netio, outlet, name):
"""Defined to handle throttle."""
self._name = name
self.outlet = outlet
self.netio = netio
@property
def name(self):
"""Netio device's name."""
return self._name
@property
def available(self):
"""Return True if entity is available."""
return not hasattr(self, 'telnet')
def turn_on(self):
"""Turn switch on."""
self._set(True)
def turn_off(self):
"""Turn switch off."""
self._set(False)
def _set(self, value):
val = list('uuuu')
val[self.outlet - 1] = '1' if value else '0'
self.netio.get('port list %s' % ''.join(val))
self.netio.states[self.outlet - 1] = value
self.schedule_update_ha_state()
@property
def is_on(self):
"""Return switch's status."""
return self.netio.states[self.outlet - 1]
def update(self):
"""Called by Home Assistant."""
self.netio.update()
@property
def state_attributes(self):
"""Return optional state attributes."""
return {
ATTR_CURRENT_POWER_W: self.current_power_w,
ATTR_TOTAL_CONSUMPTION_KWH: self.cumulated_consumption_kwh,
ATTR_START_DATE: self.start_date.split('|')[0]
}
@property
def current_power_w(self):
"""Return actual power."""
return self.netio.consumptions[self.outlet - 1]
@property
def cumulated_consumption_kwh(self):
"""Total enerygy consumption since start_date."""
return self.netio.cumulated_consumptions[self.outlet - 1]
@property
def start_date(self):
"""Point in time when the energy accumulation started."""
return self.netio.start_dates[self.outlet - 1] | unknown | codeparrot/codeparrot-clean | ||
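The ``port list`` command built in ``NetioSwitch._set`` uses a four-character mask: ``u`` leaves an outlet unchanged, ``0``/``1`` switches it. The construction can be sketched in isolation (outlets are 1-based):

```python
# Sketch of the Netio "port list" command mask used by NetioSwitch._set:
# 'u' leaves an outlet untouched; '0'/'1' switches it off/on.
def port_command(outlet, on):
    val = list('uuuu')
    val[outlet - 1] = '1' if on else '0'
    return 'port list %s' % ''.join(val)

assert port_command(2, True) == 'port list u1uu'
```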
from django.views.generic import TemplateView
from django.http import HttpResponseRedirect
from models import SocialFriendList
from utils import setting
REDIRECT_IF_NO_ACCOUNT = setting('SF_REDIRECT_IF_NO_SOCIAL_ACCOUNT_FOUND', False)
REDIRECT_URL = setting('SF_REDIRECT_URL', "/")
class FriendListView(TemplateView):
"""
displays existing social friends of the current user
"""
template_name = "social_friends/friend_list.html"
def get(self, request, *args, **kwargs):
"""prepare the social friend model"""
provider = kwargs.pop('provider')
# Get the social auth connections
self.social_auths = request.user.social_auth.all()
# if the user did not connect any social accounts, no need to continue
if self.social_auths.count() == 0:
if REDIRECT_IF_NO_ACCOUNT:
return HttpResponseRedirect(REDIRECT_URL)
return super(FriendListView, self).get(request)
# for each social network, get or create social_friend_list
self.social_friend_lists = SocialFriendList.objects.get_or_create_with_social_auths(self.social_auths)
return super(FriendListView, self).get(request)
def get_context_data(self, **kwargs):
"""
        checks if there is a SocialFriend model record for the user;
        if not, attempts to create one.
        If all fail, redirects to the next page.
"""
context = super(FriendListView, self).get_context_data(**kwargs)
friends = []
for friend_list in self.social_friend_lists:
fs = friend_list.existing_social_friends()
for f in fs:
friends.append(f)
# Add friends to context
context['friends'] = friends
connected_providers = []
for sa in self.social_auths:
connected_providers.append(sa.provider)
context['connected_providers'] = connected_providers
return context | unknown | codeparrot/codeparrot-clean | ||
/*
* Copyright 2010-2024 JetBrains s.r.o. and Kotlin Programming Language contributors.
* Use of this source code is governed by the Apache 2.0 license that can be found in the license/LICENSE.txt file.
*/
package org.jetbrains.kotlin.analysis.api.standalone.fir.test.cases.generated.cases.components.klibSourceFileProvider;
import com.intellij.testFramework.TestDataPath;
import org.jetbrains.kotlin.test.util.KtTestUtil;
import org.jetbrains.annotations.NotNull;
import org.jetbrains.kotlin.analysis.api.standalone.fir.test.configurators.AnalysisApiFirStandaloneModeTestConfiguratorFactory;
import org.jetbrains.kotlin.analysis.test.framework.test.configurators.AnalysisApiTestConfiguratorFactoryData;
import org.jetbrains.kotlin.analysis.test.framework.test.configurators.AnalysisApiTestConfigurator;
import org.jetbrains.kotlin.analysis.test.framework.test.configurators.TestModuleKind;
import org.jetbrains.kotlin.analysis.test.framework.test.configurators.FrontendKind;
import org.jetbrains.kotlin.analysis.test.framework.test.configurators.AnalysisSessionMode;
import org.jetbrains.kotlin.analysis.test.framework.test.configurators.AnalysisApiMode;
import org.jetbrains.kotlin.analysis.api.impl.base.test.cases.components.klibSourceFileProvider.AbstractGetKlibSourceFileNameTest;
import org.jetbrains.kotlin.test.TestMetadata;
import org.junit.jupiter.api.Test;
import java.io.File;
import java.util.regex.Pattern;
/** This class is generated by {@link org.jetbrains.kotlin.generators.tests.analysis.api.GenerateAnalysisApiTestsKt}. DO NOT MODIFY MANUALLY */
@SuppressWarnings("all")
@TestMetadata("analysis/analysis-api/testData/components/klibSourceFileNameProvider/getKlibSourceFileName")
@TestDataPath("$PROJECT_ROOT")
public class FirStandaloneNormalAnalysisSourceModuleGetKlibSourceFileNameTestGenerated extends AbstractGetKlibSourceFileNameTest {
@NotNull
@Override
public AnalysisApiTestConfigurator getConfigurator() {
return AnalysisApiFirStandaloneModeTestConfiguratorFactory.INSTANCE.createConfigurator(
new AnalysisApiTestConfiguratorFactoryData(
FrontendKind.Fir,
TestModuleKind.Source,
AnalysisSessionMode.Normal,
AnalysisApiMode.Standalone
)
);
}
@Test
public void testAllFilesPresentInGetKlibSourceFileName() {
KtTestUtil.assertAllTestsPresentByMetadataWithExcluded(this.getClass(), new File("analysis/analysis-api/testData/components/klibSourceFileNameProvider/getKlibSourceFileName"), Pattern.compile("^(.+)\\.kt$"), null, true);
}
@Test
@TestMetadata("class.kt")
public void testClass() {
runTest("analysis/analysis-api/testData/components/klibSourceFileNameProvider/getKlibSourceFileName/class.kt");
}
@Test
@TestMetadata("topLevelFunction.kt")
public void testTopLevelFunction() {
runTest("analysis/analysis-api/testData/components/klibSourceFileNameProvider/getKlibSourceFileName/topLevelFunction.kt");
}
@Test
@TestMetadata("topLevelProperty.kt")
public void testTopLevelProperty() {
runTest("analysis/analysis-api/testData/components/klibSourceFileNameProvider/getKlibSourceFileName/topLevelProperty.kt");
}
} | java | github | https://github.com/JetBrains/kotlin | analysis/analysis-api-standalone/tests-gen/org/jetbrains/kotlin/analysis/api/standalone/fir/test/cases/generated/cases/components/klibSourceFileProvider/FirStandaloneNormalAnalysisSourceModuleGetKlibSourceFileNameTestGenerated.java |
#
# tohtml.py
#
# A sub-class container of the `Formatter' class to produce HTML.
#
# Copyright 2002-2015 by
# David Turner.
#
# This file is part of the FreeType project, and may only be used,
# modified, and distributed under the terms of the FreeType project
# license, LICENSE.TXT. By continuing to use, modify, or distribute
# this file you indicate that you have read the license and
# understand and accept it fully.
# The parent class is contained in file `formatter.py'.
from sources import *
from content import *
from formatter import *
import time
# The following strings define the HTML header used by all generated pages.
html_header_1 = """\
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>\
"""
html_header_2 = """\
API Reference</title>
<style type="text/css">
a:link { color: #0000EF; }
a:visited { color: #51188E; }
a:hover { color: #FF0000; }
body { font-family: Verdana, Geneva, Arial, Helvetica, serif;
color: #000000;
background: #FFFFFF;
width: 87%;
margin: auto; }
div.section { width: 75%;
margin: auto; }
div.section hr { margin: 4ex 0 1ex 0; }
div.section h4 { background-color: #EEEEFF;
font-size: medium;
font-style: oblique;
font-weight: bold;
margin: 3ex 0 1.5ex 9%;
padding: 0.3ex 0 0.3ex 1%; }
div.section p { margin: 1.5ex 0 1.5ex 10%; }
div.section pre { margin: 3ex 0 3ex 9%;
background-color: #D6E8FF;
padding: 2ex 0 2ex 1%; }
div.section table.fields { width: 90%;
margin: 1.5ex 0 1.5ex 10%; }
div.section table.toc { width: 95%;
margin: 1.5ex 0 1.5ex 5%; }
div.timestamp { text-align: center;
font-size: 69%;
margin: 1.5ex 0 1.5ex 0; }
h1 { text-align: center; }
h3 { font-size: medium;
margin: 4ex 0 1.5ex 0; }
p { text-align: justify; }
pre.colored { color: blue; }
span.keyword { font-family: monospace;
text-align: left;
white-space: pre;
color: darkblue; }
table.fields td.val { font-weight: bold;
text-align: right;
width: 30%;
vertical-align: baseline;
padding: 1ex 1em 1ex 0; }
table.fields td.desc { vertical-align: baseline;
padding: 1ex 0 1ex 1em; }
table.fields td.desc p:first-child { margin: 0; }
table.fields td.desc p { margin: 1.5ex 0 0 0; }
table.index { margin: 6ex auto 6ex auto;
border: 0;
border-collapse: separate;
border-spacing: 1em 0.3ex; }
table.index tr { padding: 0; }
table.index td { padding: 0; }
table.index-toc-link { width: 100%;
border: 0;
border-spacing: 0;
margin: 1ex 0 1ex 0; }
table.index-toc-link td.left { padding: 0 0.5em 0 0.5em;
font-size: 83%;
text-align: left; }
table.index-toc-link td.middle { padding: 0 0.5em 0 0.5em;
font-size: 83%;
text-align: center; }
table.index-toc-link td.right { padding: 0 0.5em 0 0.5em;
font-size: 83%;
text-align: right; }
table.synopsis { margin: 6ex auto 6ex auto;
border: 0;
border-collapse: separate;
border-spacing: 2em 0.6ex; }
table.synopsis tr { padding: 0; }
table.synopsis td { padding: 0; }
table.toc td.link { width: 30%;
text-align: right;
vertical-align: baseline;
padding: 1ex 1em 1ex 0; }
table.toc td.desc { vertical-align: baseline;
padding: 1ex 0 1ex 1em;
text-align: left; }
table.toc td.desc p:first-child { margin: 0;
text-align: left; }
table.toc td.desc p { margin: 1.5ex 0 0 0;
text-align: left; }
</style>
</head>
<body>
"""
html_header_3l = """
<table class="index-toc-link"><tr><td class="left">[<a href="\
"""
html_header_3r = """
<table class="index-toc-link"><tr><td class="right">[<a href="\
"""
html_header_4 = """\
">Index</a>]</td><td class="right">[<a href="\
"""
html_header_5t = """\
">TOC</a>]</td></tr></table>
<h1>\
"""
html_header_5i = """\
">Index</a>]</td></tr></table>
<h1>\
"""
html_header_6 = """\
API Reference</h1>
"""
# The HTML footer used by all generated pages.
html_footer = """\
</body>
</html>\
"""
# The header and footer used for each section.
section_title_header1 = '<h1 id="'
section_title_header2 = '">'
section_title_footer = "</h1>"
# The header and footer used for code segments.
code_header = '<pre class="colored">'
code_footer = '</pre>'
# Paragraph header and footer.
para_header = "<p>"
para_footer = "</p>"
# Block header and footer.
block_header = '<div class="section">'
block_footer_start = """\
<hr>
<table class="index-toc-link"><tr><td class="left">[<a href="\
"""
block_footer_middle = """\
">Index</a>]</td>\
<td class="middle">[<a href="#">Top</a>]</td>\
<td class="right">[<a href="\
"""
block_footer_end = """\
">TOC</a>]</td></tr></table></div>
"""
# Description header/footer.
description_header = ""
description_footer = ""
# Marker header/inter/footer combination.
marker_header = "<h4>"
marker_inter = "</h4>"
marker_footer = ""
# Header location header/footer.
header_location_header = "<p>"
header_location_footer = "</p>"
# Source code extracts header/footer.
source_header = "<pre>"
source_footer = "</pre>"
# Chapter header/inter/footer.
chapter_header = """\
<div class="section">
<h2>\
"""
chapter_inter = '</h2>'
chapter_footer = '</div>'
# Index footer.
index_footer_start = """\
<hr>
<table class="index-toc-link"><tr><td class="right">[<a href="\
"""
index_footer_end = """\
">TOC</a>]</td></tr></table>
"""
# TOC footer.
toc_footer_start = """\
<hr>
<table class="index-toc-link"><tr><td class="left">[<a href="\
"""
toc_footer_end = """\
">Index</a>]</td></tr></table>
"""
# Source language keyword coloration and styling.
keyword_prefix = '<span class="keyword">'
keyword_suffix = '</span>'
section_synopsis_header = '<h2>Synopsis</h2>'
section_synopsis_footer = ''
# Translate a single line of source to HTML.  This converts `<', `>', and
# `&' into `&lt;', `&gt;', and `&amp;'.
#
def html_quote( line ):
    result = string.replace( line, "&", "&amp;" )
    result = string.replace( result, "<", "&lt;" )
    result = string.replace( result, ">", "&gt;" )
return result
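The replacement order in `html_quote' matters: `&' must be escaped first, otherwise the `&' inside the freshly inserted `&lt;'/`&gt;' entities would be escaped again. A standalone check, using modern `str.replace' rather than the Python 2 `string' module used above:

```python
# Standalone check of the html_quote escaping order: '&' is handled
# first so that the '&' in '&lt;'/'&gt;' is not double-escaped.
def html_quote(line):
    result = line.replace("&", "&amp;")
    result = result.replace("<", "&lt;")
    result = result.replace(">", "&gt;")
    return result

assert html_quote("a < b & c > d") == "a &lt; b &amp; c &gt; d"
```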
################################################################
##
## HTML FORMATTER CLASS
##
class HtmlFormatter( Formatter ):
def __init__( self, processor, project_title, file_prefix ):
Formatter.__init__( self, processor )
global html_header_1
global html_header_2
global html_header_3l, html_header_3r
global html_header_4
global html_header_5t, html_header_5i
global html_header_6
global html_footer
if file_prefix:
file_prefix = file_prefix + "-"
else:
file_prefix = ""
self.headers = processor.headers
self.project_title = project_title
self.file_prefix = file_prefix
self.html_header = (
html_header_1 + project_title
+ html_header_2
+ html_header_3l + file_prefix + "index.html"
+ html_header_4 + file_prefix + "toc.html"
+ html_header_5t + project_title
+ html_header_6 )
self.html_index_header = (
html_header_1 + project_title
+ html_header_2
+ html_header_3r + file_prefix + "toc.html"
+ html_header_5t + project_title
+ html_header_6 )
self.html_toc_header = (
html_header_1 + project_title
+ html_header_2
+ html_header_3l + file_prefix + "index.html"
+ html_header_5i + project_title
+ html_header_6 )
self.html_footer = (
'<div class="timestamp">generated on '
+ time.asctime( time.localtime( time.time() ) )
+ "</div>" + html_footer )
self.columns = 3
def make_section_url( self, section ):
return self.file_prefix + section.name + ".html"
def make_block_url( self, block, name = None ):
if name == None:
name = block.name
try:
section_url = self.make_section_url( block.section )
except:
# we already have a section
section_url = self.make_section_url( block )
return section_url + "#" + name
def make_html_word( self, word ):
"""Analyze a simple word to detect cross-references and markup."""
# handle cross-references
m = re_crossref.match( word )
if m:
try:
name = m.group( 'name' )
rest = m.group( 'rest' )
block = self.identifiers[name]
url = self.make_block_url( block )
# display `foo[bar]' as `foo'
name = re.sub( r'\[.*\]', '', name )
# normalize url, following RFC 3986
url = string.replace( url, "[", "(" )
url = string.replace( url, "]", ")" )
try:
# for sections, display title
url = ( '‘<a href="' + url + '">'
+ block.title + '</a>’'
+ rest )
except:
url = ( '<a href="' + url + '">'
+ name + '</a>'
+ rest )
return url
except:
# we detected a cross-reference to an unknown item
sys.stderr.write( "WARNING: undefined cross reference"
+ " '" + name + "'.\n" )
return '?' + name + '?' + rest
# handle markup for italic and bold
m = re_italic.match( word )
if m:
name = m.group( 1 )
rest = m.group( 2 )
return '<i>' + name + '</i>' + rest
m = re_bold.match( word )
if m:
name = m.group( 1 )
rest = m.group( 2 )
return '<b>' + name + '</b>' + rest
return html_quote( word )
def make_html_para( self, words ):
"""Convert words of a paragraph into tagged HTML text. Also handle
cross references."""
line = ""
if words:
line = self.make_html_word( words[0] )
for word in words[1:]:
line = line + " " + self.make_html_word( word )
# handle hyperlinks
line = re_url.sub( r'<a href="\1">\1</a>', line )
# convert `...' quotations into real left and right single quotes
line = re.sub( r"(^|\W)`(.*?)'(\W|$)",
r'\1‘\2’\3',
line )
# convert tilde into non-breakable space
        line = string.replace( line, "~", "&nbsp;" )
return para_header + line + para_footer
def make_html_code( self, lines ):
"""Convert a code sequence to HTML."""
line = code_header + '\n'
for l in lines:
line = line + html_quote( l ) + '\n'
return line + code_footer
def make_html_items( self, items ):
"""Convert a field's content into HTML."""
lines = []
for item in items:
if item.lines:
lines.append( self.make_html_code( item.lines ) )
else:
lines.append( self.make_html_para( item.words ) )
return string.join( lines, '\n' )
def print_html_items( self, items ):
print self.make_html_items( items )
def print_html_field( self, field ):
if field.name:
print( '<table><tr valign="top"><td><b>'
+ field.name
+ "</b></td><td>" )
print self.make_html_items( field.items )
if field.name:
print "</td></tr></table>"
def html_source_quote( self, line, block_name = None ):
result = ""
while line:
m = re_source_crossref.match( line )
if m:
name = m.group( 2 )
prefix = html_quote( m.group( 1 ) )
length = len( m.group( 0 ) )
if name == block_name:
# this is the current block name, if any
result = result + prefix + '<b>' + name + '</b>'
elif re_source_keywords.match( name ):
# this is a C keyword
result = ( result + prefix
+ keyword_prefix + name + keyword_suffix )
elif name in self.identifiers:
# this is a known identifier
block = self.identifiers[name]
id = block.name
# link to a field ID if possible
try:
for markup in block.markups:
if markup.tag == 'values':
for field in markup.fields:
if field.name:
id = name
result = ( result + prefix
+ '<a href="'
+ self.make_block_url( block, id )
+ '">' + name + '</a>' )
except:
# sections don't have `markups'; however, we don't
# want references to sections here anyway
result = result + html_quote( line[:length] )
else:
result = result + html_quote( line[:length] )
line = line[length:]
else:
result = result + html_quote( line )
line = []
return result
def print_html_field_list( self, fields ):
print '<table class="fields">'
for field in fields:
print ( '<tr><td class="val" id="' + field.name + '">'
+ field.name
+ '</td><td class="desc">' )
self.print_html_items( field.items )
print "</td></tr>"
print "</table>"
def print_html_markup( self, markup ):
table_fields = []
for field in markup.fields:
if field.name:
# We begin a new series of field or value definitions. We
# record them in the `table_fields' list before outputting
# all of them as a single table.
table_fields.append( field )
else:
if table_fields:
self.print_html_field_list( table_fields )
table_fields = []
self.print_html_items( field.items )
if table_fields:
self.print_html_field_list( table_fields )
#
# formatting the index
#
def index_enter( self ):
print self.html_index_header
self.index_items = {}
def index_name_enter( self, name ):
block = self.identifiers[name]
url = self.make_block_url( block )
self.index_items[name] = url
def index_exit( self ):
# `block_index' already contains the sorted list of index names
count = len( self.block_index )
rows = ( count + self.columns - 1 ) // self.columns
print '<table class="index">'
for r in range( rows ):
line = "<tr>"
for c in range( self.columns ):
i = r + c * rows
if i < count:
bname = self.block_index[r + c * rows]
url = self.index_items[bname]
# display `foo[bar]' as `foo (bar)'
bname = string.replace( bname, "[", " (" )
bname = string.replace( bname, "]", ")" )
# normalize url, following RFC 3986
url = string.replace( url, "[", "(" )
url = string.replace( url, "]", ")" )
line = ( line + '<td><a href="' + url + '">'
+ bname + '</a></td>' )
else:
line = line + '<td></td>'
line = line + "</tr>"
print line
print "</table>"
print( index_footer_start
+ self.file_prefix + "toc.html"
+ index_footer_end )
print self.html_footer
self.index_items = {}
def index_dump( self, index_filename = None ):
if index_filename == None:
index_filename = self.file_prefix + "index.html"
Formatter.index_dump( self, index_filename )
#
# formatting the table of contents
#
def toc_enter( self ):
print self.html_toc_header
print "<h1>Table of Contents</h1>"
def toc_chapter_enter( self, chapter ):
print chapter_header + string.join( chapter.title ) + chapter_inter
print '<table class="toc">'
def toc_section_enter( self, section ):
print ( '<tr><td class="link">'
+ '<a href="' + self.make_section_url( section ) + '">'
+ section.title + '</a></td><td class="desc">' )
print self.make_html_para( section.abstract )
def toc_section_exit( self, section ):
print "</td></tr>"
def toc_chapter_exit( self, chapter ):
print "</table>"
print chapter_footer
def toc_index( self, index_filename ):
print( chapter_header
+ '<a href="' + index_filename + '">Global Index</a>'
+ chapter_inter + chapter_footer )
def toc_exit( self ):
print( toc_footer_start
+ self.file_prefix + "index.html"
+ toc_footer_end )
print self.html_footer
def toc_dump( self, toc_filename = None, index_filename = None ):
if toc_filename == None:
toc_filename = self.file_prefix + "toc.html"
if index_filename == None:
index_filename = self.file_prefix + "index.html"
Formatter.toc_dump( self, toc_filename, index_filename )
#
# formatting sections
#
def section_enter( self, section ):
print self.html_header
print ( section_title_header1 + section.name + section_title_header2
+ section.title
+ section_title_footer )
maxwidth = 0
for b in section.blocks.values():
if len( b.name ) > maxwidth:
maxwidth = len( b.name )
width = 70 # XXX magic number
if maxwidth > 0:
# print section synopsis
print section_synopsis_header
print '<table class="synopsis">'
columns = width // maxwidth
if columns < 1:
columns = 1
count = len( section.block_names )
# don't handle last entry if it is empty
if section.block_names[-1] == "/empty/":
count -= 1
rows = ( count + columns - 1 ) // columns
for r in range( rows ):
line = "<tr>"
for c in range( columns ):
i = r + c * rows
line = line + '<td>'
if i < count:
name = section.block_names[i]
if name == "/empty/":
# it can happen that a complete row is empty, and
# without a proper `filler' the browser might
# collapse the row to a much smaller height (or
# even omit it completely)
                            line = line + "&nbsp;"
else:
url = name
# display `foo[bar]' as `foo'
name = re.sub( r'\[.*\]', '', name )
# normalize url, following RFC 3986
url = string.replace( url, "[", "(" )
url = string.replace( url, "]", ")" )
line = ( line + '<a href="#' + url + '">'
+ name + '</a>' )
line = line + '</td>'
line = line + "</tr>"
print line
print "</table>"
print section_synopsis_footer
print description_header
print self.make_html_items( section.description )
print description_footer
def block_enter( self, block ):
print block_header
# place html anchor if needed
if block.name:
url = block.name
# display `foo[bar]' as `foo'
name = re.sub( r'\[.*\]', '', block.name )
# normalize url, following RFC 3986
url = string.replace( url, "[", "(" )
url = string.replace( url, "]", ")" )
print( '<h3 id="' + url + '">' + name + '</h3>' )
# dump the block C source lines now
if block.code:
header = ''
for f in self.headers.keys():
if block.source.filename.find( f ) >= 0:
header = self.headers[f] + ' (' + f + ')'
break
# if not header:
# sys.stderr.write(
# "WARNING: No header macro for"
# + " '" + block.source.filename + "'.\n" )
if header:
print ( header_location_header
+ 'Defined in ' + header + '.'
+ header_location_footer )
print source_header
for l in block.code:
print self.html_source_quote( l, block.name )
print source_footer
def markup_enter( self, markup, block ):
if markup.tag == "description":
print description_header
else:
print marker_header + markup.tag + marker_inter
self.print_html_markup( markup )
def markup_exit( self, markup, block ):
if markup.tag == "description":
print description_footer
else:
print marker_footer
def block_exit( self, block ):
print( block_footer_start + self.file_prefix + "index.html"
+ block_footer_middle + self.file_prefix + "toc.html"
+ block_footer_end )
def section_exit( self, section ):
print html_footer
def section_dump_all( self ):
for section in self.sections:
self.section_dump( section,
self.file_prefix + section.name + '.html' )
# eof | unknown | codeparrot/codeparrot-clean | ||
"""
Move a file in the safest way possible::
>>> from django.core.files.move import file_move_safe
>>> file_move_safe("/tmp/old_file", "/tmp/new_file")
"""
import os
from django.core.files import locks
try:
from shutil import copystat
except ImportError:
import stat
def copystat(src, dst):
"""Copy all stat info (mode bits, atime and mtime) from src to dst"""
st = os.stat(src)
mode = stat.S_IMODE(st.st_mode)
if hasattr(os, 'utime'):
os.utime(dst, (st.st_atime, st.st_mtime))
if hasattr(os, 'chmod'):
os.chmod(dst, mode)
__all__ = ['file_move_safe']
def _samefile(src, dst):
# Macintosh, Unix.
if hasattr(os.path,'samefile'):
try:
return os.path.samefile(src, dst)
except OSError:
return False
# All other platforms: check for same pathname.
return (os.path.normcase(os.path.abspath(src)) ==
os.path.normcase(os.path.abspath(dst)))
def file_move_safe(old_file_name, new_file_name, chunk_size = 1024*64, allow_overwrite=False):
"""
Moves a file from one location to another in the safest way possible.
First, tries ``os.rename``, which is simple but will break across filesystems.
If that fails, streams manually from one file to another in pure Python.
If the destination file exists and ``allow_overwrite`` is ``False``, this
function will throw an ``IOError``.
"""
# There's no reason to move if we don't have to.
if _samefile(old_file_name, new_file_name):
return
try:
os.rename(old_file_name, new_file_name)
return
except OSError:
# This will happen with os.rename if moving to another filesystem
# or when moving opened files on certain operating systems
pass
# first open the old file, so that it won't go away
old_file = open(old_file_name, 'rb')
try:
# now open the new file, not forgetting allow_overwrite
fd = os.open(new_file_name, os.O_WRONLY | os.O_CREAT | getattr(os, 'O_BINARY', 0) |
(not allow_overwrite and os.O_EXCL or 0))
try:
locks.lock(fd, locks.LOCK_EX)
current_chunk = None
while current_chunk != '':
current_chunk = old_file.read(chunk_size)
os.write(fd, current_chunk)
finally:
locks.unlock(fd)
os.close(fd)
finally:
old_file.close()
copystat(old_file_name, new_file_name)
try:
os.remove(old_file_name)
except OSError, e:
# Certain operating systems (Cygwin and Windows)
# fail when deleting opened files, ignore it. (For the
# systems where this happens, temporary files will be auto-deleted
# on close anyway.)
if getattr(e, 'winerror', 0) != 32 and getattr(e, 'errno', 0) != 13:
raise | unknown | codeparrot/codeparrot-clean | ||
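The rename-then-copy fallback used by ``file_move_safe`` can be sketched standalone with only the standard library (a hypothetical helper for illustration; it omits Django's file locking and chunked streaming):

```python
# Minimal sketch of the rename-then-copy-fallback pattern in
# file_move_safe, without Django's locking (hypothetical helper).
import os
import shutil

def move_safe(src, dst, allow_overwrite=False):
    # Racy pre-check; the real implementation uses os.O_EXCL instead.
    if not allow_overwrite and os.path.exists(dst):
        raise IOError("Destination exists: %s" % dst)
    try:
        os.rename(src, dst)      # fast path: same filesystem
    except OSError:
        shutil.copy2(src, dst)   # stream the data and copy stat info
        os.remove(src)
```

``os.rename`` fails with ``EXDEV`` when source and destination are on different filesystems, which is why the copy-and-delete fallback is needed at all.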
r"""UUID objects (universally unique identifiers) according to RFC 4122.
This module provides immutable UUID objects (class UUID) and the functions
uuid1(), uuid3(), uuid4(), uuid5() for generating version 1, 3, 4, and 5
UUIDs as specified in RFC 4122.
If all you want is a unique ID, you should probably call uuid1() or uuid4().
Note that uuid1() may compromise privacy since it creates a UUID containing
the computer's network address. uuid4() creates a random UUID.
Typical usage:
>>> import uuid
# make a UUID based on the host ID and current time
>>> uuid.uuid1()
UUID('a8098c1a-f86e-11da-bd1a-00112444be1e')
# make a UUID using an MD5 hash of a namespace UUID and a name
>>> uuid.uuid3(uuid.NAMESPACE_DNS, 'python.org')
UUID('6fa459ea-ee8a-3ca4-894e-db77e160355e')
# make a random UUID
>>> uuid.uuid4()
UUID('16fd2706-8baf-433b-82eb-8c7fada847da')
# make a UUID using a SHA-1 hash of a namespace UUID and a name
>>> uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org')
UUID('886313e1-3b8a-5372-9b90-0c9aee199e5d')
# make a UUID from a string of hex digits (braces and hyphens ignored)
>>> x = uuid.UUID('{00010203-0405-0607-0809-0a0b0c0d0e0f}')
# convert a UUID to a string of hex digits in standard form
>>> str(x)
'00010203-0405-0607-0809-0a0b0c0d0e0f'
# get the raw 16 bytes of the UUID
>>> x.bytes
'\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f'
# make a UUID from a 16-byte string
>>> uuid.UUID(bytes=x.bytes)
UUID('00010203-0405-0607-0809-0a0b0c0d0e0f')
"""
__author__ = 'Ka-Ping Yee <ping@zesty.ca>'
RESERVED_NCS, RFC_4122, RESERVED_MICROSOFT, RESERVED_FUTURE = [
'reserved for NCS compatibility', 'specified in RFC 4122',
'reserved for Microsoft compatibility', 'reserved for future definition'
]
class UUID(object):
"""Instances of the UUID class represent UUIDs as specified in RFC 4122.
UUID objects are immutable, hashable, and usable as dictionary keys.
Converting a UUID to a string with str() yields something in the form
'12345678-1234-1234-1234-123456789abc'. The UUID constructor accepts
five possible forms: a similar string of hexadecimal digits, or a tuple
of six integer fields (with 32-bit, 16-bit, 16-bit, 8-bit, 8-bit, and
48-bit values respectively) as an argument named 'fields', or a string
of 16 bytes (with all the integer fields in big-endian order) as an
argument named 'bytes', or a string of 16 bytes (with the first three
fields in little-endian order) as an argument named 'bytes_le', or a
single 128-bit integer as an argument named 'int'.
UUIDs have these read-only attributes:
bytes the UUID as a 16-byte string (containing the six
integer fields in big-endian byte order)
bytes_le the UUID as a 16-byte string (with time_low, time_mid,
and time_hi_version in little-endian byte order)
fields a tuple of the six integer fields of the UUID,
which are also available as six individual attributes
and two derived attributes:
time_low the first 32 bits of the UUID
time_mid the next 16 bits of the UUID
time_hi_version the next 16 bits of the UUID
clock_seq_hi_variant the next 8 bits of the UUID
clock_seq_low the next 8 bits of the UUID
node the last 48 bits of the UUID
time the 60-bit timestamp
clock_seq the 14-bit sequence number
hex the UUID as a 32-character hexadecimal string
int the UUID as a 128-bit integer
urn the UUID as a URN as specified in RFC 4122
variant the UUID variant (one of the constants RESERVED_NCS,
RFC_4122, RESERVED_MICROSOFT, or RESERVED_FUTURE)
version the UUID version number (1 through 5, meaningful only
when the variant is RFC_4122)
"""
def __init__(self, hex=None, bytes=None, bytes_le=None, fields=None,
int=None, version=None):
r"""Create a UUID from either a string of 32 hexadecimal digits,
a string of 16 bytes as the 'bytes' argument, a string of 16 bytes
in little-endian order as the 'bytes_le' argument, a tuple of six
integers (32-bit time_low, 16-bit time_mid, 16-bit time_hi_version,
8-bit clock_seq_hi_variant, 8-bit clock_seq_low, 48-bit node) as
the 'fields' argument, or a single 128-bit integer as the 'int'
argument. When a string of hex digits is given, curly braces,
hyphens, and a URN prefix are all optional. For example, these
expressions all yield the same UUID:
UUID('{12345678-1234-5678-1234-567812345678}')
UUID('12345678123456781234567812345678')
UUID('urn:uuid:12345678-1234-5678-1234-567812345678')
UUID(bytes='\x12\x34\x56\x78'*4)
UUID(bytes_le='\x78\x56\x34\x12\x34\x12\x78\x56' +
'\x12\x34\x56\x78\x12\x34\x56\x78')
UUID(fields=(0x12345678, 0x1234, 0x5678, 0x12, 0x34, 0x567812345678))
UUID(int=0x12345678123456781234567812345678)
Exactly one of 'hex', 'bytes', 'bytes_le', 'fields', or 'int' must
be given. The 'version' argument is optional; if given, the resulting
UUID will have its variant and version set according to RFC 4122,
overriding the given 'hex', 'bytes', 'bytes_le', 'fields', or 'int'.
"""
if [hex, bytes, bytes_le, fields, int].count(None) != 4:
raise TypeError('need one of hex, bytes, bytes_le, fields, or int')
if hex is not None:
hex = hex.replace('urn:', '').replace('uuid:', '')
hex = hex.strip('{}').replace('-', '')
if len(hex) != 32:
raise ValueError('badly formed hexadecimal UUID string')
int = long(hex, 16)
if bytes_le is not None:
if len(bytes_le) != 16:
raise ValueError('bytes_le is not a 16-char string')
bytes = (bytes_le[3] + bytes_le[2] + bytes_le[1] + bytes_le[0] +
bytes_le[5] + bytes_le[4] + bytes_le[7] + bytes_le[6] +
bytes_le[8:])
if bytes is not None:
if len(bytes) != 16:
raise ValueError('bytes is not a 16-char string')
int = long(('%02x' * 16) % tuple(map(ord, bytes)), 16)
if fields is not None:
if len(fields) != 6:
raise ValueError('fields is not a 6-tuple')
(time_low, time_mid, time_hi_version,
clock_seq_hi_variant, clock_seq_low, node) = fields
if not 0 <= time_low < 1 << 32L:
raise ValueError('field 1 out of range (need a 32-bit value)')
if not 0 <= time_mid < 1 << 16L:
raise ValueError('field 2 out of range (need a 16-bit value)')
if not 0 <= time_hi_version < 1 << 16L:
raise ValueError('field 3 out of range (need a 16-bit value)')
if not 0 <= clock_seq_hi_variant < 1 << 8L:
raise ValueError('field 4 out of range (need an 8-bit value)')
if not 0 <= clock_seq_low < 1 << 8L:
raise ValueError('field 5 out of range (need an 8-bit value)')
if not 0 <= node < 1 << 48L:
raise ValueError('field 6 out of range (need a 48-bit value)')
clock_seq = (clock_seq_hi_variant << 8L) | clock_seq_low
int = ((time_low << 96L) | (time_mid << 80L) |
(time_hi_version << 64L) | (clock_seq << 48L) | node)
if int is not None:
if not 0 <= int < 1 << 128L:
raise ValueError('int is out of range (need a 128-bit value)')
if version is not None:
if not 1 <= version <= 5:
raise ValueError('illegal version number')
# Set the variant to RFC 4122.
int &= ~(0xc000 << 48L)
int |= 0x8000 << 48L
# Set the version number.
int &= ~(0xf000 << 64L)
int |= version << 76L
self.__dict__['int'] = int
def __cmp__(self, other):
if isinstance(other, UUID):
return cmp(self.int, other.int)
return NotImplemented
def __hash__(self):
return hash(self.int)
def __int__(self):
return self.int
def __repr__(self):
return 'UUID(%r)' % str(self)
def __setattr__(self, name, value):
raise TypeError('UUID objects are immutable')
def __str__(self):
hex = '%032x' % self.int
return '%s-%s-%s-%s-%s' % (
hex[:8], hex[8:12], hex[12:16], hex[16:20], hex[20:])
def get_bytes(self):
bytes = ''
for shift in range(0, 128, 8):
bytes = chr((self.int >> shift) & 0xff) + bytes
return bytes
bytes = property(get_bytes)
def get_bytes_le(self):
bytes = self.bytes
return (bytes[3] + bytes[2] + bytes[1] + bytes[0] +
bytes[5] + bytes[4] + bytes[7] + bytes[6] + bytes[8:])
bytes_le = property(get_bytes_le)
def get_fields(self):
return (self.time_low, self.time_mid, self.time_hi_version,
self.clock_seq_hi_variant, self.clock_seq_low, self.node)
fields = property(get_fields)
def get_time_low(self):
return self.int >> 96L
time_low = property(get_time_low)
def get_time_mid(self):
return (self.int >> 80L) & 0xffff
time_mid = property(get_time_mid)
def get_time_hi_version(self):
return (self.int >> 64L) & 0xffff
time_hi_version = property(get_time_hi_version)
def get_clock_seq_hi_variant(self):
return (self.int >> 56L) & 0xff
clock_seq_hi_variant = property(get_clock_seq_hi_variant)
def get_clock_seq_low(self):
return (self.int >> 48L) & 0xff
clock_seq_low = property(get_clock_seq_low)
def get_time(self):
return (((self.time_hi_version & 0x0fffL) << 48L) |
(self.time_mid << 32L) | self.time_low)
time = property(get_time)
def get_clock_seq(self):
return (((self.clock_seq_hi_variant & 0x3fL) << 8L) |
self.clock_seq_low)
clock_seq = property(get_clock_seq)
def get_node(self):
return self.int & 0xffffffffffff
node = property(get_node)
def get_hex(self):
return '%032x' % self.int
hex = property(get_hex)
def get_urn(self):
return 'urn:uuid:' + str(self)
urn = property(get_urn)
def get_variant(self):
if not self.int & (0x8000 << 48L):
return RESERVED_NCS
elif not self.int & (0x4000 << 48L):
return RFC_4122
elif not self.int & (0x2000 << 48L):
return RESERVED_MICROSOFT
else:
return RESERVED_FUTURE
variant = property(get_variant)
def get_version(self):
# The version bits are only meaningful for RFC 4122 UUIDs.
if self.variant == RFC_4122:
return int((self.int >> 76L) & 0xf)
version = property(get_version)
def _find_mac(command, args, hw_identifiers, get_index):
import os
for dir in ['', '/sbin/', '/usr/sbin']:
executable = os.path.join(dir, command)
if not os.path.exists(executable):
continue
try:
# LC_ALL to get English output, 2>/dev/null to
# prevent output on stderr
cmd = 'LC_ALL=C %s %s 2>/dev/null' % (executable, args)
pipe = os.popen(cmd)
except IOError:
continue
for line in pipe:
words = line.lower().split()
for i in range(len(words)):
if words[i] in hw_identifiers:
return int(words[get_index(i)].replace(':', ''), 16)
return None
def _ifconfig_getnode():
"""Get the hardware address on Unix by running ifconfig."""
# This works on Linux ('' or '-a'), Tru64 ('-av'), but not all Unixes.
for args in ('', '-a', '-av'):
mac = _find_mac('ifconfig', args, ['hwaddr', 'ether'], lambda i: i + 1)
if mac:
return mac
import socket
ip_addr = socket.gethostbyname(socket.gethostname())
# Try getting the MAC addr from arp based on our IP address (Solaris).
mac = _find_mac('arp', '-an', [ip_addr], lambda i: -1)
if mac:
return mac
# This might work on HP-UX.
mac = _find_mac('lanscan', '-ai', ['lan0'], lambda i: 0)
if mac:
return mac
return None
def _ipconfig_getnode():
"""Get the hardware address on Windows by running ipconfig.exe."""
import os
import re
dirs = ['', r'c:\windows\system32', r'c:\winnt\system32']
try:
import ctypes
buffer = ctypes.create_string_buffer(300)
ctypes.windll.kernel32.GetSystemDirectoryA(buffer, 300)
dirs.insert(0, buffer.value.decode('mbcs'))
except:
pass
for dir in dirs:
try:
pipe = os.popen(os.path.join(dir, 'ipconfig') + ' /all')
except IOError:
continue
for line in pipe:
value = line.split(':')[-1].strip().lower()
if re.match('([0-9a-f][0-9a-f]-){5}[0-9a-f][0-9a-f]', value):
return int(value.replace('-', ''), 16)
def _netbios_getnode():
"""Get the hardware address on Windows using NetBIOS calls.
See http://support.microsoft.com/kb/118623 for details."""
import win32wnet
import netbios
ncb = netbios.NCB()
ncb.Command = netbios.NCBENUM
ncb.Buffer = adapters = netbios.LANA_ENUM()
adapters._pack()
if win32wnet.Netbios(ncb) != 0:
return
adapters._unpack()
for i in range(adapters.length):
ncb.Reset()
ncb.Command = netbios.NCBRESET
ncb.Lana_num = ord(adapters.lana[i])
if win32wnet.Netbios(ncb) != 0:
continue
ncb.Reset()
ncb.Command = netbios.NCBASTAT
ncb.Lana_num = ord(adapters.lana[i])
ncb.Callname = '*'.ljust(16)
ncb.Buffer = status = netbios.ADAPTER_STATUS()
if win32wnet.Netbios(ncb) != 0:
continue
status._unpack()
bytes = map(ord, status.adapter_address)
return ((bytes[0] << 40L) + (bytes[1] << 32L) + (bytes[2] << 24L) +
(bytes[3] << 16L) + (bytes[4] << 8L) + bytes[5])
# Thanks to Thomas Heller for ctypes and for his help with its use here.
# If ctypes is available, use it to find system routines for UUID generation.
_uuid_generate_random = _uuid_generate_time = _UuidCreate = None
try:
import ctypes
import ctypes.util
_buffer = ctypes.create_string_buffer(16)
# The uuid_generate_* routines are provided by libuuid on at least
# Linux and FreeBSD, and provided by libc on Mac OS X.
for libname in ['uuid', 'c']:
try:
lib = ctypes.CDLL(ctypes.util.find_library(libname))
except:
continue
if hasattr(lib, 'uuid_generate_random'):
_uuid_generate_random = lib.uuid_generate_random
if hasattr(lib, 'uuid_generate_time'):
_uuid_generate_time = lib.uuid_generate_time
# On Windows prior to 2000, UuidCreate gives a UUID containing the
# hardware address. On Windows 2000 and later, UuidCreate makes a
# random UUID and UuidCreateSequential gives a UUID containing the
# hardware address. These routines are provided by the RPC runtime.
# NOTE: at least on Tim's WinXP Pro SP2 desktop box, while the last
# 6 bytes returned by UuidCreateSequential are fixed, they don't appear
# to bear any relationship to the MAC address of any network device
# on the box.
try:
lib = ctypes.windll.rpcrt4
except:
lib = None
_UuidCreate = getattr(lib, 'UuidCreateSequential',
getattr(lib, 'UuidCreate', None))
except:
pass
def _unixdll_getnode():
"""Get the hardware address on Unix using ctypes."""
_uuid_generate_time(_buffer)
return UUID(bytes=_buffer.raw).node
def _windll_getnode():
"""Get the hardware address on Windows using ctypes."""
if _UuidCreate(_buffer) == 0:
return UUID(bytes=_buffer.raw).node
def _random_getnode():
"""Get a random node ID, with eighth bit set as suggested by RFC 4122."""
import random
return random.randrange(0, 1 << 48L) | 0x010000000000L
_node = None
def getnode():
"""Get the hardware address as a 48-bit positive integer.
The first time this runs, it may launch a separate program, which could
be quite slow. If all attempts to obtain the hardware address fail, we
choose a random 48-bit number with its eighth bit set to 1 as recommended
in RFC 4122.
"""
global _node
if _node is not None:
return _node
import sys
if sys.platform == 'win32':
getters = [_windll_getnode, _netbios_getnode, _ipconfig_getnode]
else:
getters = [_unixdll_getnode, _ifconfig_getnode]
for getter in getters + [_random_getnode]:
try:
_node = getter()
except:
continue
if _node is not None:
return _node
_last_timestamp = None
def uuid1(node=None, clock_seq=None):
"""Generate a UUID from a host ID, sequence number, and the current time.
If 'node' is not given, getnode() is used to obtain the hardware
address. If 'clock_seq' is given, it is used as the sequence number;
otherwise a random 14-bit sequence number is chosen."""
# When the system provides a version-1 UUID generator, use it (but don't
# use UuidCreate here because its UUIDs don't conform to RFC 4122).
if _uuid_generate_time and node is clock_seq is None:
_uuid_generate_time(_buffer)
return UUID(bytes=_buffer.raw)
global _last_timestamp
import time
nanoseconds = int(time.time() * 1e9)
# 0x01b21dd213814000 is the number of 100-ns intervals between the
# UUID epoch 1582-10-15 00:00:00 and the Unix epoch 1970-01-01 00:00:00.
timestamp = int(nanoseconds / 100) + 0x01b21dd213814000L
if timestamp <= _last_timestamp:
timestamp = _last_timestamp + 1
_last_timestamp = timestamp
if clock_seq is None:
import random
clock_seq = random.randrange(1 << 14L) # instead of stable storage
time_low = timestamp & 0xffffffffL
time_mid = (timestamp >> 32L) & 0xffffL
time_hi_version = (timestamp >> 48L) & 0x0fffL
clock_seq_low = clock_seq & 0xffL
clock_seq_hi_variant = (clock_seq >> 8L) & 0x3fL
if node is None:
node = getnode()
return UUID(fields=(time_low, time_mid, time_hi_version,
clock_seq_hi_variant, clock_seq_low, node), version=1)
def uuid3(namespace, name):
"""Generate a UUID from the MD5 hash of a namespace UUID and a name."""
try:
from hashlib import md5
except ImportError:
from md5 import md5
hash = md5(namespace.bytes + name).digest()
return UUID(bytes=hash[:16], version=3)
def uuid4():
"""Generate a random UUID."""
# When the system provides a version-4 UUID generator, use it.
if _uuid_generate_random:
_uuid_generate_random(_buffer)
return UUID(bytes=_buffer.raw)
# Otherwise, get randomness from urandom or the 'random' module.
try:
import os
return UUID(bytes=os.urandom(16), version=4)
except:
import random
bytes = [chr(random.randrange(256)) for i in range(16)]
return UUID(bytes=bytes, version=4)
def uuid5(namespace, name):
"""Generate a UUID from the SHA-1 hash of a namespace UUID and a name."""
try:
from hashlib import sha1 as sha
except ImportError:
from sha import sha
hash = sha(namespace.bytes + name).digest()
return UUID(bytes=hash[:16], version=5)
# The following standard UUIDs are for use with uuid3() or uuid5().
NAMESPACE_DNS = UUID('6ba7b810-9dad-11d1-80b4-00c04fd430c8')
NAMESPACE_URL = UUID('6ba7b811-9dad-11d1-80b4-00c04fd430c8')
NAMESPACE_OID = UUID('6ba7b812-9dad-11d1-80b4-00c04fd430c8')
NAMESPACE_X500 = UUID('6ba7b814-9dad-11d1-80b4-00c04fd430c8') | unknown | codeparrot/codeparrot-clean | ||
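The namespace examples quoted in the module docstring can be checked against the modern standard-library `uuid` (Python 3), and the byte-order identity below restates what `get_bytes_le` computes; nothing here is new behavior, only a verification sketch:

```python
import uuid

# Name-based UUIDs are deterministic: these values match the ones quoted
# in the module docstring above.
assert str(uuid.uuid3(uuid.NAMESPACE_DNS, 'python.org')) == \
    '6fa459ea-ee8a-3ca4-894e-db77e160355e'
assert str(uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org')) == \
    '886313e1-3b8a-5372-9b90-0c9aee199e5d'

# bytes_le swaps the first three fields to little-endian, exactly as
# get_bytes_le does: reverse bytes 0-3, swap 4/5 and 6/7, keep the rest.
u = uuid.UUID('12345678-1234-5678-1234-567812345678')
assert u.bytes_le == u.bytes[3::-1] + u.bytes[5:3:-1] + u.bytes[7:5:-1] + u.bytes[8:]
```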
#
# organize_photos.py: (C) 2011-2014 Sameer Sundresh. No warranty.
#
# exif_cache.py is a helper for organize_photos.py
#
# It maintains a cache file exif_cache.json in the source directory that
# keeps track of which files have already been copied out of that source
# directory, including where they were copied and their size and filename.
#
# Note that this cache has only been tested in the case where there is
# just one destination directory. If you plan to use this script with the
# same SD card on multiple computers (for example), you should check to
# make sure that it actually works correctly!
#
import json, logging, os, os.path, time
_TIME_PRINT_FORMAT = '%Y-%m-%d %H:%M:%S UTC'
_TIME_PARSE_FORMAT = '%Y-%m-%d %H:%M:%S %Z'
def format_time(timestamp):
return time.strftime(_TIME_PRINT_FORMAT, time.gmtime(timestamp))
def parse_time(time_string):
return (time.mktime(time.strptime(time_string, _TIME_PARSE_FORMAT)) - time.timezone)
def is_direct_rel_path(path):
if path[0] == '/':
return False
path = os.path.join('/', path)
return path == os.path.abspath(path)
def backup_file(file_path):
if os.path.lexists(file_path):
i = 0
while True:
i += 1
bak_path = '%s.bak%i' % (file_path, i)
if not os.path.lexists(bak_path):
os.rename(file_path, bak_path)
return bak_path
def time_close_enough(t0, t1, is_src=False):
if is_src:
return -10 <= ((t0 - t1) - round((t0 - t1) / 3600.0) * 3600) <= 10
else:
return -10 <= (t0 - t1) <= 10
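A quick sanity check of the tolerance rule above: the plain comparison allows a 10-second skew, while `is_src=True` additionally forgives whole-hour offsets (e.g. source timestamps recorded in a different timezone). The function is restated so the sketch runs on its own; the timestamps are made up:

```python
# Restatement of time_close_enough from above, so this sketch is self-contained.
def time_close_enough(t0, t1, is_src=False):
    if is_src:
        # Tolerate any whole-hour offset, plus/minus 10 seconds.
        return -10 <= ((t0 - t1) - round((t0 - t1) / 3600.0) * 3600) <= 10
    return -10 <= (t0 - t1) <= 10

assert time_close_enough(1000.0, 995.0)                     # 5 s apart: close enough
assert not time_close_enough(1000.0, 900.0)                 # 100 s apart: too far
assert time_close_enough(3600.0 + 3.0, 0.0, is_src=True)    # 1 h + 3 s offset: ok
assert not time_close_enough(1800.0, 0.0, is_src=True)      # half an hour off: not ok
```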
class ExifCache(object):
def __init__(self, src_dir_path, dest_dir_path, autosave_interval=0):
self.src_dir_path = src_dir_path
self.dest_dir_path = dest_dir_path
self.autosave_interval = autosave_interval
self._adds_since_last_save = 0
print 'Loading EXIF cache...'
self.data = self._load()
def _load(self):
# Read the JSON EXIF cache data
exif_cache_path = os.path.join(self.src_dir_path, 'exif_cache.json')
if os.path.lexists(exif_cache_path):
assert not os.path.islink(exif_cache_path)
with open(exif_cache_path, 'r') as f:
exif_cache_data = json.load(f)
else:
exif_cache_data = { }
# Check that the EXIF cache data is well-formed,
# and parse all the time strings as timestamps.
data = { }
for entry in exif_cache_data.iteritems():
try:
(src_img_path, [dest_img_path, size, time_string]) = entry
assert is_direct_rel_path(src_img_path)
assert is_direct_rel_path(dest_img_path)
assert (type(size) == int) and (size >= 0)
timestamp = parse_time(time_string)
data[src_img_path] = (dest_img_path, size, timestamp)
except:
logging.error('Could not decode EXIF cache entry %s' % repr(entry))
return data
def save(self):
if self._adds_since_last_save == 0:
return
print 'Saving EXIF cache...'
# Check that the EXIF cache data is well-formed,
        # and format all the timestamps as time strings.
exif_cache_data = { }
for (src_img_path, (dest_img_path, size, timestamp)) in self.data.iteritems():
assert is_direct_rel_path(src_img_path)
assert is_direct_rel_path(dest_img_path)
assert (type(size) == int) and (size >= 0)
time_string = format_time(timestamp)
exif_cache_data[src_img_path] = (dest_img_path, size, time_string)
# Backup the old JSON EXIF cache data and write the new data
exif_cache_path = os.path.join(self.src_dir_path, 'exif_cache.json')
backup_file_path = backup_file(exif_cache_path)
with open(exif_cache_path, 'w') as f:
json.dump(exif_cache_data, f)
# Check that the data was written correctly, and if so, remove the backup
if self._load() == self.data:
if backup_file_path:
os.remove(backup_file_path)
else:
logging.error('Error saving EXIF cache')
# Should raise an exception...
self._adds_since_last_save = 0
def check(self, src_img_path):
try:
# Get cache entry
rel_src_img_path = os.path.relpath(src_img_path, self.src_dir_path)
(rel_dest_img_path, size, timestamp) = self.data.get(rel_src_img_path)
# Absolute dest_img_path
dest_img_path = os.path.join(self.dest_dir_path, rel_dest_img_path)
# Check file paths exist
assert os.path.exists(src_img_path) and os.path.exists(dest_img_path)
# Check file sizes match
assert os.path.getsize(src_img_path) == size == os.path.getsize(dest_img_path)
# Check file mtimes match
#assert time_close_enough(os.path.getmtime(src_img_path), timestamp, is_src=True)
assert time_close_enough(os.path.getmtime(dest_img_path), timestamp)
return True
except:
return False
def add(self, src_img_path, dest_img_path):
# Check file paths exist
assert os.path.exists(src_img_path) and os.path.exists(dest_img_path)
rel_src_img_path = os.path.relpath(src_img_path, self.src_dir_path)
rel_dest_img_path = os.path.relpath(dest_img_path, self.dest_dir_path)
# Check file sizes match
size = os.path.getsize(src_img_path)
assert os.path.getsize(dest_img_path) == size
# Check file mtimes match
timestamp = os.path.getmtime(src_img_path)
#assert time_close_enough(os.path.getmtime(dest_img_path), timestamp, is_src=True)
# Write to cache
self.data[rel_src_img_path] = (rel_dest_img_path, size, timestamp)
# Autosave
self._adds_since_last_save += 1
if self.autosave_interval > 0 and self._adds_since_last_save >= self.autosave_interval:
self.save() | unknown | codeparrot/codeparrot-clean | ||
from __future__ import unicode_literals
import frappe
def execute():
if frappe.db.exists("DocType", "Student"):
student_table_cols = frappe.db.get_table_columns("Student")
if "father_name" in student_table_cols:
frappe.reload_doc("schools", "doctype", "student")
frappe.reload_doc("schools", "doctype", "guardian")
frappe.reload_doc("schools", "doctype", "guardian_interest")
frappe.reload_doc("hr", "doctype", "interest")
fields = ["name", "father_name", "mother_name"]
if "father_email_id" in student_table_cols:
fields += ["father_email_id", "mother_email_id"]
students = frappe.get_all("Student", fields)
for stud in students:
if stud.father_name:
make_guardian(stud.father_name, stud.name, stud.father_email_id)
if stud.mother_name:
make_guardian(stud.mother_name, stud.name, stud.mother_email_id)
def make_guardian(name, student, email=None):
frappe.get_doc({
'doctype': 'Guardian',
'guardian_name': name,
'email': email,
'student': student
}).insert() | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = '''
---
module: nxos_udld
extends_documentation_fragment: nxos
version_added: "2.2"
short_description: Manages UDLD global configuration params.
description:
- Manages UDLD global configuration params.
author:
- Jason Edelman (@jedelman8)
notes:
- Tested against NXOSv 7.3.(0)D1(1) on VIRL
- Module will fail if the udld feature has not been previously enabled.
options:
aggressive:
description:
- Toggles aggressive mode.
choices: ['enabled','disabled']
msg_time:
description:
- Message time in seconds for UDLD packets or keyword 'default'.
reset:
description:
- Ability to reset all ports shut down by UDLD. 'state' parameter
cannot be 'absent' when this is present.
type: bool
default: 'no'
state:
description:
- Manage the state of the resource. When set to 'absent',
aggressive and msg_time are set to their default values.
default: present
choices: ['present','absent']
'''
EXAMPLES = '''
# ensure udld aggressive mode is globally disabled and set global message interval is 20
- nxos_udld:
aggressive: disabled
msg_time: 20
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
# Ensure agg mode is globally enabled and msg time is 15
- nxos_udld:
aggressive: enabled
msg_time: 15
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
'''
RETURN = '''
proposed:
description: k/v pairs of parameters passed into module
returned: always
type: dict
sample: {"aggressive": "enabled", "msg_time": "40"}
existing:
description:
- k/v pairs of existing udld configuration
returned: always
type: dict
sample: {"aggressive": "disabled", "msg_time": "15"}
end_state:
description: k/v pairs of udld configuration after module execution
returned: always
type: dict
sample: {"aggressive": "enabled", "msg_time": "40"}
updates:
description: command sent to the device
returned: always
type: list
sample: ["udld message-time 40", "udld aggressive"]
changed:
description: check to see if a change was made on the device
returned: always
type: boolean
sample: true
'''
import re
from ansible.module_utils.network.nxos.nxos import get_config, load_config, run_commands
from ansible.module_utils.network.nxos.nxos import get_capabilities, nxos_argument_spec
from ansible.module_utils.basic import AnsibleModule
PARAM_TO_DEFAULT_KEYMAP = {
'msg_time': '15',
}
def execute_show_command(command, module, command_type='cli_show'):
device_info = get_capabilities(module)
network_api = device_info.get('network_api', 'nxapi')
if network_api == 'cliconf':
if 'show run' not in command:
command += ' | json'
cmds = [command]
body = run_commands(module, cmds)
elif network_api == 'nxapi':
cmds = [command]
body = run_commands(module, cmds)
return body
def flatten_list(command_lists):
flat_command_list = []
for command in command_lists:
if isinstance(command, list):
flat_command_list.extend(command)
else:
flat_command_list.append(command)
return flat_command_list
def apply_key_map(key_map, table):
new_dict = {}
for key, value in table.items():
new_key = key_map.get(key)
if new_key:
value = table.get(key)
if value:
new_dict[new_key] = str(value)
else:
new_dict[new_key] = value
return new_dict
def get_commands_config_udld_global(delta, reset, existing):
commands = []
for param, value in delta.items():
if param == 'aggressive':
command = 'udld aggressive' if value == 'enabled' else 'no udld aggressive'
commands.append(command)
elif param == 'msg_time':
if value == 'default':
if existing.get('msg_time') != PARAM_TO_DEFAULT_KEYMAP.get('msg_time'):
commands.append('no udld message-time')
else:
commands.append('udld message-time ' + value)
if reset:
command = 'udld reset'
commands.append(command)
return commands
def get_commands_remove_udld_global(existing):
commands = []
if existing.get('aggressive') == 'enabled':
command = 'no udld aggressive'
commands.append(command)
if existing.get('msg_time') != PARAM_TO_DEFAULT_KEYMAP.get('msg_time'):
command = 'no udld message-time'
commands.append(command)
return commands
def get_udld_global(module):
command = 'show udld global'
udld_table = execute_show_command(command, module)[0]
status = str(udld_table.get('udld-global-mode', None))
if status == 'enabled-aggressive':
aggressive = 'enabled'
else:
aggressive = 'disabled'
interval = str(udld_table.get('message-interval', None))
udld = dict(msg_time=interval, aggressive=aggressive)
return udld
def main():
argument_spec = dict(
aggressive=dict(required=False, choices=['enabled', 'disabled']),
msg_time=dict(required=False, type='str'),
reset=dict(required=False, type='bool'),
state=dict(choices=['absent', 'present'], default='present'),
)
argument_spec.update(nxos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)
warnings = list()
aggressive = module.params['aggressive']
msg_time = module.params['msg_time']
reset = module.params['reset']
state = module.params['state']
if reset and state == 'absent':
module.fail_json(msg="state must be present when using reset flag.")
args = dict(aggressive=aggressive, msg_time=msg_time, reset=reset)
proposed = dict((k, v) for k, v in args.items() if v is not None)
existing = get_udld_global(module)
end_state = existing
delta = set(proposed.items()).difference(existing.items())
changed = False
commands = []
if state == 'present':
if delta:
command = get_commands_config_udld_global(dict(delta), reset, existing)
commands.append(command)
elif state == 'absent':
command = get_commands_remove_udld_global(existing)
if command:
commands.append(command)
cmds = flatten_list(commands)
if cmds:
if module.check_mode:
module.exit_json(changed=True, commands=cmds)
else:
changed = True
load_config(module, cmds)
end_state = get_udld_global(module)
if 'configure' in cmds:
cmds.pop(0)
results = {}
results['proposed'] = proposed
results['existing'] = existing
results['end_state'] = end_state
results['updates'] = cmds
results['changed'] = changed
results['warnings'] = warnings
module.exit_json(**results)
if __name__ == '__main__':
main() | unknown | codeparrot/codeparrot-clean | ||
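The module's change detection in `main()` relies on a small dict-diff idiom: the set difference of `(key, value)` pairs yields only the entries whose proposed value differs from the device's current state. A standalone sketch with made-up values:

```python
# Set difference over item pairs: keeps (key, value) pairs present in
# `proposed` but absent from `existing`, i.e. the settings to change.
proposed = {"aggressive": "enabled", "msg_time": "40"}
existing = {"aggressive": "disabled", "msg_time": "40"}

delta = dict(set(proposed.items()).difference(existing.items()))
assert delta == {"aggressive": "enabled"}

# No difference means no commands need to be generated.
assert not set(existing.items()).difference(existing.items())
```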
"""
=========================================================
SVM Margins Example
=========================================================
The plots below illustrate the effect the parameter `C` has
on the separation line. A large value of `C` basically tells
our model that we do not have that much faith in our data's
distribution, and will only consider points close to the line
of separation.
A small value of `C` includes more/all the observations, allowing
the margins to be calculated using all the data in the area.
"""
# Authors: The scikit-learn developers
# SPDX-License-Identifier: BSD-3-Clause
import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm
# we create 40 separable points
np.random.seed(0)
X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]
Y = [0] * 20 + [1] * 20
# figure number
fignum = 1
# fit the model
for name, penalty in (("unreg", 1), ("reg", 0.05)):
clf = svm.SVC(kernel="linear", C=penalty)
clf.fit(X, Y)
# get the separating hyperplane
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - (clf.intercept_[0]) / w[1]
# plot the parallels to the separating hyperplane that pass through the
# support vectors (margin away from hyperplane in direction
# perpendicular to hyperplane). This is sqrt(1+a^2) away vertically in
# 2-d.
margin = 1 / np.sqrt(np.sum(clf.coef_**2))
yy_down = yy - np.sqrt(1 + a**2) * margin
yy_up = yy + np.sqrt(1 + a**2) * margin
# plot the line, the points, and the nearest vectors to the plane
plt.figure(fignum, figsize=(4, 3))
plt.clf()
plt.plot(xx, yy, "k-")
plt.plot(xx, yy_down, "k--")
plt.plot(xx, yy_up, "k--")
plt.scatter(
clf.support_vectors_[:, 0],
clf.support_vectors_[:, 1],
s=80,
facecolors="none",
zorder=10,
edgecolors="k",
)
plt.scatter(
X[:, 0], X[:, 1], c=Y, zorder=10, cmap=plt.get_cmap("RdBu"), edgecolors="k"
)
plt.axis("tight")
x_min = -4.8
x_max = 4.2
y_min = -6
y_max = 6
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = clf.decision_function(xy).reshape(XX.shape)
# Put the result into a contour plot
plt.contourf(XX, YY, Z, cmap=plt.get_cmap("RdBu"), alpha=0.5, linestyles=["-"])
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
fignum = fignum + 1
plt.show() | python | github | https://github.com/scikit-learn/scikit-learn | examples/svm/plot_svm_margin.py |
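The margin computed above is `1 / ||w||`: a point on the margin boundary satisfies `w·x + b = ±1`, and its distance to the hyperplane `w·x + b = 0` is `|w·x + b| / ||w||`. A NumPy-only check with made-up coefficients (no fitted model required):

```python
import numpy as np

# Hypothetical separating hyperplane w.x + b = 0.
w = np.array([0.5, 0.5])
b = -1.0

# A point with decision value exactly +1 lies on the margin boundary.
x = np.array([2.0, 2.0])
assert np.isclose(w @ x + b, 1.0)

# Its distance to the hyperplane equals 1 / ||w||, the margin in the script.
distance = abs(w @ x + b) / np.linalg.norm(w)
margin = 1.0 / np.linalg.norm(w)
assert np.isclose(distance, margin)
```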
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/altr,tse.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Altera Triple Speed Ethernet MAC driver (TSE)
maintainers:
- Maxime Chevallier <maxime.chevallier@bootlin.com>
properties:
compatible:
oneOf:
- const: altr,tse-1.0
- const: ALTR,tse-1.0
deprecated: true
- const: altr,tse-msgdma-1.0
interrupts:
minItems: 2
interrupt-names:
items:
- const: rx_irq
- const: tx_irq
rx-fifo-depth:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Depth in bytes of the RX FIFO
tx-fifo-depth:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Depth in bytes of the TX FIFO
altr,has-supplementary-unicast:
type: boolean
description:
If present, TSE supports additional unicast addresses.
altr,has-hash-multicast-filter:
type: boolean
description:
If present, TSE supports hash based multicast filter.
mdio:
$ref: mdio.yaml#
unevaluatedProperties: false
description:
Creates and registers an MDIO bus.
properties:
compatible:
const: altr,tse-mdio
required:
- compatible
required:
- compatible
- reg
- interrupts
- rx-fifo-depth
- tx-fifo-depth
allOf:
- $ref: ethernet-controller.yaml#
- if:
properties:
compatible:
contains:
enum:
- altr,tse-1.0
- ALTR,tse-1.0
then:
properties:
reg:
minItems: 4
reg-names:
items:
- const: control_port
- const: rx_csr
- const: tx_csr
- const: s1
- if:
properties:
compatible:
contains:
enum:
- altr,tse-msgdma-1.0
then:
properties:
reg:
minItems: 6
maxItems: 7
reg-names:
minItems: 6
items:
- const: control_port
- const: rx_csr
- const: rx_desc
- const: rx_resp
- const: tx_csr
- const: tx_desc
- const: pcs
unevaluatedProperties: false
examples:
- |
tse_sub_0: ethernet@c0100000 {
compatible = "altr,tse-msgdma-1.0";
reg = <0xc0100000 0x00000400>,
<0xc0101000 0x00000020>,
<0xc0102000 0x00000020>,
<0xc0103000 0x00000008>,
<0xc0104000 0x00000020>,
<0xc0105000 0x00000020>,
<0xc0106000 0x00000100>;
reg-names = "control_port", "rx_csr", "rx_desc", "rx_resp", "tx_csr", "tx_desc", "pcs";
interrupt-parent = <&intc>;
interrupts = <0 44 4>,<0 45 4>;
interrupt-names = "rx_irq","tx_irq";
rx-fifo-depth = <2048>;
tx-fifo-depth = <2048>;
max-frame-size = <1500>;
local-mac-address = [ 00 00 00 00 00 00 ];
altr,has-supplementary-unicast;
altr,has-hash-multicast-filter;
sfp = <&sfp0>;
phy-mode = "sgmii";
managed = "in-band-status";
};
- |
tse_sub_1_eth_tse_0: ethernet@1,00001000 {
compatible = "altr,tse-msgdma-1.0";
reg = <0x00001000 0x00000400>,
<0x00001460 0x00000020>,
<0x00001480 0x00000020>,
<0x000014A0 0x00000008>,
<0x00001400 0x00000020>,
<0x00001420 0x00000020>;
reg-names = "control_port", "rx_csr", "rx_desc", "rx_resp", "tx_csr", "tx_desc";
interrupt-parent = <&hps_0_arm_gic_0>;
interrupts = <0 43 4>, <0 42 4>;
interrupt-names = "rx_irq", "tx_irq";
rx-fifo-depth = <2048>;
tx-fifo-depth = <2048>;
max-frame-size = <1500>;
local-mac-address = [ 00 00 00 00 00 00 ];
phy-mode = "gmii";
altr,has-supplementary-unicast;
altr,has-hash-multicast-filter;
phy-handle = <&phy1>;
mdio {
compatible = "altr,tse-mdio";
#address-cells = <1>;
#size-cells = <0>;
phy1: ethernet-phy@1 {
reg = <0x1>;
};
};
};
... | unknown | github | https://github.com/torvalds/linux | Documentation/devicetree/bindings/net/altr,tse.yaml |
# -*- coding: utf-8 -*-
'''
tests.test_atom
'''
import multiprocessing
import threading

import pytest

import atomos.atom
import atomos.multiprocessing.atom

atoms = [(atomos.atom.Atom({}), threading.Thread),
         (atomos.multiprocessing.atom.Atom({}), multiprocessing.Process)]


@pytest.fixture(params=atoms)
def atom(request):
    return request.param


def test_atom_deref(atom):
    atom, _ = atom
    assert atom.deref() == {}


def test_atom_swap(atom):
    atom, _ = atom

    def update_state(cur_state, k, v):
        cur_state = cur_state.copy()
        cur_state[k] = v
        return cur_state

    atom.swap(update_state, 'foo', 'bar')
    assert atom.deref() == {'foo': 'bar'}


def test_atom_reset(atom):
    atom, _ = atom
    assert atom.reset('foo') == 'foo'
    assert atom.deref() == 'foo'


def test_atom_compare_and_set(atom):
    atom, _ = atom
    atom.reset('foo')
    assert atom.compare_and_set('foo', 'bar') is True
    assert atom.compare_and_set('foo', 'bar') is False


def test_concurrent_swap(atom, proc_count=10, loop_count=1000):
    atom, proc = atom
    atom.reset(0)

    def inc_for_loop_count():
        for _ in range(loop_count):
            atom.swap(lambda n: n + 1)

    processes = []
    for _ in range(proc_count):
        p = proc(target=inc_for_loop_count)
        processes.append(p)
        p.start()

    for p in processes:
        p.join()

    assert atom.deref() == proc_count * loop_count


def test_concurrent_compare_and_set(atom, proc_count=10, loop_count=1000):
    atom, proc = atom
    atom.reset(0)

    successes = multiprocessing.Value('i', 0)

    def attempt_inc_for_loop_count(successes):
        for _ in range(loop_count):
            oldval = atom.deref()
            newval = oldval + 1
            if atom.compare_and_set(oldval, newval):
                with successes.get_lock():
                    successes.value += 1

    processes = []
    for _ in range(proc_count):
        p = proc(target=attempt_inc_for_loop_count, args=(successes,))
        processes.append(p)
        p.start()

    for p in processes:
        p.join()

    assert atom.deref() == successes.value
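For readers unfamiliar with atomos, here is a minimal lock-based sketch of the atom semantics these tests exercise. `MiniAtom` is a hypothetical stand-in for illustration only, not atomos's actual implementation (which supports both the threading and multiprocessing variants imported above):

```python
import threading

class MiniAtom:
    """Illustrative sketch of deref / compare_and_set / swap semantics."""

    def __init__(self, state):
        self._state = state
        self._lock = threading.Lock()

    def deref(self):
        # Return the current state without blocking other readers.
        return self._state

    def compare_and_set(self, oldval, newval):
        # Atomically set state to newval only if it still equals oldval.
        with self._lock:
            if self._state == oldval:
                self._state = newval
                return True
            return False

    def swap(self, fn, *args):
        # Retry fn(old, *args) until the compare-and-set wins the race.
        while True:
            oldval = self.deref()
            newval = fn(oldval, *args)
            if self.compare_and_set(oldval, newval):
                return newval

a = MiniAtom(0)
a.swap(lambda n: n + 1)
print(a.deref())  # 1
```

This retry loop in `swap` is exactly what `test_concurrent_swap` relies on: lost races retry instead of clobbering another writer's update.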
% This is generated by ESQL's AbstractFunctionTestCase. Do not edit it. See ../README.md for how to regenerate it.
**Example**
```esql
FROM airports
| WHERE country == "India"
| STATS extent = ST_EXTENT_AGG(location)
```
| extent:geo_shape |
| --- |
| BBOX (70.77995480038226, 91.5882289968431, 33.9830909203738, 8.47650992218405) | | unknown | github | https://github.com/elastic/elasticsearch | docs/reference/query-languages/esql/_snippets/functions/examples/st_extent_agg.md |
# Copyright 2012 the V8 project authors. All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
CLIENT_PORT = 9991 # Port for the local client to connect to.
PEER_PORT = 9992 # Port for peers on the network to connect to.
PRESENCE_PORT = 9993 # Port for presence daemon.
STATUS_PORT = 9994 # Port for network requests not related to workpackets.

END_OF_STREAM = "end of dtest stream" # Marker for end of network requests.
SIZE_T = 4 # Number of bytes used for network request size header.

# Messages understood by the local request handler.
ADD_TRUSTED = "add trusted"
INFORM_DURATION = "inform about duration"
REQUEST_PEERS = "get peers"
UNRESPONSIVE_PEER = "unresponsive peer"
REQUEST_PUBKEY_FINGERPRINT = "get pubkey fingerprint"
REQUEST_STATUS = "get status"
UPDATE_PERF = "update performance"

# Messages understood by the status request handler.
LIST_TRUSTED_PUBKEYS = "list trusted pubkeys"
GET_SIGNED_PUBKEY = "pass on signed pubkey"
NOTIFY_NEW_TRUSTED = "new trusted peer"
TRUST_YOU_NOW = "trust you now"
DO_YOU_TRUST = "do you trust"
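`SIZE_T = 4` above says each network request carries a 4-byte size header. A minimal framing sketch of that idea, under the assumption of a big-endian (network byte order) length prefix; the actual dtest wire format may differ:

```python
import struct

SIZE_T = 4  # bytes in the length prefix, matching the constant above

def frame(payload: bytes) -> bytes:
    # Assumption: big-endian unsigned 32-bit length, then the payload.
    return struct.pack(">I", len(payload)) + payload

def unframe(data: bytes) -> bytes:
    # Read the 4-byte header, then slice out exactly that many bytes.
    (size,) = struct.unpack(">I", data[:SIZE_T])
    return data[SIZE_T:SIZE_T + size]

msg = b"get status"
framed = frame(msg)
print(len(framed))  # 14: 4-byte header + 10-byte payload
```

Round-tripping `unframe(frame(msg)) == msg` is the property any such framing must satisfy, whatever byte order the real protocol uses.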