Dataset Preview
The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed
Error code: DatasetGenerationError
Exception: ArrowInvalid
Message: JSON parse error: Missing a closing quotation mark in string. in row 1105
Traceback: Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 145, in _generate_tables
dataset = json.load(f)
File "/usr/local/lib/python3.9/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.9/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 8076)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
for _, table in generator:
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 148, in _generate_tables
raise e
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
pa_table = paj.read_json(
File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Missing a closing quotation mark in string. in row 1105
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
builder.download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
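The root cause can be pinpointed offline: pyarrow reports only a row number ("row 1105"), so a line-by-line scan with Python's `json` module locates the exact offending line. A minimal sketch, assuming the data is a JSON Lines file (`find_bad_jsonl_lines` is a hypothetical helper, not part of the `datasets` library):

```python
import json

def find_bad_jsonl_lines(path, max_errors=5):
    """Scan a JSON Lines file and report lines that fail to parse.

    Useful for errors like "Missing a closing quotation mark in string",
    which pyarrow reports with a row number but no file offset.
    """
    errors = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue  # skip blank lines
            try:
                json.loads(line)
            except json.JSONDecodeError as e:
                # record the line number, the parser's message, and a snippet
                errors.append((lineno, e.msg, line[:80]))
                if len(errors) >= max_errors:
                    break
    return errors
```

The earlier `json.decoder.JSONDecodeError: Extra data` in the traceback is expected for JSON Lines input: `json.load` tries to parse the whole file as a single document and stops after the first object, after which the builder falls back to pyarrow's line-oriented reader, which then hits the unterminated string.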
| text (string) | meta (dict) |
|---|---|
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>minic: Not compatible 👼</title>
<link rel="shortcut icon" type="image/png" href="../../../../../favicon.png" />
<link href="../../../../../bootstrap.min.css" rel="stylesheet">
<link href="../../../../../bootstrap-custom.css" rel="stylesheet">
<link href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css" rel="stylesheet">
<script src="../../../../../moment.min.js"></script>
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
<script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body>
<div class="container">
<div class="navbar navbar-default" role="navigation">
<div class="container-fluid">
<div class="navbar-header">
<a class="navbar-brand" href="../../../../.."><i class="fa fa-lg fa-flag-checkered"></i> Coq bench</a>
</div>
<div id="navbar" class="collapse navbar-collapse">
<ul class="nav navbar-nav">
<li><a href="../..">clean / released</a></li>
<li class="active"><a href="">8.8.0 / minic - 8.9.0</a></li>
</ul>
</div>
</div>
</div>
<div class="article">
<div class="row">
<div class="col-md-12">
<a href="../..">« Up</a>
<h1>
minic
<small>
8.9.0
<span class="label label-info">Not compatible 👼</span>
</small>
</h1>
<p>📅 <em><script>document.write(moment("2022-11-04 17:24:13 +0000", "YYYY-MM-DD HH:mm:ss Z").fromNow());</script> (2022-11-04 17:24:13 UTC)</em></p>
<h2>Context</h2>
<pre># Packages matching: installed
# Name # Installed # Synopsis
base-bigarray base
base-num base Num library distributed with the OCaml compiler
base-threads base
base-unix base
camlp5 7.14 Preprocessor-pretty-printer of OCaml
conf-findutils 1 Virtual package relying on findutils
conf-perl 2 Virtual package relying on perl
coq 8.8.0 Formal proof management system
num 0 The Num library for arbitrary-precision integer and rational arithmetic
ocaml 4.03.0 The OCaml compiler (virtual package)
ocaml-base-compiler 4.03.0 Official 4.03.0 release
ocaml-config 1 OCaml Switch Configuration
ocamlfind 1.9.5 A library manager for OCaml
# opam file:
opam-version: "2.0"
maintainer: "Hugo.Herbelin@inria.fr"
homepage: "https://github.com/coq-contribs/minic"
license: "LGPL 2.1"
build: [make "-j%{jobs}%"]
install: [make "install"]
remove: ["rm" "-R" "%{lib}%/coq/user-contrib/MiniC"]
depends: [
"ocaml"
"coq" {>= "8.9" & < "8.10~"}
]
tags: [
"keyword: denotational semantics"
"keyword: compilation"
"category: Computer Science/Semantics and Compilation/Semantics"
]
authors: [
"Eduardo Giménez and Emmanuel Ledinot"
]
bug-reports: "https://github.com/coq-contribs/minic/issues"
dev-repo: "git+https://github.com/coq-contribs/minic.git"
synopsis: "Semantics of a subset of the C language"
description: """
This contribution defines the denotational semantics of MiniC, a
sub-set of the C language. This sub-set is sufficiently large to
contain any program generated by lustre2C.
The denotation function describing the semantics of a MiniC program
actually provides an interpreter for the program."""
flags: light-uninstall
url {
src: "https://github.com/coq-contribs/minic/archive/v8.9.0.tar.gz"
checksum: "md5=d5bc1f0d4ac3cd9e52be007a19639399"
}
</pre>
<h2>Lint</h2>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>true</code></dd>
<dt>Return code</dt>
<dd>0</dd>
</dl>
<h2>Dry install 🏜️</h2>
<p>Dry install with the current Coq version:</p>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>opam install -y --show-action coq-minic.8.9.0 coq.8.8.0</code></dd>
<dt>Return code</dt>
<dd>5120</dd>
<dt>Output</dt>
<dd><pre>[NOTE] Package coq is already installed (current version is 8.8.0).
The following dependencies couldn't be met:
- coq-minic -> coq >= 8.9 -> ocaml >= 4.05.0
base of this switch (use `--unlock-base' to force)
Your request can't be satisfied:
- No available version of coq satisfies the constraints
No solution found, exiting
</pre></dd>
</dl>
<p>Dry install without Coq/switch base, to test if the problem was incompatibility with the current Coq/OCaml version:</p>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>opam remove -y coq; opam install -y --show-action --unlock-base coq-minic.8.9.0</code></dd>
<dt>Return code</dt>
<dd>0</dd>
</dl>
<h2>Install dependencies</h2>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>true</code></dd>
<dt>Return code</dt>
<dd>0</dd>
<dt>Duration</dt>
<dd>0 s</dd>
</dl>
<h2>Install 🚀</h2>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>true</code></dd>
<dt>Return code</dt>
<dd>0</dd>
<dt>Duration</dt>
<dd>0 s</dd>
</dl>
<h2>Installation size</h2>
<p>No files were installed.</p>
<h2>Uninstall 🧹</h2>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>true</code></dd>
<dt>Return code</dt>
<dd>0</dd>
<dt>Missing removes</dt>
<dd>
none
</dd>
<dt>Wrong removes</dt>
<dd>
none
</dd>
</dl>
</div>
</div>
</div>
<hr/>
<div class="footer">
<p class="text-center">
Sources are on <a href="https://github.com/coq-bench">GitHub</a> © Guillaume Claret 🐣
</p>
</div>
</div>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script src="../../../../../bootstrap.min.js"></script>
</body>
</html>
|
{
"content_hash": "048d142673a5ca4485e494b87e5b4fef",
"timestamp": "",
"source": "github",
"line_count": 176,
"max_line_length": 159,
"avg_line_length": 40.80681818181818,
"alnum_prop": 0.5488721804511278,
"repo_name": "coq-bench/coq-bench.github.io",
"id": "954de3c4905a7ff7ca84a653d6bc7fce33c6a536",
"size": "7208",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "clean/Linux-x86_64-4.03.0-2.0.5/released/8.8.0/minic/8.9.0.html",
"mode": "33188",
"license": "mit",
"language": []
}
|
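The `meta` dict above carries simple text statistics (`line_count`, `max_line_length`, `avg_line_length`, `alnum_prop`). The exact pipeline that produced them is not shown; a plausible reimplementation, assuming `alnum_prop` is the fraction of alphanumeric characters and that line lengths exclude newline characters (the stored values suggest the real pipeline may count newlines slightly differently):

```python
def text_stats(text: str) -> dict:
    """Compute statistics like those in the `meta` column (assumed formulas)."""
    lines = text.split("\n")
    lengths = [len(line) for line in lines]
    return {
        "line_count": len(lines),
        "max_line_length": max(lengths),
        "avg_line_length": sum(lengths) / len(lines),
        # guard against empty input to avoid division by zero
        "alnum_prop": sum(ch.isalnum() for ch in text) / max(len(text), 1),
    }
```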
title: "Your Private API Network"
order: 73.2
page_id: "adding_private_network"
updated: 2022-05-11
warning: false
contextual_links:
- type: section
name: "Prerequisites"
- type: link
name: "Working with your team"
url: "/docs/collaborating-in-postman/working-with-your-team/collaboration-overview/"
- type: section
name: "Additional Resources"
- type: subtitle
name: "Videos"
- type: link
name: "Private API Network | The Exploratory"
url: "https://youtu.be/1SINcytmKsc"
- type: subtitle
name: "Blog Posts"
- type: link
name: "Improved Internal API Discovery with the Redesigned Private API Network"
url: "https://blog.postman.com/improving-api-discovery-with-the-redesigned-private-api-network/"
- type: link
name: "Introducing the API Network Manager Role and Approval Process"
url: "https://blog.postman.com/introducing-private-api-network-manager-role-and-approval-process/"
- type: subtitle
name: "Case Studies"
- type: link
name: "ChargeHub"
url: "https://www.postman.com/case-studies/chargehub/"
search_keyword: "Private API Network, API sharing, folders, network listing, filtering apis, publish versions, github import, private apis, adding apis"
---
> **[The Private API Network is available on Postman Enterprise plans.](https://www.postman.com/pricing)**
The _Private API Network_ provides a central directory of workspaces, collections, and APIs your team uses internally. Your Postman team can access these resources and start using them right away. By using the Private API Network, you can enable developers across your organization to discover, consume, and track API development in one place.
Workspaces, collections, and APIs in the Private API Network are visible to logged-in users who are on your Postman team. Users who aren't on your team can't find or access these resources.
> As a quality control measure, your team can turn on an [optional approval process workflow](#using-the-approval-process-workflow). The approval workflow ensures that only designated users with the [API Network Manager role](/docs/collaborating-in-postman/roles-and-permissions/#team-roles) can add elements to your Private API Network. If your team doesn't use the optional approval process workflow, any user who has an Editor role for a workspace, collection, or API can [add it to the Private API Network](#adding-elements-to-the-private-api-network).
<img alt="Private API Network overview" src="https://assets.postman.com/postman-docs/v10/private-api-network-overview1-v10.jpg"/>
## Contents
* [Navigating the Private API Network](#navigating-the-private-api-network)
* [Using the approval process workflow](#using-the-approval-process-workflow)
* [Assign the API Network Manager role](#assign-the-api-network-manager-role)
* [Turn on the approval process](#turn-on-the-approval-process)
* [Editor: Requesting to add an element](#editor-requesting-to-add-an-element)
* [API Network Manager: Reviewing requests to add elements](#api-network-manager-reviewing-requests-to-add-elements)
* [Adding elements to the Private API Network](#adding-elements-to-the-private-api-network)
* [Adding workspaces](#adding-workspaces)
* [Adding collections](#adding-collections)
* [Adding APIs](#adding-apis)
* [Managing the Private API Network](#managing-the-private-api-network)
* [Organizing with folders](#organizing-with-folders)
* [Editing element listings](#editing-element-listings)
* [Removing elements from the Private API Network](#removing-elements-from-the-private-api-network)
* [Private API Network reports](#private-api-network-reports)
## Navigating the Private API Network
The Private API Network is a good place to learn about workspaces, collections, and APIs shared within your team. Under your team name, you can browse a directory of elements shared within your team.
There are two ways to access the Private API Network:
* Select **Home** from the Postman header, then select **Private API Network** in your team information on the left side.
* Select **API Network** from the Postman header, then select **Private API Network**.
In the Private API Network, you can filter elements by name using the search box. Select **Added by** to filter based on the person who added the element. Select **Type** to filter by the type of element. Select **Sort by** to sort elements based on name and date added. You can also filter folders, sub-folders, and elements based on name and date added.
<img alt="Private API List" src="https://assets.postman.com/postman-docs/v10/private-api-network-list-v10.jpg"/>
To review information about an element, select it from the list. You can view the element's description and the editors who have worked on it. For workspaces, you can view all of the collections and APIs inside them. For collections, you can view available documentation. For APIs, you can view definitions and associated collections.
To watch a workspace, collection, or API and get notified about any changes, select **Watch** in the upper right corner.
> To learn more about watch notifications, see [Watching a workspace](/docs/collaborating-in-postman/using-workspaces/managing-workspaces/#watching-a-workspace), [Watching a collection](/docs/sending-requests/intro-to-collections/#watching-a-collection), and [Watching an API](/docs/designing-and-developing-your-api/managing-apis/#watching-apis).
## Using the approval process workflow
> **[The approval process workflow is available on Postman Enterprise plans.](https://www.postman.com/pricing/)**
As a quality control measure, your team can turn on an [optional approval process workflow](#using-the-approval-process-workflow). The approval workflow ensures that only designated users with the [API Network Manager role](/docs/collaborating-in-postman/roles-and-permissions/#team-roles) can add elements to your Private API Network.
To use this approval process for your team, you need to complete two steps first:
1. [Assign the API Network Manager role to a user](#assign-the-api-network-manager-role)
1. [Turn on the approval process in Team Settings](#turn-on-the-approval-process)
Once these steps are complete, users with an Editor role for an element will need to [request to add the API](#editor-requesting-to-add-an-element) to the Private API Network. The API Network Manager will be able to [review requests](#api-network-manager-reviewing-requests-to-add-elements) to add elements to the Private API Network, [add elements](#adding-elements-to-the-private-api-network), and [create and edit folders](#organizing-with-folders).
### Assign the API Network Manager role
An [API Network Manager](/docs/collaborating-in-postman/roles-and-permissions/#team-roles) can:
* Add any element to the team's Private API Network
* Create and edit folders
* Assign this role to other team members
> You must have either [the Super Admin role or the API Network Manager role](/docs/administration/managing-your-team/managing-your-team/#managing-team-roles) to assign this role to a user.
To assign the API Network Manager role:
1. On the Team Settings page, select **Members and groups**.
1. Select the user you want to assign the API Network Manager role to.
1. In the **Roles** dropdown list next to their name, select **API Network Manager**, then select **Update Roles**. For more information about assigning team roles to individual users, see [Managing roles](/docs/administration/managing-your-team/managing-your-team/#managing-team-roles).
> Enterprise teams can also assign this role to a user group. For more information about assigning team roles to groups, see [Managing user groups](/docs/administration/managing-your-team/user-groups/).
Postman will send an email to new API Network Managers about their updated role.
### Turn on the approval process
The approval process enables an API Network Manager to control the process of adding elements to their team's Private API Network.
> You must have either [the Super Admin role or the API Network Manager role](/docs/administration/managing-your-team/managing-your-team/#managing-team-roles) to turn on the approval process.
To turn on the approval process workflow:
1. In the Postman header, select **Team** > **Team Settings**.
1. Select **Private API Network**.
1. Turn on the approval process.
<img alt="Turn on the Private API Network approval process" src="https://assets.postman.com/postman-docs/v10/private-api-network-approval-process-v10.jpg"/>
Once you have turned on the approval process, any team member with the Editor role for an element can [request to add it to the Private API Network](#editor-requesting-to-add-an-element).
### Editor: Requesting to add an element
When you enable the [optional approval process](#using-the-approval-process-workflow), users with an Editor role for an element can request to add it to the Private API Network.
When you submit a request, Postman notifies the [API Network Manager](/docs/collaborating-in-postman/roles-and-permissions/#team-roles) who will review your request and either approve or deny it. Postman will notify you of the API Network Manager's decision. If they deny your request, the notification will include a comment with their reason.
#### Requesting to add a workspace
1. Open the workspace you want to add to the Private API Network.
1. In the workspace overview, select **Request to Add to API Network**.
1. (Optional) Select a folder or create one to keep elements organized.
1. (Optional) Select **Add comment** to add a note for the API Network Manager.
1. Select **Request to Add to API Network**.
#### Requesting to add a collection
1. Open the collection you want to add to the Private API Network.
1. Select the information icon <img alt="Information icon" src="https://assets.postman.com/postman-docs/icon-information-v9-5.jpg#icon" width="16px">, then select **Request to Add to API Network**.
1. (Optional) Add a brief summary about the collection.
1. (Optional) Select **Select Environments** to make sure users have access to environment variables.
1. (Optional) Select a folder or create one to keep elements organized.
1. (Optional) Select **Add comment** to add a note for the API Network Manager.
1. Select **Request to Add to API Network**.
#### Requesting to add an API
1. Open the API you want to add to the Private API Network.
1. In the API overview, select **Request to Add to API Network**.
1. (Optional) Select a folder or create one to keep elements organized.
1. (Optional) Select **Add comment** to add a note for the API Network Manager.
1. Select **Request to Add to API Network**.
### API Network Manager: Reviewing requests to add elements
When an Editor requests to add an element to your team's Private API Network, Postman will send you an email and an in-app notification. For the list of all the pending requests, open the [Private API Network](https://go.postman.co/network/private) and select **Pending Requests**.
<img alt="View pending requests" src="https://assets.postman.com/postman-docs/v10/private-api-network-pending-requests-v10.jpg" width="250px"/>
Pending requests include the user who submitted the request, the date they submitted it, a link to view the element, and an optional note from the requesting user.
<img alt="Approve or deny a request" src="https://assets.postman.com/postman-docs/v10/private-api-network-pending-requests-1-v10.jpg"/>
To approve a request:
1. Select **Approve**.
1. (Collections)(Optional) Edit the provided summary.
1. (Optional) Select a folder or create one to keep elements organized.
1. Select **Approve Request**.
To deny a request:
1. Select **Deny**.
1. Write a note for the Editor who submitted the request with details about why you are denying their request.
1. Select **Deny request**.
## Adding elements to the Private API Network
To add an element to the Private API Network, it must be in the [API Builder](/docs/designing-and-developing-your-api/creating-an-api/) in a team or public workspace. You can't add an element to the Private API Network unless all team members have at least view access to the element. Learn more about team [roles and permissions](/docs/collaborating-in-postman/roles-and-permissions/).
* **If your team uses the [optional approval process](#using-the-approval-process-workflow)**, users with the Editor role must [submit a request](#editor-requesting-to-add-an-element) to add an element to the Private API Network and an API Network Manager must [approve the request](#api-network-manager-reviewing-requests-to-add-elements). API Network Managers can add elements directly to the Private API Network.
* **If your team doesn't use the optional approval process**, any user with an Editor role for an element can add it to the Private API Network.
### Adding workspaces
If your team uses the [optional approval process](#using-the-approval-process-workflow), only an API Network Manager can add workspaces directly to the Private API Network. Workspace Editors must [request to add a workspace](#requesting-to-add-a-workspace). If your team doesn't use the approval process, any user with Editor access for the workspace can add it to the Private API Network.
To add a workspace to the Private API Network from the API Builder:
1. Open the workspace you want to add to the Private API Network.
1. In the workspace overview, select **Add to API Network**.
1. (Optional) Select a folder or create one to keep elements organized.
1. Select **Add**.
You can also add a workspace from inside your team's Private API Network:
1. Open your [Private API Network](https://go.postman.co/network/private).
1. Select **Add to network**.
1. Search for and select the workspace you want to add.
1. (Optional) Select a folder or create one to keep elements organized.
1. Select **Add**.
When you add a workspace to the [Private API Network](https://go.postman.co/network/private), it's visible to your Postman team, but isn't visible to [partners](/docs/collaborating-in-postman/using-workspaces/partner-workspaces/).
### Adding collections
If your team uses the [optional approval process](#using-the-approval-process-workflow), only an API Network Manager can add collections directly to the Private API Network. Collection Editors must [request to add a collection](#requesting-to-add-a-collection). If your team doesn't use the approval process, any user with Editor access for the collection can add it to the Private API Network.
To add a collection to the Private API Network from the API Builder:
1. Open the collection you want to add to the Private API Network.
1. Select the information icon <img alt="Information icon" src="https://assets.postman.com/postman-docs/icon-information-v9-5.jpg#icon" width="16px">, then select **Add to API Network**.
1. (Optional) Add a brief summary about the collection.
1. (Optional) Select **Select Environments** to make sure users have access to environment variables.
1. (Optional) Select a folder or create one to keep elements organized.
1. Select **Add**.
You can also add a collection while in your team's Private API Network:
1. Open your [Private API Network](https://go.postman.co/network/private).
1. Select **Add to network**.
1. Search for and select the collection you want to add.
1. (Optional) Select a folder or create one to keep elements organized.
1. Select **Add**.
When you add a collection to the [Private API Network](https://go.postman.co/network/private), it's visible to your Postman team, but isn't visible to [partners](/docs/collaborating-in-postman/using-workspaces/partner-workspaces/).
The collections that you add to the Private API Network reflect the latest state of the collection in your team workspace. In other words, changes made to the collection in the workspace are reflected in the network in real time.
### Adding APIs
If your team uses the [optional approval process](#using-the-approval-process-workflow), only an API Network Manager can add APIs directly to the Private API Network. API Editors must [request to add an API](#requesting-to-add-an-api). If your team doesn't use the approval process, any user with Editor access for the API can add it to the Private API Network.
To add an API to the Private API Network from the API Builder:
1. Open the API you want to add to the Private API Network.
1. In the API overview, select **Publish API**.
1. Select **Request to add to Private API Network**.
1. (Optional) Select a folder or create one to keep elements organized.
1. Select **Add**.
You can also add an API while in your team's Private API Network:
1. Open your [Private API Network](https://go.postman.co/network/private).
1. Select **Add to network**.
1. Search for and select the API you want to add.
1. (Optional) Select a folder or create one to keep elements organized.
1. Select **Add**.
When you add an API to the [Private API Network](https://go.postman.co/network/private), it's visible to your Postman team, but isn't visible to [partners](/docs/collaborating-in-postman/using-workspaces/partner-workspaces/).
The APIs that you publish to the Private API Network reflect the latest state of the API in your team workspace. In other words, published changes made to the API in the workspace are reflected in the network in real time.
#### Publishing specific API versions
Publishing a version creates a static representation of your API that consumers can view on the Private API Network. If your API is connected to a Git repository, you need to publish an API version to update your team workspace with the latest changes. When you publish a version, the API's definition and collections are synced to the Postman cloud.
Learn more about [publishing an API version](/docs/designing-and-developing-your-api/versioning-an-api/api-versions/).
#### Importing APIs from a code repository
You can make all your existing APIs discoverable on the Private API Network after you import them from a code repository. Learn more about [importing an API](/docs/designing-and-developing-your-api/importing-an-api/).
## Managing the Private API Network
Once you've added elements to your Private API Network, you can manage them by organizing them in folders, editing their listings, and removing them from the Private API Network.
* **If your team uses the [optional approval process](#using-the-approval-process-workflow)**, a user with the API Network Manager role can complete these tasks.
* **If your team doesn't use the optional approval process**, any user with an Editor role can complete these tasks.
### Organizing with folders
The sidebar navigation displays the folder structure for your Private API Network. You can drag elements and sub-folders into different folders. You can also add descriptions to folders to describe elements within the folders.
<img alt="Create new folder in Private Network" src="https://assets.postman.com/postman-docs/v10/private-api-network-create-folder-in-sidebar-v10.jpg" width="300px"/>
To create a new folder from the Private API Network view:
1. Select __Create Folder__ from the sidebar.
1. Give the folder a name.
1. (Optional) Give a description.
1. Select **Save**.
To create a new folder from the Private API Network overview page:
1. Select **Create Folder** on the right.
1. Give the folder a name.
1. (Optional) Give a description.
1. Select **Save**.
You can also use **Create folder** to create sub-folders inside a folder.
Use **Search elements and folders** to search across folders, sub-folders, and elements in your Private API Network.
### Editing element listings
Select the more actions icon <img alt="More actions icon" src="https://assets.postman.com/postman-docs/icon-more-actions-v9.jpg#icon" width="16px"> next to the element you would like to edit from the network, then select **Edit element**. You can change an element's summary, location, and associated environments. Select **Edit** to save your changes.
<img alt="Edit API listing" src="https://assets.postman.com/postman-docs/v10/private-api-network-edit-element-v10.jpg" width="450px"/>
### Removing elements from the Private API Network
> If your team uses the [optional approval process](#using-the-approval-process-workflow), only an API Network Manager can remove an element from the Private API Network. If your team doesn't use the approval process, any user who has an Editor role for the element can remove it.
To remove elements from your Private API Network:
1. Select the more actions icon <img alt="More actions icon" src="https://assets.postman.com/postman-docs/icon-more-actions-v9.jpg#icon" width="16px"> next to the element you want to remove from network.
1. Select **Remove**.
<img alt="Remove element from Network" src="https://assets.postman.com/postman-docs/v10/private-api-network-remove-element-v10.jpg" width="250px"/>
After you remove the element, your team members won't have access to it through the Private API Network.
## Private API Network reports
Reports give you deeper insights into the APIs in your Private API Network, making it easier to govern your internal API landscape.
Select [**Home**](https://go.postman.co/) from the Postman header, then select **Reports** on the left side.
API reports offer the following information:
* **API name** is the name of the API published to the Private API Network
* **API created by** is the name of the person who created the API
* **API created on** is the date when the API was created
* **Number of API requests** is the total number of API requests sent over a period of time
* **Failed test runs** is the number of failed test runs over a time frame
* **Average response size** is the average response size in bytes for the requests over the reporting period
* **Average response time** is the average response time in milliseconds for requests over a time frame
* **API response codes** is a graph plotting the different response codes returned by API requests against the number of requests
To learn more about reports, see the [Reports overview](/docs/reports/reports-overview/).
|
{
"content_hash": "c7319f33c73e618b4b9ec7cc2d473f08",
"timestamp": "",
"source": "github",
"line_count": 341,
"max_line_length": 556,
"avg_line_length": 64.92961876832844,
"alnum_prop": 0.7671740210469266,
"repo_name": "postmanlabs/postman-docs",
"id": "28d240608f5d3b1b7c557b452d0f4e11d94e02fa",
"size": "22145",
"binary": false,
"copies": "1",
"ref": "refs/heads/develop",
"path": "src/pages/docs/collaborating-in-postman/adding-private-network.md",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "CSS",
"bytes": "5018"
},
{
"name": "JavaScript",
"bytes": "247453"
},
{
"name": "SCSS",
"bytes": "732"
},
{
"name": "Shell",
"bytes": "1397"
}
]
}
|
import { Component } from '@angular/core';
import { IonicPage, NavController, NavParams , ViewController } from 'ionic-angular';
import { FormBuilder, Validators, FormGroup } from '@angular/forms';
import { PublicService } from '../../../../providers/PublicService';
import { ContractService } from "../contract-service";
@IonicPage()
@Component({
selector: 'page-contract-search',
templateUrl: 'contract-search.html',
})
export class ContractSearchPage {
searchForm: FormGroup;
search: any;
DeptLs: Array<any> = [];
typeLs: Array<any>;
yearLs: Array<any>;
constructor(public navCtrl: NavController,
public navParams: NavParams,
public viewCtrl: ViewController,
private publicService: PublicService,
private contractService: ContractService,
private formBuilder: FormBuilder) {
this.search = this.navParams.get("search");
this.searchForm = this.formBuilder.group({
"bmid": ["", []], // the first element is the default value
"nf": ["", []],
"tp": ["", []],
"startdate": ["", []],
"enddate": ["", []],
});
this.contractService.getType().subscribe(resJson => {
if (resJson.Result){
let arr = [];
for (let i in resJson.Data){
arr.push({"id": i, "name": resJson.Data[i]});
}
this.typeLs = arr;
}
});
this.contractService.getYear().subscribe(resJson => {
if (resJson.Result){
let arr = [];
for (let i in resJson.Data){
arr.push({"id": i, "name": resJson.Data[i]});
}
this.yearLs = arr;
}
});
this.publicService.GetDeptLs().subscribe((resJson) => {
if (resJson.Result){ this.DeptLs = resJson.Data; }
});
if (this.search){
this.searchForm.setValue({
"bmid": this.search.bmid,
"nf": this.search.nf,
"tp": this.search.tp,
"startdate": this.search.startdate,
"enddate": this.search.enddate
});
}
}
sent(value){
this.viewCtrl.dismiss({"search": value});
}
reset(){
this.searchForm.reset();
this.searchForm.setValidators(null);
this.searchForm.updateValueAndValidity();
}
}
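The two `subscribe` callbacks above repeat the same dictionary-to-array transform. A standalone sketch of that transform (the function name and types here are illustrative, not from the original file):

```typescript
// Convert a { [id]: name } map, as returned by the contract service,
// into an array of { id, name } options suitable for a select control.
function toOptions(data: { [id: string]: string }): Array<{ id: string; name: string }> {
    const arr: Array<{ id: string; name: string }> = [];
    for (const i in data) {
        arr.push({ id: i, name: data[i] });
    }
    return arr;
}
```

With such a helper, both the `getType()` and `getYear()` handlers reduce to `this.typeLs = toOptions(resJson.Data)`.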
|
{
"content_hash": "862616be782ec02f81534790e73e345c",
"timestamp": "",
"source": "github",
"line_count": 78,
"max_line_length": 85,
"avg_line_length": 28.23076923076923,
"alnum_prop": 0.5799273387829246,
"repo_name": "cicixiaoyan/OA_WEBApp",
"id": "92ab45ca94530f4c91afcfea7e1e092d7e749ad7",
"size": "2220",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "OA_WEBApp/src/pages/hr-management/contract/contract-search/contract-search.ts",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "C",
"bytes": "2914"
},
{
"name": "C#",
"bytes": "180305"
},
{
"name": "C++",
"bytes": "469641"
},
{
"name": "CSS",
"bytes": "798021"
},
{
"name": "HTML",
"bytes": "424412"
},
{
"name": "Java",
"bytes": "544740"
},
{
"name": "JavaScript",
"bytes": "4442876"
},
{
"name": "Objective-C",
"bytes": "893818"
},
{
"name": "PHP",
"bytes": "7599"
},
{
"name": "QML",
"bytes": "6499"
},
{
"name": "Ruby",
"bytes": "962"
},
{
"name": "TypeScript",
"bytes": "452190"
}
]
}
|
// --------------------------------------------------------------------------------------------------------------------
// <copyright file="MergedApiRoot.cs" company="KlusterKite">
// All rights reserved
// </copyright>
// <summary>
// The merged api root description
// </summary>
// --------------------------------------------------------------------------------------------------------------------
namespace KlusterKite.Web.GraphQL.Publisher.Internals
{
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using global::GraphQL.Resolvers;
using global::GraphQL.Types;
using KlusterKite.API.Attributes.Authorization;
using KlusterKite.API.Client;
using KlusterKite.Security.Attributes;
using KlusterKite.Security.Client;
using KlusterKite.Web.GraphQL.Publisher.GraphTypes;
using Newtonsoft.Json.Linq;
/// <summary>
/// The merged api root description
/// </summary>
internal class MergedApiRoot : MergedObjectType
{
/// <summary>
/// Initializes a new instance of the <see cref="MergedApiRoot"/> class.
/// </summary>
/// <param name="originalTypeName">
/// The original type name.
/// </param>
public MergedApiRoot(string originalTypeName)
: base(originalTypeName)
{
}
/// <summary>
/// Gets the combined name from all providers
/// </summary>
public override string ComplexTypeName
{
get
{
if (this.Providers.Any())
{
var providersNames =
this.Providers.Select(p => EscapeName(p.Provider.Description.ApiName))
.Distinct()
.OrderBy(s => s)
.ToArray();
return string.Join("_", providersNames);
}
return EscapeName(this.OriginalTypeName);
}
}
/// <summary>
/// Gets the list of declared mutations
/// </summary>
public Dictionary<string, MergedField> Mutations { get; } = new Dictionary<string, MergedField>();
/// <summary>
/// Gets or sets the node searcher
/// </summary>
public NodeSearcher NodeSearher { get; internal set; }
/// <inheritdoc />
public override MergedObjectType Clone()
{
var clone = new MergedApiRoot(this.OriginalTypeName);
this.FillWithMyFields(clone);
foreach (var mutation in this.Mutations)
{
clone.Mutations[mutation.Key] = mutation.Value.Clone();
}
return clone;
}
/// <inheritdoc />
public override IGraphType ExtractInterface(ApiProvider provider, NodeInterface nodeInterface)
{
var extractInterface = (TypeInterface)base.ExtractInterface(provider, nodeInterface);
extractInterface.AddField(this.CreateNodeField(nodeInterface));
return extractInterface;
}
/// <inheritdoc />
public override IGraphType GenerateGraphType(NodeInterface nodeInterface, List<TypeInterface> interfaces)
{
var graphType = (VirtualGraphType)base.GenerateGraphType(nodeInterface, interfaces);
var nodeFieldType = this.CreateNodeField(nodeInterface);
graphType.AddField(nodeFieldType);
return graphType;
}
/// <summary>
/// Generate graph type for all registered mutations
/// </summary>
/// <returns>The mutations graph type</returns>
public IObjectGraphType GenerateMutationType()
{
var fields = this.Mutations.Select(f => this.ConvertApiField(f, new MutationResolver(f.Value)));
return new VirtualGraphType("Mutations", fields.ToList())
{
Description =
"The list of all detected mutations"
};
}
/// <inheritdoc />
public override string GetInterfaceName(ApiProvider provider)
{
return $"I{provider.Description.ApiName}";
}
/// <summary>
/// Resolves request value
/// </summary>
/// <param name="context">
/// The request context
/// </param>
/// <returns>
/// Resolved value
/// </returns>
public override object Resolve(ResolveFieldContext context)
{
return this.DoApiRequests(context, context.UserContext as RequestContext);
}
/// <summary>
/// Creates the node searcher field for the graph type
/// </summary>
/// <param name="nodeInterface">The node interface</param>
/// <returns>The node field</returns>
private FieldType CreateNodeField(NodeInterface nodeInterface)
{
var nodeFieldType = new FieldType();
nodeFieldType.Name = "__node";
nodeFieldType.ResolvedType = nodeInterface;
nodeFieldType.Description = "The node global searcher according to Relay specification";
nodeFieldType.Arguments =
new QueryArguments(
new QueryArgument(typeof(IdGraphType)) { Name = "id", Description = "The node global id" });
nodeFieldType.Resolver = this.NodeSearher;
return nodeFieldType;
}
/// <summary>
/// Creates an api requests to gather all data
/// </summary>
/// <param name="context">
/// The request contexts
/// </param>
/// <param name="requestContext">
/// The request Context.
/// </param>
/// <returns>
/// The request data
/// </returns>
private async Task<JObject> DoApiRequests(ResolveFieldContext context, RequestContext requestContext)
{
var taskList = new List<Task<JObject>>();
foreach (var provider in this.Providers.Select(fp => fp.Provider))
{
var request = this.GatherMultipleApiRequest(provider, context.FieldAst, context).ToList();
if (request.Count > 0)
{
taskList.Add(provider.GetData(request, requestContext));
}
}
JObject data;
if (taskList.Count == 0)
{
data = new JObject();
}
else
{
var responses = await Task.WhenAll(taskList);
var options = new JsonMergeSettings
{
MergeArrayHandling = MergeArrayHandling.Merge,
MergeNullValueHandling = MergeNullValueHandling.Ignore
};
var response = responses.Aggregate(
new JObject(),
(seed, next) =>
{
seed.Merge(next, options);
return seed;
});
data = this.ResolveData(context, response);
}
if (data.Property(GlobalIdPropertyName) != null)
{
data.Property(GlobalIdPropertyName).Value = new JArray();
}
else
{
data.Add(GlobalIdPropertyName, new JArray());
}
return data;
}
/// <summary>
/// Resolves mutation requests
/// </summary>
private class MutationResolver : IFieldResolver
{
/// <summary>
/// The mutation description
/// </summary>
private readonly MergedField mergedField;
/// <summary>
/// Mutation API provider
/// </summary>
private readonly ApiProvider provider;
/// <summary>
/// Initializes a new instance of the <see cref="MutationResolver"/> class.
/// </summary>
/// <param name="mergedField">
/// The merged field.
/// </param>
public MutationResolver(MergedField mergedField)
{
this.mergedField = mergedField;
this.provider = this.mergedField.Providers.First();
}
/// <summary>
/// Resolves mutation value (sends request to API)
/// </summary>
/// <param name="context">
/// The context.
/// </param>
/// <returns>
/// The <see cref="object"/>.
/// </returns>
public object Resolve(ResolveFieldContext context)
{
var connectionMutationResultType = this.mergedField.Type as MergedConnectionMutationResultType;
if (connectionMutationResultType != null)
{
return
this.DoConnectionMutationApiRequests(
context,
context.UserContext as RequestContext,
connectionMutationResultType).Result;
}
var untypedMutationResultType = this.mergedField.Type as MergedUntypedMutationResult;
if (untypedMutationResultType != null)
{
return
this.DoUntypedMutationApiRequests(
context,
context.UserContext as RequestContext,
untypedMutationResultType).Result;
}
return this.DoApiRequests(context, context.UserContext as RequestContext);
}
/// <summary>
/// Creates an api requests to gather all data
/// </summary>
/// <param name="context">
/// The request contexts
/// </param>
/// <param name="requestContext">
/// The request Context.
/// </param>
/// <returns>
/// The request data
/// </returns>
private Task<JObject> DoApiRequests(ResolveFieldContext context, RequestContext requestContext)
{
var request = new MutationApiRequest
{
Arguments = context.FieldAst.Arguments.ToJson(context),
FieldName = this.mergedField.FieldName,
Fields =
this.mergedField.Type.GatherSingleApiRequest(
context.FieldAst,
context).ToList()
};
var apiRequests = this.provider.GetData(new List<ApiRequest> { request }, requestContext);
return apiRequests;
}
/// <summary>
/// Creates an api requests to gather all data
/// </summary>
/// <param name="context">
/// The request contexts
/// </param>
/// <param name="requestContext">
/// The request Context.
/// </param>
/// <param name="responseType">response type</param>
/// <returns>
/// The request data
/// </returns>
private async Task<JObject> DoConnectionMutationApiRequests(
ResolveFieldContext context,
RequestContext requestContext,
MergedConnectionMutationResultType responseType)
{
var arguments = context.FieldAst.Arguments.ToJson(context).Property("input")?.Value as JObject;
var actionName = this.mergedField.FieldName?.Split('.').LastOrDefault();
EnConnectionAction action;
var originalApiField = this.mergedField.OriginalFields.Values.FirstOrDefault();
if (Enum.TryParse(actionName, true, out action) && originalApiField != null
&& !originalApiField.CheckAuthorization(requestContext, action))
{
var severity = originalApiField.LogAccessRules.Any()
? originalApiField.LogAccessRules.Max(l => l.Severity)
: EnSeverity.Trivial;
SecurityLog.CreateRecord(
EnSecurityLogType.OperationDenied,
severity,
context.UserContext as RequestContext,
"Unauthorized call to {ApiPath}",
context.FieldAst.Name);
var emptyResponse = new JObject
{
{
"clientMutationId",
arguments?.Property("clientMutationId")?.ToObject<string>()
}
};
return emptyResponse;
}
var edgeType = responseType.EdgeType;
var nodeType = responseType.EdgeType.ObjectType;
var requestedFields = new List<ApiRequest>();
var idSubRequestRequest = new List<ApiRequest>
{
new ApiRequest
{
FieldName =
nodeType.KeyField.FieldName,
Alias = "__id"
}
};
var idRequestRequest = new ApiRequest
{
Alias = "__idRequest",
FieldName = "result",
Fields = idSubRequestRequest
};
requestedFields.Add(idRequestRequest);
var topFields =
GetRequestedFields(context.FieldAst.SelectionSet, context, this.mergedField.Type).ToList();
var nodeRequests = topFields.Where(f => f.Name == "node" || f.Name == "edge").ToList();
foreach (var nodeRequest in nodeRequests)
{
var nodeAlias = nodeRequest.Alias ?? nodeRequest.Name;
switch (nodeRequest.Name)
{
case "node":
var nodeFields = nodeType.GatherSingleApiRequest(nodeRequest, context).ToList();
nodeFields.Add(new ApiRequest { Alias = "__id", FieldName = nodeType.KeyField.FieldName });
requestedFields.Add(
new ApiRequest { Alias = nodeAlias, FieldName = "result", Fields = nodeFields });
break;
case "edge":
var edgeFields = new List<ApiRequest>();
foreach (var edgeNodeRequests in
GetRequestedFields(nodeRequest.SelectionSet, context, edgeType)
.Where(f => f.Name == "node"))
{
edgeFields.AddRange(
nodeType.GatherSingleApiRequest(edgeNodeRequests, context).Select(
f =>
{
f.Alias =
$"{edgeNodeRequests.Alias ?? edgeNodeRequests.Name}_{f.Alias ?? f.FieldName}";
return f;
}));
}
edgeFields.Add(new ApiRequest { Alias = "__id", FieldName = nodeType.KeyField.FieldName });
requestedFields.Add(
new ApiRequest { Alias = nodeAlias, FieldName = "result", Fields = edgeFields });
break;
}
}
if (responseType.ErrorType != null)
{
var errorsRequest = topFields.Where(f => f.Name == "errors");
foreach (var field in errorsRequest)
{
requestedFields.Add(
new ApiRequest
{
FieldName = "errors",
Alias = field.Alias,
Fields =
responseType.ErrorType.GatherSingleApiRequest(field, context)
.ToList()
});
}
}
var request = new MutationApiRequest
{
Arguments = arguments,
FieldName = this.mergedField.FieldName,
Fields = requestedFields
};
var data = await this.provider.GetData(new List<ApiRequest> { request }, requestContext);
if (data != null)
{
var mutation = (ApiMutation)this.mergedField.OriginalFields[this.provider.Description.ApiName];
var treePath = mutation.Path.Take(mutation.Path.Count - 1).ToList();
var parentGlobalId = new JArray(treePath.Select(r => new JObject { { "f", r.FieldName } }));
data.Add(GlobalIdPropertyName, parentGlobalId);
var elementRequest = mutation.Path.LastOrDefault();
if (elementRequest != null)
{
var localRequest = new JObject { { "f", elementRequest.FieldName } };
data.Add(RequestPropertyName, localRequest);
}
}
data?.Add("clientMutationId", arguments?.Property("clientMutationId")?.ToObject<string>());
return data;
}
/// <summary>
/// Creates an api requests to gather all data
/// </summary>
/// <param name="context">
/// The request contexts
/// </param>
/// <param name="requestContext">
/// The request Context.
/// </param>
/// <param name="responseType">
/// The response type
/// </param>
/// <returns>
/// The request data
/// </returns>
private async Task<JObject> DoUntypedMutationApiRequests(
ResolveFieldContext context,
RequestContext requestContext,
MergedUntypedMutationResult responseType)
{
var topFields =
GetRequestedFields(context.FieldAst.SelectionSet, context, this.mergedField.Type).ToList();
var requestedFields = new List<ApiRequest>();
foreach (var topField in topFields.Where(f => f.Name == "result"))
{
requestedFields.AddRange(responseType.OriginalReturnType.GatherSingleApiRequest(topField, context));
}
var arguments = context.FieldAst.Arguments.ToJson(context).Property("input")?.Value as JObject;
var originalApiField = this.mergedField.OriginalFields.Values.FirstOrDefault();
if (originalApiField != null
&& !originalApiField.CheckAuthorization(requestContext, EnConnectionAction.Query))
{
var severity = originalApiField.LogAccessRules.Any()
? originalApiField.LogAccessRules.Max(l => l.Severity)
: EnSeverity.Trivial;
SecurityLog.CreateRecord(
EnSecurityLogType.OperationDenied,
severity,
context.UserContext as RequestContext,
"Unauthorized call to {ApiPath}",
context.FieldAst.Name);
var emptyResponse = new JObject
{
{
"clientMutationId",
arguments?.Property("clientMutationId")?.ToObject<string>()
}
};
return emptyResponse;
}
var request = new MutationApiRequest
{
Arguments = arguments,
FieldName = this.mergedField.FieldName,
Fields = requestedFields
};
var data = await this.provider.GetData(new List<ApiRequest> { request }, requestContext);
data?.Add("clientMutationId", arguments?.Property("clientMutationId")?.ToObject<string>());
return data;
}
}
}
}
|
{
"content_hash": "fa956760708850ae92900166139aba61",
"timestamp": "",
"source": "github",
"line_count": 530,
"max_line_length": 130,
"avg_line_length": 41.83207547169811,
"alnum_prop": 0.45861711244418385,
"repo_name": "KlusterKite/KlusterKite",
"id": "d5aafa79f364e5bde26b3e4a82234731b4a3143e",
"size": "22173",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "KlusterKite.Web/KlusterKite.Web.GraphQL.Publisher/Internals/MergedApiRoot.cs",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Batchfile",
"bytes": "1314"
},
{
"name": "C#",
"bytes": "2352078"
},
{
"name": "CSS",
"bytes": "6435"
},
{
"name": "F#",
"bytes": "30981"
},
{
"name": "HTML",
"bytes": "3955"
},
{
"name": "JavaScript",
"bytes": "530537"
},
{
"name": "PHP",
"bytes": "6921"
},
{
"name": "Shell",
"bytes": "912"
}
]
}
|
module Raml
# @private
module Merge
def merge(other)
other.scalar_properties.each do |prop|
prop_var = "@#{prop}"
prop_val = other.instance_variable_get prop_var
instance_variable_set prop_var, prop_val unless prop_val.nil?
end
end
def merge_properties(other, type)
match, no_match = other.send(type).values.partition { |param| self.send(type).has_key? param.name }
match.each { |param| self.send(type)[param.name].merge param }
# If it's an optional property with no match in self, don't merge it.
no_match.reject! { |node| node.optional }
no_match.map! { |node| node.clone }
no_match.each { |node| node.parent = self }
@children += no_match
end
end
end
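`merge_properties` above partitions the other node's properties into those already present (merged in place) and new ones (adopted as children, with unmatched optionals dropped). The same partition-and-merge idea, sketched in TypeScript with minimal assumed types (`Prop`, `mergeProperties` are illustrative names, not from the original library):

```typescript
interface Prop { name: string; optional: boolean; merge(other: Prop): void; }

// Partition `incoming` by whether `existing` already has a property with the
// same name: matches are merged in place, unmatched optional properties are
// discarded, and the rest are adopted as new children.
function mergeProperties(existing: Map<string, Prop>, incoming: Prop[]): Prop[] {
    const adopted: Prop[] = [];
    for (const prop of incoming) {
        const current = existing.get(prop.name);
        if (current) {
            current.merge(prop);           // matched: merge into the existing node
        } else if (!prop.optional) {
            existing.set(prop.name, prop); // new required property: adopt it
            adopted.push(prop);
        }                                  // unmatched optional: drop silently
    }
    return adopted;
}
```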
|
{
"content_hash": "e8565a9cf0bcc0bdae8600748f0b3593",
"timestamp": "",
"source": "github",
"line_count": 24,
"max_line_length": 104,
"avg_line_length": 32.583333333333336,
"alnum_prop": 0.6163682864450127,
"repo_name": "quri/raml_ruby",
"id": "ccf1b34f5f8c1ae3105c302a34121cf19a699d92",
"size": "782",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "lib/raml/mixin/merge.rb",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "RAML",
"bytes": "3000254"
},
{
"name": "Ruby",
"bytes": "190555"
}
]
}
|
using System;
using System.Diagnostics;
using System.Reflection;
using System.Threading;
using System.Globalization;
using System.IO;
using NUnit.Framework.Constraints;
using NUnit.Framework.Internal.Execution;
using NUnit.Framework.Interfaces;
#if !NETSTANDARD1_3 && !NETSTANDARD1_6
using System.Security.Principal;
#endif
#if ASYNC
using System.Threading.Tasks;
#endif
namespace NUnit.Framework.Internal
{
/// <summary>
/// Summary description for TestExecutionContextTests.
/// </summary>
[TestFixture][Property("Question", "Why?")]
public class TestExecutionContextTests
{
private TestExecutionContext _fixtureContext;
private TestExecutionContext _setupContext;
private ResultState _fixtureResult;
#if !NETSTANDARD1_3 && !NETSTANDARD1_6
CultureInfo originalCulture;
CultureInfo originalUICulture;
string originalDirectory;
IPrincipal originalPrincipal;
#endif
DateTime _fixtureCreateTime = DateTime.UtcNow;
long _fixtureCreateTicks = Stopwatch.GetTimestamp();
[OneTimeSetUp]
public void OneTimeSetUp()
{
_fixtureContext = TestExecutionContext.CurrentContext;
_fixtureResult = _fixtureContext.CurrentResult.ResultState;
}
[OneTimeTearDown]
public void OneTimeTearDown()
{
// TODO: We put some tests in one time teardown to verify that
// the context is still valid. It would be better if these tests
// were placed in a second-level test, invoked from this test class.
TestExecutionContext ec = TestExecutionContext.CurrentContext;
Assert.That(ec.CurrentTest.Name, Is.EqualTo("TestExecutionContextTests"));
Assert.That(ec.CurrentTest.FullName,
Is.EqualTo("NUnit.Framework.Internal.TestExecutionContextTests"));
Assert.That(_fixtureContext.CurrentTest.Id, Is.Not.Null.And.Not.Empty);
Assert.That(_fixtureContext.CurrentTest.Properties.Get("Question"), Is.EqualTo("Why?"));
}
[SetUp]
public void Initialize()
{
_setupContext = TestExecutionContext.CurrentContext;
#if !NETSTANDARD1_3 && !NETSTANDARD1_6
originalCulture = CultureInfo.CurrentCulture;
originalUICulture = CultureInfo.CurrentUICulture;
originalDirectory = Environment.CurrentDirectory;
originalPrincipal = Thread.CurrentPrincipal;
#endif
}
[TearDown]
public void Cleanup()
{
#if !NETSTANDARD1_3 && !NETSTANDARD1_6
Thread.CurrentThread.CurrentCulture = originalCulture;
Thread.CurrentThread.CurrentUICulture = originalUICulture;
#endif
#if !NETSTANDARD1_3 && !NETSTANDARD1_6
Environment.CurrentDirectory = originalDirectory;
Thread.CurrentPrincipal = originalPrincipal;
#endif
Assert.That(
TestExecutionContext.CurrentContext.CurrentTest.FullName,
Is.EqualTo(_setupContext.CurrentTest.FullName),
"Context at TearDown failed to match that saved from SetUp");
Assert.That(
TestExecutionContext.CurrentContext.CurrentResult.Name,
Is.EqualTo(_setupContext.CurrentResult.Name),
"Cannot access CurrentResult in TearDown");
}
#region CurrentContext
#if ASYNC
[Test]
public async Task CurrentContextFlowsWithAsyncExecution()
{
var context = TestExecutionContext.CurrentContext;
await YieldAsync();
Assert.AreSame(context, TestExecutionContext.CurrentContext);
}
[Test]
public async Task CurrentContextFlowsWithParallelAsyncExecution()
{
var expected = TestExecutionContext.CurrentContext;
var parallelResult = await WhenAllAsync(YieldAndReturnContext(), YieldAndReturnContext());
Assert.AreSame(expected, TestExecutionContext.CurrentContext);
Assert.AreSame(expected, parallelResult[0]);
Assert.AreSame(expected, parallelResult[1]);
}
#endif
#if !NETSTANDARD1_3 && !NETSTANDARD1_6
[Test]
public void CurrentContextFlowsToUserCreatedThread()
{
TestExecutionContext threadContext = null;
Thread thread = new Thread(() =>
{
threadContext = TestExecutionContext.CurrentContext;
});
thread.Start();
thread.Join();
Assert.That(threadContext, Is.Not.Null.And.SameAs(TestExecutionContext.CurrentContext));
}
#endif
#endregion
#region CurrentTest
[Test]
public void FixtureSetUpCanAccessFixtureName()
{
Assert.That(_fixtureContext.CurrentTest.Name, Is.EqualTo("TestExecutionContextTests"));
}
[Test]
public void FixtureSetUpCanAccessFixtureFullName()
{
Assert.That(_fixtureContext.CurrentTest.FullName,
Is.EqualTo("NUnit.Framework.Internal.TestExecutionContextTests"));
}
[Test]
public void FixtureSetUpHasNullMethodName()
{
Assert.That(_fixtureContext.CurrentTest.MethodName, Is.Null);
}
[Test]
public void FixtureSetUpCanAccessFixtureId()
{
Assert.That(_fixtureContext.CurrentTest.Id, Is.Not.Null.And.Not.Empty);
}
[Test]
public void FixtureSetUpCanAccessFixtureProperties()
{
Assert.That(_fixtureContext.CurrentTest.Properties.Get("Question"), Is.EqualTo("Why?"));
}
[Test]
public void SetUpCanAccessTestName()
{
Assert.That(_setupContext.CurrentTest.Name, Is.EqualTo("SetUpCanAccessTestName"));
}
[Test]
public void SetUpCanAccessTestFullName()
{
Assert.That(_setupContext.CurrentTest.FullName,
Is.EqualTo("NUnit.Framework.Internal.TestExecutionContextTests.SetUpCanAccessTestFullName"));
}
[Test]
public void SetUpCanAccessTestMethodName()
{
Assert.That(_setupContext.CurrentTest.MethodName,
Is.EqualTo("SetUpCanAccessTestMethodName"));
}
[Test]
public void SetUpCanAccessTestId()
{
Assert.That(_setupContext.CurrentTest.Id, Is.Not.Null.And.Not.Empty);
}
[Test]
[Property("Answer", 42)]
public void SetUpCanAccessTestProperties()
{
Assert.That(_setupContext.CurrentTest.Properties.Get("Answer"), Is.EqualTo(42));
}
[Test]
public void TestCanAccessItsOwnName()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.Name, Is.EqualTo("TestCanAccessItsOwnName"));
}
[Test]
public void TestCanAccessItsOwnFullName()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.FullName,
Is.EqualTo("NUnit.Framework.Internal.TestExecutionContextTests.TestCanAccessItsOwnFullName"));
}
[Test]
public void TestCanAccessItsOwnMethodName()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.MethodName,
Is.EqualTo("TestCanAccessItsOwnMethodName"));
}
[Test]
public void TestCanAccessItsOwnId()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.Id, Is.Not.Null.And.Not.Empty);
}
[Test]
[Property("Answer", 42)]
public void TestCanAccessItsOwnProperties()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.Properties.Get("Answer"), Is.EqualTo(42));
}
[TestCase(123, "abc")]
public void TestCanAccessItsOwnArguments(int i, string s)
{
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.Arguments, Is.EqualTo(new object[] {123, "abc"}));
}
#if ASYNC
[Test]
public async Task AsyncTestCanAccessItsOwnName()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.Name, Is.EqualTo("AsyncTestCanAccessItsOwnName"));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.Name, Is.EqualTo("AsyncTestCanAccessItsOwnName"));
}
[Test]
public async Task AsyncTestCanAccessItsOwnFullName()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.FullName,
Is.EqualTo("NUnit.Framework.Internal.TestExecutionContextTests.AsyncTestCanAccessItsOwnFullName"));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.FullName,
Is.EqualTo("NUnit.Framework.Internal.TestExecutionContextTests.AsyncTestCanAccessItsOwnFullName"));
}
[Test]
public async Task AsyncTestCanAccessItsOwnMethodName()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.MethodName,
Is.EqualTo("AsyncTestCanAccessItsOwnMethodName"));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.MethodName,
Is.EqualTo("AsyncTestCanAccessItsOwnMethodName"));
}
[Test]
public async Task AsyncTestCanAccessItsOwnId()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.Id, Is.Not.Null.And.Not.Empty);
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.Id, Is.Not.Null.And.Not.Empty);
}
[Test]
[Property("Answer", 42)]
public async Task AsyncTestCanAccessItsOwnProperties()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.Properties.Get("Answer"), Is.EqualTo(42));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.Properties.Get("Answer"), Is.EqualTo(42));
}
[TestCase(123, "abc")]
public async Task AsyncTestCanAccessItsOwnArguments(int i, string s)
{
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.Arguments, Is.EqualTo(new object[] {123, "abc"}));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentTest.Arguments, Is.EqualTo(new object[] {123, "abc"}));
}
#endif
#if !NETSTANDARD1_3 && !NETSTANDARD1_6
[Test]
public void TestHasWorkerWhenParallel()
{
var worker = TestExecutionContext.CurrentContext.TestWorker;
var isRunningUnderTestWorker = TestExecutionContext.CurrentContext.Dispatcher is ParallelWorkItemDispatcher;
Assert.That(worker != null || !isRunningUnderTestWorker);
}
#endif
#endregion
#region CurrentResult
[Test]
public void CanAccessResultName()
{
Assert.That(_fixtureContext.CurrentResult.Name, Is.EqualTo("TestExecutionContextTests"));
Assert.That(_setupContext.CurrentResult.Name, Is.EqualTo("CanAccessResultName"));
Assert.That(TestExecutionContext.CurrentContext.CurrentResult.Name, Is.EqualTo("CanAccessResultName"));
}
[Test]
public void CanAccessResultFullName()
{
Assert.That(_fixtureContext.CurrentResult.FullName, Is.EqualTo("NUnit.Framework.Internal.TestExecutionContextTests"));
Assert.That(_setupContext.CurrentResult.FullName,
Is.EqualTo("NUnit.Framework.Internal.TestExecutionContextTests.CanAccessResultFullName"));
Assert.That(TestExecutionContext.CurrentContext.CurrentResult.FullName,
Is.EqualTo("NUnit.Framework.Internal.TestExecutionContextTests.CanAccessResultFullName"));
}
[Test]
public void CanAccessResultTest()
{
Assert.That(_fixtureContext.CurrentResult.Test, Is.SameAs(_fixtureContext.CurrentTest));
Assert.That(_setupContext.CurrentResult.Test, Is.SameAs(_setupContext.CurrentTest));
Assert.That(TestExecutionContext.CurrentContext.CurrentResult.Test, Is.SameAs(TestExecutionContext.CurrentContext.CurrentTest));
}
[Test]
public void CanAccessResultState()
{
// This is copied in setup because it can change if any test fails
Assert.That(_fixtureResult, Is.EqualTo(ResultState.Success));
Assert.That(_setupContext.CurrentResult.ResultState, Is.EqualTo(ResultState.Inconclusive));
Assert.That(TestExecutionContext.CurrentContext.CurrentResult.ResultState, Is.EqualTo(ResultState.Inconclusive));
}
#if ASYNC
[Test]
public async Task CanAccessResultName_Async()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentResult.Name, Is.EqualTo("CanAccessResultName_Async"));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentResult.Name, Is.EqualTo("CanAccessResultName_Async"));
}
[Test]
public async Task CanAccessResultFullName_Async()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentResult.FullName,
Is.EqualTo("NUnit.Framework.Internal.TestExecutionContextTests.CanAccessResultFullName_Async"));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentResult.FullName,
Is.EqualTo("NUnit.Framework.Internal.TestExecutionContextTests.CanAccessResultFullName_Async"));
}
[Test]
public async Task CanAccessResultTest_Async()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentResult.Test,
Is.SameAs(TestExecutionContext.CurrentContext.CurrentTest));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentResult.Test,
Is.SameAs(TestExecutionContext.CurrentContext.CurrentTest));
}
[Test]
public async Task CanAccessResultState_Async()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentResult.ResultState, Is.EqualTo(ResultState.Inconclusive));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentResult.ResultState, Is.EqualTo(ResultState.Inconclusive));
}
#endif
#endregion
#region StartTime
[Test]
public void CanAccessStartTime()
{
Assert.That(_fixtureContext.StartTime, Is.GreaterThan(DateTime.MinValue).And.LessThanOrEqualTo(_fixtureCreateTime));
Assert.That(_setupContext.StartTime, Is.GreaterThanOrEqualTo(_fixtureContext.StartTime));
Assert.That(TestExecutionContext.CurrentContext.StartTime, Is.GreaterThanOrEqualTo(_setupContext.StartTime));
}
#if ASYNC
[Test]
public async Task CanAccessStartTime_Async()
{
var startTime = TestExecutionContext.CurrentContext.StartTime;
Assert.That(startTime, Is.GreaterThanOrEqualTo(_setupContext.StartTime));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.StartTime, Is.EqualTo(startTime));
}
#endif
#endregion
#region StartTicks
[Test]
public void CanAccessStartTicks()
{
Assert.That(_fixtureContext.StartTicks, Is.LessThanOrEqualTo(_fixtureCreateTicks));
Assert.That(_setupContext.StartTicks, Is.GreaterThanOrEqualTo(_fixtureContext.StartTicks));
Assert.That(TestExecutionContext.CurrentContext.StartTicks, Is.GreaterThanOrEqualTo(_setupContext.StartTicks));
}
#if ASYNC
[Test]
public async Task AsyncTestCanAccessStartTicks()
{
var startTicks = TestExecutionContext.CurrentContext.StartTicks;
Assert.That(startTicks, Is.GreaterThanOrEqualTo(_setupContext.StartTicks));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.StartTicks, Is.EqualTo(startTicks));
}
#endif
#endregion
#region OutWriter
[Test]
public void CanAccessOutWriter()
{
Assert.That(_fixtureContext.OutWriter, Is.Not.Null);
Assert.That(_setupContext.OutWriter, Is.Not.Null);
Assert.That(TestExecutionContext.CurrentContext.OutWriter, Is.SameAs(_setupContext.OutWriter));
}
#if ASYNC
[Test]
public async Task AsyncTestCanAccessOutWriter()
{
var outWriter = TestExecutionContext.CurrentContext.OutWriter;
Assert.That(outWriter, Is.SameAs(_setupContext.OutWriter));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.OutWriter, Is.SameAs(outWriter));
}
#endif
#endregion
#region TestObject
[Test]
public void CanAccessTestObject()
{
Assert.That(_fixtureContext.TestObject, Is.Not.Null.And.TypeOf(GetType()));
Assert.That(_setupContext.TestObject, Is.SameAs(_fixtureContext.TestObject));
Assert.That(TestExecutionContext.CurrentContext.TestObject, Is.SameAs(_setupContext.TestObject));
}
#if ASYNC
[Test]
public async Task CanAccessTestObject_Async()
{
var testObject = TestExecutionContext.CurrentContext.TestObject;
Assert.That(testObject, Is.SameAs(_setupContext.TestObject));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.TestObject, Is.SameAs(testObject));
}
#endif
#endregion
#region StopOnError
[Test]
public void CanAccessStopOnError()
{
Assert.That(_setupContext.StopOnError, Is.EqualTo(_fixtureContext.StopOnError));
Assert.That(TestExecutionContext.CurrentContext.StopOnError, Is.EqualTo(_setupContext.StopOnError));
}
#if ASYNC
[Test]
public async Task CanAccessStopOnError_Async()
{
var stop = TestExecutionContext.CurrentContext.StopOnError;
Assert.That(stop, Is.EqualTo(_setupContext.StopOnError));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.StopOnError, Is.EqualTo(stop));
}
#endif
#endregion
#region Listener
[Test]
public void CanAccessListener()
{
Assert.That(_fixtureContext.Listener, Is.Not.Null);
Assert.That(_setupContext.Listener, Is.SameAs(_fixtureContext.Listener));
Assert.That(TestExecutionContext.CurrentContext.Listener, Is.SameAs(_setupContext.Listener));
}
#if ASYNC
[Test]
public async Task CanAccessListener_Async()
{
var listener = TestExecutionContext.CurrentContext.Listener;
Assert.That(listener, Is.SameAs(_setupContext.Listener));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.Listener, Is.SameAs(listener));
}
#endif
#endregion
#region Dispatcher
[Test]
public void CanAccessDispatcher()
{
Assert.That(_fixtureContext.Dispatcher, Is.Not.Null);
Assert.That(_setupContext.Dispatcher, Is.SameAs(_fixtureContext.Dispatcher));
Assert.That(TestExecutionContext.CurrentContext.Dispatcher, Is.SameAs(_setupContext.Dispatcher));
}
#if ASYNC
[Test]
public async Task CanAccessDispatcher_Async()
{
var dispatcher = TestExecutionContext.CurrentContext.Dispatcher;
Assert.That(dispatcher, Is.SameAs(_setupContext.Dispatcher));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.Dispatcher, Is.SameAs(dispatcher));
}
#endif
#endregion
#region ParallelScope
[Test]
public void CanAccessParallelScope()
{
var scope = _fixtureContext.ParallelScope;
Assert.That(_setupContext.ParallelScope, Is.EqualTo(scope));
Assert.That(TestExecutionContext.CurrentContext.ParallelScope, Is.EqualTo(scope));
}
#if ASYNC
[Test]
public async Task CanAccessParallelScope_Async()
{
var scope = TestExecutionContext.CurrentContext.ParallelScope;
Assert.That(scope, Is.EqualTo(_setupContext.ParallelScope));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.ParallelScope, Is.EqualTo(scope));
}
#endif
#endregion
#region TestWorker
#if PARALLEL
[Test]
public void CanAccessTestWorker()
{
if (TestExecutionContext.CurrentContext.Dispatcher is ParallelWorkItemDispatcher)
{
Assert.That(_fixtureContext.TestWorker, Is.Not.Null);
Assert.That(_setupContext.TestWorker, Is.SameAs(_fixtureContext.TestWorker));
Assert.That(TestExecutionContext.CurrentContext.TestWorker, Is.SameAs(_setupContext.TestWorker));
}
}
#if ASYNC
[Test]
public async Task CanAccessTestWorker_Async()
{
var worker = TestExecutionContext.CurrentContext.TestWorker;
Assert.That(worker, Is.SameAs(_setupContext.TestWorker));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.TestWorker, Is.SameAs(worker));
}
#endif
#endif
#endregion
#region RandomGenerator
[Test]
public void CanAccessRandomGenerator()
{
Assert.That(_fixtureContext.RandomGenerator, Is.Not.Null);
Assert.That(_setupContext.RandomGenerator, Is.Not.Null);
Assert.That(TestExecutionContext.CurrentContext.RandomGenerator, Is.SameAs(_setupContext.RandomGenerator));
}
#if ASYNC
[Test]
public async Task CanAccessRandomGenerator_Async()
{
var random = TestExecutionContext.CurrentContext.RandomGenerator;
Assert.That(random, Is.SameAs(_setupContext.RandomGenerator));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.RandomGenerator, Is.SameAs(random));
}
#endif
#endregion
#region AssertCount
[Test]
public void CanAccessAssertCount()
{
Assert.That(_fixtureContext.AssertCount, Is.EqualTo(0));
Assert.That(_setupContext.AssertCount, Is.EqualTo(1));
Assert.That(TestExecutionContext.CurrentContext.AssertCount, Is.EqualTo(2));
Assert.That(2 + 2, Is.EqualTo(4));
Assert.That(TestExecutionContext.CurrentContext.AssertCount, Is.EqualTo(4));
}
#if ASYNC
[Test]
public async Task CanAccessAssertCount_Async()
{
Assert.That(2 + 2, Is.EqualTo(4));
Assert.That(TestExecutionContext.CurrentContext.AssertCount, Is.EqualTo(1));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.AssertCount, Is.EqualTo(2));
Assert.That(TestExecutionContext.CurrentContext.AssertCount, Is.EqualTo(3));
}
#endif
#endregion
#region MultipleAssertLevel
[Test]
public void CanAccessMultipleAssertLevel()
{
Assert.That(_fixtureContext.MultipleAssertLevel, Is.EqualTo(0));
Assert.That(_setupContext.MultipleAssertLevel, Is.EqualTo(0));
Assert.That(TestExecutionContext.CurrentContext.MultipleAssertLevel, Is.EqualTo(0));
Assert.Multiple(() =>
{
Assert.That(TestExecutionContext.CurrentContext.MultipleAssertLevel, Is.EqualTo(1));
});
}
#if ASYNC
[Test]
public async Task CanAccessMultipleAssertLevel_Async()
{
Assert.That(TestExecutionContext.CurrentContext.MultipleAssertLevel, Is.EqualTo(0));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.MultipleAssertLevel, Is.EqualTo(0));
Assert.Multiple(() =>
{
Assert.That(TestExecutionContext.CurrentContext.MultipleAssertLevel, Is.EqualTo(1));
});
}
#endif
#endregion
#region TestCaseTimeout
[Test]
public void CanAccessTestCaseTimeout()
{
var timeout = _fixtureContext.TestCaseTimeout;
Assert.That(_setupContext.TestCaseTimeout, Is.EqualTo(timeout));
Assert.That(TestExecutionContext.CurrentContext.TestCaseTimeout, Is.EqualTo(timeout));
}
#if ASYNC
[Test]
public async Task CanAccessTestCaseTimeout_Async()
{
var timeout = TestExecutionContext.CurrentContext.TestCaseTimeout;
Assert.That(timeout, Is.EqualTo(_setupContext.TestCaseTimeout));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.TestCaseTimeout, Is.EqualTo(timeout));
}
#endif
#endregion
#region UpstreamActions
[Test]
public void CanAccessUpstreamActions()
{
var actions = _fixtureContext.UpstreamActions;
Assert.That(_setupContext.UpstreamActions, Is.EqualTo(actions));
Assert.That(TestExecutionContext.CurrentContext.UpstreamActions, Is.EqualTo(actions));
}
#if ASYNC
[Test]
public async Task CanAccessUpstreamActions_Async()
{
var actions = TestExecutionContext.CurrentContext.UpstreamActions;
Assert.That(actions, Is.SameAs(_setupContext.UpstreamActions));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.UpstreamActions, Is.SameAs(actions));
}
#endif
#endregion
#region CurrentCulture and CurrentUICulture
#if !NETSTANDARD1_3 && !NETSTANDARD1_6
CultureInfo originalCulture;
CultureInfo originalUICulture;
[Test]
public void CanAccessCurrentCulture()
{
Assert.That(_fixtureContext.CurrentCulture, Is.EqualTo(CultureInfo.CurrentCulture));
Assert.That(_setupContext.CurrentCulture, Is.EqualTo(CultureInfo.CurrentCulture));
Assert.That(TestExecutionContext.CurrentContext.CurrentCulture, Is.EqualTo(CultureInfo.CurrentCulture));
}
[Test]
public void CanAccessCurrentUICulture()
{
Assert.That(_fixtureContext.CurrentUICulture, Is.EqualTo(CultureInfo.CurrentUICulture));
Assert.That(_setupContext.CurrentUICulture, Is.EqualTo(CultureInfo.CurrentUICulture));
Assert.That(TestExecutionContext.CurrentContext.CurrentUICulture, Is.EqualTo(CultureInfo.CurrentUICulture));
}
#if ASYNC
[Test]
public async Task CanAccessCurrentCulture_Async()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentCulture, Is.EqualTo(CultureInfo.CurrentCulture));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentCulture, Is.EqualTo(CultureInfo.CurrentCulture));
}
[Test]
public async Task CanAccessCurrentUICulture_Async()
{
Assert.That(TestExecutionContext.CurrentContext.CurrentUICulture, Is.EqualTo(CultureInfo.CurrentUICulture));
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentUICulture, Is.EqualTo(CultureInfo.CurrentUICulture));
}
#endif
[Test]
public void SetAndRestoreCurrentCulture()
{
var context = new TestExecutionContext(_setupContext);
try
{
CultureInfo otherCulture =
new CultureInfo(originalCulture.Name == "fr-FR" ? "en-GB" : "fr-FR");
context.CurrentCulture = otherCulture;
Assert.AreEqual(otherCulture, CultureInfo.CurrentCulture, "Culture was not set");
Assert.AreEqual(otherCulture, context.CurrentCulture, "Culture not in new context");
Assert.AreEqual(_setupContext.CurrentCulture, originalCulture, "Original context should not change");
}
finally
{
_setupContext.EstablishExecutionEnvironment();
}
Assert.AreEqual(CultureInfo.CurrentCulture, originalCulture, "Culture was not restored");
Assert.AreEqual(_setupContext.CurrentCulture, originalCulture, "Culture not in final context");
}
[Test]
public void SetAndRestoreCurrentUICulture()
{
var context = new TestExecutionContext(_setupContext);
try
{
CultureInfo otherCulture =
new CultureInfo(originalUICulture.Name == "fr-FR" ? "en-GB" : "fr-FR");
context.CurrentUICulture = otherCulture;
Assert.AreEqual(otherCulture, CultureInfo.CurrentUICulture, "UICulture was not set");
Assert.AreEqual(otherCulture, context.CurrentUICulture, "UICulture not in new context");
Assert.AreEqual(_setupContext.CurrentUICulture, originalUICulture, "Original context should not change");
}
finally
{
_setupContext.EstablishExecutionEnvironment();
}
Assert.AreEqual(CultureInfo.CurrentUICulture, originalUICulture, "UICulture was not restored");
Assert.AreEqual(_setupContext.CurrentUICulture, originalUICulture, "UICulture not in final context");
}
#endif
#endregion
#region CurrentPrincipal
#if !NETSTANDARD1_3 && !NETSTANDARD1_6
[Test]
public void CanAccessCurrentPrincipal()
{
Type expectedType = Thread.CurrentPrincipal.GetType();
Assert.That(_fixtureContext.CurrentPrincipal, Is.TypeOf(expectedType), "Fixture");
Assert.That(_setupContext.CurrentPrincipal, Is.TypeOf(expectedType), "SetUp");
Assert.That(TestExecutionContext.CurrentContext.CurrentPrincipal, Is.TypeOf(expectedType), "Test");
}
#if ASYNC
[Test]
public async Task CanAccessCurrentPrincipal_Async()
{
Type expectedType = Thread.CurrentPrincipal.GetType();
Assert.That(TestExecutionContext.CurrentContext.CurrentPrincipal, Is.TypeOf(expectedType), "Before yield");
await YieldAsync();
Assert.That(TestExecutionContext.CurrentContext.CurrentPrincipal, Is.TypeOf(expectedType), "After yield");
}
#endif
[Test]
public void SetAndRestoreCurrentPrincipal()
{
var context = new TestExecutionContext(_setupContext);
try
{
GenericIdentity identity = new GenericIdentity("foo");
context.CurrentPrincipal = new GenericPrincipal(identity, new string[0]);
Assert.AreEqual("foo", Thread.CurrentPrincipal.Identity.Name, "Principal was not set");
Assert.AreEqual("foo", context.CurrentPrincipal.Identity.Name, "Principal not in new context");
Assert.AreEqual(_setupContext.CurrentPrincipal, originalPrincipal, "Original context should not change");
}
finally
{
_setupContext.EstablishExecutionEnvironment();
}
Assert.AreEqual(Thread.CurrentPrincipal, originalPrincipal, "Principal was not restored");
Assert.AreEqual(_setupContext.CurrentPrincipal, originalPrincipal, "Principal not in final context");
}
#endif
#endregion
#region ValueFormatter
[Test]
public void SetAndRestoreValueFormatter()
{
var context = new TestExecutionContext(_setupContext);
var originalFormatter = context.CurrentValueFormatter;
try
{
ValueFormatter f = val => "dummy";
context.AddFormatter(next => f);
Assert.That(context.CurrentValueFormatter, Is.EqualTo(f));
context.EstablishExecutionEnvironment();
Assert.That(MsgUtils.FormatValue(123), Is.EqualTo("dummy"));
}
finally
{
_setupContext.EstablishExecutionEnvironment();
}
Assert.That(TestExecutionContext.CurrentContext.CurrentValueFormatter, Is.EqualTo(originalFormatter));
Assert.That(MsgUtils.FormatValue(123), Is.EqualTo("123"));
}
#endregion
#region SingleThreaded
[Test]
public void SingleThreadedDefaultsToFalse()
{
Assert.False(new TestExecutionContext().IsSingleThreaded);
}
[Test]
public void SingleThreadedIsInherited()
{
var parent = new TestExecutionContext();
parent.IsSingleThreaded = true;
Assert.True(new TestExecutionContext(parent).IsSingleThreaded);
}
#endregion
#region ExecutionStatus
[Test]
public void ExecutionStatusIsPushedToHigherContext()
{
var topContext = new TestExecutionContext();
var bottomContext = new TestExecutionContext(new TestExecutionContext(new TestExecutionContext(topContext)));
bottomContext.ExecutionStatus = TestExecutionStatus.StopRequested;
Assert.That(topContext.ExecutionStatus, Is.EqualTo(TestExecutionStatus.StopRequested));
}
[Test]
public void ExecutionStatusIsPulledFromHigherContext()
{
var topContext = new TestExecutionContext();
var bottomContext = new TestExecutionContext(new TestExecutionContext(new TestExecutionContext(topContext)));
topContext.ExecutionStatus = TestExecutionStatus.AbortRequested;
Assert.That(bottomContext.ExecutionStatus, Is.EqualTo(TestExecutionStatus.AbortRequested));
}
[Test]
public void ExecutionStatusIsPromulgatedAcrossBranches()
{
var topContext = new TestExecutionContext();
var leftContext = new TestExecutionContext(new TestExecutionContext(new TestExecutionContext(topContext)));
var rightContext = new TestExecutionContext(new TestExecutionContext(new TestExecutionContext(topContext)));
leftContext.ExecutionStatus = TestExecutionStatus.StopRequested;
Assert.That(rightContext.ExecutionStatus, Is.EqualTo(TestExecutionStatus.StopRequested));
}
#endregion
#region Cross-domain Tests
#if !NETSTANDARD1_3 && !NETSTANDARD1_6
[Test, Platform(Exclude="Mono", Reason="Intermittent failures")]
public void CanCreateObjectInAppDomain()
{
AppDomain domain = AppDomain.CreateDomain(
"TestCanCreateAppDomain",
AppDomain.CurrentDomain.Evidence,
AssemblyHelper.GetDirectoryName(Assembly.GetExecutingAssembly()),
null,
false);
var obj = domain.CreateInstanceAndUnwrap("nunit.framework.tests", "NUnit.Framework.Internal.TestExecutionContextTests+TestClass");
Assert.NotNull(obj);
}
[Serializable]
private class TestClass
{
}
#endif
#endregion
#region Helper Methods
#if ASYNC
private async Task YieldAsync()
{
#if NET_4_0
await TaskEx.Yield();
#else
await Task.Yield();
#endif
}
private Task<T[]> WhenAllAsync<T>(params Task<T>[] tasks)
{
#if NET_4_0
return TaskEx.WhenAll(tasks);
#else
return Task.WhenAll(tasks);
#endif
}
private async Task<TestExecutionContext> YieldAndReturnContext()
{
await YieldAsync();
return TestExecutionContext.CurrentContext;
}
#endif
#endregion
}
#if !NETSTANDARD1_3 && !NETSTANDARD1_6
[TestFixture, Platform(Exclude="Mono", Reason="Intermittent failures")]
public class TestExecutionContextInAppDomain
{
private RunsInAppDomain _runsInAppDomain;
[SetUp]
public void SetUp()
{
var domain = AppDomain.CreateDomain("TestDomain", null, AppDomain.CurrentDomain.BaseDirectory, AppDomain.CurrentDomain.RelativeSearchPath, false);
_runsInAppDomain = domain.CreateInstanceAndUnwrap(Assembly.GetExecutingAssembly().FullName,
"NUnit.Framework.Internal.RunsInAppDomain") as RunsInAppDomain;
Assert.That(_runsInAppDomain, Is.Not.Null);
}
[Test]
[Description("Issue 71 - NUnit swallows console output from AppDomains created within tests")]
public void CanWriteToConsoleInAppDomain()
{
_runsInAppDomain.WriteToConsole();
}
[Test]
[Description("Issue 210 - TestContext.WriteLine in an AppDomain causes an error")]
public void CanWriteToTestContextInAppDomain()
{
_runsInAppDomain.WriteToTestContext();
}
}
internal class RunsInAppDomain : MarshalByRefObject
{
public void WriteToConsole()
{
Console.WriteLine("RunsInAppDomain.WriteToConsole");
}
public void WriteToTestContext()
{
TestContext.WriteLine("RunsInAppDomain.WriteToTestContext");
}
}
#endif
}
|
{
"content_hash": "571afb85d6e318d592da9da52215e01e",
"timestamp": "",
"source": "github",
"line_count": 1047,
"max_line_length": 158,
"avg_line_length": 35.96084049665711,
"alnum_prop": 0.6437544819526706,
"repo_name": "ggeurts/nunit",
"id": "8341b537cfe81026f20db293565a1de174700104",
"size": "38928",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "src/NUnitFramework/tests/Internal/TestExecutionContextTests.cs",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Batchfile",
"bytes": "48"
},
{
"name": "C#",
"bytes": "4328053"
},
{
"name": "PowerShell",
"bytes": "7834"
},
{
"name": "Shell",
"bytes": "3695"
},
{
"name": "Visual Basic",
"bytes": "1689"
}
]
}
|
<?php
namespace Model;
use Illuminate\Database\Eloquent\Model;
class Period extends Model
{
protected $fillable = ['name'];
public function courses()
{
return $this->hasMany('Model\CourseTime');
}
}
|
{
"content_hash": "aa9d43a63429ee684f781958deb4981a",
"timestamp": "",
"source": "github",
"line_count": 15,
"max_line_length": 50,
"avg_line_length": 15.2,
"alnum_prop": 0.6491228070175439,
"repo_name": "iattempt/elegant-selection",
"id": "316b8363d4ee3c5b974541c84279b3c97a200b44",
"size": "228",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "app/CourseSelection/Models/Period.php",
"mode": "33261",
"license": "mit",
"language": [
{
"name": "ApacheConf",
"bytes": "553"
},
{
"name": "HTML",
"bytes": "111991"
},
{
"name": "JavaScript",
"bytes": "1616"
},
{
"name": "PHP",
"bytes": "198307"
},
{
"name": "Shell",
"bytes": "62"
},
{
"name": "Vue",
"bytes": "563"
}
]
}
|
#import "UIViewAdditions.h"
@implementation UIView (KalAdditions)
- (CGFloat)left
{
return self.frame.origin.x;
}
- (void)setLeft:(CGFloat)x
{
CGRect frame = self.frame;
frame.origin.x = x;
self.frame = frame;
}
- (CGFloat)right
{
return self.frame.origin.x + self.frame.size.width;
}
- (void)setRight:(CGFloat)right
{
CGRect frame = self.frame;
frame.origin.x = right - frame.size.width;
self.frame = frame;
}
- (CGFloat)top
{
return self.frame.origin.y;
}
- (void)setTop:(CGFloat)y
{
CGRect frame = self.frame;
frame.origin.y = y;
self.frame = frame;
}
- (CGFloat)bottom
{
return self.frame.origin.y + self.frame.size.height;
}
- (void)setBottom:(CGFloat)bottom
{
CGRect frame = self.frame;
frame.origin.y = bottom - frame.size.height;
self.frame = frame;
}
- (CGFloat)width
{
return self.frame.size.width;
}
- (void)setWidth:(CGFloat)width
{
CGRect frame = self.frame;
frame.size.width = width;
self.frame = frame;
}
- (CGFloat)height
{
return self.frame.size.height;
}
- (void)setHeight:(CGFloat)height
{
CGRect frame = self.frame;
frame.size.height = height;
self.frame = frame;
}
@end
|
{
"content_hash": "c852963d90a7424df562bc21c841cb8c",
"timestamp": "",
"source": "github",
"line_count": 79,
"max_line_length": 54,
"avg_line_length": 14.620253164556962,
"alnum_prop": 0.6692640692640692,
"repo_name": "kyleconroy/rollcall",
"id": "ca8e4fb55b2492c4fb7fb1172221225bd37703c2",
"size": "1261",
"binary": false,
"copies": "8",
"ref": "refs/heads/master",
"path": "Roll Call/Kal/UIViewAdditions.m",
"mode": "33261",
"license": "mit",
"language": [
{
"name": "Objective-C",
"bytes": "390098"
}
]
}
|
Imports BVSoftware.Bvc5.Core
Partial Class Win_Big_Congratulations
Inherits System.Web.UI.Page
Protected Sub Page_Init(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.PreInit
Me.MasterPageFile = PersonalizationServices.GetSafeMasterPage("Custom.master")
End Sub
End Class
|
{
"content_hash": "dc0fceedaa77a97483d436e9466fd40e",
"timestamp": "",
"source": "github",
"line_count": 10,
"max_line_length": 99,
"avg_line_length": 31.2,
"alnum_prop": 0.7724358974358975,
"repo_name": "ajaydex/Scopelist_2015",
"id": "e6446147bcbdb913877812d0b65478e9595757b5",
"size": "314",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "Win-Big-Congratulations.aspx.vb",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "ASP",
"bytes": "2866898"
},
{
"name": "C#",
"bytes": "208973"
},
{
"name": "CSS",
"bytes": "618701"
},
{
"name": "HTML",
"bytes": "30821"
},
{
"name": "JavaScript",
"bytes": "194117"
},
{
"name": "SQLPL",
"bytes": "3612"
},
{
"name": "Visual Basic",
"bytes": "4572224"
}
]
}
|
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2009-2012 The Bitcoin developers
// Distributed under the MIT/X11 software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_MAIN_H
#define BITCOIN_MAIN_H
#include "bignum.h"
#include "sync.h"
#include "net.h"
#include "script.h"
#include "scrypt.h"
#include "zerocoin/Zerocoin.h"
#include <list>
class CWallet;
class CBlock;
class CBlockIndex;
class CKeyItem;
class CReserveKey;
class COutPoint;
class CAddress;
class CInv;
class CRequestTracker;
class CNode;
static const unsigned int MAX_BLOCK_SIZE = 1000000;
static const unsigned int MAX_BLOCK_SIZE_GEN = MAX_BLOCK_SIZE/2;
static const unsigned int MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE/50;
static const unsigned int MAX_ORPHAN_TRANSACTIONS = MAX_BLOCK_SIZE/100;
static const unsigned int MAX_INV_SZ = 50000;
static const int64 MIN_TX_FEE = CENT/10;
static const int64 MIN_RELAY_TX_FEE = CENT/10;
static const int64 MAX_MONEY = 2000000000 * COIN;
static const int64 MIN_TXOUT_AMOUNT = MIN_TX_FEE;
static const int64 PREMINE = 380000000 * COIN;
static const int CUTOFF_POW_BLOCK = 1000;
inline bool MoneyRange(int64 nValue) { return (nValue >= 0 && nValue <= MAX_MONEY); }
// Threshold for nLockTime: below this value it is interpreted as block number, otherwise as UNIX timestamp.
static const unsigned int LOCKTIME_THRESHOLD = 500000000; // Tue Nov 5 00:53:20 1985 UTC
#ifdef USE_UPNP
static const int fHaveUPnP = true;
#else
static const int fHaveUPnP = false;
#endif
static const uint256 hashGenesisBlock("0x3cdd9c2facce405f5cc220fb21a10e493041451c463a22e1ff6fe903fc5769fc");
static const uint256 hashGenesisBlockTestNet("0x000c763e402f2436da9ed36c7286f62c3f6e5dbafce9ff289bd43d7459327eb");
inline int64 PastDrift(int64 nTime) { return nTime - 2 * 60 * 60; } // up to 2 hours from the past
inline int64 FutureDrift(int64 nTime) { return nTime + 2 * 60 * 60; } // up to 2 hours from the future
extern libzerocoin::Params* ZCParams;
extern CScript COINBASE_FLAGS;
extern CCriticalSection cs_main;
extern std::map<uint256, CBlockIndex*> mapBlockIndex;
extern std::set<std::pair<COutPoint, unsigned int> > setStakeSeen;
extern CBlockIndex* pindexGenesisBlock;
extern unsigned int nStakeMinAge;
extern unsigned int nNodeLifespan;
extern int nCoinbaseMaturity;
extern int nBestHeight;
extern uint256 nBestChainTrust;
extern uint256 nBestInvalidTrust;
extern uint256 hashBestChain;
extern CBlockIndex* pindexBest;
extern unsigned int nTransactionsUpdated;
extern uint64 nLastBlockTx;
extern uint64 nLastBlockSize;
extern int64 nLastCoinStakeSearchInterval;
extern const std::string strMessageMagic;
extern int64 nTimeBestReceived;
extern CCriticalSection cs_setpwalletRegistered;
extern std::set<CWallet*> setpwalletRegistered;
extern unsigned char pchMessageStart[4];
extern std::map<uint256, CBlock*> mapOrphanBlocks;
extern bool disablePOW;
// Settings
extern int64 nTransactionFee;
extern int64 nMinimumInputValue;
extern bool fUseFastIndex;
extern unsigned int nDerivationMethodIndex;
extern bool fEnforceCanonical;
// Minimum disk space required - used in CheckDiskSpace()
static const uint64 nMinDiskSpace = 52428800;
class CReserveKey;
class CTxDB;
class CTxIndex;
void RegisterWallet(CWallet* pwalletIn);
void UnregisterWallet(CWallet* pwalletIn);
void SyncWithWallets(const CTransaction& tx, const CBlock* pblock = NULL, bool fUpdate = false, bool fConnect = true);
bool ProcessBlock(CNode* pfrom, CBlock* pblock);
bool CheckDiskSpace(uint64 nAdditionalBytes=0);
FILE* OpenBlockFile(unsigned int nFile, unsigned int nBlockPos, const char* pszMode="rb");
FILE* AppendBlockFile(unsigned int& nFileRet);
bool LoadBlockIndex(bool fAllowNew=true);
void PrintBlockTree();
CBlockIndex* FindBlockByHeight(int nHeight);
bool ProcessMessages(CNode* pfrom);
bool SendMessages(CNode* pto, bool fSendTrickle);
bool LoadExternalBlockFile(FILE* fileIn);
bool CheckProofOfWork(uint256 hash, unsigned int nBits);
unsigned int GetNextTargetRequired(const CBlockIndex* pindexLast, bool fProofOfStake);
int64 GetProofOfWorkReward(int nBits);
int64 GetProofOfStakeReward(int64 nCoinAge, int nHeight);
unsigned int ComputeMinWork(unsigned int nBase, int64 nTime);
unsigned int ComputeMinStake(unsigned int nBase, int64 nTime, unsigned int nBlockTime);
int GetNumBlocksOfPeers();
bool IsInitialBlockDownload();
std::string GetWarnings(std::string strFor);
bool GetTransaction(const uint256 &hash, CTransaction &tx, uint256 &hashBlock);
uint256 WantedByOrphan(const CBlock* pblockOrphan);
const CBlockIndex* GetLastBlockIndex(const CBlockIndex* pindex, bool fProofOfStake);
void StakeMiner(CWallet *pwallet);
void ResendWalletTransactions();
bool GetWalletFile(CWallet* pwallet, std::string &strWalletFileOut);
/** Position on disk for a particular transaction. */
class CDiskTxPos
{
public:
unsigned int nFile;
unsigned int nBlockPos;
unsigned int nTxPos;
CDiskTxPos()
{
SetNull();
}
CDiskTxPos(unsigned int nFileIn, unsigned int nBlockPosIn, unsigned int nTxPosIn)
{
nFile = nFileIn;
nBlockPos = nBlockPosIn;
nTxPos = nTxPosIn;
}
IMPLEMENT_SERIALIZE( READWRITE(FLATDATA(*this)); )
void SetNull() { nFile = (unsigned int) -1; nBlockPos = 0; nTxPos = 0; }
bool IsNull() const { return (nFile == (unsigned int) -1); }
friend bool operator==(const CDiskTxPos& a, const CDiskTxPos& b)
{
return (a.nFile == b.nFile &&
a.nBlockPos == b.nBlockPos &&
a.nTxPos == b.nTxPos);
}
friend bool operator!=(const CDiskTxPos& a, const CDiskTxPos& b)
{
return !(a == b);
}
std::string ToString() const
{
if (IsNull())
return "null";
else
return strprintf("(nFile=%u, nBlockPos=%u, nTxPos=%u)", nFile, nBlockPos, nTxPos);
}
void print() const
{
printf("%s", ToString().c_str());
}
};
/** An inpoint - a combination of a transaction and an index n into its vin */
class CInPoint
{
public:
CTransaction* ptx;
unsigned int n;
CInPoint() { SetNull(); }
CInPoint(CTransaction* ptxIn, unsigned int nIn) { ptx = ptxIn; n = nIn; }
void SetNull() { ptx = NULL; n = (unsigned int) -1; }
bool IsNull() const { return (ptx == NULL && n == (unsigned int) -1); }
};
/** An outpoint - a combination of a transaction hash and an index n into its vout */
class COutPoint
{
public:
uint256 hash;
unsigned int n;
COutPoint() { SetNull(); }
COutPoint(uint256 hashIn, unsigned int nIn) { hash = hashIn; n = nIn; }
IMPLEMENT_SERIALIZE( READWRITE(FLATDATA(*this)); )
void SetNull() { hash = 0; n = (unsigned int) -1; }
bool IsNull() const { return (hash == 0 && n == (unsigned int) -1); }
friend bool operator<(const COutPoint& a, const COutPoint& b)
{
return (a.hash < b.hash || (a.hash == b.hash && a.n < b.n));
}
friend bool operator==(const COutPoint& a, const COutPoint& b)
{
return (a.hash == b.hash && a.n == b.n);
}
friend bool operator!=(const COutPoint& a, const COutPoint& b)
{
return !(a == b);
}
std::string ToString() const
{
return strprintf("COutPoint(%s, %u)", hash.ToString().substr(0,10).c_str(), n);
}
void print() const
{
printf("%s\n", ToString().c_str());
}
};
/** An input of a transaction. It contains the location of the previous
* transaction's output that it claims and a signature that matches the
* output's public key.
*/
class CTxIn
{
public:
COutPoint prevout;
CScript scriptSig;
unsigned int nSequence;
CTxIn()
{
nSequence = std::numeric_limits<unsigned int>::max();
}
explicit CTxIn(COutPoint prevoutIn, CScript scriptSigIn=CScript(), unsigned int nSequenceIn=std::numeric_limits<unsigned int>::max())
{
prevout = prevoutIn;
scriptSig = scriptSigIn;
nSequence = nSequenceIn;
}
CTxIn(uint256 hashPrevTx, unsigned int nOut, CScript scriptSigIn=CScript(), unsigned int nSequenceIn=std::numeric_limits<unsigned int>::max())
{
prevout = COutPoint(hashPrevTx, nOut);
scriptSig = scriptSigIn;
nSequence = nSequenceIn;
}
IMPLEMENT_SERIALIZE
(
READWRITE(prevout);
READWRITE(scriptSig);
READWRITE(nSequence);
)
bool IsFinal() const
{
return (nSequence == std::numeric_limits<unsigned int>::max());
}
friend bool operator==(const CTxIn& a, const CTxIn& b)
{
return (a.prevout == b.prevout &&
a.scriptSig == b.scriptSig &&
a.nSequence == b.nSequence);
}
friend bool operator!=(const CTxIn& a, const CTxIn& b)
{
return !(a == b);
}
std::string ToStringShort() const
{
return strprintf(" %s %d", prevout.hash.ToString().c_str(), prevout.n);
}
std::string ToString() const
{
std::string str;
str += "CTxIn(";
str += prevout.ToString();
if (prevout.IsNull())
str += strprintf(", coinbase %s", HexStr(scriptSig).c_str());
else
str += strprintf(", scriptSig=%s", scriptSig.ToString().substr(0,24).c_str());
if (nSequence != std::numeric_limits<unsigned int>::max())
str += strprintf(", nSequence=%u", nSequence);
str += ")";
return str;
}
void print() const
{
printf("%s\n", ToString().c_str());
}
};
/** An output of a transaction. It contains the public key that the next input
* must be able to sign with to claim it.
*/
class CTxOut
{
public:
int64 nValue;
CScript scriptPubKey;
CTxOut()
{
SetNull();
}
CTxOut(int64 nValueIn, CScript scriptPubKeyIn)
{
nValue = nValueIn;
scriptPubKey = scriptPubKeyIn;
}
IMPLEMENT_SERIALIZE
(
READWRITE(nValue);
READWRITE(scriptPubKey);
)
void SetNull()
{
nValue = -1;
scriptPubKey.clear();
}
bool IsNull()
{
return (nValue == -1);
}
void SetEmpty()
{
nValue = 0;
scriptPubKey.clear();
}
bool IsEmpty() const
{
return (nValue == 0 && scriptPubKey.empty());
}
uint256 GetHash() const
{
return SerializeHash(*this);
}
friend bool operator==(const CTxOut& a, const CTxOut& b)
{
return (a.nValue == b.nValue &&
a.scriptPubKey == b.scriptPubKey);
}
friend bool operator!=(const CTxOut& a, const CTxOut& b)
{
return !(a == b);
}
std::string ToStringShort() const
{
return strprintf(" out %s %s", FormatMoney(nValue).c_str(), scriptPubKey.ToString(true).c_str());
}
std::string ToString() const
{
if (IsEmpty()) return "CTxOut(empty)";
if (scriptPubKey.size() < 6)
return "CTxOut(error)";
return strprintf("CTxOut(nValue=%s, scriptPubKey=%s)", FormatMoney(nValue).c_str(), scriptPubKey.ToString().c_str());
}
void print() const
{
printf("%s\n", ToString().c_str());
}
};
enum GetMinFee_mode
{
GMF_BLOCK,
GMF_RELAY,
GMF_SEND,
};
typedef std::map<uint256, std::pair<CTxIndex, CTransaction> > MapPrevTx;
/** The basic transaction that is broadcasted on the network and contained in
* blocks. A transaction can contain multiple inputs and outputs.
*/
class CTransaction
{
public:
static const int CURRENT_VERSION=1;
int nVersion;
unsigned int nTime;
std::vector<CTxIn> vin;
std::vector<CTxOut> vout;
unsigned int nLockTime;
// Denial-of-service detection:
mutable int nDoS;
bool DoS(int nDoSIn, bool fIn) const { nDoS += nDoSIn; return fIn; }
CTransaction()
{
SetNull();
}
IMPLEMENT_SERIALIZE
(
READWRITE(this->nVersion);
nVersion = this->nVersion;
READWRITE(nTime);
READWRITE(vin);
READWRITE(vout);
READWRITE(nLockTime);
)
void SetNull()
{
nVersion = CTransaction::CURRENT_VERSION;
nTime = GetAdjustedTime();
vin.clear();
vout.clear();
nLockTime = 0;
nDoS = 0; // Denial-of-service prevention
}
bool IsNull() const
{
return (vin.empty() && vout.empty());
}
uint256 GetHash() const
{
return SerializeHash(*this);
}
bool IsFinal(int nBlockHeight=0, int64 nBlockTime=0) const
{
// Time based nLockTime implemented in 0.1.6
if (nLockTime == 0)
return true;
if (nBlockHeight == 0)
nBlockHeight = nBestHeight;
if (nBlockTime == 0)
nBlockTime = GetAdjustedTime();
if ((int64)nLockTime < ((int64)nLockTime < LOCKTIME_THRESHOLD ? (int64)nBlockHeight : nBlockTime))
return true;
BOOST_FOREACH(const CTxIn& txin, vin)
if (!txin.IsFinal())
return false;
return true;
}
bool IsNewerThan(const CTransaction& old) const
{
if (vin.size() != old.vin.size())
return false;
for (unsigned int i = 0; i < vin.size(); i++)
if (vin[i].prevout != old.vin[i].prevout)
return false;
bool fNewer = false;
unsigned int nLowest = std::numeric_limits<unsigned int>::max();
for (unsigned int i = 0; i < vin.size(); i++)
{
if (vin[i].nSequence != old.vin[i].nSequence)
{
if (vin[i].nSequence <= nLowest)
{
fNewer = false;
nLowest = vin[i].nSequence;
}
if (old.vin[i].nSequence < nLowest)
{
fNewer = true;
nLowest = old.vin[i].nSequence;
}
}
}
return fNewer;
}
bool IsCoinBase() const
{
return (vin.size() == 1 && vin[0].prevout.IsNull() && vout.size() >= 1);
}
bool IsCoinStake() const
{
// ppcoin: the coin stake transaction is marked with the first output empty
return (vin.size() > 0 && (!vin[0].prevout.IsNull()) && vout.size() >= 2 && vout[0].IsEmpty());
}
/** Check for standard transaction types
@return True if all outputs (scriptPubKeys) use only standard transaction forms
*/
bool IsStandard() const;
/** Check for standard transaction types
@param[in] mapInputs Map of previous transactions that have outputs we're spending
@return True if all inputs (scriptSigs) use only standard transaction forms
@see CTransaction::FetchInputs
*/
bool AreInputsStandard(const MapPrevTx& mapInputs) const;
/** Count ECDSA signature operations the old-fashioned (pre-0.6) way
@return number of sigops this transaction's outputs will produce when spent
@see CTransaction::FetchInputs
*/
unsigned int GetLegacySigOpCount() const;
/** Count ECDSA signature operations in pay-to-script-hash inputs.
@param[in] mapInputs Map of previous transactions that have outputs we're spending
@return maximum number of sigops required to validate this transaction's inputs
@see CTransaction::FetchInputs
*/
unsigned int GetP2SHSigOpCount(const MapPrevTx& mapInputs) const;
/** Amount of bitcoins spent by this transaction.
@return sum of all outputs (note: does not include fees)
*/
int64 GetValueOut() const
{
int64 nValueOut = 0;
BOOST_FOREACH(const CTxOut& txout, vout)
{
nValueOut += txout.nValue;
if (!MoneyRange(txout.nValue) || !MoneyRange(nValueOut))
throw std::runtime_error("CTransaction::GetValueOut() : value out of range");
}
return nValueOut;
}
/** Amount of bitcoins coming in to this transaction
Note that lightweight clients may not know anything besides the hash of previous transactions,
so may not be able to calculate this.
@param[in] mapInputs Map of previous transactions that have outputs we're spending
@return Sum of value of all inputs (scriptSigs)
@see CTransaction::FetchInputs
*/
int64 GetValueIn(const MapPrevTx& mapInputs) const;
static bool AllowFree(double dPriority)
{
// Large (in bytes) low-priority (new, small-coin) transactions
// need a fee.
return dPriority > COIN * 144 / 250;
}
int64 GetMinFee(unsigned int nBlockSize=1, bool fAllowFree=false, enum GetMinFee_mode mode=GMF_BLOCK, unsigned int nBytes = 0) const;
bool ReadFromDisk(CDiskTxPos pos, FILE** pfileRet=NULL)
{
CAutoFile filein = CAutoFile(OpenBlockFile(pos.nFile, 0, pfileRet ? "rb+" : "rb"), SER_DISK, CLIENT_VERSION);
if (!filein)
return error("CTransaction::ReadFromDisk() : OpenBlockFile failed");
// Read transaction
if (fseek(filein, pos.nTxPos, SEEK_SET) != 0)
return error("CTransaction::ReadFromDisk() : fseek failed");
try {
filein >> *this;
}
catch (std::exception &e) {
return error("%s() : deserialize or I/O error", __PRETTY_FUNCTION__);
}
// Return file pointer
if (pfileRet)
{
if (fseek(filein, pos.nTxPos, SEEK_SET) != 0)
return error("CTransaction::ReadFromDisk() : second fseek failed");
*pfileRet = filein.release();
}
return true;
}
friend bool operator==(const CTransaction& a, const CTransaction& b)
{
return (a.nVersion == b.nVersion &&
a.nTime == b.nTime &&
a.vin == b.vin &&
a.vout == b.vout &&
a.nLockTime == b.nLockTime);
}
friend bool operator!=(const CTransaction& a, const CTransaction& b)
{
return !(a == b);
}
std::string ToStringShort() const
{
std::string str;
str += strprintf("%s %s", GetHash().ToString().c_str(), IsCoinBase()? "base" : (IsCoinStake()? "stake" : "user"));
return str;
}
std::string ToString() const
{
std::string str;
str += IsCoinBase()? "Coinbase" : (IsCoinStake()? "Coinstake" : "CTransaction");
str += strprintf("(hash=%s, nTime=%d, ver=%d, vin.size=%"PRIszu", vout.size=%"PRIszu", nLockTime=%d)\n",
GetHash().ToString().substr(0,10).c_str(),
nTime,
nVersion,
vin.size(),
vout.size(),
nLockTime);
for (unsigned int i = 0; i < vin.size(); i++)
str += " " + vin[i].ToString() + "\n";
for (unsigned int i = 0; i < vout.size(); i++)
str += " " + vout[i].ToString() + "\n";
return str;
}
void print() const
{
printf("%s", ToString().c_str());
}
bool ReadFromDisk(CTxDB& txdb, COutPoint prevout, CTxIndex& txindexRet);
bool ReadFromDisk(CTxDB& txdb, COutPoint prevout);
bool ReadFromDisk(COutPoint prevout);
bool DisconnectInputs(CTxDB& txdb);
/** Fetch from memory and/or disk. inputsRet keys are transaction hashes.
@param[in] txdb Transaction database
@param[in] mapTestPool List of pending changes to the transaction index database
@param[in] fBlock True if being called to add a new best-block to the chain
@param[in] fMiner True if being called by CreateNewBlock
@param[out] inputsRet Pointers to this transaction's inputs
@param[out] fInvalid returns true if transaction is invalid
@return Returns true if all inputs are in txdb or mapTestPool
*/
bool FetchInputs(CTxDB& txdb, const std::map<uint256, CTxIndex>& mapTestPool,
bool fBlock, bool fMiner, MapPrevTx& inputsRet, bool& fInvalid);
/** Sanity check previous transactions, then, if all checks succeed,
mark them as spent by this transaction.
@param[in] inputs Previous transactions (from FetchInputs)
@param[out] mapTestPool Keeps track of inputs that need to be updated on disk
@param[in] posThisTx Position of this transaction on disk
@param[in] pindexBlock
@param[in] fBlock true if called from ConnectBlock
@param[in] fMiner true if called from CreateNewBlock
@param[in] fStrictPayToScriptHash true if fully validating p2sh transactions
@return Returns true if all checks succeed
*/
bool ConnectInputs(CTxDB& txdb, MapPrevTx inputs,
std::map<uint256, CTxIndex>& mapTestPool, const CDiskTxPos& posThisTx,
const CBlockIndex* pindexBlock, bool fBlock, bool fMiner, bool fStrictPayToScriptHash=true);
bool ClientConnectInputs();
bool CheckTransaction() const;
bool AcceptToMemoryPool(CTxDB& txdb, bool fCheckInputs=true, bool* pfMissingInputs=NULL);
bool GetCoinAge(CTxDB& txdb, uint64& nCoinAge) const; // ppcoin: get transaction coin age
protected:
const CTxOut& GetOutputFor(const CTxIn& input, const MapPrevTx& inputs) const;
};
/** A transaction with a merkle branch linking it to the block chain. */
class CMerkleTx : public CTransaction
{
public:
uint256 hashBlock;
std::vector<uint256> vMerkleBranch;
int nIndex;
// memory only
mutable bool fMerkleVerified;
CMerkleTx()
{
Init();
}
CMerkleTx(const CTransaction& txIn) : CTransaction(txIn)
{
Init();
}
void Init()
{
hashBlock = 0;
nIndex = -1;
fMerkleVerified = false;
}
IMPLEMENT_SERIALIZE
(
nSerSize += SerReadWrite(s, *(CTransaction*)this, nType, nVersion, ser_action);
nVersion = this->nVersion;
READWRITE(hashBlock);
READWRITE(vMerkleBranch);
READWRITE(nIndex);
)
int SetMerkleBranch(const CBlock* pblock=NULL);
int GetDepthInMainChain(CBlockIndex* &pindexRet) const;
int GetDepthInMainChain() const { CBlockIndex *pindexRet; return GetDepthInMainChain(pindexRet); }
bool IsInMainChain() const { return GetDepthInMainChain() > 0; }
int GetBlocksToMaturity() const;
bool AcceptToMemoryPool(CTxDB& txdb, bool fCheckInputs=true);
bool AcceptToMemoryPool();
};
/** A txdb record that contains the disk location of a transaction and the
* locations of transactions that spend its outputs. vSpent is really only
* used as a flag, but having the location is very helpful for debugging.
*/
class CTxIndex
{
public:
CDiskTxPos pos;
std::vector<CDiskTxPos> vSpent;
CTxIndex()
{
SetNull();
}
CTxIndex(const CDiskTxPos& posIn, unsigned int nOutputs)
{
pos = posIn;
vSpent.resize(nOutputs);
}
IMPLEMENT_SERIALIZE
(
if (!(nType & SER_GETHASH))
READWRITE(nVersion);
READWRITE(pos);
READWRITE(vSpent);
)
void SetNull()
{
pos.SetNull();
vSpent.clear();
}
bool IsNull()
{
return pos.IsNull();
}
friend bool operator==(const CTxIndex& a, const CTxIndex& b)
{
return (a.pos == b.pos &&
a.vSpent == b.vSpent);
}
friend bool operator!=(const CTxIndex& a, const CTxIndex& b)
{
return !(a == b);
}
int GetDepthInMainChain() const;
};
/** Nodes collect new transactions into a block, hash them into a hash tree,
* and scan through nonce values to make the block's hash satisfy proof-of-work
* requirements. When they solve the proof-of-work, they broadcast the block
* to everyone and the block is added to the block chain. The first transaction
* in the block is a special one that creates a new coin owned by the creator
* of the block.
*
* Blocks are appended to blk0001.dat files on disk. Their location on disk
* is indexed by CBlockIndex objects in memory.
*/
class CBlock
{
public:
// header
static const int CURRENT_VERSION=6;
int nVersion;
uint256 hashPrevBlock;
uint256 hashMerkleRoot;
unsigned int nTime;
unsigned int nBits;
unsigned int nNonce;
// network and disk
std::vector<CTransaction> vtx;
// ppcoin: block signature - signed by one of the coin base txout[N]'s owner
std::vector<unsigned char> vchBlockSig;
// memory only
mutable std::vector<uint256> vMerkleTree;
// Denial-of-service detection:
mutable int nDoS;
bool DoS(int nDoSIn, bool fIn) const { nDoS += nDoSIn; return fIn; }
CBlock()
{
SetNull();
}
IMPLEMENT_SERIALIZE
(
READWRITE(this->nVersion);
nVersion = this->nVersion;
READWRITE(hashPrevBlock);
READWRITE(hashMerkleRoot);
READWRITE(nTime);
READWRITE(nBits);
READWRITE(nNonce);
// ConnectBlock depends on vtx following header to generate CDiskTxPos
if (!(nType & (SER_GETHASH|SER_BLOCKHEADERONLY)))
{
READWRITE(vtx);
READWRITE(vchBlockSig);
}
else if (fRead)
{
const_cast<CBlock*>(this)->vtx.clear();
const_cast<CBlock*>(this)->vchBlockSig.clear();
}
)
void SetNull()
{
nVersion = CBlock::CURRENT_VERSION;
hashPrevBlock = 0;
hashMerkleRoot = 0;
nTime = 0;
nBits = 0;
nNonce = 0;
vtx.clear();
vchBlockSig.clear();
vMerkleTree.clear();
nDoS = 0;
}
bool IsNull() const
{
return (nBits == 0);
}
uint256 GetHash() const
{
return scrypt_blockhash(CVOIDBEGIN(nVersion));
}
int64 GetBlockTime() const
{
return (int64)nTime;
}
void UpdateTime(const CBlockIndex* pindexPrev);
// ppcoin: entropy bit for stake modifier if chosen by modifier
unsigned int GetStakeEntropyBit(unsigned int nTime) const {
// Take last bit of block hash as entropy bit
unsigned int nEntropyBit = ((GetHash().Get64()) & 1llu);
if (fDebug && GetBoolArg("-printstakemodifier"))
printf("GetStakeEntropyBit: nTime=%u hashBlock=%s nEntropyBit=%u\n", nTime, GetHash().ToString().c_str(), nEntropyBit);
return nEntropyBit;
}
// ppcoin: two types of block: proof-of-work or proof-of-stake
bool IsProofOfStake() const
{
return (vtx.size() > 1 && vtx[1].IsCoinStake());
}
bool IsProofOfWork() const
{
return !IsProofOfStake();
}
std::pair<COutPoint, unsigned int> GetProofOfStake() const
{
return IsProofOfStake()? std::make_pair(vtx[1].vin[0].prevout, vtx[1].nTime) : std::make_pair(COutPoint(), (unsigned int)0);
}
// ppcoin: get max transaction timestamp
int64 GetMaxTransactionTime() const
{
int64 maxTransactionTime = 0;
BOOST_FOREACH(const CTransaction& tx, vtx)
maxTransactionTime = std::max(maxTransactionTime, (int64)tx.nTime);
return maxTransactionTime;
}
uint256 BuildMerkleTree() const
{
vMerkleTree.clear();
BOOST_FOREACH(const CTransaction& tx, vtx)
vMerkleTree.push_back(tx.GetHash());
int j = 0;
for (int nSize = vtx.size(); nSize > 1; nSize = (nSize + 1) / 2)
{
for (int i = 0; i < nSize; i += 2)
{
int i2 = std::min(i+1, nSize-1);
vMerkleTree.push_back(Hash(BEGIN(vMerkleTree[j+i]), END(vMerkleTree[j+i]),
BEGIN(vMerkleTree[j+i2]), END(vMerkleTree[j+i2])));
}
j += nSize;
}
return (vMerkleTree.empty() ? 0 : vMerkleTree.back());
}
std::vector<uint256> GetMerkleBranch(int nIndex) const
{
if (vMerkleTree.empty())
BuildMerkleTree();
std::vector<uint256> vMerkleBranch;
int j = 0;
for (int nSize = vtx.size(); nSize > 1; nSize = (nSize + 1) / 2)
{
int i = std::min(nIndex^1, nSize-1);
vMerkleBranch.push_back(vMerkleTree[j+i]);
nIndex >>= 1;
j += nSize;
}
return vMerkleBranch;
}
static uint256 CheckMerkleBranch(uint256 hash, const std::vector<uint256>& vMerkleBranch, int nIndex)
{
if (nIndex == -1)
return 0;
BOOST_FOREACH(const uint256& otherside, vMerkleBranch)
{
if (nIndex & 1)
hash = Hash(BEGIN(otherside), END(otherside), BEGIN(hash), END(hash));
else
hash = Hash(BEGIN(hash), END(hash), BEGIN(otherside), END(otherside));
nIndex >>= 1;
}
return hash;
}
bool WriteToDisk(unsigned int& nFileRet, unsigned int& nBlockPosRet)
{
// Open history file to append
CAutoFile fileout = CAutoFile(AppendBlockFile(nFileRet), SER_DISK, CLIENT_VERSION);
if (!fileout)
return error("CBlock::WriteToDisk() : AppendBlockFile failed");
// Write index header
unsigned int nSize = fileout.GetSerializeSize(*this);
fileout << FLATDATA(pchMessageStart) << nSize;
// Write block
long fileOutPos = ftell(fileout);
if (fileOutPos < 0)
return error("CBlock::WriteToDisk() : ftell failed");
nBlockPosRet = fileOutPos;
fileout << *this;
// Flush stdio buffers and commit to disk before returning
fflush(fileout);
if (!IsInitialBlockDownload() || (nBestHeight+1) % 500 == 0)
FileCommit(fileout);
return true;
}
bool ReadFromDisk(unsigned int nFile, unsigned int nBlockPos, bool fReadTransactions=true)
{
SetNull();
// Open history file to read
CAutoFile filein = CAutoFile(OpenBlockFile(nFile, nBlockPos, "rb"), SER_DISK, CLIENT_VERSION);
if (!filein)
return error("CBlock::ReadFromDisk() : OpenBlockFile failed");
if (!fReadTransactions)
filein.nType |= SER_BLOCKHEADERONLY;
// Read block
try {
filein >> *this;
}
catch (std::exception &e) {
return error("%s() : deserialize or I/O error", __PRETTY_FUNCTION__);
}
// Check the header
if (fReadTransactions && IsProofOfWork() && !CheckProofOfWork(GetHash(), nBits))
return error("CBlock::ReadFromDisk() : errors in block header");
return true;
}
void print() const
{
printf("CBlock(hash=%s, ver=%d, hashPrevBlock=%s, hashMerkleRoot=%s, nTime=%u, nBits=%08x, nNonce=%u, vtx=%"PRIszu", vchBlockSig=%s)\n",
GetHash().ToString().c_str(),
nVersion,
hashPrevBlock.ToString().c_str(),
hashMerkleRoot.ToString().c_str(),
nTime, nBits, nNonce,
vtx.size(),
HexStr(vchBlockSig.begin(), vchBlockSig.end()).c_str());
for (unsigned int i = 0; i < vtx.size(); i++)
{
printf(" ");
vtx[i].print();
}
printf(" vMerkleTree: ");
for (unsigned int i = 0; i < vMerkleTree.size(); i++)
printf("%s ", vMerkleTree[i].ToString().substr(0,10).c_str());
printf("\n");
}
bool DisconnectBlock(CTxDB& txdb, CBlockIndex* pindex);
bool ConnectBlock(CTxDB& txdb, CBlockIndex* pindex, bool fJustCheck=false);
bool ReadFromDisk(const CBlockIndex* pindex, bool fReadTransactions=true);
bool SetBestChain(CTxDB& txdb, CBlockIndex* pindexNew);
bool AddToBlockIndex(unsigned int nFile, unsigned int nBlockPos);
bool CheckBlock(bool fCheckPOW=true, bool fCheckMerkleRoot=true, bool fCheckSig=true) const;
bool AcceptBlock();
bool GetCoinAge(uint64& nCoinAge) const; // ppcoin: calculate total coin age spent in block
bool SignBlock(CWallet& keystore);
bool CheckBlockSignature(bool fProofOfStake) const;
private:
bool SetBestChainInner(CTxDB& txdb, CBlockIndex *pindexNew);
};
/** The block chain is a tree shaped structure starting with the
* genesis block at the root, with each block potentially having multiple
* candidates to be the next block. pprev and pnext link a path through the
* main/longest chain. A blockindex may have multiple pprev pointing back
* to it, but pnext will only point forward to the longest branch, or will
* be null if the block is not part of the longest chain.
*/
class CBlockIndex
{
public:
const uint256* phashBlock;
CBlockIndex* pprev;
CBlockIndex* pnext;
unsigned int nFile;
unsigned int nBlockPos;
uint256 nChainTrust; // ppcoin: trust score of block chain
int nHeight;
int64 nMint;
int64 nMoneySupply;
unsigned int nFlags; // ppcoin: block index flags
enum
{
BLOCK_PROOF_OF_STAKE = (1 << 0), // is proof-of-stake block
BLOCK_STAKE_ENTROPY = (1 << 1), // entropy bit for stake modifier
BLOCK_STAKE_MODIFIER = (1 << 2), // regenerated stake modifier
};
uint64 nStakeModifier; // hash modifier for proof-of-stake
unsigned int nStakeModifierChecksum; // checksum of index; in-memory only
// proof-of-stake specific fields
COutPoint prevoutStake;
unsigned int nStakeTime;
uint256 hashProofOfStake;
// block header
int nVersion;
uint256 hashMerkleRoot;
unsigned int nTime;
unsigned int nBits;
unsigned int nNonce;
CBlockIndex()
{
phashBlock = NULL;
pprev = NULL;
pnext = NULL;
nFile = 0;
nBlockPos = 0;
nHeight = 0;
nChainTrust = 0;
nMint = 0;
nMoneySupply = 0;
nFlags = 0;
nStakeModifier = 0;
nStakeModifierChecksum = 0;
hashProofOfStake = 0;
prevoutStake.SetNull();
nStakeTime = 0;
nVersion = 0;
hashMerkleRoot = 0;
nTime = 0;
nBits = 0;
nNonce = 0;
}
CBlockIndex(unsigned int nFileIn, unsigned int nBlockPosIn, CBlock& block)
{
phashBlock = NULL;
pprev = NULL;
pnext = NULL;
nFile = nFileIn;
nBlockPos = nBlockPosIn;
nHeight = 0;
nChainTrust = 0;
nMint = 0;
nMoneySupply = 0;
nFlags = 0;
nStakeModifier = 0;
nStakeModifierChecksum = 0;
hashProofOfStake = 0;
if (block.IsProofOfStake())
{
SetProofOfStake();
prevoutStake = block.vtx[1].vin[0].prevout;
nStakeTime = block.vtx[1].nTime;
}
else
{
prevoutStake.SetNull();
nStakeTime = 0;
}
nVersion = block.nVersion;
hashMerkleRoot = block.hashMerkleRoot;
nTime = block.nTime;
nBits = block.nBits;
nNonce = block.nNonce;
}
CBlock GetBlockHeader() const
{
CBlock block;
block.nVersion = nVersion;
if (pprev)
block.hashPrevBlock = pprev->GetBlockHash();
block.hashMerkleRoot = hashMerkleRoot;
block.nTime = nTime;
block.nBits = nBits;
block.nNonce = nNonce;
return block;
}
uint256 GetBlockHash() const
{
return *phashBlock;
}
int64 GetBlockTime() const
{
return (int64)nTime;
}
uint256 GetBlockTrust() const;
bool IsInMainChain() const
{
return (pnext || this == pindexBest);
}
bool CheckIndex() const
{
return true;
}
enum { nMedianTimeSpan=11 };
int64 GetMedianTimePast() const
{
int64 pmedian[nMedianTimeSpan];
int64* pbegin = &pmedian[nMedianTimeSpan];
int64* pend = &pmedian[nMedianTimeSpan];
const CBlockIndex* pindex = this;
for (int i = 0; i < nMedianTimeSpan && pindex; i++, pindex = pindex->pprev)
*(--pbegin) = pindex->GetBlockTime();
std::sort(pbegin, pend);
return pbegin[(pend - pbegin)/2];
}
int64 GetMedianTime() const
{
const CBlockIndex* pindex = this;
for (int i = 0; i < nMedianTimeSpan/2; i++)
{
if (!pindex->pnext)
return GetBlockTime();
pindex = pindex->pnext;
}
return pindex->GetMedianTimePast();
}
/**
* Returns true if there are nRequired or more blocks of minVersion or above
* in the last nToCheck blocks, starting at pstart and going backwards.
*/
static bool IsSuperMajority(int minVersion, const CBlockIndex* pstart,
unsigned int nRequired, unsigned int nToCheck);
bool IsProofOfWork() const
{
return !(nFlags & BLOCK_PROOF_OF_STAKE);
}
bool IsProofOfStake() const
{
return (nFlags & BLOCK_PROOF_OF_STAKE);
}
void SetProofOfStake()
{
nFlags |= BLOCK_PROOF_OF_STAKE;
}
unsigned int GetStakeEntropyBit() const
{
return ((nFlags & BLOCK_STAKE_ENTROPY) >> 1);
}
bool SetStakeEntropyBit(unsigned int nEntropyBit)
{
if (nEntropyBit > 1)
return false;
nFlags |= (nEntropyBit? BLOCK_STAKE_ENTROPY : 0);
return true;
}
bool GeneratedStakeModifier() const
{
return (nFlags & BLOCK_STAKE_MODIFIER);
}
void SetStakeModifier(uint64 nModifier, bool fGeneratedStakeModifier)
{
nStakeModifier = nModifier;
if (fGeneratedStakeModifier)
nFlags |= BLOCK_STAKE_MODIFIER;
}
std::string ToString() const
{
return strprintf("CBlockIndex(pprev=%p, pnext=%p, nFile=%u, nBlockPos=%-6d nHeight=%d, nMint=%s, nMoneySupply=%s, nFlags=(%s)(%d)(%s), nStakeModifier=%016"PRI64x", nStakeModifierChecksum=%08x, hashProofOfStake=%s, prevoutStake=(%s), nStakeTime=%d merkle=%s, hashBlock=%s)",
pprev, pnext, nFile, nBlockPos, nHeight,
FormatMoney(nMint).c_str(), FormatMoney(nMoneySupply).c_str(),
GeneratedStakeModifier() ? "MOD" : "-", GetStakeEntropyBit(), IsProofOfStake()? "PoS" : "PoW",
nStakeModifier, nStakeModifierChecksum,
hashProofOfStake.ToString().c_str(),
prevoutStake.ToString().c_str(), nStakeTime,
hashMerkleRoot.ToString().c_str(),
GetBlockHash().ToString().c_str());
}
void print() const
{
printf("%s\n", ToString().c_str());
}
};
/** Used to marshal pointers into hashes for db storage. */
class CDiskBlockIndex : public CBlockIndex
{
private:
uint256 blockHash;
public:
uint256 hashPrev;
uint256 hashNext;
CDiskBlockIndex()
{
hashPrev = 0;
hashNext = 0;
blockHash = 0;
}
explicit CDiskBlockIndex(CBlockIndex* pindex) : CBlockIndex(*pindex)
{
hashPrev = (pprev ? pprev->GetBlockHash() : 0);
hashNext = (pnext ? pnext->GetBlockHash() : 0);
}
IMPLEMENT_SERIALIZE
(
if (!(nType & SER_GETHASH))
READWRITE(nVersion);
READWRITE(hashNext);
READWRITE(nFile);
READWRITE(nBlockPos);
READWRITE(nHeight);
READWRITE(nMint);
READWRITE(nMoneySupply);
READWRITE(nFlags);
READWRITE(nStakeModifier);
if (IsProofOfStake())
{
READWRITE(prevoutStake);
READWRITE(nStakeTime);
READWRITE(hashProofOfStake);
}
else if (fRead)
{
const_cast<CDiskBlockIndex*>(this)->prevoutStake.SetNull();
const_cast<CDiskBlockIndex*>(this)->nStakeTime = 0;
const_cast<CDiskBlockIndex*>(this)->hashProofOfStake = 0;
}
// block header
READWRITE(this->nVersion);
READWRITE(hashPrev);
READWRITE(hashMerkleRoot);
READWRITE(nTime);
READWRITE(nBits);
READWRITE(nNonce);
READWRITE(blockHash);
)
uint256 GetBlockHash() const
{
if (fUseFastIndex && (nTime < GetAdjustedTime() - 24 * 60 * 60) && blockHash != 0)
return blockHash;
CBlock block;
block.nVersion = nVersion;
block.hashPrevBlock = hashPrev;
block.hashMerkleRoot = hashMerkleRoot;
block.nTime = nTime;
block.nBits = nBits;
block.nNonce = nNonce;
const_cast<CDiskBlockIndex*>(this)->blockHash = block.GetHash();
return blockHash;
}
std::string ToString() const
{
std::string str = "CDiskBlockIndex(";
str += CBlockIndex::ToString();
str += strprintf("\n hashBlock=%s, hashPrev=%s, hashNext=%s)",
GetBlockHash().ToString().c_str(),
hashPrev.ToString().c_str(),
hashNext.ToString().c_str());
return str;
}
void print() const
{
printf("%s\n", ToString().c_str());
}
};
/** Describes a place in the block chain to another node such that if the
* other node doesn't have the same branch, it can find a recent common trunk.
* The further back it is, the further before the fork it may be.
*/
class CBlockLocator
{
protected:
std::vector<uint256> vHave;
public:
CBlockLocator()
{
}
explicit CBlockLocator(const CBlockIndex* pindex)
{
Set(pindex);
}
explicit CBlockLocator(uint256 hashBlock)
{
std::map<uint256, CBlockIndex*>::iterator mi = mapBlockIndex.find(hashBlock);
if (mi != mapBlockIndex.end())
Set((*mi).second);
}
CBlockLocator(const std::vector<uint256>& vHaveIn)
{
vHave = vHaveIn;
}
IMPLEMENT_SERIALIZE
(
if (!(nType & SER_GETHASH))
READWRITE(nVersion);
READWRITE(vHave);
)
void SetNull()
{
vHave.clear();
}
bool IsNull()
{
return vHave.empty();
}
void Set(const CBlockIndex* pindex)
{
vHave.clear();
int nStep = 1;
while (pindex)
{
vHave.push_back(pindex->GetBlockHash());
// Exponentially larger steps back
for (int i = 0; pindex && i < nStep; i++)
pindex = pindex->pprev;
if (vHave.size() > 10)
nStep *= 2;
}
vHave.push_back((!fTestNet ? hashGenesisBlock : hashGenesisBlockTestNet));
}
int GetDistanceBack()
{
// Retrace how far back it was in the sender's branch
int nDistance = 0;
int nStep = 1;
BOOST_FOREACH(const uint256& hash, vHave)
{
std::map<uint256, CBlockIndex*>::iterator mi = mapBlockIndex.find(hash);
if (mi != mapBlockIndex.end())
{
CBlockIndex* pindex = (*mi).second;
if (pindex->IsInMainChain())
return nDistance;
}
nDistance += nStep;
if (nDistance > 10)
nStep *= 2;
}
return nDistance;
}
CBlockIndex* GetBlockIndex()
{
// Find the first block the caller has in the main chain
BOOST_FOREACH(const uint256& hash, vHave)
{
std::map<uint256, CBlockIndex*>::iterator mi = mapBlockIndex.find(hash);
if (mi != mapBlockIndex.end())
{
CBlockIndex* pindex = (*mi).second;
if (pindex->IsInMainChain())
return pindex;
}
}
return pindexGenesisBlock;
}
uint256 GetBlockHash()
{
// Find the first block the caller has in the main chain
BOOST_FOREACH(const uint256& hash, vHave)
{
std::map<uint256, CBlockIndex*>::iterator mi = mapBlockIndex.find(hash);
if (mi != mapBlockIndex.end())
{
CBlockIndex* pindex = (*mi).second;
if (pindex->IsInMainChain())
return hash;
}
}
return (!fTestNet ? hashGenesisBlock : hashGenesisBlockTestNet);
}
int GetHeight()
{
CBlockIndex* pindex = GetBlockIndex();
if (!pindex)
return 0;
return pindex->nHeight;
}
};
class CTxMemPool
{
public:
mutable CCriticalSection cs;
std::map<uint256, CTransaction> mapTx;
std::map<COutPoint, CInPoint> mapNextTx;
bool accept(CTxDB& txdb, CTransaction &tx,
bool fCheckInputs, bool* pfMissingInputs);
bool addUnchecked(const uint256& hash, CTransaction &tx);
bool remove(CTransaction &tx);
void clear();
void queryHashes(std::vector<uint256>& vtxid);
unsigned long size()
{
LOCK(cs);
return mapTx.size();
}
bool exists(uint256 hash)
{
return (mapTx.count(hash) != 0);
}
CTransaction& lookup(uint256 hash)
{
return mapTx[hash];
}
};
extern CTxMemPool mempool;
#endif
|
{
"content_hash": "090c1f146c72bd1d1dab95d22d9e5f91",
"timestamp": "",
"source": "github",
"line_count": 1606,
"max_line_length": 281,
"avg_line_length": 28.093399750933997,
"alnum_prop": 0.6021986790194601,
"repo_name": "Rimbit/Wallets",
"id": "1e2d723275be78a487c7344211b0bcf5dac3beb4",
"size": "45118",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "src/main.h",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Assembly",
"bytes": "51312"
},
{
"name": "C",
"bytes": "32649"
},
{
"name": "C++",
"bytes": "2557238"
},
{
"name": "CSS",
"bytes": "1127"
},
{
"name": "Groff",
"bytes": "12582"
},
{
"name": "HTML",
"bytes": "50615"
},
{
"name": "Makefile",
"bytes": "12660"
},
{
"name": "NSIS",
"bytes": "5840"
},
{
"name": "Objective-C",
"bytes": "747"
},
{
"name": "Objective-C++",
"bytes": "2451"
},
{
"name": "Python",
"bytes": "44097"
},
{
"name": "QMake",
"bytes": "14105"
},
{
"name": "Shell",
"bytes": "8193"
}
]
}
|
A tiny but fully functional box label generator sample built with Python tkinter and HTML.
You can use this software for your own purposes or simply as a learning sample.
Requirements:
1) For source code:
- Python 3.4 or higher
- Python modules:
- pillow
- qrcode
- pdfkit
- Wkhtmltopdf from http://wkhtmltopdf.org/downloads.html
2) For binary:
- just wkhtmltopdf
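As a rough sketch of how such a generator can assemble a label before handing it to wkhtmltopdf: the HTML is built as a plain string with the QR image embedded inline. Note that `render_label_html` and its field names are hypothetical illustrations, not part of this project's actual API.

```python
# Minimal sketch of an HTML label template for the pdfkit/wkhtmltopdf
# pipeline described above. render_label_html and its fields are
# hypothetical names, not this project's actual API.
import base64


def render_label_html(product: str, sku: str, qr_png: bytes = b"") -> str:
    """Build the HTML that wkhtmltopdf (via pdfkit) would turn into a PDF."""
    # Embed the QR image as a data URI so the HTML is self-contained.
    qr_src = "data:image/png;base64," + base64.b64encode(qr_png).decode("ascii")
    return (
        "<html><body>"
        f"<h1>{product}</h1>"
        f"<p>SKU: {sku}</p>"
        f'<img src="{qr_src}" alt="QR code"/>'
        "</body></html>"
    )


if __name__ == "__main__":
    html = render_label_html("Sample Box", "BX-0001")
    print(html)
```

With the real dependencies installed, `qrcode.make(sku)` can produce the QR image for the `qr_png` bytes, and `pdfkit.from_string(html, "label.pdf")` renders the final PDF.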
|
{
"content_hash": "fe5241515073ce23331fa6c672beec63",
"timestamp": "",
"source": "github",
"line_count": 16,
"max_line_length": 98,
"avg_line_length": 23.4375,
"alnum_prop": 0.76,
"repo_name": "bzimor/LabelGenerator",
"id": "6ec58ebb238e709e0b09eaabf54afd2ad2fe3c72",
"size": "392",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "README.md",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "53953"
}
]
}
|
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2009-2013 The Bitcoin developers
// Distributed under the MIT/X11 software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_CHAIN_PARAMS_H
#define BITCOIN_CHAIN_PARAMS_H
#include "bignum.h"
#include "uint256.h"
#include "util.h"
#include <vector>
#define MESSAGE_START_SIZE 4
typedef unsigned char MessageStartChars[MESSAGE_START_SIZE];
class CAddress;
class CBlock;
class CBlockIndex;
struct CDNSSeedData {
std::string name, host;
CDNSSeedData(const std::string &strName, const std::string &strHost) : name(strName), host(strHost) {}
};
/**
* CChainParams defines various tweakable parameters of a given instance of the
* Bitcoin system. There are three: the main network on which people trade goods
* and services, the public test network which gets reset from time to time, and
* a regression test mode which is intended for private networks only and has
* minimal difficulty to ensure that blocks can be found instantly.
*/
class CChainParams
{
public:
enum Network {
MAIN,
TESTNET,
REGTEST,
MAX_NETWORK_TYPES
};
enum Base58Type {
PUBKEY_ADDRESS,
SCRIPT_ADDRESS,
SECRET_KEY,
STEALTH_ADDRESS,
EXT_PUBLIC_KEY,
EXT_SECRET_KEY,
EXT_KEY_HASH,
EXT_ACC_HASH,
EXT_PUBLIC_KEY_BTC,
EXT_SECRET_KEY_BTC,
MAX_BASE58_TYPES
};
const uint256& HashGenesisBlock() const { return hashGenesisBlock; }
const MessageStartChars& MessageStart() const { return pchMessageStart; }
const std::vector<unsigned char>& AlertKey() const { return vAlertPubKey; }
int GetDefaultPort() const { return nDefaultPort; }
bool IsProtocolV2(int nHeight) const { return nHeight > nFirstPosv2Block; }
bool IsProtocolV3(int nHeight) const { return nHeight > nFirstPosv3Block; }
const CBigNum& ProofOfWorkLimit() const { return bnProofOfWorkLimit; }
const CBigNum& ProofOfStakeLimit(int nHeight) const { return IsProtocolV2(nHeight) ? bnProofOfStakeLimitV2 : bnProofOfStakeLimit; }
virtual const CBlock& GenesisBlock() const = 0;
virtual bool RequireRPCPassword() const { return true; }
const std::string& DataDir() const { return strDataDir; }
virtual Network NetworkID() const = 0;
const std::vector<CDNSSeedData>& DNSSeeds() const { return vSeeds; }
const std::vector<unsigned char> &Base58Prefix(Base58Type type) const { return base58Prefixes[type]; }
virtual const std::vector<CAddress>& FixedSeeds() const = 0;
std::string NetworkIDString() const { return strNetworkID; }
int RPCPort() const { return nRPCPort; }
int BIP44ID() const { return nBIP44ID; }
int LastPOWBlock() const { return nLastPOWBlock; }
int64_t GetProofOfWorkReward(int nHeight, int64_t nFees) const;
int64_t GetProofOfStakeReward(const CBlockIndex* pindexPrev, int64_t nCoinAge, int64_t nFees) const;
int64_t COIN_YEAR_REWARD() const;
protected:
CChainParams() {};
uint256 hashGenesisBlock;
MessageStartChars pchMessageStart;
// Raw pub key bytes for the broadcast alert signing key.
std::vector<unsigned char> vAlertPubKey;
std::string strNetworkID;
int nDefaultPort;
int nRPCPort;
int nBIP44ID;
int nFirstPosv2Block;
int nFirstPosv3Block;
CBigNum bnProofOfWorkLimit;
CBigNum bnProofOfStakeLimit;
CBigNum bnProofOfStakeLimitV2;
std::string strDataDir;
std::vector<CDNSSeedData> vSeeds;
std::vector<unsigned char> base58Prefixes[MAX_BASE58_TYPES];
int nLastPOWBlock;
};
/**
* Return the currently selected parameters. This won't change after app startup
* outside of the unit tests.
*/
const CChainParams &Params();
/**
* Return the testnet parameters.
*/
const CChainParams &TestNetParams();
/**
* Return the mainnet parameters.
*/
const CChainParams &MainNetParams();
/** Sets the params returned by Params() to those for the given network. */
void SelectParams(CChainParams::Network network);
/**
* Looks for -regtest or -testnet and then calls SelectParams as appropriate.
* Returns false if an invalid combination is given.
*/
bool SelectParamsFromCommandLine();
inline bool TestNet() {
// Note: it's deliberate that this returns "false" for regression test mode.
return Params().NetworkID() == CChainParams::TESTNET;
}
#endif
|
{
"content_hash": "e7fd11e7dd7f3b370c3baad1e63e8397",
"timestamp": "",
"source": "github",
"line_count": 149,
"max_line_length": 135,
"avg_line_length": 29.973154362416107,
"alnum_prop": 0.7133900582176445,
"repo_name": "Leocoin-project/LEOcoin",
"id": "140a321d13d9d6ae59848b571a20dc9221fbed12",
"size": "4466",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "src/chainparams.h",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Assembly",
"bytes": "51312"
},
{
"name": "C",
"bytes": "635469"
},
{
"name": "C++",
"bytes": "3590054"
},
{
"name": "CSS",
"bytes": "563703"
},
{
"name": "HTML",
"bytes": "249803"
},
{
"name": "JavaScript",
"bytes": "376828"
},
{
"name": "Makefile",
"bytes": "13225"
},
{
"name": "NSIS",
"bytes": "5914"
},
{
"name": "Objective-C",
"bytes": "1052"
},
{
"name": "Objective-C++",
"bytes": "6310"
},
{
"name": "Python",
"bytes": "48293"
},
{
"name": "QMake",
"bytes": "14790"
},
{
"name": "Roff",
"bytes": "12684"
},
{
"name": "Shell",
"bytes": "9817"
}
]
}
|
using System;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.Threading.Tasks;
using Newtonsoft.Json;
namespace CloudStack.Net
{
public class ListPortableIpRangesRequest : APIListRequest
{
public ListPortableIpRangesRequest() : base("listPortableIpRanges") {}
/// <summary>
/// Id of the portable ip range
/// </summary>
public Guid? Id {
get { return GetParameterValue<Guid?>(nameof(Id).ToLower()); }
set { SetParameterValue(nameof(Id).ToLower(), value); }
}
/// <summary>
/// List by keyword
/// </summary>
public string Keyword {
get { return GetParameterValue<string>(nameof(Keyword).ToLower()); }
set { SetParameterValue(nameof(Keyword).ToLower(), value); }
}
/// <summary>
/// Id of a Region
/// </summary>
public int? RegionId {
get { return GetParameterValue<int?>(nameof(RegionId).ToLower()); }
set { SetParameterValue(nameof(RegionId).ToLower(), value); }
}
}
/// <summary>
/// list portable IP ranges
/// </summary>
public partial interface ICloudStackAPIClient
{
ListResponse<PortableIpRangeResponse> ListPortableIpRanges(ListPortableIpRangesRequest request);
Task<ListResponse<PortableIpRangeResponse>> ListPortableIpRangesAsync(ListPortableIpRangesRequest request);
ListResponse<PortableIpRangeResponse> ListPortableIpRangesAllPages(ListPortableIpRangesRequest request);
Task<ListResponse<PortableIpRangeResponse>> ListPortableIpRangesAllPagesAsync(ListPortableIpRangesRequest request);
}
public partial class CloudStackAPIClient : ICloudStackAPIClient
{
public ListResponse<PortableIpRangeResponse> ListPortableIpRanges(ListPortableIpRangesRequest request) => Proxy.Request<ListResponse<PortableIpRangeResponse>>(request);
public Task<ListResponse<PortableIpRangeResponse>> ListPortableIpRangesAsync(ListPortableIpRangesRequest request) => Proxy.RequestAsync<ListResponse<PortableIpRangeResponse>>(request);
public ListResponse<PortableIpRangeResponse> ListPortableIpRangesAllPages(ListPortableIpRangesRequest request) => Proxy.RequestAllPages<PortableIpRangeResponse>(request);
public Task<ListResponse<PortableIpRangeResponse>> ListPortableIpRangesAllPagesAsync(ListPortableIpRangesRequest request) => Proxy.RequestAllPagesAsync<PortableIpRangeResponse>(request);
}
}
|
{
"content_hash": "f94e64a338bc3081f389b2b8dd055178",
"timestamp": "",
"source": "github",
"line_count": 55,
"max_line_length": 194,
"avg_line_length": 46.163636363636364,
"alnum_prop": 0.7156360771957464,
"repo_name": "richardlawley/cloudstack.net",
"id": "1a5a874edf765bf1e28ca34d6c92a00d27007cb3",
"size": "2539",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "src/CloudStack.Net/Generated/ListPortableIpRanges.cs",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Batchfile",
"bytes": "34"
},
{
"name": "C#",
"bytes": "1822509"
},
{
"name": "Java",
"bytes": "75375"
},
{
"name": "PowerShell",
"bytes": "3037"
}
]
}
|
import React, { PropTypes } from 'react';
import { Popover, OverlayTrigger, Tooltip } from 'react_bootstrap';
import ChangesLinks from 'es6!display/changes/links';
import { Button } from 'es6!display/button';
import { ProgrammingError } from 'es6!display/errors';
import Request from 'es6!display/request';
import { buildSummaryText } from 'es6!display/changes/build_text';
import { COND_NO_BUILDS,
get_runnable_condition,
get_runnable_condition_icon,
get_runnables_summary_condition,
ConditionDot } from 'es6!display/changes/build_conditions';
import * as api from 'es6!server/api';
/*
* Shows the status of many builds run for a single code change (e.g. a commit
* or diff.) Despite the name, this widget can also handle showing a single
* build...you use this for any interface where you might be showing builds
* from more than one project.
*/
export var ManyBuildsStatus = React.createClass({
propTypes: {
builds: PropTypes.array,
},
render: function() {
var builds = this.props.builds;
if (builds.length === 0) {
return get_runnable_condition_icon(COND_NO_BUILDS);
}
// If this is a diff, we only want to look at builds that ran on the last
// code change
var builds_for_last_code_change = buildsForLastCodeChange(builds);
// grab the latest builds for each project
var builds_by_project = _.groupBy(builds_for_last_code_change,
b => b.project.slug);
var latest_builds = _.map(builds_by_project, builds => {
return _.chain(builds)
.sortBy(b => b.dateCreated)
.last()
.value();
});
// TODO: how to order projects? Right now, I do it alphabetically by project name...
// I think that makes this easiest to instantly parse every time someone views this.
latest_builds = _.sortBy(latest_builds, b => b.project.name);
var tooltip_markup = _.map(latest_builds, b => {
var subtext = buildSummaryText(b, true);
return <div style={{textAlign: "left"}} key={b.id}>
<div style={{ display: "inline-block", paddingTop: 10, paddingRight: 5}}>
<ConditionDot condition={get_runnable_condition(b)} />
</div>
<div style={{ verticalAlign: "top", display: "inline-block"}}>
<div>{b.project.name}</div>
<span className="mediumGray">{subtext}</span>
</div>
</div>
});
var tooltip = <Tooltip>{tooltip_markup}</Tooltip>;
var summary_condition = get_runnables_summary_condition(latest_builds);
var multiIndicator = latest_builds.length > 1;
var builds_href = ChangesLinks.buildsHref(latest_builds);
return <OverlayTrigger
placement="right"
overlay={tooltip}>
<a className="buildStatus" href={builds_href}>
<ConditionDot condition={summary_condition} multiIndicator={multiIndicator} />
</a>
</OverlayTrigger>;
}
});
/*
* Shows the status of a single build. This tooltip can go into more details
* than ManyBuildsStatus (showing the names of the failing tests)
*/
export var SingleBuildStatus = React.createClass({
MAX_TESTS_IN_TOOLTIP: 15,
propTypes: {
build: PropTypes.object,
placement: PropTypes.string,
parentElem: PropTypes.object,
},
render: function() {
var build = this.props.build;
var condition = get_runnable_condition(build);
var href = ChangesLinks.buildHref(build);
var error_count = build.failures ?
_.filter(build.failures, f => f.id !== 'test_failures').length :
0; // if it's 0, we don't know whether there are 0 failures or if the
// backend didn't return this info
var dotNum = null;
if (error_count > 0) {
dotNum = 'E';
} else if (build.stats['test_failures'] > 0) {
dotNum = build.stats['test_failures'];
}
// TODO: could show error messages in tooltip...
var tooltip = this.getStandardTooltip();
if (build.stats['test_failures'] > 0) {
tooltip = this.getFailedTestsTooltip();
}
var dot = <ConditionDot condition={condition} num={dotNum} />;
var widget = <a className="buildStatus" href={href}>
{dot}
</a>;
if (tooltip) {
return <div>
<OverlayTrigger
placement={this.props.placement || "right"}
overlay={tooltip}>
<div>{widget}</div>
</OverlayTrigger>
</div>;
}
return widget;
},
getStandardTooltip() {
return <Tooltip>{this.getTooltipHeader()}</Tooltip>;
},
getTooltipHeader() {
var build = this.props.build;
var subtext = buildSummaryText(build, true);
return <div style={{textAlign: "left"}}>
<div style={{ display: "inline-block", paddingTop: 10, paddingRight: 5}}>
<ConditionDot condition={get_runnable_condition(build)} />
</div>
<div style={{ verticalAlign: "top", display: "inline-block"}}>
<div>{build.project.name}</div>
<span className="mediumGray">{subtext}</span>
</div>
</div>;
},
getFailedTestsTooltip: function() {
var elem = this.props.parentElem, build = this.props.build;
var state_key = "_build_widget_failed_tests";
// make sure parentElem has a state object
// TODO: we could silently add this ourselves if missing
if (!elem.state) {
return <Popover> <ProgrammingError>
Programming Error: The parentElem of BuildWidget must implement
getInitialState()! Just return{" {}"}
</ProgrammingError> </Popover>;
}
if (elem.state[state_key] &&
api.isLoaded(elem.state[state_key][build.id])) {
var data = elem.state[state_key][build.id].getReturnedData();
var tests = data.testFailures.tests.slice(0, this.MAX_TESTS_IN_TOOLTIP);
var list = _.map(tests, t => {
return <div key={"test-id-key:" + t.id}>{t.shortName}</div>;
});
if (tests.length < build.stats['test_failures']) {
list.push(
<div className="marginTopS" key="tests-more-key"> <em>
Showing{" "}
{tests.length}
{" "}out of{" "}
{build.stats['test_failures']}
{" "}test failures
</em> </div>
);
}
return <Tooltip key={+new Date()}>
{this.getTooltipHeader()}
<div style={{textAlign: "left", marginTop: 10, marginLeft: 25}}>
<span className="bb">Failed Tests:</span>
{list}
</div>
</Tooltip>;
} else {
// we want to fetch more build information and show a list of failed
// tests on hover. To do this, we'll create an anonymous react element
// that does data fetching on mount
var data_fetcher_defn = React.createClass({
componentDidMount() {
if (!elem.state[state_key] ||
!elem.state[state_key][build.id]) {
api.fetchMap(elem, state_key, {
[ build.id ]: `/api/0/builds/${build.id}/`
});
}
},
render() {
return <span />;
}
});
var data_fetcher = React.createElement(
data_fetcher_defn,
{elem: elem, buildID: build.id}
);
return <Tooltip>
{this.getTooltipHeader()}
{data_fetcher}
<div style={{textAlign: "left", marginTop: 10, marginLeft: 25}}>
Loading failed test list
</div>
</Tooltip>;
}
},
});
/*
* Shows the status for a missing build. In practice, this is merely a button
* that allows the user to create a new build for the corresponding commit.
*/
export var MissingBuildStatus = React.createClass({
propTypes: {
project_slug: PropTypes.string,
commit_sha: PropTypes.string,
parentElem: PropTypes.object,
selectiveTesting: PropTypes.bool,
},
getDefaultProps: function() {
return { selectiveTesting: false };
},
render: function() {
var buttonName = "createBuild_" + this.props.commit_sha;
var tooltip = <Tooltip>Create a new build for this commit.</Tooltip>;
var project = this.props.project_slug;
var commit = this.props.commit_sha;
var selectiveTesting = this.props.selectiveTesting ? 'true' : 'false';
var build_widget = <div>
<Request
parentElem={this.props.parentElem}
name={buttonName}
endpoint={`/api/0/builds/?project=${project}&sha=${commit}&cause=manual&selective_testing=${selectiveTesting}`}
method="post">
<OverlayTrigger
placement="right"
overlay={tooltip}>
<Button type="white" className="iconButton">
<i className="fa fa-cogs blue" />
</Button>
</OverlayTrigger>
</Request>
</div>;
return build_widget;
},
});
// if a list of builds is for a differential diff, filter them so that we only
// have the builds for the latest update. It's safe to run this on non-diffs...we'll
// just return the original list
//
// we won't know about the latest update if no builds have run for it (instead
// returning builds for the second-latest update), but I think that's fine
export var buildsForLastCodeChange = function(builds) {
var revision_ids = [];
var diff_ids = [];
// we only do something if every build is from the same phabricator revision
// id
var all_phabricator = true;
_.each(builds, build => {
var build_revision_id = build.source.patch &&
build.source.data['phabricator.revisionID'];
// must be from a phabricator revision. Note: returning a value from an
// _.each callback only skips that iteration, so track this with a flag
if (!build_revision_id) {
all_phabricator = false;
return;
}
revision_ids.push(build.source.data['phabricator.revisionID']);
diff_ids.push(build.source.data['phabricator.diffID']);
});
if (!all_phabricator) {
return builds;
}
revision_ids = _.uniq(revision_ids);
diff_ids = _.uniq(diff_ids).sort().reverse();
if (revision_ids.length > 1) {
return builds;
}
var latest_diff_id = diff_ids[0];
return _.filter(builds,
b => b.source.data['phabricator.diffID'] === latest_diff_id);
}
|
{
"content_hash": "d0a809f129733a5236418863cf5561a9",
"timestamp": "",
"source": "github",
"line_count": 312,
"max_line_length": 121,
"avg_line_length": 31.69871794871795,
"alnum_prop": 0.6194135490394338,
"repo_name": "dropbox/changes",
"id": "29035d5f8145fcd42eae1224193cddb8d766a51b",
"size": "9890",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "webapp/display/changes/builds.js",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "CSS",
"bytes": "24837"
},
{
"name": "HTML",
"bytes": "21274"
},
{
"name": "JavaScript",
"bytes": "380548"
},
{
"name": "Makefile",
"bytes": "6148"
},
{
"name": "Mako",
"bytes": "412"
},
{
"name": "Python",
"bytes": "2189624"
},
{
"name": "Shell",
"bytes": "4150"
}
]
}
|
<bill session="115" type="s" number="1571" updated="2017-09-11T03:01:53Z">
<state datetime="2017-07-17">REFERRED</state>
<status>
<introduced datetime="2017-07-17"/>
</status>
<introduced datetime="2017-07-17"/>
<titles>
<title type="short" as="introduced">National Flood Insurance Program Reauthorization Act of 2017</title>
<title type="short" as="introduced">National Flood Insurance Program Reauthorization Act of 2017</title>
<title type="official" as="introduced">A bill to reauthorize the National Flood Insurance Program, and for other purposes.</title>
<title type="display">National Flood Insurance Program Reauthorization Act of 2017</title>
</titles>
<sponsor bioguide_id="C000880"/>
<cosponsors>
<cosponsor bioguide_id="B000944" joined="2017-07-17"/>
</cosponsors>
<actions>
<action datetime="2017-07-17">
<text>Introduced in Senate</text>
</action>
<action datetime="2017-07-17" state="REFERRED">
<text>Read twice and referred to the Committee on Banking, Housing, and Urban Affairs.</text>
</action>
</actions>
<committees>
<committee subcommittee="" code="SSBK" name="Senate Banking, Housing, and Urban Affairs" activity="Referral"/>
</committees>
<relatedbills/>
<subjects>
<term name="Finance and financial sector"/>
</subjects>
<amendments/>
<committee-reports/>
</bill>
|
{
"content_hash": "aabb09a07fe78588c0266ceb500464b7",
"timestamp": "",
"source": "github",
"line_count": 34,
"max_line_length": 134,
"avg_line_length": 40.76470588235294,
"alnum_prop": 0.6984126984126984,
"repo_name": "peter765/power-polls",
"id": "1de59e9c5c5a2465b79a392138bcab82505d60c9",
"size": "1386",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "db/bills/s/s1571/data.xml",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "HTML",
"bytes": "58567"
},
{
"name": "JavaScript",
"bytes": "7370"
},
{
"name": "Python",
"bytes": "22988"
}
]
}
|
package seedu.task.commons.exceptions;
/**
* Signals that some given data does not fulfill some constraints.
*/
public class IllegalValueException extends Exception {
/**
* @param message
* should contain relevant information on the failed constraint(s)
*/
public IllegalValueException(String message) {
super(message);
}
}
|
{
"content_hash": "e26358d2008572fcf10f02e965f71726",
"timestamp": "",
"source": "github",
"line_count": 14,
"max_line_length": 81,
"avg_line_length": 26.642857142857142,
"alnum_prop": 0.675603217158177,
"repo_name": "CS2103JAN2017-F11-B2/main",
"id": "518fdb0485e464dffeba72e2aab7d8abb628cbd8",
"size": "373",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "src/main/java/seedu/task/commons/exceptions/IllegalValueException.java",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "6351"
},
{
"name": "Java",
"bytes": "412487"
},
{
"name": "Shell",
"bytes": "1525"
}
]
}
|
#ifndef _TUPL_PVT_NONPAGEDB_HPP
#define _TUPL_PVT_NONPAGEDB_HPP
#include "PageDb.hpp"
namespace tupl { namespace pvt {
class NonPageDb: public PageDb {
const size_t mPageSize;
public:
NonPageDb(size_t pageSize): mPageSize (pageSize) {}
size_t pageSize() const override { return mPageSize; }
long allocPage() override { return 2; }
};
} }
#endif
|
{
"content_hash": "c66acc821a64aeee1263af2821e85d50",
"timestamp": "",
"source": "github",
"line_count": 21,
"max_line_length": 58,
"avg_line_length": 17.857142857142858,
"alnum_prop": 0.68,
"repo_name": "cojen/TuplNative",
"id": "9778a00ff9675e8ffaf9ddb32a997f6e8fb16dad",
"size": "1030",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "libTupl/src/tupl/pvt/NonPageDb.hpp",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "C++",
"bytes": "93819"
}
]
}
|
ACCEPTED
#### According to
Index Fungorum
#### Published in
Mycopathologia 5: 169 (1951)
#### Original name
Asterina solanicoloides var. atypica Bat.
### Remarks
null
|
{
"content_hash": "e71c8ab2dabfc8129df475762918536a",
"timestamp": "",
"source": "github",
"line_count": 13,
"max_line_length": 41,
"avg_line_length": 13.076923076923077,
"alnum_prop": 0.7235294117647059,
"repo_name": "mdoering/backbone",
"id": "357979719ed6ad8eb2da8db02b33b394be32c525",
"size": "235",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "life/Fungi/Ascomycota/Dothideomycetes/Capnodiales/Asterinaceae/Asterina/Asterina solanicoloides/Asterina solanicoloides atypica/README.md",
"mode": "33188",
"license": "apache-2.0",
"language": []
}
|
var imgcomId = null;
var wxmsDomain = 'http://120.132.50.71/wxms';
QUnit.asyncTest('addImgcom--add a single image component', function (assert) {
var projectEntity = {
name: 'testPage',
description: 'automated test',
updatetime: new Date()
};
$.ajax({
method: "POST",
url: wxmsDomain+"/addProject",
data: projectEntity
}).done(function (msg) {
var projectId = msg.model._id;
testImgAddPage(projectId, assert);
}).fail(function (msg) {
});
function testImgAddPage(projectId,assert) {
var pageEntity = {
name: 'testPage',
sortindex: 2,
background: '333'
};
$.ajax({
method: "POST",
url: wxmsDomain+"/addPage",
data: {
projectId: projectId,
page: pageEntity
}
}).done(function (msg) {
var pageId = msg.model._id;
addImgcom(pageId,assert)
}).fail(function (msg) {
});
}
function addImgcom(pageId,assert){
var imgcomEntity = {
imgurl:'http://img.yhd.com/test',
zindex:1,
top:'10px',
left:'10px',
right:'10px',
bottom:'10px',
width:'300px',
height:'300px',
opcity:'0.5',
transform:'rotate(-51deg)',
bordercolo:'rgba(246,22,22,1.00)',
borderwidth:'5px',
borderstyle:'solid',
borderradius:'3px',
transformrotate:'-51',
boxshadowcolor:'rgba(1,255,1,0.40)',
boxshadowwidth:'10px',
boxshadowblur:'10px',
boxshadowsize:'10px',
boxshadowdegree:'120',
paddingtop:'10px',
paddingleft:'10px',
paddingright:'10px',
paddingbottom:'10px',
animationname:'leftan',
animationduration:'10s',
animationdelay:'2s',
animationcount:'10',
verticalalign:'moddloe',
href:'#test',
hreftype:'1',
dataurl:'http://www.bejson.com/test',
datamapping:'data.test'
};
$.ajax({
method: "POST",
url: wxmsDomain+"/addImgcom",
data: {
pageId:pageId,
imgcom:imgcomEntity
}
}).done(function (msg) {
console.log(msg.model);
imgcomId = msg.model._id;
assert.equal(msg.model.imgurl, imgcomEntity.imgurl, 'image component added successfully');
QUnit.start();
getImgcom(pageId);
}).fail(function (msg) {
assert.ok(false, msg.responseText);
QUnit.start();
});
}
function getImgcom(pageId){
QUnit.asyncTest('getImgcom--fetch an image component', function (assert) {
$.ajax({
method: "GET",
url: wxmsDomain+"/getImgcom",
data: {
imgcomId: imgcomId
}
}).done(function (msg) {
assert.equal(msg.model._id, imgcomId, 'image component fetched successfully');
QUnit.start();
testUpdateImgcom(msg.model);
getImgcomListByPageId(pageId)
}).fail(function (msg) {
assert.ok(false, msg.responseText);
QUnit.start();
});
});
}
function getImgcomListByPageId(pageId){
QUnit.asyncTest('getImgcomListByPageId--fetch image components for a pageId', function (assert) {
$.ajax({
method: "GET",
url: wxmsDomain+"/getImgcomListByPageId",
data: {
pageId:pageId
}
}).done(function (msg) {
assert.ok(true, 'image components for the pageId fetched successfully');
QUnit.start();
deleteImgcom()
}).fail(function (msg) {
assert.ok(false, msg.responseText);
QUnit.start();
});
});
}
function testUpdateImgcom(imgcomEntity) {
imgcomEntity.imgurl = 'http://img.yhd.com/test2';
QUnit.asyncTest('updateImgcom--update a single image component', function (assert) {
$.ajax({
method: "POST",
url: wxmsDomain+"/updateImgcom",
data:imgcomEntity
}).done(function (msg) {
assert.equal(msg.model.imgurl, 'http://img.yhd.com/test2', 'image component updated successfully');
QUnit.start();
}).fail(function (msg) {
assert.ok(false, msg.responseText);
QUnit.start();
});
});
}
function deleteImgcom(){
QUnit.asyncTest('deleteImgcom--delete a single image component', function (assert) {
$.ajax({
method: "POST",
url: wxmsDomain+"/deleteImgcom",
data: {
imgcomId: imgcomId
}
}).done(function (msg) {
assert.ok(msg.success, 'image component deleted successfully');
QUnit.start();
}).fail(function (msg) {
assert.ok(false, msg.responseText);
QUnit.start();
});
});
}
});
|
{
"content_hash": "482b8934039c2dbdbffb9507e1fd1a42",
"timestamp": "",
"source": "github",
"line_count": 213,
"max_line_length": 89,
"avg_line_length": 22.704225352112676,
"alnum_prop": 0.5018610421836228,
"repo_name": "knowThis/wxms",
"id": "a82bdb5bc3892a10421a1faf4e470bfc3493700f",
"size": "5024",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "public/wxms/test/testImgcom.js",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "4991"
},
{
"name": "HTML",
"bytes": "74138"
},
{
"name": "JavaScript",
"bytes": "219119"
}
]
}
|
.. _debian-install:
******************************************
Debian/Ubuntu: Install from apt.mopidy.com
******************************************
If you run a Debian based Linux distribution, like Ubuntu, the easiest way to
install Mopidy is from the `Mopidy APT archive <https://apt.mopidy.com/>`_.
When installing from the APT archive, you will automatically get updates to
Mopidy in the same way as you get updates to the rest of your system.
If you're on a Raspberry Pi running Debian or Raspbian, the following
instructions should work for you as well. If you're setting up a Raspberry Pi
from scratch, we have a guide for installing Debian/Raspbian and Mopidy. See
:ref:`raspberrypi-installation`.
#. Add the archive's GPG key::
wget -q -O - https://apt.mopidy.com/mopidy.gpg | sudo apt-key add -
#. Add the following to ``/etc/apt/sources.list``, or if you have the directory
``/etc/apt/sources.list.d/``, add it to a file called ``mopidy.list`` in
that directory::
# Mopidy APT archive
deb http://apt.mopidy.com/ stable main contrib non-free
deb-src http://apt.mopidy.com/ stable main contrib non-free
For the lazy, you can simply run the following command to create
``/etc/apt/sources.list.d/mopidy.list``::
sudo wget -q -O /etc/apt/sources.list.d/mopidy.list https://apt.mopidy.com/mopidy.list
#. Install Mopidy and all dependencies::
sudo apt-get update
sudo apt-get install mopidy
#. Optional: If you want to use any Mopidy extensions, like Spotify support or
Last.fm scrobbling, you need to install additional packages.
To list all the extensions available from apt.mopidy.com, you can run::
apt-cache search mopidy
To install one of the listed packages, e.g. ``mopidy-spotify``, simply run::
sudo apt-get install mopidy-spotify
For a full list of available Mopidy extensions, including those not
installable from apt.mopidy.com, see :ref:`ext`.
#. Before continuing, make sure you've read the :ref:`debian` section to learn
about the differences between running Mopidy as a system service and
manually as your own system user.
#. Finally, you need to set a couple of :doc:`config values </config>`, and then
you're ready to :doc:`run Mopidy </running>`.
When a new release of Mopidy is out, and you can't wait for your system to
figure it out for itself, run the following to upgrade right away::
sudo apt-get update
sudo apt-get dist-upgrade
|
{
"content_hash": "7a97f93a9ec8c547f13b6599f529978a",
"timestamp": "",
"source": "github",
"line_count": 64,
"max_line_length": 93,
"avg_line_length": 38.859375,
"alnum_prop": 0.6964213912344189,
"repo_name": "liamw9534/mopidy",
"id": "268649866167a4d0b13c90a5eac01d17c6a0720f",
"size": "2487",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "docs/installation/debian.rst",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "CSS",
"bytes": "610"
},
{
"name": "JavaScript",
"bytes": "122821"
},
{
"name": "Python",
"bytes": "893039"
}
]
}
|
using UnityEngine;
using System.Collections;
//[CreateAssetMenu(menuName="Brains/Player Controlled")]
public class PlayerControlledSnack //: SnackBrain
{
/*public int PlayerNumber;
private string m_MovementAxisName;
private string m_TurnAxisName;
private string m_FireButton;
public void OnEnable()
{
m_MovementAxisName = "Vertical" + PlayerNumber;
m_TurnAxisName = "Horizontal" + PlayerNumber;
m_FireButton = "Fire" + PlayerNumber;
}
public override void Think(SnackThinker Snack)
{
var movement = Snack.GetComponent<SnackMovement>();
movement.Steer(Input.GetAxis(m_MovementAxisName), Input.GetAxis(m_TurnAxisName));
var shooting = Snack.GetComponent<SnackShooting>();
if (Input.GetButton(m_FireButton))
shooting.BeginChargingShot();
else
shooting.FireChargedShot();
}*/
}
|
{
"content_hash": "ddc273c551e2ebd8c2601f66b81a8a66",
"timestamp": "",
"source": "github",
"line_count": 34,
"max_line_length": 83,
"avg_line_length": 24,
"alnum_prop": 0.7487745098039216,
"repo_name": "DavidAzouz29/Food-Fight-2016-Prototype-Unity",
"id": "bf6901b87975f4ae18df6d52ed5e24283bedc04e",
"size": "818",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "Assets/ScriptableObject/Brains/PlayerControlledSnack.cs",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "C#",
"bytes": "612952"
},
{
"name": "GLSL",
"bytes": "354902"
}
]
}
|
require 'spec_helper'
describe EnvironmentsHelper do
set(:environment) { create(:environment) }
set(:project) { environment.project }
set(:user) { create(:user) }
describe '#metrics_data' do
before do
# This is so that this spec also passes in EE.
allow(helper).to receive(:current_user).and_return(user)
allow(helper).to receive(:can?).and_return(true)
end
let(:metrics_data) { helper.metrics_data(project, environment) }
it 'returns data' do
expect(metrics_data).to include(
'settings-path' => edit_project_service_path(project, 'prometheus'),
'clusters-path' => project_clusters_path(project),
'current-environment-name': environment.name,
'documentation-path' => help_page_path('administration/monitoring/prometheus/index.md'),
'empty-getting-started-svg-path' => match_asset_path('/assets/illustrations/monitoring/getting_started.svg'),
'empty-loading-svg-path' => match_asset_path('/assets/illustrations/monitoring/loading.svg'),
'empty-no-data-svg-path' => match_asset_path('/assets/illustrations/monitoring/no_data.svg'),
'empty-unable-to-connect-svg-path' => match_asset_path('/assets/illustrations/monitoring/unable_to_connect.svg'),
'metrics-endpoint' => additional_metrics_project_environment_path(project, environment, format: :json),
'deployments-endpoint' => project_environment_deployments_path(project, environment, format: :json),
'environments-endpoint': project_environments_path(project, format: :json),
'project-path' => project_path(project),
'tags-path' => project_tags_path(project),
'has-metrics' => "#{environment.has_metrics?}",
'external-dashboard-url' => nil
)
end
context 'with metrics_setting' do
before do
create(:project_metrics_setting, project: project, external_dashboard_url: 'http://gitlab.com')
end
it 'adds external_dashboard_url' do
expect(metrics_data['external-dashboard-url']).to eq('http://gitlab.com')
end
end
end
end
|
{
"content_hash": "c4d3368100d7b8fa2867288bc6f244c4",
"timestamp": "",
"source": "github",
"line_count": 47,
"max_line_length": 121,
"avg_line_length": 44.659574468085104,
"alnum_prop": 0.6731777036684136,
"repo_name": "stoplightio/gitlabhq",
"id": "2b8bf9319fc2f392fa8af7dfdf8d87e45d58c0b2",
"size": "2130",
"binary": false,
"copies": "1",
"ref": "refs/heads/stoplight/develop",
"path": "spec/helpers/environments_helper_spec.rb",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "632980"
},
{
"name": "Clojure",
"bytes": "79"
},
{
"name": "Dockerfile",
"bytes": "1676"
},
{
"name": "HTML",
"bytes": "1264236"
},
{
"name": "JavaScript",
"bytes": "3425347"
},
{
"name": "Ruby",
"bytes": "16497064"
},
{
"name": "Shell",
"bytes": "34509"
},
{
"name": "Vue",
"bytes": "752795"
}
]
}
|
// Copyright (c) 2022, WNProject Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// clang-format off
#ifndef ROCKETCONTROLSINPUTTYPEBUTTON_H
#define ROCKETCONTROLSINPUTTYPEBUTTON_H
#include "../../Include/Rocket/Core/ElementDocument.h"
#include "../../Include/Rocket/Core/EventListener.h"
#include "InputType.h"
namespace Rocket {
namespace Controls {
/**
A button input type handler. The only functionality a button provides over a
normal element is the ability to be disabled to prevent 'click' events from
being propagated any further than the element's document.
@author Peter Curry
*/
class InputTypeButton : public InputType, public Core::EventListener {
public:
InputTypeButton(ElementFormControlInput* element);
virtual ~InputTypeButton();
/// Returns if this value should be submitted with the form.
/// @return True if the form control is to be submitted, false otherwise.
virtual bool IsSubmitted();
/// Checks for necessary functional changes in the control as a result of the
/// event.
/// @param[in] event The event to process.
virtual void ProcessEvent(Core::Event& event);
/// Sizes the dimensions to the element's inherent size.
/// @return True.
virtual bool GetIntrinsicDimensions(Rocket::Core::Vector2f& dimensions);
/// Called when the element is added into a hierarchy.
virtual void OnChildAdd();
/// Called when the element is removed from a hierarchy.
virtual void OnChildRemove();
private:
Core::ElementDocument* document;
};
}
}
#endif
|
{
"content_hash": "1aaf57f8b2a34418f78d5dfe77512176",
"timestamp": "",
"source": "github",
"line_count": 58,
"max_line_length": 79,
"avg_line_length": 28.20689655172414,
"alnum_prop": 0.7292176039119804,
"repo_name": "WNProject/WNFramework",
"id": "034253907554498e0a37e3e5a38102e39313e6cc",
"size": "2925",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "externals/librocket/Source/Controls/InputTypeButton.h",
"mode": "33188",
"license": "bsd-2-clause",
"language": [
{
"name": "Assembly",
"bytes": "21585"
},
{
"name": "C",
"bytes": "178806"
},
{
"name": "C++",
"bytes": "4268348"
},
{
"name": "CMake",
"bytes": "163273"
},
{
"name": "GAP",
"bytes": "40323"
},
{
"name": "GLSL",
"bytes": "1516"
},
{
"name": "HLSL",
"bytes": "6436"
},
{
"name": "Objective-C",
"bytes": "14271"
},
{
"name": "PostScript",
"bytes": "394"
},
{
"name": "PowerShell",
"bytes": "3448"
},
{
"name": "Python",
"bytes": "66228"
},
{
"name": "Shell",
"bytes": "5208"
}
]
}
|
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical"
android:padding="8dp">
<ImageView
android:id="@+id/img_item_news"
android:layout_width="72dp"
android:layout_height="72dp"
android:layout_marginRight="8dp"/>
<TextView
android:id="@+id/txt_item_news_title"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_toRightOf="@id/img_item_news"
tools:text="TITLE"/>
<TextView
android:id="@+id/txt_item_news_desc"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/txt_item_news_title"
android:layout_marginTop="8dp"
android:layout_toRightOf="@id/img_item_news"
android:maxLines="2"
tools:text="DESCRIPTION"/>
</RelativeLayout>
|
{
"content_hash": "839ec0ff13fcbeee7d5a13814e045b2e",
"timestamp": "",
"source": "github",
"line_count": 33,
"max_line_length": 74,
"avg_line_length": 34.60606060606061,
"alnum_prop": 0.6112084063047285,
"repo_name": "beanu/smart-farmer-android",
"id": "747b81d7c6bc383ee2d1793d744898703f8cca0b",
"size": "1146",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "app/src/main/res/layout/item_news.xml",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Java",
"bytes": "912373"
}
]
}
|
<?php
namespace Elemecca\HipchatBundle\Model;
class Installation {
private $id;
private $secret;
private $group_id;
private $room_id;
private $token;
private $token_expires;
private $capability_url;
private $token_url;
private $api_url;
public function __construct($id, $secret, $group_id, $room_id)
{
$this->id = $id;
$this->secret = $secret;
$this->group_id = $group_id;
$this->room_id = $room_id;
}
}
|
{
"content_hash": "6d308ed3fb31dc4e61946043a7e4f71e",
"timestamp": "",
"source": "github",
"line_count": 23,
"max_line_length": 66,
"avg_line_length": 21.130434782608695,
"alnum_prop": 0.5864197530864198,
"repo_name": "Elemecca/hipchat-bundle",
"id": "714fa0d9fb96f339c150ddc700add0228c786e12",
"size": "486",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "Model/Installation.php",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "PHP",
"bytes": "12249"
}
]
}
|
End of preview.