//! Git sources can also have commits deleted through
//! rebasing, whereas registries cannot have their versions deleted.
//!
//! # The Index of a Registry
//!
//! One of the major difficulties with a registry is that hosting so many
//! packages may quickly run into performance problems when dealing with
//! dependency graphs. It's infeasible for cargo to download the entire contents
//! of the registry just to resolve one package's dependencies, for example. As
//! a result, cargo needs some efficient method of querying what packages are
//! available on a registry, which versions are available, and what the
//! dependencies for each version are.
//!
//! One method of doing so would be for the registry to expose an HTTP endpoint
//! that can be queried with a list of packages and responds with their
//! dependencies and versions. This is somewhat inefficient, however, as we may
//! have to hit the endpoint many times and may already have much of the data
//! locally (for other packages, for example). It would also involve inventing
//! a transport format between the registry and Cargo itself, so this route was
//! not taken.
//!
//! Instead, Cargo communicates with registries through a git repository
//! referred to as the Index. The Index of a registry is essentially an easily
//! query-able version of the registry's database for a list of versions of a
//! package as well as a list of dependencies for each version.
//!
//! Using git to host this index provides a number of benefits:
//!
//! * The entire index can be stored efficiently locally on disk. This means
//! that all queries of a registry can happen locally and don't need to touch
//! the network.
//!
//! * Updates of the index are quite efficient. Using git buys incremental
//! updates, compressed transmission, etc for free. The index must be updated
//! each time we need fresh information from a registry, but this is one
//! update of a git repository that probably hasn't changed a whole lot so
//! it shouldn't be too expensive.
//!
//! Additionally, each modification to the index is just appending a line at
//! the end of a file (the exact format is described later). This means that
//! the commits for an index are quite small and easily applied/compressible.
//!
//! ## The format of the Index
//!
//! The index is a store for the list of versions for all packages known, so its
//! format on disk is optimized slightly to ensure that `ls registry` doesn't
//! produce a list of all packages ever known. The index also wants to avoid
//! having a million files, which may actually end up hitting filesystem
//! limits at some point. To this end, a few decisions were made
//! about the format of the registry:
//!
//! 1. Each crate will have one file corresponding to it. Each version for a
//! crate will just be a line in this file.
//! 2. There will be two tiers of directories for crate names, under which
//! crates corresponding to those tiers will be located.
//!
//! As an example, here is the hierarchy of an index:
//!
//! ```notrust
//! .
//! ├── 3
//! │   └── u
//! │       └── url
//! ├── bz
//! │   └── ip
//! │       └── bzip2
//! ├── config.json
//! ├── en
//! │   └── co
//! │       └── encoding
//! └── li
//!     ├── bg
//!     │   └── libgit2
//!     └── nk
//!         └── link-config
//! ```
//!
//! The root of the index contains a `config.json` file with a few entries
//! corresponding to the registry (see `RegistryConfig` below).
//!
//! Otherwise, there are three numbered directories (1, 2, 3) for crates with
//! names 1, 2, and 3 characters in length. The 1/2 directories simply have the
//! crate files underneath them, while the 3 directory is sharded by the first
//! letter of the crate name.
//!
//! Otherwise the top-level directory contains many two-letter directory names,
//! each of which has many sub-folders with two letters. At the end of all these
//! are the actual crate files themselves.
//!
//! The purpose of this layout is to hopefully cut down on `ls` sizes as well
//! as to allow efficient lookup based on the crate name itself.
//!
//! ## Crate files
//!
//! Each file in the index is the history of one crate over time. Each line in
//! the file corresponds to one version of a crate, stored in JSON format (see
//! the `RegistryPackage` structure below).
//!
//! As new versions are published, new lines are appended to this file. The only
//! modifications to this file that should happen over time are yanks of a
//! particular version.
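//!
//! For example, a single line in a crate's file might look like the following
//! (illustrative values only, not a real crates.io entry; the fields mirror
//! the `RegistryPackage` structure below):
//!
//! ```notrust
//! {"name":"example","vers":"0.1.0","deps":[],"features":{},"cksum":"<sha256 of the tarball>","yanked":false}
//! ```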
//!
//! # Downloading Packages
//!
//! The purpose of the Index was to provide an efficient method to resolve the
//! dependency graph for a package. So far we only required one network
//! interaction to update the registry's repository (yay!). After resolution has
//! been performed, however, we need to download the contents of packages so we
//! can read the full manifest and build the source code.
//!
//! To accomplish this, this source's `download` method will make an HTTP
//! request per-package requested to download tarballs into a local cache. These
//! tarballs will then be unpacked into a destination folder.
//!
//! Note that because versions uploaded to the registry are frozen forever,
//! the HTTP download and unpacking can all be skipped if the version has
//! already been downloaded and unpacked. This caching allows us to only
//! download a package when absolutely necessary.
//!
//! # Filesystem Hierarchy
//!
//! Overall, the `$HOME/.cargo` directory looks like this when talking about
//! the registry:
//!
//! ```notrust
//! # A folder under which all registry metadata is hosted (similar to
//! # $HOME/.cargo/git)
//! $HOME/.cargo/registry/
//!
//! # For each registry that cargo knows about (keyed by hostname + hash)
//! # there is a folder which is the checked out version of the index for
//! # the registry in this location. Note that this is done so cargo can
//! # support multiple registries simultaneously
//! index/
//! registry1-<hash>/
//! registry2-<hash>/
//! ...
//!
//! # This folder is a cache for all downloaded tarballs from a registry.
//! # Once downloaded and verified, a tarball never changes.
//! cache/
//! registry1-<hash>/<pkg>-<version>.crate
//! ...
//!
//! # Location in which all tarballs are unpacked. Each tarball is known to
//! # be frozen after downloading, so transitively this folder is also
//! # frozen once it's unpacked (it's never unpacked again)
//! src/
//! registry1-<hash>/<pkg>-<version>/...
//! ...
//! ```
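The sharding rules above can be sketched as a standalone helper. This is a minimal sketch for illustration: `index_file_path` is a hypothetical name (cargo computes the same path inline when loading a crate's summaries), and non-ASCII edge cases are ignored.

```rust
use std::path::PathBuf;

// Map a crate name to its file inside the index checkout, following the
// layout described above: length buckets for 1-3 character names, and
// two levels of two-letter prefixes for everything else.
fn index_file_path(name: &str) -> PathBuf {
    let lower: String = name.chars().flat_map(|c| c.to_lowercase()).collect();
    match lower.len() {
        1 => PathBuf::from("1").join(&lower),
        2 => PathBuf::from("2").join(&lower),
        3 => PathBuf::from("3").join(&lower[..1]).join(&lower),
        _ => PathBuf::from(&lower[0..2]).join(&lower[2..4]).join(&lower),
    }
}

fn main() {
    // These match the example hierarchy shown in the module docs.
    assert_eq!(index_file_path("url"), PathBuf::from("3/u/url"));
    assert_eq!(index_file_path("bzip2"), PathBuf::from("bz/ip/bzip2"));
    assert_eq!(index_file_path("libgit2"), PathBuf::from("li/bg/libgit2"));
    println!("ok");
}
```

The lowercase normalization mirrors what `summaries` does below, so lookups are case-insensitive with respect to the on-disk layout.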
use std::collections::HashMap;
use std::fs::{self, File};
use std::io::prelude::*;
use std::path::PathBuf;
use curl::http;
use flate2::read::GzDecoder;
use git2;
use rustc_serialize::hex::ToHex;
use rustc_serialize::json;
use tar::Archive;
use url::Url;
use core::{Source, SourceId, PackageId, Package, Summary, Registry};
use core::dependency::{Dependency, DependencyInner, Kind};
use sources::{PathSource, git};
use util::{CargoResult, Config, internal, ChainError, ToUrl, human};
use util::{hex, Sha256, paths};
use ops;
static DEFAULT: &'static str = "https://github.com/rust-lang/crates.io-index";
pub struct RegistrySource<'cfg> {
source_id: SourceId,
checkout_path: PathBuf,
cache_path: PathBuf,
src_path: PathBuf,
config: &'cfg Config,
handle: Option<http::Handle>,
sources: Vec<PathSource<'cfg>>,
hashes: HashMap<(String, String), String>, // (name, vers) => cksum
cache: HashMap<String, Vec<(Summary, bool)>>,
updated: bool,
}
#[derive(RustcDecodable)]
pub struct RegistryConfig {
/// Download endpoint for all crates. This will be appended with
/// `/<crate>/<version>/download` and then will be hit with an HTTP GET
/// request to download the tarball for a crate.
pub dl: String,
/// API endpoint for the registry. This is what's actually hit to perform
/// operations like yanks, owner modifications, publish new crates, etc.
pub api: String,
}
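A sketch of how the `dl` endpoint documented above is combined with a crate name and version. `download_url` is a hypothetical helper for illustration; the real `download` method below pushes these path segments onto a parsed `Url` instead of formatting a string.

```rust
// Build `<dl>/<crate>/<version>/download`, tolerating a trailing slash
// on the configured endpoint.
fn download_url(dl: &str, name: &str, vers: &str) -> String {
    format!("{}/{}/{}/download", dl.trim_end_matches('/'), name, vers)
}

fn main() {
    assert_eq!(
        download_url("https://crates.io/api/v1/crates", "libc", "0.2.0"),
        "https://crates.io/api/v1/crates/libc/0.2.0/download"
    );
    println!("ok");
}
```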
#[derive(RustcDecodable)]
struct RegistryPackage {
name: String,
vers: String,
deps: Vec<RegistryDependency>,
features: HashMap<String, Vec<String>>,
cksum: String,
yanked: Option<bool>,
}
#[derive(RustcDecodable)]
struct RegistryDependency {
name: String,
req: String,
features: Vec<String>,
optional: bool,
default_features: bool,
target: Option<String>,
kind: Option<String>,
}
impl<'cfg> RegistrySource<'cfg> {
pub fn new(source_id: &SourceId,
config: &'cfg Config) -> RegistrySource<'cfg> {
let hash = hex::short_hash(source_id);
let ident = source_id.url().host().unwrap().to_string();
let part = format!("{}-{}", ident, hash);
RegistrySource {
checkout_path: config.registry_index_path().join(&part),
cache_path: config.registry_cache_path().join(&part),
src_path: config.registry_source_path().join(&part),
config: config,
source_id: source_id.clone(),
handle: None,
sources: Vec::new(),
hashes: HashMap::new(),
cache: HashMap::new(),
updated: false,
}
}
/// Get the configured default registry URL.
///
/// This is the main cargo registry by default, but it can be overridden in
/// a `.cargo/config` file.
pub fn url(config: &Config) -> CargoResult<Url> {
let config = try!(ops::registry_configuration(config));
let url = config.index.unwrap_or(DEFAULT.to_string());
url.to_url().map_err(human)
}
/// Get the default url for the registry
pub fn default_url() -> String {
DEFAULT.to_string()
}
/// Decode the configuration stored within the registry.
///
/// This requires that the index has been at least checked out.
pub fn config(&self) -> CargoResult<RegistryConfig> {
let contents = try!(paths::read(&self.checkout_path.join("config.json")));
let config = try!(json::decode(&contents));
Ok(config)
}
/// Open the git repository for the index of the registry.
///
/// This will attempt to open an existing checkout, and failing that it will
/// initialize a fresh new directory and git checkout. No remotes will be
/// configured by default.
fn open(&self) -> CargoResult<git2::Repository> {
match git2::Repository::open(&self.checkout_path) {
Ok(repo) => return Ok(repo),
Err(..) => {}
}
let _ = fs::remove_dir_all(&self.checkout_path);
try!(fs::create_dir_all(&self.checkout_path));
let repo = try!(git2::Repository::init(&self.checkout_path));
Ok(repo)
}
/// Download the given package from the given url into the local cache.
///
/// This will perform the HTTP request to fetch the package. This function
/// will only succeed if the HTTP download was successful and the file is
/// then ready for inspection.
///
/// No action is taken if the package is already downloaded.
fn download_package(&mut self, pkg: &PackageId, url: &Url)
-> CargoResult<PathBuf> {
// TODO: should discover filename from the S3 redirect
let filename = format!("{}-{}.crate", pkg.name(), pkg.version());
let dst = self.cache_path.join(&filename);
if fs::metadata(&dst).is_ok() { return Ok(dst) }
try!(self.config.shell().status("Downloading", pkg));
try!(fs::create_dir_all(dst.parent().unwrap()));
let expected_hash = try!(self.hash(pkg));
let handle = match self.handle {
Some(ref mut handle) => handle,
None => {
self.handle = Some(try!(ops::http_handle(self.config)));
self.handle.as_mut().unwrap()
}
};
// TODO: don't download into memory (curl-rust doesn't expose it)
let resp = try!(handle.get(url.to_string()).follow_redirects(true).exec());
if resp.get_code() != 200 && resp.get_code() != 0 {
return Err(internal(format!("Failed to get 200 response from {}\n{}",
url, resp)))
}
// Verify what we just downloaded
let actual = {
let mut state = Sha256::new();
state.update(resp.get_body());
state.finish()
};
if actual.to_hex() != expected_hash {
return Err(human(format!("Failed to verify the checksum of `{}`",
pkg)))
}
try!(paths::write(&dst, resp.get_body()));
Ok(dst)
}
/// Return the hash listed for a specified PackageId.
fn hash(&mut self, pkg: &PackageId) -> CargoResult<String> {
let key = (pkg.name().to_string(), pkg.version().to_string());
if let Some(s) = self.hashes.get(&key) {
return Ok(s.clone())
}
// Ok, we're missing the key, so parse the index file to load it.
try!(self.summaries(pkg.name()));
self.hashes.get(&key).chain_error(|| {
internal(format!("no hash listed for {}", pkg))
}).map(|s| s.clone())
}
/// Unpacks a downloaded package into a location where it's ready to be
/// compiled.
///
/// No action is taken if the source looks like it's already unpacked.
fn unpack_package(&self, pkg: &PackageId, tarball: PathBuf)
-> CargoResult<PathBuf> {
let dst = self.src_path.join(&format!("{}-{}", pkg.name(),
pkg.version()));
if fs::metadata(&dst.join(".cargo-ok")).is_ok() { return Ok(dst) }
try!(fs::create_dir_all(dst.parent().unwrap()));
let f = try!(File::open(&tarball));
let gz = try!(GzDecoder::new(f));
let mut tar = Archive::new(gz);
try!(tar.unpack(dst.parent().unwrap()));
try!(File::create(&dst.join(".cargo-ok")));
Ok(dst)
}
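The `.cargo-ok` marker-file idiom used by `unpack_package` can be sketched on its own. This is an illustrative standalone version (the `ensure_unpacked` name and the temp-dir demo are hypothetical; the real code unpacks a tarball and uses `try!` rather than `?`):

```rust
use std::fs::{self, File};
use std::path::Path;

// Do the unpack work at most once: a `.cargo-ok` marker written after a
// successful run means later calls can skip everything.
fn ensure_unpacked(dst: &Path) -> std::io::Result<bool> {
    let marker = dst.join(".cargo-ok");
    if marker.exists() {
        return Ok(false); // already unpacked; nothing to do
    }
    fs::create_dir_all(dst)?;
    // ... the real code unpacks the downloaded tarball into `dst` here ...
    File::create(&marker)?;
    Ok(true) // freshly unpacked
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("registry-unpack-demo");
    let _ = fs::remove_dir_all(&dir);
    assert!(ensure_unpacked(&dir)?);  // first call does the work
    assert!(!ensure_unpacked(&dir)?); // second call is a no-op
    println!("ok");
    Ok(())
}
```

Because the marker is only created after the unpack succeeds, a crash mid-unpack leaves no marker and the next run redoes the work.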
/// Parse the on-disk metadata for the package provided
fn summaries(&mut self, name: &str) -> CargoResult<&Vec<(Summary, bool)>> {
if self.cache.contains_key(name) {
return Ok(self.cache.get(name).unwrap());
}
// see module comment for why this is structured the way it is
let path = self.checkout_path.clone();
let fs_name = name.chars().flat_map(|c| c.to_lowercase()).collect::<String>();
let path = match fs_name.len() {
1 => path.join("1").join(&fs_name),
2 => path.join("2").join(&fs_name),
3 => path.join("3").join(&fs_name[..1]).join(&fs_name),
_ => path.join(&fs_name[0..2])
.join(&fs_name[2..4])
.join(&fs_name),
};
let summaries = match File::open(&path) {
Ok(mut f) => {
let mut contents = String::new();
try!(f.read_to_string(&mut contents));
let ret: CargoResult<Vec<(Summary, bool)>>;
ret = contents.lines().filter(|l| l.trim().len() > 0)
.map(|l| self.parse_registry_package(l))
.collect();
try!(ret.chain_error(|| {
internal(format!("Failed to parse registry's information \
for: {}", name))
}))
}
Err(..) => Vec::new(),
};
let summaries = summaries.into_iter().filter(|summary| {
summary.0.package_id().name() == name
}).collect();
self.cache.insert(name.to_string(), summaries);
Ok(self.cache.get(name).unwrap())
}
/// Parse a line from the registry's index file into a Summary for a
/// package.
///
/// The returned boolean is whether or not the summary has been yanked.
fn parse_registry_package(&mut self, line: &str)
-> CargoResult<(Summary, bool)> {
let RegistryPackage {
name, vers, cksum, deps, features, yanked
} = try!(json::decode::<RegistryPackage>(line));
let pkgid = try!(PackageId::new(&name, &vers, &self.source_id));
let deps: CargoResult<Vec<Dependency>> = deps.into_iter().map(|dep| {
self.parse_registry_dependency(dep)
}).collect();
let deps = try!(deps);
self.hashes.insert((name, vers), cksum);
Ok((try!(Summary::new(pkgid, deps, features)), yanked.unwrap_or(false)))
}
/// Converts an encoded dependency in the registry to a cargo dependency
fn parse_registry_dependency(&self, dep: RegistryDependency)
-> CargoResult<Dependency> {
let RegistryDependency {
name, req, features, optional, default_features, target, kind
} = dep;
let dep = try!(DependencyInner::parse(&name, Some(&req),
&self.source_id));
let kind = match kind.as_ref().map(|s| &s[..]).unwrap_or("") {
"dev" => Kind::Development,
"build" => Kind::Build,
_ => Kind::Normal,
};
// Unfortunately older versions of cargo and/or the registry ended up
// publishing lots of entries where the features array contained the
// empty feature, "", inside. This confuses the resolution process much
// later on and these features aren't actually valid, so filter them all
// out here.
let features = features.into_iter().filter(|s| !s.is_empty()).collect();
Ok(dep.set_optional(optional)
.set_default_features(default_features)
.set_features(features)
.set_only_for_platform(target)
.set_kind(kind)
.into_dependency())
}
/// Actually perform network operations to update the registry
fn do_update(&mut self) -> CargoResult<()> {
if self.updated { return Ok(()) }
try!(self.config.shell().status("Updating",
format!("registry `{}`", self.source_id.url())));
let repo = try!(self.open());
// git fetch origin
let url = self.source_id.url().to_string();
let refspec = "refs/heads/*:refs/remotes/origin/*";
try!(git::fetch(&repo, &url, refspec).chain_error(|| {
internal(format!("failed to fetch `{}`", url))
}));
// git reset --hard origin/master
let reference = "refs/remotes/origin/master";
let oid = try!(repo.refname_to_id(reference));
trace!("[{}] updating to rev {}", self.source_id, oid);
let object = try!(repo.find_object(oid, None));
try!(repo.reset(&object, git2::ResetType::Hard, None));
self.updated = true;
self.cache.clear();
Ok(())
}
}
impl<'cfg> Registry for RegistrySource<'cfg> {
fn query(&mut self, dep: &Dependency) -> CargoResult<Vec<Summary>> {
// If this is a precise dependency, then it came from a lockfile and in
// theory the registry is known to contain this version. If, however, we
// come back with no summaries, then our registry may need to be
// updated, so we fall back to performing a lazy update.
if dep.source_id().precise().is_some() {
let mut summaries = try!(self.summaries(dep.name())).iter().map(|s| {
s.0.clone()
}).collect::<Vec<_>>();
if try!(summaries.query(dep)).len() == 0 {
try!(self.do_update());
}
}
let mut summaries = {
let summaries = try!(self.summaries(dep.name()));
summaries.iter().filter(|&&(_, yanked)| {
dep.source_id().precise().is_some() || !yanked
}).map(|s| s.0.clone()).collect::<Vec<_>>()
};
// Handle `cargo update --precise` here. If specified, our own source
// will have a precise version listed of the form `<pkg>=<req>` where
// `<pkg>` is the name of a crate on this source and `<req>` is the
// version requested (argument to `--precise`).
summaries.retain(|s| {
match self.source_id.precise() {
Some(p) if p.starts_with(dep.name()) => {
let vers = &p[dep.name().len() + 1..];
s.version().to_string() == vers
}
_ => true,
}
});
summaries.query(dep)
}
}
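The `--precise` filter in `query` can be sketched in isolation. `keep_version` is a hypothetical helper mirroring the `retain` closure above (with an explicit check for the `=` separator between `<pkg>` and `<req>`):

```rust
// A precise value of the form `<pkg>=<req>` pins `<pkg>` to exactly
// `<req>`; every other package is unaffected.
fn keep_version(precise: Option<&str>, name: &str, version: &str) -> bool {
    match precise {
        Some(p) if p.starts_with(name) && p[name.len()..].starts_with('=') => {
            &p[name.len() + 1..] == version
        }
        _ => true,
    }
}

fn main() {
    // `serde` is pinned to 1.0.0, so only that version survives...
    assert!(keep_version(Some("serde=1.0.0"), "serde", "1.0.0"));
    assert!(!keep_version(Some("serde=1.0.0"), "serde", "0.9.0"));
    // ...while unrelated packages are unaffected.
    assert!(keep_version(Some("serde=1.0.0"), "libc", "0.2.0"));
    println!("ok");
}
```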
impl<'cfg> Source for RegistrySource<'cfg> {
fn update(&mut self) -> CargoResult<()> {
// If we have an imprecise version then we don't know what we're going
// to look for, so we always attempt to perform an update here.
//
// If we have a precise version, then we'll update lazily during the
// querying phase. Note that precise in this case is only
// `Some("locked")` as other `Some` values indicate a `cargo update
// --precise` request
if self.source_id.precise() != Some("locked") {
try!(self.do_update());
}
Ok(())
}
fn download(&mut self, packages: &[PackageId]) -> CargoResult<()> {
let config = try!(self.config());
let url = try!(config.dl.to_url().map_err(internal));
for package in packages.iter() {
if self.source_id != *package.source_id() { continue }
let mut url = url.clone();
url.path_mut().unwrap().push(package.name().to_string());
url.path_mut().unwrap().push(package.version().to_string());
url.path_mut().unwrap().push("download".to_string());
let path = try!(self.download_package(package, &url).chain_error(|| {
internal(format!("Failed to download package `{}` from {}",
package, url))
}));
let path = try!(self.unpack_package(package, path).chain_error(|| {
internal(format!("Failed to unpack package `{}`", package))
}));
let mut src = PathSource::new(&path, &self.source_id, self.config);
try!(src.update());
self.sources.push(src);
}
Ok(())
}
}
-> CargoResult<Dependency> {
let RegistryDependency {
name, req, features, optional, default_features, target, kind
} = dep;
let dep = try!(DependencyInner::parse(&name, Some(&req),
&self.source_id));
let kind = match kind.as_ref().map(|s| &s[..]).unwrap_or("") {
"dev" => Kind::Development,
"build" => Kind::Build,
_ => Kind::Normal,
};
// Unfortunately older versions of cargo and/or the registry ended up
// publishing lots of entries where the features array contained the
// empty feature, "", inside. This confuses the resolution process much
// later on and these features aren't actually valid, so filter them all
// out here.
let features = features.into_iter().filter(|s| !s.is_empty()).collect();
Ok(dep.set_optional(optional)
.set_default_features(default_features)
.set_features(features)
.set_only_for_platform(target)
.set_kind(kind)
.into_dependency())
}
/// Actually perform network operations to update the registry
fn do_update(&mut self) -> CargoResult<()> {
if self.updated { return Ok(()) }
try!(self.config.shell().status("Updating",
format!("registry `{}`", self.source_id.url())));
let repo = try!(self.open());
// git fetch origin
let url = self.source_id.url().to_string();
let refspec = "refs/heads/*:refs/remotes/origin/*";
try!(git::fetch(&repo, &url, refspec).chain_error(|| {
internal(format!("failed to fetch `{}`", url))
}));
// git reset --hard origin/master
let reference = "refs/remotes/origin/master";
let oid = try!(repo.refname_to_id(reference));
trace!("[{}] updating to rev {}", self.source_id, oid);
let object = try!(repo.find_object(oid, None));
try!(repo.reset(&object, git2::ResetType::Hard, None));
self.updated = true;
self.cache.clear();
Ok(())
}
}
impl<'cfg> Registry for RegistrySource<'cfg> {
fn query(&mut self, dep: &Dependency) -> CargoResult<Vec<Summary>> {
// If this is a precise dependency, then it came from a lockfile and in
// theory the registry is known to contain this version. If, however, we
// come back with no summaries, then our registry may need to be
// updated, so we fall back to performing a lazy update.
if dep.source_id().precise().is_some() {
let mut summaries = try!(self.summaries(dep.name())).iter().map(|s| {
s.0.clone()
}).collect::<Vec<_>>();
if try!(summaries.query(dep)).len() == 0 {
try!(self.do_update());
}
}
let mut summaries = {
let summaries = try!(self.summaries(dep.name()));
summaries.iter().filter(|&&(_, yanked)| {
dep.source_id().precise().is_some() || !yanked
}).map(|s| s.0.clone()).collect::<Vec<_>>()
};
// Handle `cargo update --precise` here. If specified, our own source
// will have a precise version listed of the form `<pkg>=<req>` where
// `<pkg>` is the name of a crate on this source and `<req>` is the
// version requested (argument to `--precise`).
summaries.retain(|s| {
match self.source_id.precise() {
Some(p) if p.starts_with(dep.name()) => {
let vers = &p[dep.name().len() + 1..];
s.version().to_string() == vers
}
_ => true,
}
});
summaries.query(dep)
}
}
impl<'cfg> Source for RegistrySource<'cfg> {
fn update(&mut self) -> CargoResult<()> {
// If we have an imprecise version then we don't know what we're going
// to look for, so we always attempt to perform an update here.
//
// If we have a precise version, then we'll update lazily during the
// querying phase. Note that precise in this case is only
// `Some("locked")` as other `Some` values indicate a `cargo update
// --precise` request
if self.source_id.precise() != Some("locked") {
try!(self.do_update());
}
Ok(())
}
fn download(&mut self, packages: &[PackageId]) -> CargoResult<()> {
let config = try!(self.config());
let url = try!(config.dl.to_url().map_err(internal));
for package in packages.iter() {
if self.source_id != *package.source_id() { continue }
let mut url = url.clone();
url.path_mut().unwrap().push(package.name().to_string());
url.path_mut().unwrap().push(package.version().to_string());
url.path_mut().unwrap().push("download".to_string());
let path = try!(self.download_package(package, &url).chain_error(|| {
internal(format!("Failed to download package `{}` from {}",
package, url))
}));
let path = try!(self.unpack_package(package, path).chain_error(|| {
internal(format!("Failed to unpack package `{}`", package))
}));
let mut src = PathSource::new(&path, &self.source_id, self.config);
| onf | identifier_name |
registry.rs | . Git sources can also have commits deleted through
//! rebasings where registries cannot have their versions deleted.
//!
//! # The Index of a Registry
//!
//! One of the major difficulties with a registry is that hosting so many
//! packages may quickly run into performance problems when dealing with
//! dependency graphs. It's infeasible for cargo to download the entire contents
//! of the registry just to resolve one package's dependencies, for example. As
//! a result, cargo needs some efficient method of querying what packages are
//! available on a registry, what versions are available, and what the
//! dependencies for each version are.
//!
//! One method of doing so would be having the registry expose an HTTP endpoint
//! which can be queried with a list of packages and a response of their
//! dependencies and versions is returned. This is somewhat inefficient,
//! however, as we may have to hit the endpoint many times, and we may have
//! already queried for much of the data locally (for other packages, for
//! example). This also involves inventing a transport format between the
//! registry and Cargo itself, so this route was not taken.
//!
//! Instead, Cargo communicates with registries through a git repository
//! referred to as the Index. The Index of a registry is essentially an easily
//! query-able version of the registry's database for a list of versions of a
//! package as well as a list of dependencies for each version.
//!
//! Using git to host this index provides a number of benefits:
//!
//! * The entire index can be stored efficiently locally on disk. This means
//! that all queries of a registry can happen locally and don't need to touch
//! the network.
//!
//! * Updates of the index are quite efficient. Using git buys incremental
//! updates, compressed transmission, etc for free. The index must be updated
//! each time we need fresh information from a registry, but this is one
//! update of a git repository that probably hasn't changed a whole lot so
//! it shouldn't be too expensive.
//!
//! Additionally, each modification to the index is just appending a line at
//! the end of a file (the exact format is described later). This means that
//! the commits for an index are quite small and easily applied/compressible.
//!
//! ## The format of the Index
//!
//! The index is a store for the list of versions for all packages known, so its
//! format on disk is optimized slightly to ensure that `ls registry` doesn't
//! produce a list of all packages ever known. The index also wants to ensure
//! that there's not a million files which may actually end up hitting
//! filesystem limits at some point. To this end, a few decisions were made
//! about the format of the registry:
//!
//! 1. Each crate will have one file corresponding to it. Each version for a
//! crate will just be a line in this file.
//! 2. There will be two tiers of directories for crate names, under which
//! crates corresponding to those tiers will be located.
//!
//! As an example, this is an example hierarchy of an index:
//!
//! ```notrust
//! .
//! ├── 3
//! │   └── u
//! │       └── url
//! ├── bz
//! │   └── ip
//! │       └── bzip2
//! ├── config.json
//! ├── en
//! │   └── co
//! │       └── encoding
//! └── li
//!     ├── bg
//!     │   └── libgit2
//!     └── nk
//!         └── link-config
//! ```
//!
//! The root of the index contains a `config.json` file with a few entries
//! corresponding to the registry (see `RegistryConfig` below).
//!
//! Otherwise, there are three numbered directories (1, 2, 3) for crates with
//! names 1, 2, and 3 characters in length. The 1/2 directories simply have the
//! crate files underneath them, while the 3 directory is sharded by the first
//! letter of the crate name.
//!
//! Otherwise the top-level directory contains many two-letter directory names,
//! each of which has many sub-folders with two letters. At the end of all these
//! are the actual crate files themselves.
//!
//! The purpose of this layout is to hopefully cut down on `ls` sizes as well
//! as to allow efficient lookup based on the crate name itself.
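//!
//! To illustrate the sharding rules above (these are just sample names, not
//! an exhaustive listing), crate names map to index paths as follows:
//!
//! ```notrust
//! a           -> 1/a
//! at          -> 2/at
//! url         -> 3/u/url
//! link-config -> li/nk/link-config
//! ```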
//!
//! ## Crate files
//!
//! Each file in the index is the history of one crate over time. Each line in
//! the file corresponds to one version of a crate, stored in JSON format (see
//! the `RegistryPackage` structure below).
//!
//! As new versions are published, new lines are appended to this file. The only
//! modifications to this file that should happen over time are yanks of a
//! particular version.
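//!
//! As an illustration (with made-up values; the field names mirror the
//! `RegistryPackage` and `RegistryDependency` structures below), a single
//! line might look like:
//!
//! ```notrust
//! {"name":"bzip2","vers":"0.1.0","cksum":"...","features":{},"yanked":false,
//!  "deps":[{"name":"libc","req":"^0.1","features":[],"optional":false,
//!           "default_features":true,"target":null,"kind":"normal"}]}
//! ```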
//!
//! # Downloading Packages
//!
//! The purpose of the Index was to provide an efficient method to resolve the
//! dependency graph for a package. So far we only required one network
//! interaction to update the registry's repository (yay!). After resolution has
//! been performed, however, we need to download the contents of packages so we
//! can read the full manifest and build the source code.
//!
//! To accomplish this, this source's `download` method will make an HTTP
//! request per-package requested to download tarballs into a local cache. These
//! tarballs will then be unpacked into a destination folder.
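//!
//! For example, if a registry's `config.json` listed a (hypothetical)
//! download endpoint of `https://example-registry.org/api`, the tarball for
//! `bzip2 0.1.0` would be requested from:
//!
//! ```notrust
//! https://example-registry.org/api/bzip2/0.1.0/download
//! ```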
//!
//! Note that because versions uploaded to the registry are frozen forever,
//! the HTTP download and unpacking can all be skipped if the version has
//! already been downloaded and unpacked. This caching allows us to only
//! download a package when absolutely necessary.
//!
//! # Filesystem Hierarchy
//!
//! Overall, the `$HOME/.cargo` looks like this when talking about the registry:
//!
//! ```notrust
//! # A folder under which all registry metadata is hosted (similar to
//! # $HOME/.cargo/git)
//! $HOME/.cargo/registry/
//!
//! # For each registry that cargo knows about (keyed by hostname + hash)
//! # there is a folder which is the checked out version of the index for
//! # the registry in this location. Note that this is done so cargo can
//! # support multiple registries simultaneously
//! index/
//! registry1-<hash>/
//! registry2-<hash>/
//! ...
//!
//! # This folder is a cache for all downloaded tarballs from a registry.
//! # Once downloaded and verified, a tarball never changes.
//! cache/
//! registry1-<hash>/<pkg>-<version>.crate
//! ...
//!
//! # Location in which all tarballs are unpacked. Each tarball is known to
//! # be frozen after downloading, so transitively this folder is also
//! # frozen once it's unpacked (it's never unpacked again)
//! src/
//! registry1-<hash>/<pkg>-<version>/...
//! ...
//! ```
use std::collections::HashMap;
use std::fs::{self, File};
use std::io::prelude::*;
use std::path::PathBuf;
use curl::http;
use flate2::read::GzDecoder;
use git2;
use rustc_serialize::hex::ToHex;
use rustc_serialize::json;
use tar::Archive;
use url::Url;
use core::{Source, SourceId, PackageId, Package, Summary, Registry};
use core::dependency::{Dependency, DependencyInner, Kind};
use sources::{PathSource, git};
use util::{CargoResult, Config, internal, ChainError, ToUrl, human};
use util::{hex, Sha256, paths};
use ops;
static DEFAULT: &'static str = "https://github.com/rust-lang/crates.io-index";
pub struct RegistrySource<'cfg> {
source_id: SourceId,
checkout_path: PathBuf,
cache_path: PathBuf,
src_path: PathBuf,
config: &'cfg Config,
handle: Option<http::Handle>,
sources: Vec<PathSource<'cfg>>,
hashes: HashMap<(String, String), String>, // (name, vers) => cksum
cache: HashMap<String, Vec<(Summary, bool)>>,
updated: bool,
}
#[derive(RustcDecodable)]
pub struct RegistryConfig {
/// Download endpoint for all crates. This will be appended with
/// `/<crate>/<version>/download` and then will be hit with an HTTP GET
/// request to download the tarball for a crate.
pub dl: String,
/// API endpoint for the registry. This is what's actually hit to perform
/// operations like yanks, owner modifications, publish new crates, etc.
pub api: String,
}
#[derive(RustcDecodable)]
struct RegistryPackage {
name: String,
vers: String,
deps: Vec<RegistryDependency>,
features: HashMap<String, Vec<String>>,
cksum: String,
yanked: Option<bool>,
}
#[derive(RustcDecodable)]
struct RegistryDependency {
name: String,
req: String,
features: Vec<String>,
optional: bool,
default_features: bool,
target: Option<String>,
kind: Option<String>,
}
impl<'cfg> RegistrySource<'cfg> {
pub fn new(source_id: &SourceId,
config: &'cfg Config) -> RegistrySource<'cfg> {
let hash = hex::short_hash(source_id);
let ident = source_id.url().host().unwrap().to_string();
let part = format!("{}-{}", ident, hash);
RegistrySource {
checkout_path: config.registry_index_path().join(&part),
cache_path: config.registry_cache_path().join(&part),
src_path: config.registry_source_path().join(&part),
config: config,
source_id: source_id.clone(),
handle: None,
sources: Vec::new(),
hashes: HashMap::new(),
cache: HashMap::new(),
updated: false,
}
}
/// Get the configured default registry URL.
///
/// This is the main cargo registry by default, but it can be overridden in
/// a `.cargo/config` file.
pub fn url(config: &Config) -> CargoResult<Url> {
let config = try!(ops::registry_configuration(config));
let url = config.index.unwrap_or(DEFAULT.to_string());
url.to_url().map_err(human)
}
/// Get the default url for the registry
pub fn default_url() -> String {
DEFAULT.to_string()
}
/// Decode the configuration stored within the registry.
///
/// This requires that the index has been at least checked out.
pub fn config(&self) -> CargoResult<RegistryConfig> {
let contents = try!(paths::read(&self.checkout_path.join("config.json")));
let config = try!(json::decode(&contents));
Ok(config)
}
/// Open the git repository for the index of the registry.
///
/// This will attempt to open an existing checkout, and failing that it will
/// initialize a fresh new directory and git checkout. No remotes will be
/// configured by default.
fn open(&self) -> CargoResult<git2::Repository> {
match git2::Repository::open(&self.checkout_path) {
Ok(repo) => return Ok(repo),
Err(..) => {}
}
// The path may exist but not contain a valid repository; clear out
// anything stale before creating a fresh directory to initialize.
let _ = fs::remove_dir_all(&self.checkout_path);
try!(fs::create_dir_all(&self.checkout_path));
let repo = try!(git2::Repository::init(&self.checkout_path));
Ok(repo)
}
/// Download the given package from the given url into the local cache.
///
/// This will perform the HTTP request to fetch the package. This function
/// will only succeed if the HTTP download was successful and the file is
/// then ready for inspection.
///
/// No action is taken if the package is already downloaded.
fn download_package(&mut self, pkg: &PackageId, url: &Url)
-> CargoResult<PathBuf> {
// TODO: should discover filename from the S3 redirect
let filename = format!("{}-{}.crate", pkg.name(), pkg.version());
let dst = self.cache_path.join(&filename);
if fs::metadata(&dst).is_ok() { return Ok(dst) }
try!(self.config.shell().status("Downloading", pkg));
try!(fs::create_dir_all(dst.parent().unwrap()));
let expected_hash = try!(self.hash(pkg));
let handle = match self.handle {
Some(ref mut handle) => handle,
None => {
self.handle = Some(try!(ops::http_handle(self.config)));
self.handle.as_mut().unwrap()
}
};
// TODO: don't download into memory (curl-rust doesn't expose it)
let resp = try!(handle.get(url.to_string()).follow_redirects(true).exec());
if resp.get_code() != 200 && resp.get_code() != 0 {
return Err(internal(format!("Failed to get 200 response from {}\n{}",
url, resp)))
}
// Verify what we just downloaded
let actual = {
let mut state = Sha256::new();
state.update(resp.get_body());
state.finish()
};
if actual.to_hex() != expected_hash {
return Err(human(format!("Failed to verify the checksum of `{}`",
pkg)))
}
try!(paths::write(&dst, resp.get_body()));
Ok(dst)
}
/// Return the hash listed for a specified PackageId.
fn hash(&mut self, pkg: &PackageId) -> CargoResult<String> {
let key = (pkg.name().to_string(), pkg.version().to_string());
if let Some(s) = self.hashes.get(&key) {
return Ok(s.clone())
}
// Ok, we're missing the key, so parse the index file to load it.
try!(self.summaries(pkg.name()));
self.hashes.get(&key).chain_error(|| {
internal(format!("no hash listed for {}", pkg))
}).map(|s| s.clone())
}
/// Unpacks a downloaded package into a location where it's ready to be
/// compiled.
///
/// No action is taken if the source looks like it's already unpacked.
fn unpack_package(&self, pkg: &PackageId, tarball: PathBuf)
-> CargoResult<PathBuf> {
let dst = self.src_path.join(&format!("{}-{}", pkg.name(),
pkg.version()));
if fs::metadata(&dst.join(".cargo-ok")).is_ok() { return Ok(dst) }
try!(fs::create_dir_all(dst.parent().unwrap()));
let f = try!(File::open(&tarball));
let gz = try!(GzDecoder::new(f));
let mut tar = Archive::new(gz);
try!(tar.unpack(dst.parent().unwrap()));
try!(File::create(&dst.join(".cargo-ok")));
Ok(dst)
}
/// Parse the on-disk metadata for the package provided
fn summaries(&mut self, name: &str) -> CargoResult<&Vec<(Summary, bool)>> {
if self.cache.contains_key(name) {
return Ok(self.cache.get(name).unwrap());
}
// see module comment for why this is structured the way it is
let path = self.checkout_path.clone();
let fs_name = name.chars().flat_map(|c| c.to_lowercase()).collect::<String>();
let path = match fs_name.len() {
1 => path.join("1").join(&fs_name),
2 => path.join("2").join(&fs_name),
3 => path.join("3").join(&fs_name[..1]).join(&fs_name),
_ => path.join(&fs_name[0..2])
.join(&fs_name[2..4])
.join(&fs_name),
};
let summaries = match File::open(&path) {
Ok(mut f) => {
let mut contents = String::new();
try!(f.read_to_string(&mut contents));
let ret: CargoResult<Vec<(Summary, bool)>>;
ret = contents.lines().filter(|l| l.trim().len() > 0)
.map(|l| self.parse_registry_package(l))
.collect();
try!(ret.chain_error(|| {
internal(format!("Failed to parse registry's information \
for: {}", name))
}))
}
Err(..) => Vec::new(),
};
let summaries = summaries.into_iter().filter(|summary| {
summary.0.package_id().name() == name
}).collect();
self.cache.insert(name.to_string(), summaries);
Ok(self.cache.get(name).unwrap())
}
/// Parse a line from the registry's index file into a Summary for a
/// package.
///
/// The returned boolean is whether or not the summary has been yanked.
fn parse_registry_package(&mut self, line: &str)
-> CargoResult<(Summary, bool)> {
let RegistryPackage {
name, vers, cksum, deps, features, yanked
} = try!(json::decode::<RegistryPackage>(line));
let pkgid = try!(PackageId::new(&name, &vers, &self.source_id));
let deps: CargoResult<Vec<Dependency>> = deps.into_iter().map(|dep| {
self.parse_registry_dependency(dep)
}).collect();
let deps = try!(deps);
self.hashes.insert((name, vers), cksum);
Ok((try!(Summary::new(pkgid, deps, features)), yanked.unwrap_or(false)))
}
/// Converts an encoded dependency in the registry to a cargo dependency
fn parse_registry_dependency(&self, dep: RegistryDependency)
-> CargoResult<Dependency> {
let RegistryDependency {
name, req, features, optional, default_features, target, kind
} = dep;
let dep = try!(DependencyInner::parse(&name, Some(&req),
&self.source_id));
let kind = match kind.as_ref().map(|s| &s[..]).unwrap_or("") {
"dev" => Kind::Development,
"build" => Kind::Build,
_ => Kind::Normal,
};
// Unfortunately older versions of cargo and/or the registry ended up
// publishing lots of entries where the features array contained the
// empty feature, "", inside. This confuses the resolution process much
// later on and these features aren't actually valid, so filter them all
// out here.
let features = features.into_iter().filter(|s| !s.is_empty()).collect();
Ok(dep.set_optional(optional)
.set_default_features(default_features)
.set_features(features)
.set_only_for_platform(target)
.set_kind(kind)
.into_dependency())
}
/// Actually perform network operations to update the registry
fn do_update(&mut self) -> CargoResult<()> {
if self.updated { return Ok(()) }
try!(self.config.shell().status("Updating",
format!("registry `{}`", self.source_id.url())));
let repo = try!(self.open());
// git fetch origin
let url = self.source_id.url().to_string();
let refspec = "refs/heads/*:refs/remotes/origin/*";
try!(git::fetch(&repo, &url, refspec).chain_error(|| {
internal(format!("failed to fetch `{}`", url))
}));
// git reset --hard origin/master
let reference = "refs/remotes/origin/master";
let oid = try!(repo.refname_to_id(reference));
trace!("[{}] updating to rev {}", self.source_id, oid);
let object = try!(repo.find_object(oid, None));
try!(repo.reset(&object, git2::ResetType::Hard, None));
self.updated = true;
self.cache.clear();
Ok(())
}
}
impl<'cfg> Registry for RegistrySource<'cfg> {
fn query(&mut self, dep: &Dependency) -> CargoResult<Vec<Summary>> {
// If this is a precise dependency, then it came from a lockfile and in
// theory the registry is known to contain this version. If, however, we
// come back with no summaries, then our registry may need to be
// updated, so we fall back to performing a lazy update.
if dep.source_id().precise().is_some() {
let mut summaries = try!(self.summaries(dep.name())).iter().map(|s| {
s.0.clone()
}).collect::<Vec<_>>();
if try!(summaries.query(dep)).len() == 0 {
try!(self.do_update());
}
}
let mut summaries = {
let summaries = try!(self.summaries(dep.name()));
summaries.iter().filter(|&&(_, yanked)| {
dep.source_id().precise().is_some() || !yanked
}).map(|s| s.0.clone()).collect::<Vec<_>>()
};
// Handle `cargo update --precise` here. If specified, our own source
// will have a precise version listed of the form `<pkg>=<req>` where
// `<pkg>` is the name of a crate on this source and `<req>` is the
// version requested (argument to `--precise`).
summaries.retain(|s| {
match self.source_id.precise() { | let vers = &p[dep.name().len() + 1..];
s.version().to_string() == vers
}
_ => true,
}
});
summaries.query(dep)
}
}
impl<'cfg> Source for RegistrySource<'cfg> {
fn update(&mut self) -> CargoResult<()> {
// If we have an imprecise version then we don't know what we're going
// to look for, so we always attempt to perform an update here.
//
// If we have a precise version, then we'll update lazily during the
// querying phase. Note that precise in this case is only
// `Some("locked")` as other `Some` values indicate a `cargo update
// --precise` request
if self.source_id.precise() != Some("locked") {
try!(self.do_update());
}
Ok(())
}
fn download(&mut self, packages: &[PackageId]) -> CargoResult<()> {
let config = try!(self.config());
let url = try!(config.dl.to_url().map_err(internal));
for package in packages.iter() {
if self.source_id != *package.source_id() { continue }
let mut url = url.clone();
url.path_mut().unwrap().push(package.name().to_string());
url.path_mut().unwrap().push(package.version().to_string());
url.path_mut().unwrap().push("download".to_string());
let path = try!(self.download_package(package, &url).chain_error(|| {
internal(format!("Failed to download package `{}` from {}",
package, url))
}));
let path = try!(self.unpack_package(package, path).chain_error(|| {
internal(format!("Failed to unpack package `{}`", package))
}));
let mut src = PathSource::new(&path, &self.source_id, self.config);
| Some(p) if p.starts_with(dep.name()) => { | random_line_split |
transcribe.rs | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use self::LockstepIterSize::*;
use ast;
use ast::{TokenTree, TtDelimited, TtToken, TtSequence, Ident};
use codemap::{Span, DUMMY_SP};
use diagnostic::SpanHandler;
use ext::tt::macro_parser::{NamedMatch, MatchedSeq, MatchedNonterminal};
use parse::token::{Eof, DocComment, Interpolated, MatchNt, SubstNt};
use parse::token::{Token, NtIdent, SpecialMacroVar};
use parse::token;
use parse::lexer::TokenAndSpan;
use std::rc::Rc;
use std::ops::Add;
use std::collections::HashMap;
/// An unzipping of `TokenTree`s
#[derive(Clone)]
struct TtFrame {
forest: TokenTree,
idx: uint,
dotdotdoted: bool,
sep: Option<Token>,
}
#[derive(Clone)]
pub struct TtReader<'a> {
pub sp_diag: &'a SpanHandler,
/// the unzipped tree:
stack: Vec<TtFrame>,
/* for MBE-style macro transcription */
interpolations: HashMap<Ident, Rc<NamedMatch>>,
imported_from: Option<Ident>,
// Some => return imported_from as the next token
crate_name_next: Option<Span>,
repeat_idx: Vec<uint>,
repeat_len: Vec<uint>,
/* cached: */
pub cur_tok: Token,
pub cur_span: Span,
/// Transform doc comments. Only useful in macro invocations
pub desugar_doc_comments: bool,
}
/// This can do Macro-By-Example transcription. On the other hand, if
/// `src` contains no `TtSequence`s, `MatchNt`s or `SubstNt`s, `interp` can
/// (and should) be None.
pub fn new_tt_reader<'a>(sp_diag: &'a SpanHandler,
interp: Option<HashMap<Ident, Rc<NamedMatch>>>,
imported_from: Option<Ident>,
src: Vec<ast::TokenTree>)
-> TtReader<'a> {
new_tt_reader_with_doc_flag(sp_diag, interp, imported_from, src, false)
}
/// The extra `desugar_doc_comments` flag enables reading doc comments
/// like any other attribute which consists of `meta` and surrounding #[ ] tokens.
///
/// This can do Macro-By-Example transcription. On the other hand, if
/// `src` contains no `TtSequence`s, `MatchNt`s or `SubstNt`s, `interp` can
/// (and should) be None.
pub fn new_tt_reader_with_doc_flag<'a>(sp_diag: &'a SpanHandler,
interp: Option<HashMap<Ident, Rc<NamedMatch>>>,
imported_from: Option<Ident>,
src: Vec<ast::TokenTree>,
desugar_doc_comments: bool)
-> TtReader<'a> {
let mut r = TtReader {
sp_diag: sp_diag,
stack: vec!(TtFrame {
forest: TtSequence(DUMMY_SP, Rc::new(ast::SequenceRepetition {
tts: src,
// doesn't matter. This merely holds the root unzipping.
separator: None, op: ast::ZeroOrMore, num_captures: 0
})),
idx: 0,
dotdotdoted: false,
sep: None,
}),
interpolations: match interp { /* just a convenience */
None => HashMap::new(),
Some(x) => x,
},
imported_from: imported_from,
crate_name_next: None,
repeat_idx: Vec::new(),
repeat_len: Vec::new(),
desugar_doc_comments: desugar_doc_comments,
/* dummy values, never read: */
cur_tok: token::Eof,
cur_span: DUMMY_SP,
};
tt_next_token(&mut r); /* get cur_tok and cur_span set up */
r
}
fn lookup_cur_matched_by_matched(r: &TtReader, start: Rc<NamedMatch>) -> Rc<NamedMatch> {
r.repeat_idx.iter().fold(start, |ad, idx| {
match *ad {
MatchedNonterminal(_) => {
// end of the line; duplicate henceforth
ad.clone()
}
MatchedSeq(ref ads, _) => ads[*idx].clone()
}
})
}
fn lookup_cur_matched(r: &TtReader, name: Ident) -> Option<Rc<NamedMatch>> {
let matched_opt = r.interpolations.get(&name).cloned();
matched_opt.map(|s| lookup_cur_matched_by_matched(r, s))
}
#[derive(Clone)]
enum LockstepIterSize {
LisUnconstrained,
LisConstraint(uint, Ident),
LisContradiction(String),
}
impl Add for LockstepIterSize {
type Output = LockstepIterSize;
fn add(self, other: LockstepIterSize) -> LockstepIterSize {
match self {
LisUnconstrained => other,
LisContradiction(_) => self,
LisConstraint(l_len, ref l_id) => match other {
LisUnconstrained => self.clone(),
LisContradiction(_) => other,
LisConstraint(r_len, _) if l_len == r_len => self.clone(),
LisConstraint(r_len, r_id) => {
let l_n = token::get_ident(l_id.clone());
let r_n = token::get_ident(r_id);
LisContradiction(format!("inconsistent lockstep iteration: \
'{:?}' has {} items, but '{:?}' has {}",
l_n, l_len, r_n, r_len).to_string())
}
},
}
}
}
fn lockstep_iter_size(t: &TokenTree, r: &TtReader) -> LockstepIterSize | TtToken(..) => LisUnconstrained,
}
}
/// Return the next token from the TtReader.
/// EFFECT: advances the reader's token field
pub fn tt_next_token(r: &mut TtReader) -> TokenAndSpan {
// FIXME(pcwalton): Bad copy?
let ret_val = TokenAndSpan {
tok: r.cur_tok.clone(),
sp: r.cur_span.clone(),
};
loop {
match r.crate_name_next.take() {
None => (),
Some(sp) => {
r.cur_span = sp;
r.cur_tok = token::Ident(r.imported_from.unwrap(), token::Plain);
return ret_val;
},
}
let should_pop = match r.stack.last() {
None => {
assert_eq!(ret_val.tok, token::Eof);
return ret_val;
}
Some(frame) => {
if frame.idx < frame.forest.len() {
break;
}
!frame.dotdotdoted ||
*r.repeat_idx.last().unwrap() == *r.repeat_len.last().unwrap() - 1
}
};
/* done with this set; pop or repeat? */
if should_pop {
let prev = r.stack.pop().unwrap();
match r.stack.last_mut() {
None => {
r.cur_tok = token::Eof;
return ret_val;
}
Some(frame) => {
frame.idx += 1;
}
}
if prev.dotdotdoted {
r.repeat_idx.pop();
r.repeat_len.pop();
}
} else { /* repeat */
*r.repeat_idx.last_mut().unwrap() += 1u;
r.stack.last_mut().unwrap().idx = 0;
match r.stack.last().unwrap().sep.clone() {
Some(tk) => {
r.cur_tok = tk; /* repeat same span, I guess */
return ret_val;
}
None => {}
}
}
}
loop { /* because it's easiest, this handles `TtDelimited` not starting
with a `TtToken`, even though it won't happen */
let t = {
let frame = r.stack.last().unwrap();
// FIXME(pcwalton): Bad copy.
frame.forest.get_tt(frame.idx)
};
match t {
TtSequence(sp, seq) => {
// FIXME(pcwalton): Bad copy.
match lockstep_iter_size(&TtSequence(sp, seq.clone()),
r) {
LisUnconstrained => {
r.sp_diag.span_fatal(
sp.clone(), /* blame macro writer */
"attempted to repeat an expression \
containing no syntax \
variables matched as repeating at this depth");
}
LisContradiction(ref msg) => {
// FIXME #2887 blame macro invoker instead
r.sp_diag.span_fatal(sp.clone(), &msg[]);
}
LisConstraint(len, _) => {
if len == 0 {
if seq.op == ast::OneOrMore {
// FIXME #2887 blame invoker
r.sp_diag.span_fatal(sp.clone(),
"this must repeat at least once");
}
r.stack.last_mut().unwrap().idx += 1;
return tt_next_token(r);
}
r.repeat_len.push(len);
r.repeat_idx.push(0);
r.stack.push(TtFrame {
idx: 0,
dotdotdoted: true,
sep: seq.separator.clone(),
forest: TtSequence(sp, seq),
});
}
}
}
// FIXME #2887: think about span stuff here
TtToken(sp, SubstNt(ident, namep)) => {
r.stack.last_mut().unwrap().idx += 1;
match lookup_cur_matched(r, ident) {
None => {
r.cur_span = sp;
r.cur_tok = SubstNt(ident, namep);
return ret_val;
// this can't be 0 length, just like TtDelimited
}
Some(cur_matched) => {
match *cur_matched {
// sidestep the interpolation tricks for ident because
// (a) idents can be in lots of places, so it'd be a pain
// (b) we actually can, since it's a token.
MatchedNonterminal(NtIdent(box sn, b)) => {
r.cur_span = sp;
r.cur_tok = token::Ident(sn, b);
return ret_val;
}
MatchedNonterminal(ref other_whole_nt) => {
// FIXME(pcwalton): Bad copy.
r.cur_span = sp;
r.cur_tok = token::Interpolated((*other_whole_nt).clone());
return ret_val;
}
MatchedSeq(..) => {
r.sp_diag.span_fatal(
r.cur_span, /* blame the macro writer */
&format!("variable '{:?}' is still repeating at this depth",
token::get_ident(ident))[]);
}
}
}
}
}
// TtDelimited or any token that can be unzipped
seq @ TtDelimited(..) | seq @ TtToken(_, MatchNt(..)) => {
// do not advance the idx yet
r.stack.push(TtFrame {
forest: seq,
idx: 0,
dotdotdoted: false,
sep: None
});
// if this could be 0-length, we'd need to potentially recur here
}
TtToken(sp, DocComment(name)) if r.desugar_doc_comments => {
r.stack.push(TtFrame {
forest: TtToken(sp, DocComment(name)),
idx: 0,
dotdotdoted: false,
sep: None
});
}
TtToken(sp, token::SpecialVarNt(SpecialMacroVar::CrateMacroVar)) => {
r.stack.last_mut().unwrap().idx += 1;
if r.imported_from.is_some() {
r.cur_span = sp;
r.cur_tok = token::ModSep;
r.crate_name_next = Some(sp);
return ret_val;
}
// otherwise emit nothing and proceed to the next token
}
TtToken(sp, tok) => {
r.cur_span = sp;
r.cur_tok = tok;
r.stack.last_mut().unwrap().idx += 1;
return ret_val;
}
}
}
}
| {
match *t {
TtDelimited(_, ref delimed) => {
delimed.tts.iter().fold(LisUnconstrained, |size, tt| {
size + lockstep_iter_size(tt, r)
})
},
TtSequence(_, ref seq) => {
seq.tts.iter().fold(LisUnconstrained, |size, tt| {
size + lockstep_iter_size(tt, r)
})
},
TtToken(_, SubstNt(name, _)) | TtToken(_, MatchNt(name, _, _, _)) =>
match lookup_cur_matched(r, name) {
Some(matched) => match *matched {
MatchedNonterminal(_) => LisUnconstrained,
MatchedSeq(ref ads, _) => LisConstraint(ads.len(), name),
},
_ => LisUnconstrained
}, | identifier_body |
transcribe.rs | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use self::LockstepIterSize::*;
use ast;
use ast::{TokenTree, TtDelimited, TtToken, TtSequence, Ident};
use codemap::{Span, DUMMY_SP};
use diagnostic::SpanHandler;
use ext::tt::macro_parser::{NamedMatch, MatchedSeq, MatchedNonterminal};
use parse::token::{Eof, DocComment, Interpolated, MatchNt, SubstNt};
use parse::token::{Token, NtIdent, SpecialMacroVar};
use parse::token;
use parse::lexer::TokenAndSpan;
use std::rc::Rc;
use std::ops::Add;
use std::collections::HashMap;
///an unzipping of `TokenTree`s
#[derive(Clone)]
struct TtFrame {
forest: TokenTree,
idx: uint,
dotdotdoted: bool,
sep: Option<Token>,
}
#[derive(Clone)]
pub struct TtReader<'a> {
pub sp_diag: &'a SpanHandler,
/// the unzipped tree:
stack: Vec<TtFrame>,
/* for MBE-style macro transcription */
interpolations: HashMap<Ident, Rc<NamedMatch>>,
imported_from: Option<Ident>,
// Some => return imported_from as the next token
crate_name_next: Option<Span>,
repeat_idx: Vec<uint>,
repeat_len: Vec<uint>,
/* cached: */
pub cur_tok: Token,
pub cur_span: Span,
/// Transform doc comments. Only useful in macro invocations
pub desugar_doc_comments: bool,
}
/// This can do Macro-By-Example transcription. On the other hand, if
/// `src` contains no `TtSequence`s, `MatchNt`s or `SubstNt`s, `interp` can
/// (and should) be None.
pub fn new_tt_reader<'a>(sp_diag: &'a SpanHandler,
interp: Option<HashMap<Ident, Rc<NamedMatch>>>,
imported_from: Option<Ident>,
src: Vec<ast::TokenTree>)
-> TtReader<'a> {
new_tt_reader_with_doc_flag(sp_diag, interp, imported_from, src, false)
}
/// The extra `desugar_doc_comments` flag enables reading doc comments
/// like any other attribute which consists of `meta` and surrounding #[ ] tokens.
///
/// This can do Macro-By-Example transcription. On the other hand, if
/// `src` contains no `TtSequence`s, `MatchNt`s or `SubstNt`s, `interp` can
/// (and should) be None.
pub fn new_tt_reader_with_doc_flag<'a>(sp_diag: &'a SpanHandler,
interp: Option<HashMap<Ident, Rc<NamedMatch>>>,
imported_from: Option<Ident>,
src: Vec<ast::TokenTree>,
desugar_doc_comments: bool)
-> TtReader<'a> {
let mut r = TtReader {
sp_diag: sp_diag,
stack: vec!(TtFrame {
forest: TtSequence(DUMMY_SP, Rc::new(ast::SequenceRepetition {
tts: src,
// doesn't matter. This merely holds the root unzipping.
separator: None, op: ast::ZeroOrMore, num_captures: 0
})),
idx: 0,
dotdotdoted: false,
sep: None,
}),
interpolations: match interp { /* just a convenience */
None => HashMap::new(),
Some(x) => x,
},
imported_from: imported_from,
crate_name_next: None,
repeat_idx: Vec::new(),
repeat_len: Vec::new(),
desugar_doc_comments: desugar_doc_comments,
/* dummy values, never read: */
cur_tok: token::Eof,
cur_span: DUMMY_SP,
};
tt_next_token(&mut r); /* get cur_tok and cur_span set up */
r
}
fn lookup_cur_matched_by_matched(r: &TtReader, start: Rc<NamedMatch>) -> Rc<NamedMatch> {
r.repeat_idx.iter().fold(start, |ad, idx| {
match *ad {
MatchedNonterminal(_) => {
// end of the line; duplicate henceforth
ad.clone()
}
MatchedSeq(ref ads, _) => ads[*idx].clone()
}
})
}
fn lookup_cur_matched(r: &TtReader, name: Ident) -> Option<Rc<NamedMatch>> {
let matched_opt = r.interpolations.get(&name).cloned();
matched_opt.map(|s| lookup_cur_matched_by_matched(r, s))
}
#[derive(Clone)]
enum LockstepIterSize {
LisUnconstrained,
LisConstraint(uint, Ident),
LisContradiction(String),
}
impl Add for LockstepIterSize {
type Output = LockstepIterSize;
fn | (self, other: LockstepIterSize) -> LockstepIterSize {
match self {
LisUnconstrained => other,
LisContradiction(_) => self,
LisConstraint(l_len, ref l_id) => match other {
LisUnconstrained => self.clone(),
LisContradiction(_) => other,
LisConstraint(r_len, _) if l_len == r_len => self.clone(),
LisConstraint(r_len, r_id) => {
let l_n = token::get_ident(l_id.clone());
let r_n = token::get_ident(r_id);
LisContradiction(format!("inconsistent lockstep iteration: \
'{:?}' has {} items, but '{:?}' has {}",
l_n, l_len, r_n, r_len).to_string())
}
},
}
}
}
fn lockstep_iter_size(t: &TokenTree, r: &TtReader) -> LockstepIterSize {
match *t {
TtDelimited(_, ref delimed) => {
delimed.tts.iter().fold(LisUnconstrained, |size, tt| {
size + lockstep_iter_size(tt, r)
})
},
TtSequence(_, ref seq) => {
seq.tts.iter().fold(LisUnconstrained, |size, tt| {
size + lockstep_iter_size(tt, r)
})
},
TtToken(_, SubstNt(name, _)) | TtToken(_, MatchNt(name, _, _, _)) =>
match lookup_cur_matched(r, name) {
Some(matched) => match *matched {
MatchedNonterminal(_) => LisUnconstrained,
MatchedSeq(ref ads, _) => LisConstraint(ads.len(), name),
},
_ => LisUnconstrained
},
TtToken(..) => LisUnconstrained,
}
}
/// Return the next token from the TtReader.
/// EFFECT: advances the reader's token field
pub fn tt_next_token(r: &mut TtReader) -> TokenAndSpan {
// FIXME(pcwalton): Bad copy?
let ret_val = TokenAndSpan {
tok: r.cur_tok.clone(),
sp: r.cur_span.clone(),
};
loop {
match r.crate_name_next.take() {
None => (),
Some(sp) => {
r.cur_span = sp;
r.cur_tok = token::Ident(r.imported_from.unwrap(), token::Plain);
return ret_val;
},
}
let should_pop = match r.stack.last() {
None => {
assert_eq!(ret_val.tok, token::Eof);
return ret_val;
}
Some(frame) => {
if frame.idx < frame.forest.len() {
break;
}
!frame.dotdotdoted ||
*r.repeat_idx.last().unwrap() == *r.repeat_len.last().unwrap() - 1
}
};
/* done with this set; pop or repeat? */
if should_pop {
let prev = r.stack.pop().unwrap();
match r.stack.last_mut() {
None => {
r.cur_tok = token::Eof;
return ret_val;
}
Some(frame) => {
frame.idx += 1;
}
}
if prev.dotdotdoted {
r.repeat_idx.pop();
r.repeat_len.pop();
}
} else { /* repeat */
*r.repeat_idx.last_mut().unwrap() += 1u;
r.stack.last_mut().unwrap().idx = 0;
match r.stack.last().unwrap().sep.clone() {
Some(tk) => {
r.cur_tok = tk; /* repeat same span, I guess */
return ret_val;
}
None => {}
}
}
}
loop { /* because it's easiest, this handles `TtDelimited` not starting
with a `TtToken`, even though it won't happen */
let t = {
let frame = r.stack.last().unwrap();
// FIXME(pcwalton): Bad copy.
frame.forest.get_tt(frame.idx)
};
match t {
TtSequence(sp, seq) => {
// FIXME(pcwalton): Bad copy.
match lockstep_iter_size(&TtSequence(sp, seq.clone()),
r) {
LisUnconstrained => {
r.sp_diag.span_fatal(
sp.clone(), /* blame macro writer */
"attempted to repeat an expression \
containing no syntax \
variables matched as repeating at this depth");
}
LisContradiction(ref msg) => {
// FIXME #2887 blame macro invoker instead
r.sp_diag.span_fatal(sp.clone(), &msg[]);
}
LisConstraint(len, _) => {
if len == 0 {
if seq.op == ast::OneOrMore {
// FIXME #2887 blame invoker
r.sp_diag.span_fatal(sp.clone(),
"this must repeat at least once");
}
r.stack.last_mut().unwrap().idx += 1;
return tt_next_token(r);
}
r.repeat_len.push(len);
r.repeat_idx.push(0);
r.stack.push(TtFrame {
idx: 0,
dotdotdoted: true,
sep: seq.separator.clone(),
forest: TtSequence(sp, seq),
});
}
}
}
// FIXME #2887: think about span stuff here
TtToken(sp, SubstNt(ident, namep)) => {
r.stack.last_mut().unwrap().idx += 1;
match lookup_cur_matched(r, ident) {
None => {
r.cur_span = sp;
r.cur_tok = SubstNt(ident, namep);
return ret_val;
// this can't be 0 length, just like TtDelimited
}
Some(cur_matched) => {
match *cur_matched {
// sidestep the interpolation tricks for ident because
// (a) idents can be in lots of places, so it'd be a pain
// (b) we actually can, since it's a token.
MatchedNonterminal(NtIdent(box sn, b)) => {
r.cur_span = sp;
r.cur_tok = token::Ident(sn, b);
return ret_val;
}
MatchedNonterminal(ref other_whole_nt) => {
// FIXME(pcwalton): Bad copy.
r.cur_span = sp;
r.cur_tok = token::Interpolated((*other_whole_nt).clone());
return ret_val;
}
MatchedSeq(..) => {
r.sp_diag.span_fatal(
r.cur_span, /* blame the macro writer */
&format!("variable '{:?}' is still repeating at this depth",
token::get_ident(ident))[]);
}
}
}
}
}
// TtDelimited or any token that can be unzipped
seq @ TtDelimited(..) | seq @ TtToken(_, MatchNt(..)) => {
// do not advance the idx yet
r.stack.push(TtFrame {
forest: seq,
idx: 0,
dotdotdoted: false,
sep: None
});
// if this could be 0-length, we'd need to potentially recur here
}
TtToken(sp, DocComment(name)) if r.desugar_doc_comments => {
r.stack.push(TtFrame {
forest: TtToken(sp, DocComment(name)),
idx: 0,
dotdotdoted: false,
sep: None
});
}
TtToken(sp, token::SpecialVarNt(SpecialMacroVar::CrateMacroVar)) => {
r.stack.last_mut().unwrap().idx += 1;
if r.imported_from.is_some() {
r.cur_span = sp;
r.cur_tok = token::ModSep;
r.crate_name_next = Some(sp);
return ret_val;
}
// otherwise emit nothing and proceed to the next token
}
TtToken(sp, tok) => {
r.cur_span = sp;
r.cur_tok = tok;
r.stack.last_mut().unwrap().idx += 1;
return ret_val;
}
}
}
}
| add | identifier_name |
transcribe.rs | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use self::LockstepIterSize::*;
use ast;
use ast::{TokenTree, TtDelimited, TtToken, TtSequence, Ident};
use codemap::{Span, DUMMY_SP};
use diagnostic::SpanHandler;
use ext::tt::macro_parser::{NamedMatch, MatchedSeq, MatchedNonterminal};
use parse::token::{Eof, DocComment, Interpolated, MatchNt, SubstNt};
use parse::token::{Token, NtIdent, SpecialMacroVar};
use parse::token;
use parse::lexer::TokenAndSpan;
use std::rc::Rc;
use std::ops::Add;
use std::collections::HashMap;
///an unzipping of `TokenTree`s
#[derive(Clone)]
struct TtFrame {
forest: TokenTree,
idx: uint,
dotdotdoted: bool,
sep: Option<Token>,
}
#[derive(Clone)]
pub struct TtReader<'a> {
pub sp_diag: &'a SpanHandler,
/// the unzipped tree:
stack: Vec<TtFrame>,
/* for MBE-style macro transcription */
interpolations: HashMap<Ident, Rc<NamedMatch>>,
imported_from: Option<Ident>,
// Some => return imported_from as the next token
crate_name_next: Option<Span>,
repeat_idx: Vec<uint>,
repeat_len: Vec<uint>,
/* cached: */
pub cur_tok: Token,
pub cur_span: Span,
/// Transform doc comments. Only useful in macro invocations
pub desugar_doc_comments: bool,
}
/// This can do Macro-By-Example transcription. On the other hand, if
/// `src` contains no `TtSequence`s, `MatchNt`s or `SubstNt`s, `interp` can
/// (and should) be None.
pub fn new_tt_reader<'a>(sp_diag: &'a SpanHandler,
interp: Option<HashMap<Ident, Rc<NamedMatch>>>,
imported_from: Option<Ident>,
src: Vec<ast::TokenTree>)
-> TtReader<'a> {
new_tt_reader_with_doc_flag(sp_diag, interp, imported_from, src, false)
}
/// The extra `desugar_doc_comments` flag enables reading doc comments
/// like any other attribute which consists of `meta` and surrounding #[ ] tokens.
///
/// This can do Macro-By-Example transcription. On the other hand, if
/// `src` contains no `TtSequence`s, `MatchNt`s or `SubstNt`s, `interp` can
/// (and should) be None.
pub fn new_tt_reader_with_doc_flag<'a>(sp_diag: &'a SpanHandler,
interp: Option<HashMap<Ident, Rc<NamedMatch>>>,
imported_from: Option<Ident>,
src: Vec<ast::TokenTree>,
desugar_doc_comments: bool)
-> TtReader<'a> {
let mut r = TtReader {
sp_diag: sp_diag,
stack: vec!(TtFrame {
forest: TtSequence(DUMMY_SP, Rc::new(ast::SequenceRepetition {
tts: src,
// doesn't matter. This merely holds the root unzipping.
separator: None, op: ast::ZeroOrMore, num_captures: 0
})),
idx: 0,
dotdotdoted: false,
sep: None,
}),
interpolations: match interp { /* just a convenience */
None => HashMap::new(),
Some(x) => x,
},
imported_from: imported_from,
crate_name_next: None,
repeat_idx: Vec::new(),
repeat_len: Vec::new(),
desugar_doc_comments: desugar_doc_comments,
/* dummy values, never read: */
cur_tok: token::Eof,
cur_span: DUMMY_SP,
};
tt_next_token(&mut r); /* get cur_tok and cur_span set up */
r
}
fn lookup_cur_matched_by_matched(r: &TtReader, start: Rc<NamedMatch>) -> Rc<NamedMatch> {
r.repeat_idx.iter().fold(start, |ad, idx| {
match *ad {
MatchedNonterminal(_) => {
// end of the line; duplicate henceforth
ad.clone()
}
MatchedSeq(ref ads, _) => ads[*idx].clone()
}
})
}
fn lookup_cur_matched(r: &TtReader, name: Ident) -> Option<Rc<NamedMatch>> {
let matched_opt = r.interpolations.get(&name).cloned();
matched_opt.map(|s| lookup_cur_matched_by_matched(r, s))
}
#[derive(Clone)]
enum LockstepIterSize {
LisUnconstrained,
LisConstraint(uint, Ident),
LisContradiction(String),
}
impl Add for LockstepIterSize {
type Output = LockstepIterSize;
fn add(self, other: LockstepIterSize) -> LockstepIterSize {
match self {
LisUnconstrained => other,
LisContradiction(_) => self,
LisConstraint(l_len, ref l_id) => match other {
LisUnconstrained => self.clone(),
LisContradiction(_) => other,
LisConstraint(r_len, _) if l_len == r_len => self.clone(),
LisConstraint(r_len, r_id) => {
let l_n = token::get_ident(l_id.clone());
let r_n = token::get_ident(r_id);
LisContradiction(format!("inconsistent lockstep iteration: \
'{:?}' has {} items, but '{:?}' has {}",
l_n, l_len, r_n, r_len).to_string())
}
},
}
}
}
fn lockstep_iter_size(t: &TokenTree, r: &TtReader) -> LockstepIterSize {
match *t {
TtDelimited(_, ref delimed) => {
delimed.tts.iter().fold(LisUnconstrained, |size, tt| {
size + lockstep_iter_size(tt, r)
})
},
TtSequence(_, ref seq) => {
seq.tts.iter().fold(LisUnconstrained, |size, tt| {
size + lockstep_iter_size(tt, r)
})
},
TtToken(_, SubstNt(name, _)) | TtToken(_, MatchNt(name, _, _, _)) =>
match lookup_cur_matched(r, name) {
Some(matched) => match *matched {
MatchedNonterminal(_) => LisUnconstrained,
MatchedSeq(ref ads, _) => LisConstraint(ads.len(), name),
},
_ => LisUnconstrained
},
TtToken(..) => LisUnconstrained,
}
}
/// Return the next token from the TtReader.
/// EFFECT: advances the reader's token field
pub fn tt_next_token(r: &mut TtReader) -> TokenAndSpan {
// FIXME(pcwalton): Bad copy?
let ret_val = TokenAndSpan {
tok: r.cur_tok.clone(),
sp: r.cur_span.clone(),
};
loop {
match r.crate_name_next.take() {
None => (),
Some(sp) => {
r.cur_span = sp;
r.cur_tok = token::Ident(r.imported_from.unwrap(), token::Plain);
return ret_val;
},
}
let should_pop = match r.stack.last() {
None => {
assert_eq!(ret_val.tok, token::Eof);
return ret_val;
}
Some(frame) => {
if frame.idx < frame.forest.len() {
break;
}
!frame.dotdotdoted ||
*r.repeat_idx.last().unwrap() == *r.repeat_len.last().unwrap() - 1
}
};
/* done with this set; pop or repeat? */
if should_pop {
let prev = r.stack.pop().unwrap();
match r.stack.last_mut() {
None => {
r.cur_tok = token::Eof;
return ret_val;
}
Some(frame) => {
frame.idx += 1;
}
}
if prev.dotdotdoted {
r.repeat_idx.pop();
r.repeat_len.pop();
}
} else { /* repeat */
*r.repeat_idx.last_mut().unwrap() += 1u;
r.stack.last_mut().unwrap().idx = 0;
match r.stack.last().unwrap().sep.clone() {
Some(tk) => |
None => {}
}
}
}
loop { /* because it's easiest, this handles `TtDelimited` not starting
with a `TtToken`, even though it won't happen */
let t = {
let frame = r.stack.last().unwrap();
// FIXME(pcwalton): Bad copy.
frame.forest.get_tt(frame.idx)
};
match t {
TtSequence(sp, seq) => {
// FIXME(pcwalton): Bad copy.
match lockstep_iter_size(&TtSequence(sp, seq.clone()),
r) {
LisUnconstrained => {
r.sp_diag.span_fatal(
sp.clone(), /* blame macro writer */
"attempted to repeat an expression \
containing no syntax \
variables matched as repeating at this depth");
}
LisContradiction(ref msg) => {
// FIXME #2887 blame macro invoker instead
r.sp_diag.span_fatal(sp.clone(), &msg[]);
}
LisConstraint(len, _) => {
if len == 0 {
if seq.op == ast::OneOrMore {
// FIXME #2887 blame invoker
r.sp_diag.span_fatal(sp.clone(),
"this must repeat at least once");
}
r.stack.last_mut().unwrap().idx += 1;
return tt_next_token(r);
}
r.repeat_len.push(len);
r.repeat_idx.push(0);
r.stack.push(TtFrame {
idx: 0,
dotdotdoted: true,
sep: seq.separator.clone(),
forest: TtSequence(sp, seq),
});
}
}
}
// FIXME #2887: think about span stuff here
TtToken(sp, SubstNt(ident, namep)) => {
r.stack.last_mut().unwrap().idx += 1;
match lookup_cur_matched(r, ident) {
None => {
r.cur_span = sp;
r.cur_tok = SubstNt(ident, namep);
return ret_val;
// this can't be 0 length, just like TtDelimited
}
Some(cur_matched) => {
match *cur_matched {
// sidestep the interpolation tricks for ident because
// (a) idents can be in lots of places, so it'd be a pain
// (b) we actually can, since it's a token.
MatchedNonterminal(NtIdent(box sn, b)) => {
r.cur_span = sp;
r.cur_tok = token::Ident(sn, b);
return ret_val;
}
MatchedNonterminal(ref other_whole_nt) => {
// FIXME(pcwalton): Bad copy.
r.cur_span = sp;
r.cur_tok = token::Interpolated((*other_whole_nt).clone());
return ret_val;
}
MatchedSeq(..) => {
r.sp_diag.span_fatal(
r.cur_span, /* blame the macro writer */
&format!("variable '{:?}' is still repeating at this depth",
token::get_ident(ident))[]);
}
}
}
}
}
// TtDelimited or any token that can be unzipped
seq @ TtDelimited(..) | seq @ TtToken(_, MatchNt(..)) => {
// do not advance the idx yet
r.stack.push(TtFrame {
forest: seq,
idx: 0,
dotdotdoted: false,
sep: None
});
// if this could be 0-length, we'd need to potentially recur here
}
TtToken(sp, DocComment(name)) if r.desugar_doc_comments => {
r.stack.push(TtFrame {
forest: TtToken(sp, DocComment(name)),
idx: 0,
dotdotdoted: false,
sep: None
});
}
TtToken(sp, token::SpecialVarNt(SpecialMacroVar::CrateMacroVar)) => {
r.stack.last_mut().unwrap().idx += 1;
if r.imported_from.is_some() {
r.cur_span = sp;
r.cur_tok = token::ModSep;
r.crate_name_next = Some(sp);
return ret_val;
}
// otherwise emit nothing and proceed to the next token
}
TtToken(sp, tok) => {
r.cur_span = sp;
r.cur_tok = tok;
r.stack.last_mut().unwrap().idx += 1;
return ret_val;
}
}
}
}
| {
r.cur_tok = tk; /* repeat same span, I guess */
return ret_val;
} | conditional_block |
transcribe.rs | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use self::LockstepIterSize::*;
use ast;
use ast::{TokenTree, TtDelimited, TtToken, TtSequence, Ident};
use codemap::{Span, DUMMY_SP};
use diagnostic::SpanHandler;
use ext::tt::macro_parser::{NamedMatch, MatchedSeq, MatchedNonterminal};
use parse::token::{Eof, DocComment, Interpolated, MatchNt, SubstNt};
use parse::token::{Token, NtIdent, SpecialMacroVar};
use parse::token;
use parse::lexer::TokenAndSpan;
use std::rc::Rc;
use std::ops::Add;
use std::collections::HashMap;
///an unzipping of `TokenTree`s
#[derive(Clone)]
struct TtFrame {
forest: TokenTree,
idx: uint,
dotdotdoted: bool,
sep: Option<Token>,
}
#[derive(Clone)]
pub struct TtReader<'a> {
pub sp_diag: &'a SpanHandler,
/// the unzipped tree:
stack: Vec<TtFrame>,
/* for MBE-style macro transcription */
interpolations: HashMap<Ident, Rc<NamedMatch>>,
imported_from: Option<Ident>,
// Some => return imported_from as the next token
crate_name_next: Option<Span>,
repeat_idx: Vec<uint>,
repeat_len: Vec<uint>,
/* cached: */
pub cur_tok: Token,
pub cur_span: Span,
/// Transform doc comments. Only useful in macro invocations
pub desugar_doc_comments: bool,
}
/// This can do Macro-By-Example transcription. On the other hand, if
/// `src` contains no `TtSequence`s, `MatchNt`s or `SubstNt`s, `interp` can
/// (and should) be None.
pub fn new_tt_reader<'a>(sp_diag: &'a SpanHandler,
interp: Option<HashMap<Ident, Rc<NamedMatch>>>,
imported_from: Option<Ident>,
src: Vec<ast::TokenTree>)
-> TtReader<'a> {
new_tt_reader_with_doc_flag(sp_diag, interp, imported_from, src, false)
}
/// The extra `desugar_doc_comments` flag enables reading doc comments
/// like any other attribute which consists of `meta` and surrounding #[ ] tokens.
///
/// This can do Macro-By-Example transcription. On the other hand, if
/// `src` contains no `TtSequence`s, `MatchNt`s or `SubstNt`s, `interp` can
/// (and should) be None.
pub fn new_tt_reader_with_doc_flag<'a>(sp_diag: &'a SpanHandler,
interp: Option<HashMap<Ident, Rc<NamedMatch>>>,
imported_from: Option<Ident>,
src: Vec<ast::TokenTree>,
desugar_doc_comments: bool)
-> TtReader<'a> {
let mut r = TtReader {
sp_diag: sp_diag,
stack: vec!(TtFrame {
forest: TtSequence(DUMMY_SP, Rc::new(ast::SequenceRepetition {
tts: src,
// doesn't matter. This merely holds the root unzipping.
separator: None, op: ast::ZeroOrMore, num_captures: 0
})),
idx: 0,
dotdotdoted: false,
sep: None,
}),
interpolations: match interp { /* just a convenience */
None => HashMap::new(),
Some(x) => x,
},
imported_from: imported_from,
crate_name_next: None,
repeat_idx: Vec::new(),
repeat_len: Vec::new(),
desugar_doc_comments: desugar_doc_comments,
/* dummy values, never read: */
cur_tok: token::Eof,
cur_span: DUMMY_SP,
};
tt_next_token(&mut r); /* get cur_tok and cur_span set up */
r
}
fn lookup_cur_matched_by_matched(r: &TtReader, start: Rc<NamedMatch>) -> Rc<NamedMatch> {
r.repeat_idx.iter().fold(start, |ad, idx| {
match *ad {
MatchedNonterminal(_) => {
// end of the line; duplicate henceforth
ad.clone()
}
MatchedSeq(ref ads, _) => ads[*idx].clone()
}
})
}
fn lookup_cur_matched(r: &TtReader, name: Ident) -> Option<Rc<NamedMatch>> {
let matched_opt = r.interpolations.get(&name).cloned();
matched_opt.map(|s| lookup_cur_matched_by_matched(r, s))
}
#[derive(Clone)]
enum LockstepIterSize {
LisUnconstrained,
LisConstraint(uint, Ident),
LisContradiction(String),
}
impl Add for LockstepIterSize {
type Output = LockstepIterSize;
fn add(self, other: LockstepIterSize) -> LockstepIterSize {
match self {
LisUnconstrained => other,
LisContradiction(_) => self,
LisConstraint(l_len, ref l_id) => match other {
LisUnconstrained => self.clone(),
LisContradiction(_) => other,
LisConstraint(r_len, _) if l_len == r_len => self.clone(),
LisConstraint(r_len, r_id) => {
let l_n = token::get_ident(l_id.clone());
let r_n = token::get_ident(r_id);
LisContradiction(format!("inconsistent lockstep iteration: \
'{:?}' has {} items, but '{:?}' has {}",
l_n, l_len, r_n, r_len).to_string())
}
},
}
}
}
fn lockstep_iter_size(t: &TokenTree, r: &TtReader) -> LockstepIterSize {
match *t {
TtDelimited(_, ref delimed) => {
delimed.tts.iter().fold(LisUnconstrained, |size, tt| {
size + lockstep_iter_size(tt, r)
})
},
TtSequence(_, ref seq) => {
seq.tts.iter().fold(LisUnconstrained, |size, tt| {
size + lockstep_iter_size(tt, r)
})
},
TtToken(_, SubstNt(name, _)) | TtToken(_, MatchNt(name, _, _, _)) =>
match lookup_cur_matched(r, name) {
Some(matched) => match *matched {
MatchedNonterminal(_) => LisUnconstrained,
MatchedSeq(ref ads, _) => LisConstraint(ads.len(), name),
},
_ => LisUnconstrained
},
TtToken(..) => LisUnconstrained,
}
}
/// Return the next token from the TtReader.
/// EFFECT: advances the reader's token field
pub fn tt_next_token(r: &mut TtReader) -> TokenAndSpan {
// FIXME(pcwalton): Bad copy?
let ret_val = TokenAndSpan {
tok: r.cur_tok.clone(),
sp: r.cur_span.clone(),
};
loop {
match r.crate_name_next.take() {
None => (),
Some(sp) => {
r.cur_span = sp;
r.cur_tok = token::Ident(r.imported_from.unwrap(), token::Plain);
return ret_val;
},
}
let should_pop = match r.stack.last() {
None => {
assert_eq!(ret_val.tok, token::Eof);
return ret_val;
}
Some(frame) => {
if frame.idx < frame.forest.len() {
break;
}
!frame.dotdotdoted ||
*r.repeat_idx.last().unwrap() == *r.repeat_len.last().unwrap() - 1
}
};
/* done with this set; pop or repeat? */
if should_pop {
let prev = r.stack.pop().unwrap();
match r.stack.last_mut() {
None => {
r.cur_tok = token::Eof;
return ret_val;
}
Some(frame) => {
frame.idx += 1;
}
}
if prev.dotdotdoted {
r.repeat_idx.pop();
r.repeat_len.pop();
}
} else { /* repeat */
*r.repeat_idx.last_mut().unwrap() += 1u;
r.stack.last_mut().unwrap().idx = 0;
match r.stack.last().unwrap().sep.clone() {
Some(tk) => {
r.cur_tok = tk; /* repeat same span, I guess */
return ret_val;
}
None => {}
}
}
}
loop { /* because it's easiest, this handles `TtDelimited` not starting
with a `TtToken`, even though it won't happen */
let t = {
let frame = r.stack.last().unwrap();
// FIXME(pcwalton): Bad copy.
frame.forest.get_tt(frame.idx)
};
match t {
TtSequence(sp, seq) => {
// FIXME(pcwalton): Bad copy.
match lockstep_iter_size(&TtSequence(sp, seq.clone()),
r) {
LisUnconstrained => {
r.sp_diag.span_fatal(
sp.clone(), /* blame macro writer */
"attempted to repeat an expression \
containing no syntax \
variables matched as repeating at this depth");
}
LisContradiction(ref msg) => {
// FIXME #2887 blame macro invoker instead
r.sp_diag.span_fatal(sp.clone(), &msg[]);
}
LisConstraint(len, _) => {
if len == 0 {
if seq.op == ast::OneOrMore {
// FIXME #2887 blame invoker
r.sp_diag.span_fatal(sp.clone(),
"this must repeat at least once");
}
r.stack.last_mut().unwrap().idx += 1;
return tt_next_token(r);
}
r.repeat_len.push(len);
r.repeat_idx.push(0);
r.stack.push(TtFrame {
idx: 0,
dotdotdoted: true,
sep: seq.separator.clone(),
forest: TtSequence(sp, seq),
});
}
}
}
// FIXME #2887: think about span stuff here
TtToken(sp, SubstNt(ident, namep)) => {
r.stack.last_mut().unwrap().idx += 1;
match lookup_cur_matched(r, ident) {
None => {
r.cur_span = sp;
r.cur_tok = SubstNt(ident, namep);
return ret_val;
// this can't be 0 length, just like TtDelimited
}
Some(cur_matched) => {
match *cur_matched {
// sidestep the interpolation tricks for ident because
// (a) idents can be in lots of places, so it'd be a pain
// (b) we actually can, since it's a token.
MatchedNonterminal(NtIdent(box sn, b)) => {
r.cur_span = sp;
r.cur_tok = token::Ident(sn, b);
return ret_val;
}
MatchedNonterminal(ref other_whole_nt) => {
// FIXME(pcwalton): Bad copy.
r.cur_span = sp;
r.cur_tok = token::Interpolated((*other_whole_nt).clone());
return ret_val;
}
MatchedSeq(..) => {
r.sp_diag.span_fatal(
r.cur_span, /* blame the macro writer */
&format!("variable '{:?}' is still repeating at this depth",
token::get_ident(ident))[]);
}
}
}
}
}
// TtDelimited or any token that can be unzipped
seq @ TtDelimited(..) | seq @ TtToken(_, MatchNt(..)) => {
// do not advance the idx yet
r.stack.push(TtFrame {
forest: seq,
idx: 0,
dotdotdoted: false,
sep: None
});
// if this could be 0-length, we'd need to potentially recur here
}
TtToken(sp, DocComment(name)) if r.desugar_doc_comments => {
r.stack.push(TtFrame {
forest: TtToken(sp, DocComment(name)),
idx: 0,
dotdotdoted: false,
sep: None
});
}
TtToken(sp, token::SpecialVarNt(SpecialMacroVar::CrateMacroVar)) => {
r.stack.last_mut().unwrap().idx += 1;
if r.imported_from.is_some() {
r.cur_span = sp;
r.cur_tok = token::ModSep;
r.crate_name_next = Some(sp);
return ret_val;
}
// otherwise emit nothing and proceed to the next token
}
TtToken(sp, tok) => {
r.cur_span = sp;
r.cur_tok = tok;
r.stack.last_mut().unwrap().idx += 1;
return ret_val;
} | }
}
} | random_line_split | |
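Each record above pairs a file's `prefix` and `suffix` columns with the masked `middle` span and a `fim_type` label (`identifier_body`, `identifier_name`, `conditional_block`, `random_line_split`). A minimal sketch of turning one record back into a fill-in-the-middle training string follows; the sentinel token names are illustrative assumptions, not taken from this dataset:

```python
def build_fim_example(prefix: str, middle: str, suffix: str,
                      pre_tok: str = "<fim_prefix>",
                      suf_tok: str = "<fim_suffix>",
                      mid_tok: str = "<fim_middle>") -> str:
    """Assemble a PSM-ordered FIM string: the model reads the prefix and
    suffix, then learns to emit the masked middle after the middle token."""
    return f"{pre_tok}{prefix}{suf_tok}{suffix}{mid_tok}{middle}"


def reassemble(prefix: str, middle: str, suffix: str) -> str:
    """Concatenating the three columns must reproduce the original file text."""
    return prefix + middle + suffix
```

Checking that `prefix + middle + suffix` round-trips to the original source is a cheap sanity check when loading rows from the dump.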
collada.rs | /// Implements the logic behind converting COLLADA documents to polygon-rs meshes.
extern crate parse_collada as collada;
use math::*;
use polygon::geometry::mesh::*;
pub use self::collada::{
AnyUri,
ArrayElement,
Collada,
GeometricElement,
Geometry,
Node,
PrimitiveElements,
UriFragment,
VisualScene
};
#[derive(Debug)]
pub enum Error {
/// Indicates an error that occurred when the MeshBuilder was validating the mesh data. If the
/// COLLADA document passed parsing this should not occur.
BuildMeshError(BuildMeshError),
IncorrectPrimitiveIndicesCount {
primitive_count: usize,
stride: usize,
index_count: usize,
},
/// Indicates an error in loading or parsing the original collada document (i.e. the error
/// ocurred within the parse-collada library).
ParseColladaError(collada::Error),
/// Indicates that there was an input with the "NORMAL" semantic but the associated source
/// was missing.
MissingNormalSource,
/// Indicates that an <input> element specified a <source> element that was missing.
MissingSourceData,
/// Indicates that the <source> element with the "POSITION" semantic was missing an
/// array element.
MissingPositionData,
/// Indicates that the <source> element with the "NORMAL" semantic was missing an array element.
MissingNormalData,
/// Indicates that a <vertices> element had and <input> element with no "POSITION" semantic.
///
/// NOTE: This error means that the COLLADA document is ill-formed and should have failed
/// parsing. This indicates that there is a bug in the parse-collada library that should be
/// fixed.
MissingPositionSemantic,
/// Indicates that the <mesh> had no primitive elements.
MissingPrimitiveElement,
/// Indicates that one of the primitive elements (e.g. <trianges> et al) were missing a <p>
/// child element. While this is technically allowed by the standard I'm not really sure what
/// to do with that? Like how do you define a mesh without indices?
MissingPrimitiveIndices,
/// Indicates that a uri referenced an asset outside the document.
NonLocalUri(String),
UnsupportedGeometricElement,
UnsupportedPrimitiveType,
/// Indicates that a <source> element's array element was of a type other than <float_array>.
UnsupportedSourceData,
}
impl From<collada::Error> for Error {
fn | (from: collada::Error) -> Error {
Error::ParseColladaError(from)
}
}
pub type Result<T> = ::std::result::Result<T, Error>;
pub enum VertexSemantic {
Position,
Normal,
TexCoord,
}
/// Loads all resources from a COLLADA document and adds them to the resource manager.
pub fn load_resources<T: Into<String>>(source: T) -> Result<Mesh> {
let collada_data = Collada::parse(source)?;
// Load all meshes from the document and add them to the resource manager.
if let Some(library_geometries) = collada_data.library_geometries {
for geometry in library_geometries.geometry {
// // Retrieve the id for the geometry.
// // TODO: Generate an id for the geometry if it doesn't already have one.
// let id = match geometry.id {
// None => {
// println!("WARNING: COLLADA file contained a <geometry> element with no \"id\" attribute");
// println!("WARNING: This is unsupported because there is no way to reference that geometry to instantiate it");
// continue;
// },
// Some(id) => id,
// };
let mesh = match geometry.geometric_element {
GeometricElement::Mesh(ref mesh) => try!(collada_mesh_to_mesh(mesh)),
_ => return Err(Error::UnsupportedGeometricElement),
};
// TODO: Actually finish parsing all the other data from the file.
return Ok(mesh);
}
}
unimplemented!();
}
fn collada_mesh_to_mesh(mesh: &collada::Mesh) -> Result<Mesh> {
if mesh.primitive_elements.len() > 1 {
println!("WARNING: Mesh is composed of more than one geometric primitive, which is not currently supported, only part of the mesh will be loaded");
}
// Grab the first primitive element in the mesh.
// TODO: Handle all primitive elements in the mesh, not just one. This is dependent on polygon
// being able to support submeshes.
let primitive = try!(
mesh.primitive_elements.first()
.ok_or(Error::MissingPrimitiveElement));
let triangles = match *primitive {
PrimitiveElements::Triangles(ref triangles) => triangles,
_ => return Err(Error::UnsupportedPrimitiveType),
};
let primitive_indices =
triangles.p
.as_ref()
.ok_or(Error::MissingPrimitiveIndices)?;
// Iterate over the indices, rearranging the normal data to match the position data.
let stride = triangles.input.len(); // TODO: Do we have a better way of calculating stride? What if one of the sources isn't used? OR USED TWICE!?
let count = triangles.count;
let index_count = primitive_indices.len();
let vertex_count = count as u32 * 3;
// Verify we have the right number of indices to build the vertices.
    if count * stride * 3 != index_count {
return Err(Error::IncorrectPrimitiveIndicesCount {
primitive_count: count,
stride: stride,
index_count: index_count,
});
}
    // The indices list is just a raw list of indices. They are implicitly grouped based on the
// number of inputs for the primitive element (e.g. if there are 3 inputs for the primitive
// then there are 3 indices per vertex). To handle this we use GroupBy to do a strided
// iteration over indices list and build each vertex one at a time. Internally the mesh
// builder handles the details of how to assemble the vertex data in memory.
// Build a mapping between the vertex indices and the source that they use.
let mut source_map = Vec::new();
for (offset, input) in triangles.input.iter().enumerate() {
        // Retrieve the appropriate source. If the semantic is "VERTEX" then the offset is
// associated with all of the sources specified by the <vertex> element.
let source_ids = match &*input.semantic {
"VERTEX" => {
mesh.vertices.input
.iter()
.map(|input| (input.semantic.as_ref(), input.source.as_ref()))
.collect()
},
_ => vec![(input.semantic.as_ref(), input.source.as_ref())],
};
// For each of the semantics at the current offset, push their info into the source map.
for (semantic, source_id) in source_ids {
// Retrieve the <source> element for the input.
let source = try!(mesh.source
.iter()
.find(|source| source.id == source_id)
.ok_or(Error::MissingSourceData));
            // Retrieve its array_element, which is technically optional according to the spec but is
// probably going to be there for the position data.
let array_element = try!(
source.array_element
.as_ref()
.ok_or(Error::MissingPositionData));
// Get float data. Raw mesh data should only be float data (the only one that even
// remotely makes sense is int data, and even then that seems unlikely), so emit an
// error if the data is in the wrong format.
let data = match *array_element {
ArrayElement::Float(ref float_array) => float_array.contents.as_ref(),
_ => return Err(Error::UnsupportedSourceData),
};
source_map.push(IndexMapper {
offset: offset,
semantic: semantic,
data: data,
});
}
}
let mut mesh_builder = MeshBuilder::new();
let mut unsupported_semantic_flag = false;
for vertex_indices in GroupBy::new(primitive_indices, stride).unwrap() { // TODO: This can't fail... right? I'm pretty sure the above checks make sure this is correct.
// We iterate over each group of indices where each group represents the indices for a
// single vertex. Within that vertex we need
let mut vertex = Vertex::new(Point::origin());
for (offset, index) in vertex_indices.iter().enumerate() {
for mapper in source_map.iter().filter(|mapper| mapper.offset == offset) {
match mapper.semantic {
"POSITION" => {
vertex.position = Point::new(
// TODO: Don't assume that the position data is encoded as 3 coordinate
// vectors. The <technique_common> element for the source should have
// an <accessor> describing how the data is laid out.
mapper.data[index * 3 + 0],
mapper.data[index * 3 + 1],
mapper.data[index * 3 + 2],
);
},
"NORMAL" => {
vertex.normal = Some(Vector3::new(
mapper.data[index * 3 + 0],
mapper.data[index * 3 + 1],
mapper.data[index * 3 + 2],
));
},
"TEXCOORD" => {
vertex.texcoord.push(Vector2::new(
mapper.data[index * 2 + 0],
mapper.data[index * 2 + 1],
));
},
                    _ => if !unsupported_semantic_flag {
unsupported_semantic_flag = true;
println!("WARNING: Unsupported vertex semantic {} in mesh will not be used", mapper.semantic);
},
}
}
}
mesh_builder.add_vertex(vertex);
}
let indices: Vec<u32> = (0..vertex_count).collect();
mesh_builder
.set_indices(&*indices)
.build()
.map_err(|err| Error::BuildMeshError(err))
}
struct IndexMapper<'a> {
offset: usize,
semantic: &'a str,
data: &'a [f32],
}
// TODO: Where even should this live? It's generally useful but I'm only using it here right now.
struct GroupBy<'a, T: 'a> {
next: *const T,
end: *const T,
stride: usize,
_phantom: ::std::marker::PhantomData<&'a T>,
}
impl<'a, T: 'a> GroupBy<'a, T> {
fn new(slice: &'a [T], stride: usize) -> ::std::result::Result<GroupBy<'a, T>, ()> {
        if slice.len() % stride != 0 {
return Err(());
}
Ok(GroupBy {
next: slice.as_ptr(),
end: unsafe { slice.as_ptr().offset(slice.len() as isize) },
stride: stride,
_phantom: ::std::marker::PhantomData,
})
}
}
impl<'a, T: 'a> Iterator for GroupBy<'a, T> {
type Item = &'a [T];
fn next(&mut self) -> Option<&'a [T]> {
if self.next == self.end {
return None;
}
let next = self.next;
self.next = unsafe { self.next.offset(self.stride as isize) };
Some(unsafe {
::std::slice::from_raw_parts(next, self.stride)
})
}
}
| from | identifier_name |
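The `GroupBy` helper in the row above walks a slice in fixed-size groups with raw pointers and `unsafe`. A safe sketch of the same strided grouping — using the standard library's `chunks_exact`, with the same up-front length check the original performs (names here are illustrative, not from the dataset row):

```rust
// Safe equivalent of the unsafe GroupBy iterator: chunks_exact yields
// non-overlapping &[T] windows of exactly `stride` elements. Like GroupBy,
// a slice whose length is not a multiple of `stride` is rejected up front.
fn group_by_stride<T>(slice: &[T], stride: usize) -> Result<std::slice::ChunksExact<'_, T>, ()> {
    if stride == 0 || slice.len() % stride != 0 {
        return Err(());
    }
    Ok(slice.chunks_exact(stride))
}

fn main() {
    // Six indices with stride 3 => two vertex groups, as in the triangle loader.
    let indices = [0usize, 1, 2, 3, 4, 5];
    let groups: Vec<&[usize]> = group_by_stride(&indices, 3).unwrap().collect();
    assert_eq!(groups.len(), 2);
    assert_eq!(groups[0], &[0, 1, 2][..]);
    assert_eq!(groups[1], &[3, 4, 5][..]);
    println!("groups = {:?}", groups);
}
```

Unlike the pointer-based version, this needs no `PhantomData` and cannot read past the end of the slice.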
collada.rs | /// Implements the logic behind converting COLLADA documents to polygon-rs meshes.
extern crate parse_collada as collada;
use math::*;
use polygon::geometry::mesh::*;
pub use self::collada::{
AnyUri,
ArrayElement,
Collada,
GeometricElement,
Geometry,
Node,
PrimitiveElements,
UriFragment,
VisualScene
};
#[derive(Debug)]
pub enum Error {
/// Indicates an error that occurred when the MeshBuilder was validating the mesh data. If the
/// COLLADA document passed parsing this should not occur.
BuildMeshError(BuildMeshError),
IncorrectPrimitiveIndicesCount {
primitive_count: usize,
stride: usize,
index_count: usize,
},
/// Indicates an error in loading or parsing the original collada document (i.e. the error
    /// occurred within the parse-collada library).
ParseColladaError(collada::Error),
/// Indicates that there was an input with the "NORMAL" semantic but the associated source
/// was missing.
MissingNormalSource,
/// Indicates that an <input> element specified a <source> element that was missing.
MissingSourceData,
/// Indicates that the <source> element with the "POSITION" semantic was missing an
/// array element.
MissingPositionData,
/// Indicates that the <source> element with the "NORMAL" semantic was missing an array element.
MissingNormalData,
    /// Indicates that a <vertices> element had an <input> element with no "POSITION" semantic.
///
/// NOTE: This error means that the COLLADA document is ill-formed and should have failed
/// parsing. This indicates that there is a bug in the parse-collada library that should be
/// fixed.
MissingPositionSemantic,
/// Indicates that the <mesh> had no primitive elements.
MissingPrimitiveElement,
    /// Indicates that one of the primitive elements (e.g. <triangles> et al.) was missing a <p>
/// child element. While this is technically allowed by the standard I'm not really sure what
/// to do with that? Like how do you define a mesh without indices?
MissingPrimitiveIndices,
/// Indicates that a uri referenced an asset outside the document.
NonLocalUri(String),
UnsupportedGeometricElement,
UnsupportedPrimitiveType,
/// Indicates that a <source> element's array element was of a type other than <float_array>.
UnsupportedSourceData,
}
impl From<collada::Error> for Error {
fn from(from: collada::Error) -> Error {
Error::ParseColladaError(from)
}
}
pub type Result<T> = ::std::result::Result<T, Error>;
pub enum VertexSemantic {
Position,
Normal,
TexCoord,
}
/// Loads all resources from a COLLADA document and adds them to the resource manager.
pub fn load_resources<T: Into<String>>(source: T) -> Result<Mesh> {
let collada_data = Collada::parse(source)?;
// Load all meshes from the document and add them to the resource manager.
if let Some(library_geometries) = collada_data.library_geometries {
for geometry in library_geometries.geometry {
// // Retrieve the id for the geometry.
// // TODO: Generate an id for the geometry if it doesn't already have one.
// let id = match geometry.id {
// None => {
// println!("WARNING: COLLADA file contained a <geometry> element with no \"id\" attribute");
// println!("WARNING: This is unsupported because there is no way to reference that geometry to instantiate it");
// continue;
// },
// Some(id) => id,
// };
let mesh = match geometry.geometric_element {
GeometricElement::Mesh(ref mesh) => try!(collada_mesh_to_mesh(mesh)),
_ => return Err(Error::UnsupportedGeometricElement),
};
// TODO: Actually finish parsing all the other data from the file.
return Ok(mesh);
}
}
unimplemented!();
}
fn collada_mesh_to_mesh(mesh: &collada::Mesh) -> Result<Mesh> {
if mesh.primitive_elements.len() > 1 {
println!("WARNING: Mesh is composed of more than one geometric primitive, which is not currently supported, only part of the mesh will be loaded");
}
// Grab the first primitive element in the mesh.
// TODO: Handle all primitive elements in the mesh, not just one. This is dependent on polygon
// being able to support submeshes.
let primitive = try!(
mesh.primitive_elements.first()
.ok_or(Error::MissingPrimitiveElement));
let triangles = match *primitive {
PrimitiveElements::Triangles(ref triangles) => triangles,
_ => return Err(Error::UnsupportedPrimitiveType),
};
let primitive_indices =
triangles.p
.as_ref()
.ok_or(Error::MissingPrimitiveIndices)?;
// Iterate over the indices, rearranging the normal data to match the position data.
let stride = triangles.input.len(); // TODO: Do we have a better way of calculating stride? What if one of the sources isn't used? OR USED TWICE!?
let count = triangles.count;
let index_count = primitive_indices.len();
let vertex_count = count as u32 * 3;
// Verify we have the right number of indices to build the vertices.
    if count * stride * 3 != index_count {
return Err(Error::IncorrectPrimitiveIndicesCount {
primitive_count: count,
stride: stride,
index_count: index_count,
});
}
    // The indices list is just a raw list of indices. They are implicitly grouped based on the
// number of inputs for the primitive element (e.g. if there are 3 inputs for the primitive
// then there are 3 indices per vertex). To handle this we use GroupBy to do a strided
// iteration over indices list and build each vertex one at a time. Internally the mesh
// builder handles the details of how to assemble the vertex data in memory.
// Build a mapping between the vertex indices and the source that they use.
let mut source_map = Vec::new();
for (offset, input) in triangles.input.iter().enumerate() {
        // Retrieve the appropriate source. If the semantic is "VERTEX" then the offset is
// associated with all of the sources specified by the <vertex> element.
let source_ids = match &*input.semantic {
"VERTEX" => {
mesh.vertices.input
.iter()
.map(|input| (input.semantic.as_ref(), input.source.as_ref()))
.collect()
},
_ => vec![(input.semantic.as_ref(), input.source.as_ref())],
};
// For each of the semantics at the current offset, push their info into the source map.
for (semantic, source_id) in source_ids {
// Retrieve the <source> element for the input.
let source = try!(mesh.source
.iter()
.find(|source| source.id == source_id)
.ok_or(Error::MissingSourceData));
            // Retrieve its array_element, which is technically optional according to the spec but is
// probably going to be there for the position data.
let array_element = try!(
source.array_element
.as_ref()
.ok_or(Error::MissingPositionData));
// Get float data. Raw mesh data should only be float data (the only one that even
// remotely makes sense is int data, and even then that seems unlikely), so emit an
// error if the data is in the wrong format.
let data = match *array_element {
ArrayElement::Float(ref float_array) => float_array.contents.as_ref(),
_ => return Err(Error::UnsupportedSourceData),
};
source_map.push(IndexMapper {
offset: offset,
semantic: semantic,
data: data,
});
}
}
let mut mesh_builder = MeshBuilder::new();
let mut unsupported_semantic_flag = false;
for vertex_indices in GroupBy::new(primitive_indices, stride).unwrap() { // TODO: This can't fail... right? I'm pretty sure the above checks make sure this is correct.
// We iterate over each group of indices where each group represents the indices for a
// single vertex. Within that vertex we need
let mut vertex = Vertex::new(Point::origin());
for (offset, index) in vertex_indices.iter().enumerate() {
for mapper in source_map.iter().filter(|mapper| mapper.offset == offset) {
match mapper.semantic {
"POSITION" => {
vertex.position = Point::new(
// TODO: Don't assume that the position data is encoded as 3 coordinate
// vectors. The <technique_common> element for the source should have
// an <accessor> describing how the data is laid out.
mapper.data[index * 3 + 0],
mapper.data[index * 3 + 1],
mapper.data[index * 3 + 2],
);
},
"NORMAL" => {
vertex.normal = Some(Vector3::new(
mapper.data[index * 3 + 0],
mapper.data[index * 3 + 1],
mapper.data[index * 3 + 2],
));
},
"TEXCOORD" => {
vertex.texcoord.push(Vector2::new(
mapper.data[index * 2 + 0],
mapper.data[index * 2 + 1],
));
},
                    _ => if !unsupported_semantic_flag {
unsupported_semantic_flag = true;
println!("WARNING: Unsupported vertex semantic {} in mesh will not be used", mapper.semantic);
},
}
}
}
mesh_builder.add_vertex(vertex);
}
|
mesh_builder
.set_indices(&*indices)
.build()
.map_err(|err| Error::BuildMeshError(err))
}
struct IndexMapper<'a> {
offset: usize,
semantic: &'a str,
data: &'a [f32],
}
// TODO: Where even should this live? It's generally useful but I'm only using it here right now.
struct GroupBy<'a, T: 'a> {
next: *const T,
end: *const T,
stride: usize,
_phantom: ::std::marker::PhantomData<&'a T>,
}
impl<'a, T: 'a> GroupBy<'a, T> {
fn new(slice: &'a [T], stride: usize) -> ::std::result::Result<GroupBy<'a, T>, ()> {
        if slice.len() % stride != 0 {
return Err(());
}
Ok(GroupBy {
next: slice.as_ptr(),
end: unsafe { slice.as_ptr().offset(slice.len() as isize) },
stride: stride,
_phantom: ::std::marker::PhantomData,
})
}
}
impl<'a, T: 'a> Iterator for GroupBy<'a, T> {
type Item = &'a [T];
fn next(&mut self) -> Option<&'a [T]> {
if self.next == self.end {
return None;
}
let next = self.next;
self.next = unsafe { self.next.offset(self.stride as isize) };
Some(unsafe {
::std::slice::from_raw_parts(next, self.stride)
})
}
} | let indices: Vec<u32> = (0..vertex_count).collect(); | random_line_split |
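The validation step in the row above (`count * stride * 3 != index_count`) guards the strided iteration: each triangle contributes 3 vertices, and each vertex consumes `stride` indices, one per `<input>`. The same check as a standalone helper (the function name is illustrative, not from the original):

```rust
/// A triangle list with `triangle_count` triangles has 3 vertices per triangle,
/// and each vertex consumes `stride` indices (one per <input>), so the flat
/// <p> index list must contain exactly triangle_count * stride * 3 entries.
fn indices_length_ok(triangle_count: usize, stride: usize, index_count: usize) -> bool {
    triangle_count * stride * 3 == index_count
}

fn main() {
    // 2 triangles, 2 inputs per vertex (e.g. POSITION + NORMAL) => 12 indices.
    assert!(indices_length_ok(2, 2, 12));
    // Anything else would be reported as IncorrectPrimitiveIndicesCount.
    assert!(!indices_length_ok(2, 2, 11));
    println!("ok");
}
```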
collada.rs | /// Implements the logic behind converting COLLADA documents to polygon-rs meshes.
extern crate parse_collada as collada;
use math::*;
use polygon::geometry::mesh::*;
pub use self::collada::{
AnyUri,
ArrayElement,
Collada,
GeometricElement,
Geometry,
Node,
PrimitiveElements,
UriFragment,
VisualScene
};
#[derive(Debug)]
pub enum Error {
/// Indicates an error that occurred when the MeshBuilder was validating the mesh data. If the
/// COLLADA document passed parsing this should not occur.
BuildMeshError(BuildMeshError),
IncorrectPrimitiveIndicesCount {
primitive_count: usize,
stride: usize,
index_count: usize,
},
/// Indicates an error in loading or parsing the original collada document (i.e. the error
    /// occurred within the parse-collada library).
ParseColladaError(collada::Error),
/// Indicates that there was an input with the "NORMAL" semantic but the associated source
/// was missing.
MissingNormalSource,
/// Indicates that an <input> element specified a <source> element that was missing.
MissingSourceData,
/// Indicates that the <source> element with the "POSITION" semantic was missing an
/// array element.
MissingPositionData,
/// Indicates that the <source> element with the "NORMAL" semantic was missing an array element.
MissingNormalData,
    /// Indicates that a <vertices> element had an <input> element with no "POSITION" semantic.
///
/// NOTE: This error means that the COLLADA document is ill-formed and should have failed
/// parsing. This indicates that there is a bug in the parse-collada library that should be
/// fixed.
MissingPositionSemantic,
/// Indicates that the <mesh> had no primitive elements.
MissingPrimitiveElement,
    /// Indicates that one of the primitive elements (e.g. <triangles> et al.) was missing a <p>
/// child element. While this is technically allowed by the standard I'm not really sure what
/// to do with that? Like how do you define a mesh without indices?
MissingPrimitiveIndices,
/// Indicates that a uri referenced an asset outside the document.
NonLocalUri(String),
UnsupportedGeometricElement,
UnsupportedPrimitiveType,
/// Indicates that a <source> element's array element was of a type other than <float_array>.
UnsupportedSourceData,
}
impl From<collada::Error> for Error {
fn from(from: collada::Error) -> Error {
Error::ParseColladaError(from)
}
}
pub type Result<T> = ::std::result::Result<T, Error>;
pub enum VertexSemantic {
Position,
Normal,
TexCoord,
}
/// Loads all resources from a COLLADA document and adds them to the resource manager.
pub fn load_resources<T: Into<String>>(source: T) -> Result<Mesh> {
let collada_data = Collada::parse(source)?;
// Load all meshes from the document and add them to the resource manager.
if let Some(library_geometries) = collada_data.library_geometries {
for geometry in library_geometries.geometry {
// // Retrieve the id for the geometry.
// // TODO: Generate an id for the geometry if it doesn't already have one.
// let id = match geometry.id {
// None => {
// println!("WARNING: COLLADA file contained a <geometry> element with no \"id\" attribute");
// println!("WARNING: This is unsupported because there is no way to reference that geometry to instantiate it");
// continue;
// },
// Some(id) => id,
// };
let mesh = match geometry.geometric_element {
GeometricElement::Mesh(ref mesh) => try!(collada_mesh_to_mesh(mesh)),
_ => return Err(Error::UnsupportedGeometricElement),
};
// TODO: Actually finish parsing all the other data from the file.
return Ok(mesh);
}
}
unimplemented!();
}
fn collada_mesh_to_mesh(mesh: &collada::Mesh) -> Result<Mesh> {
if mesh.primitive_elements.len() > 1 {
println!("WARNING: Mesh is composed of more than one geometric primitive, which is not currently supported, only part of the mesh will be loaded");
}
// Grab the first primitive element in the mesh.
// TODO: Handle all primitive elements in the mesh, not just one. This is dependent on polygon
// being able to support submeshes.
let primitive = try!(
mesh.primitive_elements.first()
.ok_or(Error::MissingPrimitiveElement));
let triangles = match *primitive {
PrimitiveElements::Triangles(ref triangles) => triangles,
_ => return Err(Error::UnsupportedPrimitiveType),
};
let primitive_indices =
triangles.p
.as_ref()
.ok_or(Error::MissingPrimitiveIndices)?;
// Iterate over the indices, rearranging the normal data to match the position data.
let stride = triangles.input.len(); // TODO: Do we have a better way of calculating stride? What if one of the sources isn't used? OR USED TWICE!?
let count = triangles.count;
let index_count = primitive_indices.len();
let vertex_count = count as u32 * 3;
// Verify we have the right number of indices to build the vertices.
    if count * stride * 3 != index_count {
return Err(Error::IncorrectPrimitiveIndicesCount {
primitive_count: count,
stride: stride,
index_count: index_count,
});
}
    // The indices list is just a raw list of indices. They are implicitly grouped based on the
// number of inputs for the primitive element (e.g. if there are 3 inputs for the primitive
// then there are 3 indices per vertex). To handle this we use GroupBy to do a strided
// iteration over indices list and build each vertex one at a time. Internally the mesh
// builder handles the details of how to assemble the vertex data in memory.
// Build a mapping between the vertex indices and the source that they use.
let mut source_map = Vec::new();
for (offset, input) in triangles.input.iter().enumerate() {
        // Retrieve the appropriate source. If the semantic is "VERTEX" then the offset is
// associated with all of the sources specified by the <vertex> element.
let source_ids = match &*input.semantic {
"VERTEX" => {
mesh.vertices.input
.iter()
.map(|input| (input.semantic.as_ref(), input.source.as_ref()))
.collect()
},
_ => vec![(input.semantic.as_ref(), input.source.as_ref())],
};
// For each of the semantics at the current offset, push their info into the source map.
for (semantic, source_id) in source_ids {
// Retrieve the <source> element for the input.
let source = try!(mesh.source
.iter()
.find(|source| source.id == source_id)
.ok_or(Error::MissingSourceData));
            // Retrieve its array_element, which is technically optional according to the spec but is
// probably going to be there for the position data.
let array_element = try!(
source.array_element
.as_ref()
.ok_or(Error::MissingPositionData));
// Get float data. Raw mesh data should only be float data (the only one that even
// remotely makes sense is int data, and even then that seems unlikely), so emit an
// error if the data is in the wrong format.
let data = match *array_element {
ArrayElement::Float(ref float_array) => float_array.contents.as_ref(),
_ => return Err(Error::UnsupportedSourceData),
};
source_map.push(IndexMapper {
offset: offset,
semantic: semantic,
data: data,
});
}
}
let mut mesh_builder = MeshBuilder::new();
let mut unsupported_semantic_flag = false;
for vertex_indices in GroupBy::new(primitive_indices, stride).unwrap() { // TODO: This can't fail... right? I'm pretty sure the above checks make sure this is correct.
// We iterate over each group of indices where each group represents the indices for a
// single vertex. Within that vertex we need
let mut vertex = Vertex::new(Point::origin());
for (offset, index) in vertex_indices.iter().enumerate() {
for mapper in source_map.iter().filter(|mapper| mapper.offset == offset) {
match mapper.semantic {
"POSITION" => {
vertex.position = Point::new(
// TODO: Don't assume that the position data is encoded as 3 coordinate
// vectors. The <technique_common> element for the source should have
// an <accessor> describing how the data is laid out.
mapper.data[index * 3 + 0],
mapper.data[index * 3 + 1],
mapper.data[index * 3 + 2],
);
},
"NORMAL" => {
vertex.normal = Some(Vector3::new(
mapper.data[index * 3 + 0],
mapper.data[index * 3 + 1],
mapper.data[index * 3 + 2],
));
},
"TEXCOORD" => {
vertex.texcoord.push(Vector2::new(
mapper.data[index * 2 + 0],
mapper.data[index * 2 + 1],
));
},
                    _ => if !unsupported_semantic_flag {
unsupported_semantic_flag = true;
println!("WARNING: Unsupported vertex semantic {} in mesh will not be used", mapper.semantic);
},
}
}
}
mesh_builder.add_vertex(vertex);
}
let indices: Vec<u32> = (0..vertex_count).collect();
mesh_builder
.set_indices(&*indices)
.build()
.map_err(|err| Error::BuildMeshError(err))
}
struct IndexMapper<'a> {
offset: usize,
semantic: &'a str,
data: &'a [f32],
}
// TODO: Where even should this live? It's generally useful but I'm only using it here right now.
struct GroupBy<'a, T: 'a> {
next: *const T,
end: *const T,
stride: usize,
_phantom: ::std::marker::PhantomData<&'a T>,
}
impl<'a, T: 'a> GroupBy<'a, T> {
fn new(slice: &'a [T], stride: usize) -> ::std::result::Result<GroupBy<'a, T>, ()> {
        if slice.len() % stride != 0 {
return Err(());
}
Ok(GroupBy {
next: slice.as_ptr(),
end: unsafe { slice.as_ptr().offset(slice.len() as isize) },
stride: stride,
_phantom: ::std::marker::PhantomData,
})
}
}
impl<'a, T: 'a> Iterator for GroupBy<'a, T> {
type Item = &'a [T];
fn next(&mut self) -> Option<&'a [T]> |
}
| {
if self.next == self.end {
return None;
}
let next = self.next;
self.next = unsafe { self.next.offset(self.stride as isize) };
Some(unsafe {
::std::slice::from_raw_parts(next, self.stride)
})
} | identifier_body |
mutref.rs | #[derive(Debug)]
enum List {
Cons(Rc<RefCell<i32>>, Rc<List>),
Nil,
}
use self::List::{Cons, Nil};
use std::rc::Rc;
use std::cell::RefCell;
pub fn run() | {
let value = Rc::new(RefCell::new(5));
let a = Rc::new(Cons(Rc::clone(&value), Rc::new(Nil)));
let b = Cons(Rc::new(RefCell::new(6)), Rc::clone(&a));
let c = Cons(Rc::new(RefCell::new(10)), Rc::clone(&a));
println!("a before = {:?}", a);
println!("b before = {:?}", b);
println!("c before = {:?}", c);
*value.borrow_mut() += 10;
println!("a after = {:?}", a);
println!("b after = {:?}", b);
println!("c after = {:?}", c);
} | identifier_body | |
mutref.rs | #[derive(Debug)]
enum | {
Cons(Rc<RefCell<i32>>, Rc<List>),
Nil,
}
use self::List::{Cons, Nil};
use std::rc::Rc;
use std::cell::RefCell;
pub fn run() {
let value = Rc::new(RefCell::new(5));
let a = Rc::new(Cons(Rc::clone(&value), Rc::new(Nil)));
let b = Cons(Rc::new(RefCell::new(6)), Rc::clone(&a));
let c = Cons(Rc::new(RefCell::new(10)), Rc::clone(&a));
println!("a before = {:?}", a);
println!("b before = {:?}", b);
println!("c before = {:?}", c);
*value.borrow_mut() += 10;
println!("a after = {:?}", a);
println!("b after = {:?}", b);
println!("c after = {:?}", c);
}
| List | identifier_name |
mutref.rs | #[derive(Debug)]
enum List {
Cons(Rc<RefCell<i32>>, Rc<List>),
Nil,
}
use self::List::{Cons, Nil};
use std::rc::Rc;
use std::cell::RefCell;
| let value = Rc::new(RefCell::new(5));
let a = Rc::new(Cons(Rc::clone(&value), Rc::new(Nil)));
let b = Cons(Rc::new(RefCell::new(6)), Rc::clone(&a));
let c = Cons(Rc::new(RefCell::new(10)), Rc::clone(&a));
println!("a before = {:?}", a);
println!("b before = {:?}", b);
println!("c before = {:?}", c);
*value.borrow_mut() += 10;
println!("a after = {:?}", a);
println!("b after = {:?}", b);
println!("c after = {:?}", c);
} | pub fn run() { | random_line_split |
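The mutref.rs rows above demonstrate shared interior mutability: one `Rc<RefCell<i32>>` is cloned into several lists, and a mutation through one handle is visible through all of them. A minimal sketch of the same mechanics, with the reference counts made explicit:

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    let value = Rc::new(RefCell::new(5));
    let alias = Rc::clone(&value); // bumps the refcount; no deep copy

    assert_eq!(Rc::strong_count(&value), 2);

    *value.borrow_mut() += 10;       // mutate through one handle...
    assert_eq!(*alias.borrow(), 15); // ...observe it through the other

    drop(alias);
    assert_eq!(Rc::strong_count(&value), 1);
    println!("final = {}", value.borrow());
}
```

This is exactly why `a`, `b`, and `c` in the rows above all print the updated value after `*value.borrow_mut() += 10`.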
sectionalize_pass.rs | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Breaks rustdocs into sections according to their headers
use core::prelude::*;
use astsrv;
use doc::ItemUtils;
use doc;
use fold::Fold;
use fold;
use pass::Pass;
use core::str;
pub fn mk_pass() -> Pass {
Pass {
name: ~"sectionalize",
f: run
}
}
pub fn run(_srv: astsrv::Srv, doc: doc::Doc) -> doc::Doc {
let fold = Fold {
fold_item: fold_item,
fold_trait: fold_trait,
fold_impl: fold_impl,
.. fold::default_any_fold(())
};
(fold.fold_doc)(&fold, doc)
}
fn fold_item(fold: &fold::Fold<()>, doc: doc::ItemDoc) -> doc::ItemDoc {
let doc = fold::default_seq_fold_item(fold, doc);
let (desc, sections) = sectionalize(copy doc.desc);
doc::ItemDoc {
desc: desc,
sections: sections,
.. doc
}
}
fn fold_trait(fold: &fold::Fold<()>, doc: doc::TraitDoc) -> doc::TraitDoc {
let doc = fold::default_seq_fold_trait(fold, doc);
doc::TraitDoc {
methods: do doc.methods.map |method| {
let (desc, sections) = sectionalize(copy method.desc);
doc::MethodDoc {
desc: desc,
sections: sections,
.. copy *method
}
},
.. doc
}
}
fn fold_impl(fold: &fold::Fold<()>, doc: doc::ImplDoc) -> doc::ImplDoc {
let doc = fold::default_seq_fold_impl(fold, doc);
doc::ImplDoc {
methods: do doc.methods.map |method| {
let (desc, sections) = sectionalize(copy method.desc);
doc::MethodDoc {
desc: desc,
sections: sections,
.. copy *method
}
},
.. doc
}
}
fn sectionalize(desc: Option<~str>) -> (Option<~str>, ~[doc::Section]) {
/*!
* Take a description of the form
*
* General text
*
* # Section header
*
* Section text
*
* # Section header
*
* Section text
*
* and remove each header and accompanying text into section records.
*/
if desc.is_none() {
return (None, ~[]);
}
let mut lines = ~[];
for str::each_line_any(*desc.get_ref()) |line| { lines.push(line.to_owned()); }
let mut new_desc = None::<~str>;
let mut current_section = None;
let mut sections = ~[];
for lines.each |line| {
match parse_header(copy *line) {
Some(header) => {
if current_section.is_some() {
                sections += ~[(&current_section).get()];
}
current_section = Some(doc::Section {
header: header,
body: ~""
});
}
None => {
match copy current_section {
Some(section) => {
current_section = Some(doc::Section {
body: section.body + ~"\n" + *line,
.. section
});
}
None => {
new_desc = match copy new_desc {
Some(desc) => {
Some(desc + ~"\n" + *line)
}
None => {
Some(copy *line)
}
};
}
}
}
}
}
if current_section.is_some() {
sections += ~[current_section.get()];
}
(new_desc, sections)
}
fn parse_header(line: ~str) -> Option<~str> {
if str::starts_with(line, ~"# ") {
Some(str::slice(line, 2u, str::len(line)).to_owned())
} else {
None
}
}
#[test]
fn should_create_section_headers() {
let doc = test::mk_doc(
~"#[doc = \"\
# Header\n\
Body\"]\
mod a {
}");
assert!(str::contains(
doc.cratemod().mods()[0].item.sections[0].header,
~"Header"));
}
#[test]
fn | () {
let doc = test::mk_doc(
~"#[doc = \"\
# Header\n\
Body\"]\
mod a {
}");
assert!(str::contains(
doc.cratemod().mods()[0].item.sections[0].body,
~"Body"));
}
#[test]
fn should_not_create_sections_from_indented_headers() {
let doc = test::mk_doc(
~"#[doc = \"\n\
Text\n # Header\n\
Body\"]\
mod a {
}");
assert!(vec::is_empty(doc.cratemod().mods()[0].item.sections));
}
#[test]
fn should_remove_section_text_from_main_desc() {
let doc = test::mk_doc(
~"#[doc = \"\
Description\n\n\
# Header\n\
Body\"]\
mod a {
}");
assert!(!str::contains(
doc.cratemod().mods()[0].desc().get(),
~"Header"));
assert!(!str::contains(
doc.cratemod().mods()[0].desc().get(),
~"Body"));
}
#[test]
fn should_eliminate_desc_if_it_is_just_whitespace() {
let doc = test::mk_doc(
~"#[doc = \"\
# Header\n\
Body\"]\
mod a {
}");
assert!(doc.cratemod().mods()[0].desc() == None);
}
#[test]
fn should_sectionalize_trait_methods() {
let doc = test::mk_doc(
~"trait i {
#[doc = \"\
# Header\n\
Body\"]\
fn a(); }");
assert!(doc.cratemod().traits()[0].methods[0].sections.len() == 1u);
}
#[test]
fn should_sectionalize_impl_methods() {
let doc = test::mk_doc(
~"impl bool {
#[doc = \"\
# Header\n\
Body\"]\
fn a() { } }");
assert!(doc.cratemod().impls()[0].methods[0].sections.len() == 1u);
}
#[cfg(test)]
pub mod test {
use astsrv;
use attr_pass;
use doc;
use extract;
use sectionalize_pass::run;
pub fn mk_doc(source: ~str) -> doc::Doc {
do astsrv::from_str(copy source) |srv| {
let doc = extract::from_srv(srv.clone(), ~"");
let doc = (attr_pass::mk_pass().f)(srv.clone(), doc);
run(srv.clone(), doc)
}
}
}
| should_create_section_bodies | identifier_name |
sectionalize_pass.rs | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Breaks rustdocs into sections according to their headers
use core::prelude::*;
use astsrv;
use doc::ItemUtils;
use doc;
use fold::Fold;
use fold;
use pass::Pass;
use core::str;
pub fn mk_pass() -> Pass {
Pass {
name: ~"sectionalize",
f: run
}
}
pub fn run(_srv: astsrv::Srv, doc: doc::Doc) -> doc::Doc {
let fold = Fold {
fold_item: fold_item,
fold_trait: fold_trait,
fold_impl: fold_impl,
.. fold::default_any_fold(())
};
(fold.fold_doc)(&fold, doc)
}
fn fold_item(fold: &fold::Fold<()>, doc: doc::ItemDoc) -> doc::ItemDoc {
let doc = fold::default_seq_fold_item(fold, doc);
let (desc, sections) = sectionalize(copy doc.desc);
doc::ItemDoc {
desc: desc,
sections: sections,
.. doc
}
}
fn fold_trait(fold: &fold::Fold<()>, doc: doc::TraitDoc) -> doc::TraitDoc {
let doc = fold::default_seq_fold_trait(fold, doc);
doc::TraitDoc {
methods: do doc.methods.map |method| {
let (desc, sections) = sectionalize(copy method.desc);
doc::MethodDoc {
desc: desc,
sections: sections,
.. copy *method
}
},
.. doc
}
}
fn fold_impl(fold: &fold::Fold<()>, doc: doc::ImplDoc) -> doc::ImplDoc {
let doc = fold::default_seq_fold_impl(fold, doc);
doc::ImplDoc {
methods: do doc.methods.map |method| {
let (desc, sections) = sectionalize(copy method.desc);
doc::MethodDoc {
desc: desc,
sections: sections,
.. copy *method
}
},
.. doc
}
}
fn sectionalize(desc: Option<~str>) -> (Option<~str>, ~[doc::Section]) {
/*!
* Take a description of the form
*
* General text
*
* # Section header
*
* Section text
*
* # Section header
*
* Section text
*
* and remove each header and accompanying text into section records.
*/
if desc.is_none() {
return (None, ~[]);
}
let mut lines = ~[];
for str::each_line_any(*desc.get_ref()) |line| { lines.push(line.to_owned()); }
let mut new_desc = None::<~str>;
let mut current_section = None;
let mut sections = ~[];
for lines.each |line| {
match parse_header(copy *line) {
Some(header) => {
if current_section.is_some() {
sections += ~[(&current_section).get()];
}
current_section = Some(doc::Section {
header: header,
body: ~""
});
}
None => {
match copy current_section {
Some(section) => {
current_section = Some(doc::Section {
body: section.body + ~"\n" + *line,
.. section
});
}
None => {
new_desc = match copy new_desc {
Some(desc) => {
Some(desc + ~"\n" + *line)
}
None => {
Some(copy *line)
}
};
}
}
}
}
}
if current_section.is_some() {
sections += ~[current_section.get()];
}
(new_desc, sections)
}
fn parse_header(line: ~str) -> Option<~str> {
if str::starts_with(line, ~"# ") {
Some(str::slice(line, 2u, str::len(line)).to_owned())
} else {
None
}
}
#[test]
fn should_create_section_headers() {
let doc = test::mk_doc(
~"#[doc = \"\
# Header\n\
Body\"]\
mod a {
}");
assert!(str::contains(
doc.cratemod().mods()[0].item.sections[0].header,
~"Header"));
}
#[test]
fn should_create_section_bodies() {
let doc = test::mk_doc(
~"#[doc = \"\
# Header\n\
Body\"]\
mod a {
}");
assert!(str::contains(
doc.cratemod().mods()[0].item.sections[0].body,
~"Body"));
}
#[test]
fn should_not_create_sections_from_indented_headers() {
let doc = test::mk_doc(
~"#[doc = \"\n\
Text\n # Header\n\
Body\"]\
mod a {
}");
assert!(vec::is_empty(doc.cratemod().mods()[0].item.sections));
}
#[test]
fn should_remove_section_text_from_main_desc() {
let doc = test::mk_doc(
~"#[doc = \"\
Description\n\n\
# Header\n\
Body\"]\
mod a {
}");
assert!(!str::contains(
doc.cratemod().mods()[0].desc().get(),
~"Header"));
assert!(!str::contains(
doc.cratemod().mods()[0].desc().get(),
~"Body"));
}
#[test]
fn should_eliminate_desc_if_it_is_just_whitespace() {
let doc = test::mk_doc(
~"#[doc = \"\
# Header\n\
Body\"]\
mod a {
}");
assert!(doc.cratemod().mods()[0].desc() == None);
}
#[test]
fn should_sectionalize_trait_methods() {
let doc = test::mk_doc(
~"trait i {
#[doc = \"\
# Header\n\
Body\"]\
fn a(); }");
assert!(doc.cratemod().traits()[0].methods[0].sections.len() == 1u);
}
#[test]
fn should_sectionalize_impl_methods() {
let doc = test::mk_doc(
~"impl bool {
#[doc = \"\
# Header\n\
Body\"]\
fn a() | }");
assert!(doc.cratemod().impls()[0].methods[0].sections.len() == 1u);
}
#[cfg(test)]
pub mod test {
use astsrv;
use attr_pass;
use doc;
use extract;
use sectionalize_pass::run;
pub fn mk_doc(source: ~str) -> doc::Doc {
do astsrv::from_str(copy source) |srv| {
let doc = extract::from_srv(srv.clone(), ~"");
let doc = (attr_pass::mk_pass().f)(srv.clone(), doc);
run(srv.clone(), doc)
}
}
}
| { } | identifier_body |
sectionalize_pass.rs | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Breaks rustdocs into sections according to their headers
use core::prelude::*;
use astsrv;
use doc::ItemUtils;
use doc;
use fold::Fold;
use fold;
use pass::Pass;
use core::str;
pub fn mk_pass() -> Pass {
Pass {
name: ~"sectionalize",
f: run
}
}
pub fn run(_srv: astsrv::Srv, doc: doc::Doc) -> doc::Doc {
let fold = Fold {
fold_item: fold_item,
fold_trait: fold_trait,
fold_impl: fold_impl,
.. fold::default_any_fold(())
};
(fold.fold_doc)(&fold, doc)
}
fn fold_item(fold: &fold::Fold<()>, doc: doc::ItemDoc) -> doc::ItemDoc {
let doc = fold::default_seq_fold_item(fold, doc);
let (desc, sections) = sectionalize(copy doc.desc);
doc::ItemDoc {
desc: desc,
sections: sections,
.. doc
}
}
fn fold_trait(fold: &fold::Fold<()>, doc: doc::TraitDoc) -> doc::TraitDoc {
let doc = fold::default_seq_fold_trait(fold, doc);
doc::TraitDoc {
methods: do doc.methods.map |method| {
let (desc, sections) = sectionalize(copy method.desc);
doc::MethodDoc {
desc: desc,
sections: sections,
.. copy *method
}
},
.. doc
}
}
fn fold_impl(fold: &fold::Fold<()>, doc: doc::ImplDoc) -> doc::ImplDoc {
let doc = fold::default_seq_fold_impl(fold, doc);
doc::ImplDoc {
methods: do doc.methods.map |method| {
let (desc, sections) = sectionalize(copy method.desc);
doc::MethodDoc {
desc: desc,
sections: sections,
.. copy *method
}
},
.. doc
}
}
fn sectionalize(desc: Option<~str>) -> (Option<~str>, ~[doc::Section]) {
/*!
* Take a description of the form
*
* General text
*
* # Section header
*
* Section text
*
* # Section header
*
* Section text
*
* and remove each header and accompanying text into section records.
*/
if desc.is_none() {
return (None, ~[]);
}
let mut lines = ~[];
for str::each_line_any(*desc.get_ref()) |line| { lines.push(line.to_owned()); }
let mut new_desc = None::<~str>;
let mut current_section = None;
let mut sections = ~[];
for lines.each |line| {
match parse_header(copy *line) {
Some(header) => {
if current_section.is_some() {
sections += ~[(&current_section).get()];
}
current_section = Some(doc::Section {
header: header,
body: ~""
});
}
None => {
match copy current_section {
Some(section) => {
current_section = Some(doc::Section { | new_desc = match copy new_desc {
Some(desc) => {
Some(desc + ~"\n" + *line)
}
None => {
Some(copy *line)
}
};
}
}
}
}
}
if current_section.is_some() {
sections += ~[current_section.get()];
}
(new_desc, sections)
}
fn parse_header(line: ~str) -> Option<~str> {
if str::starts_with(line, ~"# ") {
Some(str::slice(line, 2u, str::len(line)).to_owned())
} else {
None
}
}
#[test]
fn should_create_section_headers() {
let doc = test::mk_doc(
~"#[doc = \"\
# Header\n\
Body\"]\
mod a {
}");
assert!(str::contains(
doc.cratemod().mods()[0].item.sections[0].header,
~"Header"));
}
#[test]
fn should_create_section_bodies() {
let doc = test::mk_doc(
~"#[doc = \"\
# Header\n\
Body\"]\
mod a {
}");
assert!(str::contains(
doc.cratemod().mods()[0].item.sections[0].body,
~"Body"));
}
#[test]
fn should_not_create_sections_from_indented_headers() {
let doc = test::mk_doc(
~"#[doc = \"\n\
Text\n # Header\n\
Body\"]\
mod a {
}");
assert!(vec::is_empty(doc.cratemod().mods()[0].item.sections));
}
#[test]
fn should_remove_section_text_from_main_desc() {
let doc = test::mk_doc(
~"#[doc = \"\
Description\n\n\
# Header\n\
Body\"]\
mod a {
}");
assert!(!str::contains(
doc.cratemod().mods()[0].desc().get(),
~"Header"));
assert!(!str::contains(
doc.cratemod().mods()[0].desc().get(),
~"Body"));
}
#[test]
fn should_eliminate_desc_if_it_is_just_whitespace() {
let doc = test::mk_doc(
~"#[doc = \"\
# Header\n\
Body\"]\
mod a {
}");
assert!(doc.cratemod().mods()[0].desc() == None);
}
#[test]
fn should_sectionalize_trait_methods() {
let doc = test::mk_doc(
~"trait i {
#[doc = \"\
# Header\n\
Body\"]\
fn a(); }");
assert!(doc.cratemod().traits()[0].methods[0].sections.len() == 1u);
}
#[test]
fn should_sectionalize_impl_methods() {
let doc = test::mk_doc(
~"impl bool {
#[doc = \"\
# Header\n\
Body\"]\
fn a() { } }");
assert!(doc.cratemod().impls()[0].methods[0].sections.len() == 1u);
}
#[cfg(test)]
pub mod test {
use astsrv;
use attr_pass;
use doc;
use extract;
use sectionalize_pass::run;
pub fn mk_doc(source: ~str) -> doc::Doc {
do astsrv::from_str(copy source) |srv| {
let doc = extract::from_srv(srv.clone(), ~"");
let doc = (attr_pass::mk_pass().f)(srv.clone(), doc);
run(srv.clone(), doc)
}
}
} | body: section.body + ~"\n" + *line,
.. section
});
}
None => { | random_line_split |
cargo_test.rs | use std::ffi::{OsString, OsStr};
use std::path::Path;
use ops::{self, ExecEngine, ProcessEngine, Compilation};
use util::{self, CargoResult, CargoTestError, ProcessError};
pub struct TestOptions<'a> {
pub compile_opts: ops::CompileOptions<'a>,
pub no_run: bool,
pub no_fail_fast: bool,
}
#[allow(deprecated)] // connect => join in 1.3
pub fn run_tests(manifest_path: &Path,
options: &TestOptions,
test_args: &[String]) -> CargoResult<Option<CargoTestError>> {
let compilation = try!(compile_tests(manifest_path, options));
if options.no_run {
return Ok(None)
}
let mut errors = try!(run_unit_tests(options, test_args, &compilation));
// If we have an error and want to fail fast, return
if errors.len() > 0 && !options.no_fail_fast {
return Ok(Some(CargoTestError::new(errors)))
}
// If a specific test was requested or we're not running any tests at all,
// don't run any doc tests.
if let ops::CompileFilter::Only { .. } = options.compile_opts.filter {
match errors.len() {
0 => return Ok(None),
_ => return Ok(Some(CargoTestError::new(errors)))
}
}
errors.extend(try!(run_doc_tests(options, test_args, &compilation)));
if errors.len() == 0 {
Ok(None)
} else {
Ok(Some(CargoTestError::new(errors)))
}
}
pub fn run_benches(manifest_path: &Path,
options: &TestOptions,
args: &[String]) -> CargoResult<Option<CargoTestError>> {
let mut args = args.to_vec();
args.push("--bench".to_string());
let compilation = try!(compile_tests(manifest_path, options));
let errors = try!(run_unit_tests(options, &args, &compilation));
match errors.len() {
0 => Ok(None),
_ => Ok(Some(CargoTestError::new(errors))),
}
}
fn compile_tests<'a>(manifest_path: &Path,
options: &TestOptions<'a>)
-> CargoResult<Compilation<'a>> {
let mut compilation = try!(ops::compile(manifest_path,
&options.compile_opts));
compilation.tests.sort_by(|a, b| {
(a.0.package_id(), &a.1).cmp(&(b.0.package_id(), &b.1))
});
Ok(compilation)
}
/// Run the unit and integration tests of a project.
fn run_unit_tests(options: &TestOptions,
test_args: &[String],
compilation: &Compilation)
-> CargoResult<Vec<ProcessError>> {
let config = options.compile_opts.config;
let cwd = options.compile_opts.config.cwd();
let mut errors = Vec::new();
for &(ref pkg, _, ref exe) in &compilation.tests {
let to_display = match util::without_prefix(exe, &cwd) {
Some(path) => path,
None => &**exe,
};
let mut cmd = try!(compilation.target_process(exe, pkg));
cmd.args(test_args);
try!(config.shell().concise(|shell| {
shell.status("Running", to_display.display().to_string())
}));
try!(config.shell().verbose(|shell| {
shell.status("Running", cmd.to_string())
}));
if let Err(e) = ExecEngine::exec(&mut ProcessEngine, cmd) {
errors.push(e);
if !options.no_fail_fast {
break
}
}
}
Ok(errors)
}
#[allow(deprecated)] // connect => join in 1.3
fn | (options: &TestOptions,
test_args: &[String],
compilation: &Compilation)
-> CargoResult<Vec<ProcessError>> {
let mut errors = Vec::new();
let config = options.compile_opts.config;
let libs = compilation.to_doc_test.iter().map(|package| {
(package, package.targets().iter().filter(|t| t.doctested())
.map(|t| (t.src_path(), t.name(), t.crate_name())))
});
for (package, tests) in libs {
for (lib, name, crate_name) in tests {
try!(config.shell().status("Doc-tests", name));
let mut p = try!(compilation.rustdoc_process(package));
p.arg("--test").arg(lib)
.arg("--crate-name").arg(&crate_name);
for &rust_dep in &[&compilation.deps_output, &compilation.root_output] {
let mut arg = OsString::from("dependency=");
arg.push(rust_dep);
p.arg("-L").arg(arg);
}
for native_dep in compilation.native_dirs.values() {
p.arg("-L").arg(native_dep);
}
if test_args.len() > 0 {
p.arg("--test-args").arg(&test_args.connect(" "));
}
for feat in compilation.features.iter() {
p.arg("--cfg").arg(&format!("feature=\"{}\"", feat));
}
for (_, libs) in compilation.libraries.iter() {
for &(ref target, ref lib) in libs.iter() {
// Note that we can *only* doctest rlib outputs here. A
// staticlib output cannot be linked by the compiler (it just
// doesn't do that). A dylib output, however, can be linked by
// the compiler, but will always fail. Currently all dylibs are
// built as "static dylibs" where the standard library is
// statically linked into the dylib. The doc tests fail,
// however, for now as they try to link the standard library
// dynamically as well, causing problems. As a result we only
// pass `--extern` for rlib deps and skip out on all other
// artifacts.
if lib.extension() != Some(OsStr::new("rlib")) &&
!target.for_host() {
continue
}
let mut arg = OsString::from(target.crate_name());
arg.push("=");
arg.push(lib);
p.arg("--extern").arg(&arg);
}
}
try!(config.shell().verbose(|shell| {
shell.status("Running", p.to_string())
}));
if let Err(e) = ExecEngine::exec(&mut ProcessEngine, p) {
errors.push(e);
if !options.no_fail_fast {
return Ok(errors);
}
}
}
}
Ok(errors)
}
| run_doc_tests | identifier_name |
cargo_test.rs | use std::ffi::{OsString, OsStr};
use std::path::Path;
use ops::{self, ExecEngine, ProcessEngine, Compilation};
use util::{self, CargoResult, CargoTestError, ProcessError};
pub struct TestOptions<'a> {
pub compile_opts: ops::CompileOptions<'a>,
pub no_run: bool,
pub no_fail_fast: bool,
}
#[allow(deprecated)] // connect => join in 1.3
pub fn run_tests(manifest_path: &Path,
options: &TestOptions,
test_args: &[String]) -> CargoResult<Option<CargoTestError>> {
let compilation = try!(compile_tests(manifest_path, options));
if options.no_run {
return Ok(None)
}
let mut errors = try!(run_unit_tests(options, test_args, &compilation));
// If we have an error and want to fail fast, return
if errors.len() > 0 && !options.no_fail_fast {
return Ok(Some(CargoTestError::new(errors)))
}
// If a specific test was requested or we're not running any tests at all,
// don't run any doc tests.
if let ops::CompileFilter::Only { .. } = options.compile_opts.filter {
match errors.len() {
0 => return Ok(None),
_ => return Ok(Some(CargoTestError::new(errors)))
}
}
errors.extend(try!(run_doc_tests(options, test_args, &compilation)));
if errors.len() == 0 {
Ok(None)
} else {
Ok(Some(CargoTestError::new(errors)))
}
}
pub fn run_benches(manifest_path: &Path,
options: &TestOptions,
args: &[String]) -> CargoResult<Option<CargoTestError>> {
let mut args = args.to_vec();
args.push("--bench".to_string());
let compilation = try!(compile_tests(manifest_path, options));
let errors = try!(run_unit_tests(options, &args, &compilation));
match errors.len() {
0 => Ok(None),
_ => Ok(Some(CargoTestError::new(errors))),
}
}
fn compile_tests<'a>(manifest_path: &Path,
options: &TestOptions<'a>)
-> CargoResult<Compilation<'a>> {
let mut compilation = try!(ops::compile(manifest_path,
&options.compile_opts));
compilation.tests.sort_by(|a, b| {
(a.0.package_id(), &a.1).cmp(&(b.0.package_id(), &b.1))
});
Ok(compilation)
}
/// Run the unit and integration tests of a project.
fn run_unit_tests(options: &TestOptions,
test_args: &[String],
compilation: &Compilation)
-> CargoResult<Vec<ProcessError>> {
let config = options.compile_opts.config;
let cwd = options.compile_opts.config.cwd();
let mut errors = Vec::new();
for &(ref pkg, _, ref exe) in &compilation.tests {
let to_display = match util::without_prefix(exe, &cwd) {
Some(path) => path,
None => &**exe,
};
let mut cmd = try!(compilation.target_process(exe, pkg));
cmd.args(test_args);
try!(config.shell().concise(|shell| {
shell.status("Running", to_display.display().to_string())
}));
try!(config.shell().verbose(|shell| {
shell.status("Running", cmd.to_string())
}));
if let Err(e) = ExecEngine::exec(&mut ProcessEngine, cmd) {
errors.push(e);
if !options.no_fail_fast {
break
}
}
}
Ok(errors)
}
#[allow(deprecated)] // connect => join in 1.3
fn run_doc_tests(options: &TestOptions,
test_args: &[String],
compilation: &Compilation)
-> CargoResult<Vec<ProcessError>> {
let mut errors = Vec::new();
let config = options.compile_opts.config;
let libs = compilation.to_doc_test.iter().map(|package| {
(package, package.targets().iter().filter(|t| t.doctested())
.map(|t| (t.src_path(), t.name(), t.crate_name())))
});
for (package, tests) in libs {
for (lib, name, crate_name) in tests {
try!(config.shell().status("Doc-tests", name));
let mut p = try!(compilation.rustdoc_process(package));
p.arg("--test").arg(lib)
.arg("--crate-name").arg(&crate_name);
for &rust_dep in &[&compilation.deps_output, &compilation.root_output] {
let mut arg = OsString::from("dependency=");
arg.push(rust_dep);
p.arg("-L").arg(arg);
}
for native_dep in compilation.native_dirs.values() {
p.arg("-L").arg(native_dep);
}
if test_args.len() > 0 |
for feat in compilation.features.iter() {
p.arg("--cfg").arg(&format!("feature=\"{}\"", feat));
}
for (_, libs) in compilation.libraries.iter() {
for &(ref target, ref lib) in libs.iter() {
// Note that we can *only* doctest rlib outputs here. A
// staticlib output cannot be linked by the compiler (it just
// doesn't do that). A dylib output, however, can be linked by
// the compiler, but will always fail. Currently all dylibs are
// built as "static dylibs" where the standard library is
// statically linked into the dylib. The doc tests fail,
// however, for now as they try to link the standard library
// dynamically as well, causing problems. As a result we only
// pass `--extern` for rlib deps and skip out on all other
// artifacts.
if lib.extension() != Some(OsStr::new("rlib")) &&
!target.for_host() {
continue
}
let mut arg = OsString::from(target.crate_name());
arg.push("=");
arg.push(lib);
p.arg("--extern").arg(&arg);
}
}
try!(config.shell().verbose(|shell| {
shell.status("Running", p.to_string())
}));
if let Err(e) = ExecEngine::exec(&mut ProcessEngine, p) {
errors.push(e);
if !options.no_fail_fast {
return Ok(errors);
}
}
}
}
Ok(errors)
}
| {
p.arg("--test-args").arg(&test_args.connect(" "));
} | conditional_block |
cargo_test.rs | use std::ffi::{OsString, OsStr};
use std::path::Path;
use ops::{self, ExecEngine, ProcessEngine, Compilation};
use util::{self, CargoResult, CargoTestError, ProcessError};
pub struct TestOptions<'a> {
pub compile_opts: ops::CompileOptions<'a>,
pub no_run: bool,
pub no_fail_fast: bool,
}
#[allow(deprecated)] // connect => join in 1.3
pub fn run_tests(manifest_path: &Path,
options: &TestOptions,
test_args: &[String]) -> CargoResult<Option<CargoTestError>> {
let compilation = try!(compile_tests(manifest_path, options));
if options.no_run {
return Ok(None)
}
let mut errors = try!(run_unit_tests(options, test_args, &compilation));
// If we have an error and want to fail fast, return
if errors.len() > 0 && !options.no_fail_fast {
return Ok(Some(CargoTestError::new(errors)))
}
// If a specific test was requested or we're not running any tests at all,
// don't run any doc tests.
if let ops::CompileFilter::Only { .. } = options.compile_opts.filter {
match errors.len() {
0 => return Ok(None),
_ => return Ok(Some(CargoTestError::new(errors)))
}
}
errors.extend(try!(run_doc_tests(options, test_args, &compilation)));
if errors.len() == 0 {
Ok(None)
} else {
Ok(Some(CargoTestError::new(errors)))
}
}
pub fn run_benches(manifest_path: &Path,
options: &TestOptions,
args: &[String]) -> CargoResult<Option<CargoTestError>> {
let mut args = args.to_vec();
args.push("--bench".to_string());
let compilation = try!(compile_tests(manifest_path, options));
let errors = try!(run_unit_tests(options, &args, &compilation));
match errors.len() {
0 => Ok(None),
_ => Ok(Some(CargoTestError::new(errors))),
}
}
fn compile_tests<'a>(manifest_path: &Path,
options: &TestOptions<'a>)
-> CargoResult<Compilation<'a>> |
/// Run the unit and integration tests of a project.
fn run_unit_tests(options: &TestOptions,
test_args: &[String],
compilation: &Compilation)
-> CargoResult<Vec<ProcessError>> {
let config = options.compile_opts.config;
let cwd = options.compile_opts.config.cwd();
let mut errors = Vec::new();
for &(ref pkg, _, ref exe) in &compilation.tests {
let to_display = match util::without_prefix(exe, &cwd) {
Some(path) => path,
None => &**exe,
};
let mut cmd = try!(compilation.target_process(exe, pkg));
cmd.args(test_args);
try!(config.shell().concise(|shell| {
shell.status("Running", to_display.display().to_string())
}));
try!(config.shell().verbose(|shell| {
shell.status("Running", cmd.to_string())
}));
if let Err(e) = ExecEngine::exec(&mut ProcessEngine, cmd) {
errors.push(e);
if !options.no_fail_fast {
break
}
}
}
Ok(errors)
}
#[allow(deprecated)] // connect => join in 1.3
fn run_doc_tests(options: &TestOptions,
test_args: &[String],
compilation: &Compilation)
-> CargoResult<Vec<ProcessError>> {
let mut errors = Vec::new();
let config = options.compile_opts.config;
let libs = compilation.to_doc_test.iter().map(|package| {
(package, package.targets().iter().filter(|t| t.doctested())
.map(|t| (t.src_path(), t.name(), t.crate_name())))
});
for (package, tests) in libs {
for (lib, name, crate_name) in tests {
try!(config.shell().status("Doc-tests", name));
let mut p = try!(compilation.rustdoc_process(package));
p.arg("--test").arg(lib)
.arg("--crate-name").arg(&crate_name);
for &rust_dep in &[&compilation.deps_output, &compilation.root_output] {
let mut arg = OsString::from("dependency=");
arg.push(rust_dep);
p.arg("-L").arg(arg);
}
for native_dep in compilation.native_dirs.values() {
p.arg("-L").arg(native_dep);
}
if test_args.len() > 0 {
p.arg("--test-args").arg(&test_args.connect(" "));
}
for feat in compilation.features.iter() {
p.arg("--cfg").arg(&format!("feature=\"{}\"", feat));
}
for (_, libs) in compilation.libraries.iter() {
for &(ref target, ref lib) in libs.iter() {
// Note that we can *only* doctest rlib outputs here. A
// staticlib output cannot be linked by the compiler (it just
// doesn't do that). A dylib output, however, can be linked by
// the compiler, but will always fail. Currently all dylibs are
// built as "static dylibs" where the standard library is
// statically linked into the dylib. The doc tests fail,
// however, for now as they try to link the standard library
// dynamically as well, causing problems. As a result we only
// pass `--extern` for rlib deps and skip out on all other
// artifacts.
if lib.extension() != Some(OsStr::new("rlib")) &&
!target.for_host() {
continue
}
let mut arg = OsString::from(target.crate_name());
arg.push("=");
arg.push(lib);
p.arg("--extern").arg(&arg);
}
}
try!(config.shell().verbose(|shell| {
shell.status("Running", p.to_string())
}));
if let Err(e) = ExecEngine::exec(&mut ProcessEngine, p) {
errors.push(e);
if !options.no_fail_fast {
return Ok(errors);
}
}
}
}
Ok(errors)
}
| {
let mut compilation = try!(ops::compile(manifest_path,
&options.compile_opts));
compilation.tests.sort_by(|a, b| {
(a.0.package_id(), &a.1).cmp(&(b.0.package_id(), &b.1))
});
Ok(compilation)
} | identifier_body |
cargo_test.rs | use std::ffi::{OsString, OsStr};
use std::path::Path;
use ops::{self, ExecEngine, ProcessEngine, Compilation};
use util::{self, CargoResult, CargoTestError, ProcessError};
pub struct TestOptions<'a> {
pub compile_opts: ops::CompileOptions<'a>,
pub no_run: bool,
pub no_fail_fast: bool,
}
#[allow(deprecated)] // connect => join in 1.3
pub fn run_tests(manifest_path: &Path,
options: &TestOptions,
test_args: &[String]) -> CargoResult<Option<CargoTestError>> {
let compilation = try!(compile_tests(manifest_path, options));
if options.no_run {
return Ok(None)
}
let mut errors = try!(run_unit_tests(options, test_args, &compilation));
// If we have an error and want to fail fast, return
if errors.len() > 0 && !options.no_fail_fast {
return Ok(Some(CargoTestError::new(errors)))
}
// If a specific test was requested or we're not running any tests at all,
// don't run any doc tests.
if let ops::CompileFilter::Only { .. } = options.compile_opts.filter {
match errors.len() {
0 => return Ok(None),
_ => return Ok(Some(CargoTestError::new(errors)))
}
}
errors.extend(try!(run_doc_tests(options, test_args, &compilation)));
if errors.len() == 0 {
Ok(None)
} else {
Ok(Some(CargoTestError::new(errors)))
}
}
pub fn run_benches(manifest_path: &Path,
options: &TestOptions,
args: &[String]) -> CargoResult<Option<CargoTestError>> {
let mut args = args.to_vec();
args.push("--bench".to_string());
let compilation = try!(compile_tests(manifest_path, options));
let errors = try!(run_unit_tests(options, &args, &compilation));
match errors.len() {
0 => Ok(None),
_ => Ok(Some(CargoTestError::new(errors))),
}
}
fn compile_tests<'a>(manifest_path: &Path,
options: &TestOptions<'a>)
-> CargoResult<Compilation<'a>> {
let mut compilation = try!(ops::compile(manifest_path,
&options.compile_opts));
compilation.tests.sort_by(|a, b| {
(a.0.package_id(), &a.1).cmp(&(b.0.package_id(), &b.1))
});
Ok(compilation)
}
/// Run the unit and integration tests of a project.
fn run_unit_tests(options: &TestOptions,
test_args: &[String],
compilation: &Compilation)
-> CargoResult<Vec<ProcessError>> {
let config = options.compile_opts.config;
let cwd = options.compile_opts.config.cwd();
let mut errors = Vec::new();
for &(ref pkg, _, ref exe) in &compilation.tests {
let to_display = match util::without_prefix(exe, &cwd) {
Some(path) => path,
None => &**exe,
};
let mut cmd = try!(compilation.target_process(exe, pkg));
cmd.args(test_args);
try!(config.shell().concise(|shell| {
shell.status("Running", to_display.display().to_string())
}));
try!(config.shell().verbose(|shell| {
shell.status("Running", cmd.to_string())
}));
if let Err(e) = ExecEngine::exec(&mut ProcessEngine, cmd) {
errors.push(e);
if !options.no_fail_fast {
break
}
}
}
Ok(errors)
}
#[allow(deprecated)] // connect => join in 1.3
fn run_doc_tests(options: &TestOptions,
test_args: &[String],
compilation: &Compilation)
-> CargoResult<Vec<ProcessError>> {
let mut errors = Vec::new();
let config = options.compile_opts.config;
let libs = compilation.to_doc_test.iter().map(|package| {
(package, package.targets().iter().filter(|t| t.doctested())
.map(|t| (t.src_path(), t.name(), t.crate_name())))
});
for (package, tests) in libs {
for (lib, name, crate_name) in tests {
try!(config.shell().status("Doc-tests", name));
let mut p = try!(compilation.rustdoc_process(package));
p.arg("--test").arg(lib)
.arg("--crate-name").arg(&crate_name);
for &rust_dep in &[&compilation.deps_output, &compilation.root_output] { | arg.push(rust_dep);
p.arg("-L").arg(arg);
}
for native_dep in compilation.native_dirs.values() {
p.arg("-L").arg(native_dep);
}
if test_args.len() > 0 {
p.arg("--test-args").arg(&test_args.connect(" "));
}
for feat in compilation.features.iter() {
p.arg("--cfg").arg(&format!("feature=\"{}\"", feat));
}
for (_, libs) in compilation.libraries.iter() {
for &(ref target, ref lib) in libs.iter() {
// Note that we can *only* doctest rlib outputs here. A
// staticlib output cannot be linked by the compiler (it just
// doesn't do that). A dylib output, however, can be linked by
// the compiler, but will always fail. Currently all dylibs are
// built as "static dylibs" where the standard library is
// statically linked into the dylib. The doc tests fail,
// however, for now as they try to link the standard library
// dynamically as well, causing problems. As a result we only
// pass `--extern` for rlib deps and skip out on all other
// artifacts.
if lib.extension() != Some(OsStr::new("rlib")) &&
!target.for_host() {
continue
}
let mut arg = OsString::from(target.crate_name());
arg.push("=");
arg.push(lib);
p.arg("--extern").arg(&arg);
}
}
try!(config.shell().verbose(|shell| {
shell.status("Running", p.to_string())
}));
if let Err(e) = ExecEngine::exec(&mut ProcessEngine, p) {
errors.push(e);
if !options.no_fail_fast {
return Ok(errors);
}
}
}
}
Ok(errors)
} | let mut arg = OsString::from("dependency="); | random_line_split |
main.rs | // #![deny(warnings)]
extern crate futures;
extern crate hyper;
extern crate hyper_tls;
extern crate tokio_core;
#[macro_use]
extern crate clap;
use clap::{Arg, App};
use std::io::{self, Write};
use futures::{Future, Stream};
use tokio_core::reactor::Core;
fn | () {
let matches = App::new(crate_name!())
.version(crate_version!())
.author(crate_authors!())
.about(crate_description!())
.arg(Arg::with_name("URL")
.help("The URL to reach")
.required(true)
.index(1))
.arg(Arg::with_name("v")
.short("v")
.multiple(true)
.help("Sets the level of verbosity"))
.get_matches();
let mut url = matches.value_of("URL").unwrap().parse::<hyper::Uri>().unwrap();
// TODO: this is really sloppy, need a better way to make uri. should i assume https?
if url.scheme() == None {
url = ("https://".to_string() + matches.value_of("URL").unwrap()).parse::<hyper::Uri>().unwrap();
}
if ! ( url.scheme() == Some("http") || url.scheme() == Some("https") ) {
println!("This example only works with 'http' URLs.");
return;
}
let mut core = Core::new().unwrap();
let handle = core.handle();
let client = hyper::Client::configure()
.connector(hyper_tls::HttpsConnector::new(4, &handle).unwrap())
.build(&handle);
let work = client.get(url).and_then(|res| {
if matches.occurrences_of("v") > 0 {
// TODO: 1.1 is hard coded for now
println!("> HTTP/1.1 {}", res.status());
// TODO: Should consider changing Display for hyper::Headers or using regex
println!("> {}", res.headers().to_string().replace("\n", "\n> "));
}
res.body().for_each(|chunk| {
io::stdout().write_all(&chunk).map_err(From::from)
})
}).map(|_| {
if matches.occurrences_of("v") > 0 {
println!("\n\nDone.");
}
});
core.run(work).unwrap();
}
| main | identifier_name |
main.rs | // #![deny(warnings)]
extern crate futures;
extern crate hyper;
extern crate hyper_tls;
extern crate tokio_core;
#[macro_use]
extern crate clap;
use clap::{Arg, App};
use std::io::{self, Write};
use futures::{Future, Stream};
use tokio_core::reactor::Core;
fn main() {
let matches = App::new(crate_name!())
.version(crate_version!())
.author(crate_authors!())
.about(crate_description!())
.arg(Arg::with_name("URL")
.help("The URL to reach")
.required(true)
.index(1))
.arg(Arg::with_name("v")
.short("v")
.multiple(true)
.help("Sets the level of verbosity"))
.get_matches();
let mut url = matches.value_of("URL").unwrap().parse::<hyper::Uri>().unwrap();
// TODO: this is really sloppy, need a better way to make uri. should i assume https?
if url.scheme() == None {
url = ("https://".to_string() + matches.value_of("URL").unwrap()).parse::<hyper::Uri>().unwrap();
}
if !(url.scheme() == Some("http") || url.scheme() == Some("https")) |
let mut core = Core::new().unwrap();
let handle = core.handle();
let client = hyper::Client::configure()
.connector(hyper_tls::HttpsConnector::new(4, &handle).unwrap())
.build(&handle);
let work = client.get(url).and_then(|res| {
if matches.occurrences_of("v") > 0 {
// TODO: 1.1 is hard coded for now
println!("> HTTP/1.1 {}", res.status());
// TODO: Should consider changing Display for hyper::Headers or using regex
println!("> {}", res.headers().to_string().replace("\n", "\n> "));
}
res.body().for_each(|chunk| {
io::stdout().write_all(&chunk).map_err(From::from)
})
}).map(|_| {
if matches.occurrences_of("v") > 0 {
println!("\n\nDone.");
}
});
core.run(work).unwrap();
}
| {
println!("This example only works with 'http' URLs.");
return;
} | conditional_block |
main.rs | // #![deny(warnings)]
extern crate futures;
extern crate hyper;
extern crate hyper_tls;
extern crate tokio_core;
#[macro_use]
extern crate clap;
use clap::{Arg, App};
use std::io::{self, Write};
use futures::{Future, Stream};
use tokio_core::reactor::Core;
fn main() | if !(url.scheme() == Some("http") || url.scheme() == Some("https")) {
println!("This example only works with 'http' URLs.");
return;
}
let mut core = Core::new().unwrap();
let handle = core.handle();
let client = hyper::Client::configure()
.connector(hyper_tls::HttpsConnector::new(4, &handle).unwrap())
.build(&handle);
let work = client.get(url).and_then(|res| {
if matches.occurrences_of("v") > 0 {
// TODO: 1.1 is hard coded for now
println!("> HTTP/1.1 {}", res.status());
// TODO: Should consider changing Display for hyper::Headers or using regex
println!("> {}", res.headers().to_string().replace("\n", "\n> "));
}
res.body().for_each(|chunk| {
io::stdout().write_all(&chunk).map_err(From::from)
})
}).map(|_| {
if matches.occurrences_of("v") > 0 {
println!("\n\nDone.");
}
});
core.run(work).unwrap();
}
| {
let matches = App::new(crate_name!())
.version(crate_version!())
.author(crate_authors!())
.about(crate_description!())
.arg(Arg::with_name("URL")
.help("The URL to reach")
.required(true)
.index(1))
.arg(Arg::with_name("v")
.short("v")
.multiple(true)
.help("Sets the level of verbosity"))
.get_matches();
let mut url = matches.value_of("URL").unwrap().parse::<hyper::Uri>().unwrap();
// TODO: this is really sloppy, need a better way to make uri. should i assume https?
if url.scheme() == None {
url = ("https://".to_string() + matches.value_of("URL").unwrap()).parse::<hyper::Uri>().unwrap();
} | identifier_body |
main.rs | // #![deny(warnings)]
extern crate futures;
extern crate hyper;
extern crate hyper_tls;
extern crate tokio_core;
#[macro_use]
extern crate clap;
use clap::{Arg, App};
use std::io::{self, Write};
use futures::{Future, Stream};
use tokio_core::reactor::Core;
fn main() {
let matches = App::new(crate_name!())
.version(crate_version!())
.author(crate_authors!())
.about(crate_description!())
.arg(Arg::with_name("URL")
.help("The URL to reach")
.required(true)
.index(1))
.arg(Arg::with_name("v")
.short("v")
.multiple(true)
.help("Sets the level of verbosity"))
.get_matches();
let mut url = matches.value_of("URL").unwrap().parse::<hyper::Uri>().unwrap();
// TODO: this is really sloppy, need a better way to make uri. should i assume https?
if url.scheme() == None {
url = ("https://".to_string() + matches.value_of("URL").unwrap()).parse::<hyper::Uri>().unwrap();
}
if !(url.scheme() == Some("http") || url.scheme() == Some("https")) {
println!("This example only works with 'http' URLs.");
return;
}
let mut core = Core::new().unwrap();
let handle = core.handle();
let client = hyper::Client::configure()
.connector(hyper_tls::HttpsConnector::new(4, &handle).unwrap())
.build(&handle);
let work = client.get(url).and_then(|res| {
if matches.occurrences_of("v") > 0 {
// TODO: 1.1 is hard coded for now
println!("> HTTP/1.1 {}", res.status());
// TODO: Should consider changing Display for hyper::Headers or using regex
println!("> {}", res.headers().to_string().replace("\n", "\n> "));
}
res.body().for_each(|chunk| {
io::stdout().write_all(&chunk).map_err(From::from)
}) | if matches.occurrences_of("v") > 0 {
println!("\n\nDone.");
}
});
core.run(work).unwrap();
} | }).map(|_| { | random_line_split |
loopback.rs | //! Serial loopback via USART1
#![feature(const_fn)]
#![feature(used)]
#![no_std]
extern crate blue_pill;
extern crate embedded_hal as hal;
// version = "0.2.3"
extern crate cortex_m_rt;
// version = "0.1.0"
#[macro_use]
extern crate cortex_m_rtfm as rtfm;
use blue_pill::serial::Event;
use blue_pill::time::Hertz;
use blue_pill::{Serial, stm32f103xx};
use hal::prelude::*;
use rtfm::{P0, P1, T0, T1, TMax};
use stm32f103xx::interrupt::USART1;
// CONFIGURATION
pub const BAUD_RATE: Hertz = Hertz(115_200);
// RESOURCES
peripherals!(stm32f103xx, {
AFIO: Peripheral {
ceiling: C0,
},
GPIOA: Peripheral {
ceiling: C0,
},
RCC: Peripheral {
ceiling: C0,
},
USART1: Peripheral {
ceiling: C1,
},
});
// INITIALIZATION PHASE
fn init(ref prio: P0, thr: &TMax) |
// IDLE LOOP
fn idle(_prio: P0, _thr: T0) -> ! {
// Sleep
loop {
rtfm::wfi();
}
}
// TASKS
tasks!(stm32f103xx, {
loopback: Task {
interrupt: USART1,
priority: P1,
enabled: true,
},
});
fn loopback(_task: USART1, ref prio: P1, ref thr: T1) {
let usart1 = USART1.access(prio, thr);
let serial = Serial(&*usart1);
let byte = serial.read().unwrap();
serial.write(byte).unwrap();
}
| {
let afio = &AFIO.access(prio, thr);
let gpioa = &GPIOA.access(prio, thr);
let rcc = &RCC.access(prio, thr);
let usart1 = USART1.access(prio, thr);
let serial = Serial(&*usart1);
serial.init(BAUD_RATE.invert(), afio, None, gpioa, rcc);
serial.listen(Event::Rxne);
} | identifier_body |
loopback.rs | //! Serial loopback via USART1
#![feature(const_fn)]
#![feature(used)]
#![no_std]
extern crate blue_pill;
extern crate embedded_hal as hal;
// version = "0.2.3"
extern crate cortex_m_rt;
// version = "0.1.0" | #[macro_use]
extern crate cortex_m_rtfm as rtfm;
use blue_pill::serial::Event;
use blue_pill::time::Hertz;
use blue_pill::{Serial, stm32f103xx};
use hal::prelude::*;
use rtfm::{P0, P1, T0, T1, TMax};
use stm32f103xx::interrupt::USART1;
// CONFIGURATION
pub const BAUD_RATE: Hertz = Hertz(115_200);
// RESOURCES
peripherals!(stm32f103xx, {
AFIO: Peripheral {
ceiling: C0,
},
GPIOA: Peripheral {
ceiling: C0,
},
RCC: Peripheral {
ceiling: C0,
},
USART1: Peripheral {
ceiling: C1,
},
});
// INITIALIZATION PHASE
fn init(ref prio: P0, thr: &TMax) {
let afio = &AFIO.access(prio, thr);
let gpioa = &GPIOA.access(prio, thr);
let rcc = &RCC.access(prio, thr);
let usart1 = USART1.access(prio, thr);
let serial = Serial(&*usart1);
serial.init(BAUD_RATE.invert(), afio, None, gpioa, rcc);
serial.listen(Event::Rxne);
}
// IDLE LOOP
fn idle(_prio: P0, _thr: T0) -> ! {
// Sleep
loop {
rtfm::wfi();
}
}
// TASKS
tasks!(stm32f103xx, {
loopback: Task {
interrupt: USART1,
priority: P1,
enabled: true,
},
});
fn loopback(_task: USART1, ref prio: P1, ref thr: T1) {
let usart1 = USART1.access(prio, thr);
let serial = Serial(&*usart1);
let byte = serial.read().unwrap();
serial.write(byte).unwrap();
} | random_line_split | |
loopback.rs | //! Serial loopback via USART1
#![feature(const_fn)]
#![feature(used)]
#![no_std]
extern crate blue_pill;
extern crate embedded_hal as hal;
// version = "0.2.3"
extern crate cortex_m_rt;
// version = "0.1.0"
#[macro_use]
extern crate cortex_m_rtfm as rtfm;
use blue_pill::serial::Event;
use blue_pill::time::Hertz;
use blue_pill::{Serial, stm32f103xx};
use hal::prelude::*;
use rtfm::{P0, P1, T0, T1, TMax};
use stm32f103xx::interrupt::USART1;
// CONFIGURATION
pub const BAUD_RATE: Hertz = Hertz(115_200);
// RESOURCES
peripherals!(stm32f103xx, {
AFIO: Peripheral {
ceiling: C0,
},
GPIOA: Peripheral {
ceiling: C0,
},
RCC: Peripheral {
ceiling: C0,
},
USART1: Peripheral {
ceiling: C1,
},
});
// INITIALIZATION PHASE
fn | (ref prio: P0, thr: &TMax) {
let afio = &AFIO.access(prio, thr);
let gpioa = &GPIOA.access(prio, thr);
let rcc = &RCC.access(prio, thr);
let usart1 = USART1.access(prio, thr);
let serial = Serial(&*usart1);
serial.init(BAUD_RATE.invert(), afio, None, gpioa, rcc);
serial.listen(Event::Rxne);
}
// IDLE LOOP
fn idle(_prio: P0, _thr: T0) -> ! {
// Sleep
loop {
rtfm::wfi();
}
}
// TASKS
tasks!(stm32f103xx, {
loopback: Task {
interrupt: USART1,
priority: P1,
enabled: true,
},
});
fn loopback(_task: USART1, ref prio: P1, ref thr: T1) {
let usart1 = USART1.access(prio, thr);
let serial = Serial(&*usart1);
let byte = serial.read().unwrap();
serial.write(byte).unwrap();
}
| init | identifier_name |
uievent.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use dom::bindings::codegen::Bindings::EventBinding::EventMethods;
use dom::bindings::codegen::Bindings::UIEventBinding;
use dom::bindings::codegen::Bindings::UIEventBinding::UIEventMethods;
use dom::bindings::error::Fallible;
use dom::bindings::global::GlobalRef;
use dom::bindings::inheritance::Castable;
use dom::bindings::js::Root;
use dom::bindings::js::{JS, MutNullableHeap, RootedReference};
use dom::bindings::reflector::reflect_dom_object;
use dom::event::{Event, EventBubbles, EventCancelable};
use dom::window::Window;
use std::cell::Cell;
use std::default::Default;
use string_cache::Atom;
use util::str::DOMString;
// https://dvcs.w3.org/hg/dom3events/raw-file/tip/html/DOM3-Events.html#interface-UIEvent
#[dom_struct]
pub struct UIEvent {
event: Event,
view: MutNullableHeap<JS<Window>>,
detail: Cell<i32>
}
impl UIEvent {
pub fn new_inherited() -> UIEvent {
UIEvent {
event: Event::new_inherited(),
view: Default::default(),
detail: Cell::new(0),
}
}
pub fn | (window: &Window) -> Root<UIEvent> {
reflect_dom_object(box UIEvent::new_inherited(),
GlobalRef::Window(window),
UIEventBinding::Wrap)
}
pub fn new(window: &Window,
type_: DOMString,
can_bubble: EventBubbles,
cancelable: EventCancelable,
view: Option<&Window>,
detail: i32) -> Root<UIEvent> {
let ev = UIEvent::new_uninitialized(window);
ev.InitUIEvent(type_, can_bubble == EventBubbles::Bubbles,
cancelable == EventCancelable::Cancelable, view, detail);
ev
}
pub fn Constructor(global: GlobalRef,
type_: DOMString,
init: &UIEventBinding::UIEventInit) -> Fallible<Root<UIEvent>> {
let bubbles = if init.parent.bubbles { EventBubbles::Bubbles } else { EventBubbles::DoesNotBubble };
let cancelable = if init.parent.cancelable {
EventCancelable::Cancelable
} else {
EventCancelable::NotCancelable
};
let event = UIEvent::new(global.as_window(), type_,
bubbles, cancelable,
init.view.r(), init.detail);
Ok(event)
}
}
impl UIEventMethods for UIEvent {
// https://w3c.github.io/uievents/#widl-UIEvent-view
fn GetView(&self) -> Option<Root<Window>> {
self.view.get()
}
// https://w3c.github.io/uievents/#widl-UIEvent-detail
fn Detail(&self) -> i32 {
self.detail.get()
}
// https://w3c.github.io/uievents/#widl-UIEvent-initUIEvent
fn InitUIEvent(&self,
type_: DOMString,
can_bubble: bool,
cancelable: bool,
view: Option<&Window>,
detail: i32) {
let event = self.upcast::<Event>();
if event.dispatching() {
return;
}
event.init_event(Atom::from(&*type_), can_bubble, cancelable);
self.view.set(view);
self.detail.set(detail);
}
// https://dom.spec.whatwg.org/#dom-event-istrusted
fn IsTrusted(&self) -> bool {
self.event.IsTrusted()
}
}
| new_uninitialized | identifier_name |
uievent.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use dom::bindings::codegen::Bindings::EventBinding::EventMethods;
use dom::bindings::codegen::Bindings::UIEventBinding;
use dom::bindings::codegen::Bindings::UIEventBinding::UIEventMethods;
use dom::bindings::error::Fallible;
use dom::bindings::global::GlobalRef;
use dom::bindings::inheritance::Castable;
use dom::bindings::js::Root;
use dom::bindings::js::{JS, MutNullableHeap, RootedReference};
use dom::bindings::reflector::reflect_dom_object;
use dom::event::{Event, EventBubbles, EventCancelable};
use dom::window::Window;
use std::cell::Cell;
use std::default::Default;
use string_cache::Atom;
use util::str::DOMString;
// https://dvcs.w3.org/hg/dom3events/raw-file/tip/html/DOM3-Events.html#interface-UIEvent
#[dom_struct]
pub struct UIEvent {
event: Event,
view: MutNullableHeap<JS<Window>>,
detail: Cell<i32>
}
impl UIEvent {
pub fn new_inherited() -> UIEvent {
UIEvent {
event: Event::new_inherited(),
view: Default::default(),
detail: Cell::new(0),
}
}
pub fn new_uninitialized(window: &Window) -> Root<UIEvent> {
reflect_dom_object(box UIEvent::new_inherited(),
GlobalRef::Window(window),
UIEventBinding::Wrap)
}
pub fn new(window: &Window,
type_: DOMString,
can_bubble: EventBubbles,
cancelable: EventCancelable,
view: Option<&Window>,
detail: i32) -> Root<UIEvent> {
let ev = UIEvent::new_uninitialized(window);
ev.InitUIEvent(type_, can_bubble == EventBubbles::Bubbles,
cancelable == EventCancelable::Cancelable, view, detail);
ev
}
pub fn Constructor(global: GlobalRef,
type_: DOMString,
init: &UIEventBinding::UIEventInit) -> Fallible<Root<UIEvent>> {
let bubbles = if init.parent.bubbles { EventBubbles::Bubbles } else { EventBubbles::DoesNotBubble }; | let cancelable = if init.parent.cancelable {
EventCancelable::Cancelable
} else {
EventCancelable::NotCancelable
};
let event = UIEvent::new(global.as_window(), type_,
bubbles, cancelable,
init.view.r(), init.detail);
Ok(event)
}
}
impl UIEventMethods for UIEvent {
// https://w3c.github.io/uievents/#widl-UIEvent-view
fn GetView(&self) -> Option<Root<Window>> {
self.view.get()
}
// https://w3c.github.io/uievents/#widl-UIEvent-detail
fn Detail(&self) -> i32 {
self.detail.get()
}
// https://w3c.github.io/uievents/#widl-UIEvent-initUIEvent
fn InitUIEvent(&self,
type_: DOMString,
can_bubble: bool,
cancelable: bool,
view: Option<&Window>,
detail: i32) {
let event = self.upcast::<Event>();
if event.dispatching() {
return;
}
event.init_event(Atom::from(&*type_), can_bubble, cancelable);
self.view.set(view);
self.detail.set(detail);
}
// https://dom.spec.whatwg.org/#dom-event-istrusted
fn IsTrusted(&self) -> bool {
self.event.IsTrusted()
}
} | random_line_split | |
uievent.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use dom::bindings::codegen::Bindings::EventBinding::EventMethods;
use dom::bindings::codegen::Bindings::UIEventBinding;
use dom::bindings::codegen::Bindings::UIEventBinding::UIEventMethods;
use dom::bindings::error::Fallible;
use dom::bindings::global::GlobalRef;
use dom::bindings::inheritance::Castable;
use dom::bindings::js::Root;
use dom::bindings::js::{JS, MutNullableHeap, RootedReference};
use dom::bindings::reflector::reflect_dom_object;
use dom::event::{Event, EventBubbles, EventCancelable};
use dom::window::Window;
use std::cell::Cell;
use std::default::Default;
use string_cache::Atom;
use util::str::DOMString;
// https://dvcs.w3.org/hg/dom3events/raw-file/tip/html/DOM3-Events.html#interface-UIEvent
#[dom_struct]
pub struct UIEvent {
event: Event,
view: MutNullableHeap<JS<Window>>,
detail: Cell<i32>
}
impl UIEvent {
pub fn new_inherited() -> UIEvent {
UIEvent {
event: Event::new_inherited(),
view: Default::default(),
detail: Cell::new(0),
}
}
pub fn new_uninitialized(window: &Window) -> Root<UIEvent> {
reflect_dom_object(box UIEvent::new_inherited(),
GlobalRef::Window(window),
UIEventBinding::Wrap)
}
pub fn new(window: &Window,
type_: DOMString,
can_bubble: EventBubbles,
cancelable: EventCancelable,
view: Option<&Window>,
detail: i32) -> Root<UIEvent> {
let ev = UIEvent::new_uninitialized(window);
ev.InitUIEvent(type_, can_bubble == EventBubbles::Bubbles,
cancelable == EventCancelable::Cancelable, view, detail);
ev
}
pub fn Constructor(global: GlobalRef,
type_: DOMString,
init: &UIEventBinding::UIEventInit) -> Fallible<Root<UIEvent>> {
let bubbles = if init.parent.bubbles { EventBubbles::Bubbles } else { EventBubbles::DoesNotBubble };
let cancelable = if init.parent.cancelable {
EventCancelable::Cancelable
} else {
EventCancelable::NotCancelable
};
let event = UIEvent::new(global.as_window(), type_,
bubbles, cancelable,
init.view.r(), init.detail);
Ok(event)
}
}
impl UIEventMethods for UIEvent {
// https://w3c.github.io/uievents/#widl-UIEvent-view
fn GetView(&self) -> Option<Root<Window>> {
self.view.get()
}
// https://w3c.github.io/uievents/#widl-UIEvent-detail
fn Detail(&self) -> i32 {
self.detail.get()
}
// https://w3c.github.io/uievents/#widl-UIEvent-initUIEvent
fn InitUIEvent(&self,
type_: DOMString,
can_bubble: bool,
cancelable: bool,
view: Option<&Window>,
detail: i32) {
let event = self.upcast::<Event>();
if event.dispatching() |
event.init_event(Atom::from(&*type_), can_bubble, cancelable);
self.view.set(view);
self.detail.set(detail);
}
// https://dom.spec.whatwg.org/#dom-event-istrusted
fn IsTrusted(&self) -> bool {
self.event.IsTrusted()
}
}
| {
return;
} | conditional_block |
uievent.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use dom::bindings::codegen::Bindings::EventBinding::EventMethods;
use dom::bindings::codegen::Bindings::UIEventBinding;
use dom::bindings::codegen::Bindings::UIEventBinding::UIEventMethods;
use dom::bindings::error::Fallible;
use dom::bindings::global::GlobalRef;
use dom::bindings::inheritance::Castable;
use dom::bindings::js::Root;
use dom::bindings::js::{JS, MutNullableHeap, RootedReference};
use dom::bindings::reflector::reflect_dom_object;
use dom::event::{Event, EventBubbles, EventCancelable};
use dom::window::Window;
use std::cell::Cell;
use std::default::Default;
use string_cache::Atom;
use util::str::DOMString;
// https://dvcs.w3.org/hg/dom3events/raw-file/tip/html/DOM3-Events.html#interface-UIEvent
#[dom_struct]
pub struct UIEvent {
event: Event,
view: MutNullableHeap<JS<Window>>,
detail: Cell<i32>
}
impl UIEvent {
pub fn new_inherited() -> UIEvent {
UIEvent {
event: Event::new_inherited(),
view: Default::default(),
detail: Cell::new(0),
}
}
pub fn new_uninitialized(window: &Window) -> Root<UIEvent> |
pub fn new(window: &Window,
type_: DOMString,
can_bubble: EventBubbles,
cancelable: EventCancelable,
view: Option<&Window>,
detail: i32) -> Root<UIEvent> {
let ev = UIEvent::new_uninitialized(window);
ev.InitUIEvent(type_, can_bubble == EventBubbles::Bubbles,
cancelable == EventCancelable::Cancelable, view, detail);
ev
}
pub fn Constructor(global: GlobalRef,
type_: DOMString,
init: &UIEventBinding::UIEventInit) -> Fallible<Root<UIEvent>> {
let bubbles = if init.parent.bubbles { EventBubbles::Bubbles } else { EventBubbles::DoesNotBubble };
let cancelable = if init.parent.cancelable {
EventCancelable::Cancelable
} else {
EventCancelable::NotCancelable
};
let event = UIEvent::new(global.as_window(), type_,
bubbles, cancelable,
init.view.r(), init.detail);
Ok(event)
}
}
impl UIEventMethods for UIEvent {
// https://w3c.github.io/uievents/#widl-UIEvent-view
fn GetView(&self) -> Option<Root<Window>> {
self.view.get()
}
// https://w3c.github.io/uievents/#widl-UIEvent-detail
fn Detail(&self) -> i32 {
self.detail.get()
}
// https://w3c.github.io/uievents/#widl-UIEvent-initUIEvent
fn InitUIEvent(&self,
type_: DOMString,
can_bubble: bool,
cancelable: bool,
view: Option<&Window>,
detail: i32) {
let event = self.upcast::<Event>();
if event.dispatching() {
return;
}
event.init_event(Atom::from(&*type_), can_bubble, cancelable);
self.view.set(view);
self.detail.set(detail);
}
// https://dom.spec.whatwg.org/#dom-event-istrusted
fn IsTrusted(&self) -> bool {
self.event.IsTrusted()
}
}
| {
reflect_dom_object(box UIEvent::new_inherited(),
GlobalRef::Window(window),
UIEventBinding::Wrap)
} | identifier_body |
gradient.rs | // This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at http://mozilla.org/MPL/2.0/.
// external
use crate::qt;
// self
use super::prelude::*;
pub fn | (
g: &usvg::LinearGradient,
opacity: usvg::Opacity,
bbox: Rect,
brush: &mut qt::Brush,
) {
let mut grad = qt::LinearGradient::new(g.x1, g.y1, g.x2, g.y2);
prepare_base(&g.base, opacity, &mut grad);
brush.set_linear_gradient(grad);
apply_ts(&g.base, bbox, brush);
}
pub fn prepare_radial(
g: &usvg::RadialGradient,
opacity: usvg::Opacity,
bbox: Rect,
brush: &mut qt::Brush,
) {
let mut grad = qt::RadialGradient::new(g.cx, g.cy, g.fx, g.fy, g.r.value());
prepare_base(&g.base, opacity, &mut grad);
brush.set_radial_gradient(grad);
apply_ts(&g.base, bbox, brush);
}
fn prepare_base(
g: &usvg::BaseGradient,
opacity: usvg::Opacity,
grad: &mut qt::Gradient,
) {
let spread_method = match g.spread_method {
usvg::SpreadMethod::Pad => qt::Spread::Pad,
usvg::SpreadMethod::Reflect => qt::Spread::Reflect,
usvg::SpreadMethod::Repeat => qt::Spread::Repeat,
};
grad.set_spread(spread_method);
for stop in &g.stops {
grad.set_color_at(
stop.offset.value(),
stop.color.red,
stop.color.green,
stop.color.blue,
(stop.opacity.value() * opacity.value() * 255.0) as u8,
);
}
}
fn apply_ts(
g: &usvg::BaseGradient,
bbox: Rect,
brush: &mut qt::Brush,
) {
// We don't use `QGradient::setCoordinateMode` because it works incorrectly.
//
// See QTBUG-67995
if g.units == usvg::Units::ObjectBoundingBox {
let mut ts = usvg::Transform::from_bbox(bbox);
ts.append(&g.transform);
brush.set_transform(ts.to_native());
} else {
brush.set_transform(g.transform.to_native());
}
}
| prepare_linear | identifier_name |
gradient.rs | // This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at http://mozilla.org/MPL/2.0/.
// external
use crate::qt;
// self
use super::prelude::*;
pub fn prepare_linear(
g: &usvg::LinearGradient,
opacity: usvg::Opacity,
bbox: Rect,
brush: &mut qt::Brush,
) {
let mut grad = qt::LinearGradient::new(g.x1, g.y1, g.x2, g.y2);
prepare_base(&g.base, opacity, &mut grad);
brush.set_linear_gradient(grad);
apply_ts(&g.base, bbox, brush);
}
pub fn prepare_radial(
g: &usvg::RadialGradient,
opacity: usvg::Opacity,
bbox: Rect,
brush: &mut qt::Brush,
) {
let mut grad = qt::RadialGradient::new(g.cx, g.cy, g.fx, g.fy, g.r.value());
prepare_base(&g.base, opacity, &mut grad);
brush.set_radial_gradient(grad);
apply_ts(&g.base, bbox, brush);
}
fn prepare_base(
g: &usvg::BaseGradient,
opacity: usvg::Opacity,
grad: &mut qt::Gradient,
) {
let spread_method = match g.spread_method {
usvg::SpreadMethod::Pad => qt::Spread::Pad,
usvg::SpreadMethod::Reflect => qt::Spread::Reflect,
usvg::SpreadMethod::Repeat => qt::Spread::Repeat,
};
grad.set_spread(spread_method);
for stop in &g.stops {
grad.set_color_at(
stop.offset.value(),
stop.color.red,
stop.color.green,
stop.color.blue,
(stop.opacity.value() * opacity.value() * 255.0) as u8,
);
}
}
fn apply_ts(
g: &usvg::BaseGradient,
bbox: Rect,
brush: &mut qt::Brush,
) | {
// We don't use `QGradient::setCoordinateMode` because it works incorrectly.
//
// See QTBUG-67995
if g.units == usvg::Units::ObjectBoundingBox {
let mut ts = usvg::Transform::from_bbox(bbox);
ts.append(&g.transform);
brush.set_transform(ts.to_native());
} else {
brush.set_transform(g.transform.to_native());
}
} | identifier_body | |
gradient.rs | use crate::qt;
// self
use super::prelude::*;
pub fn prepare_linear(
g: &usvg::LinearGradient,
opacity: usvg::Opacity,
bbox: Rect,
brush: &mut qt::Brush,
) {
let mut grad = qt::LinearGradient::new(g.x1, g.y1, g.x2, g.y2);
prepare_base(&g.base, opacity, &mut grad);
brush.set_linear_gradient(grad);
apply_ts(&g.base, bbox, brush);
}
pub fn prepare_radial(
g: &usvg::RadialGradient,
opacity: usvg::Opacity,
bbox: Rect,
brush: &mut qt::Brush,
) {
let mut grad = qt::RadialGradient::new(g.cx, g.cy, g.fx, g.fy, g.r.value());
prepare_base(&g.base, opacity, &mut grad);
brush.set_radial_gradient(grad);
apply_ts(&g.base, bbox, brush);
}
fn prepare_base(
g: &usvg::BaseGradient,
opacity: usvg::Opacity,
grad: &mut qt::Gradient,
) {
let spread_method = match g.spread_method {
usvg::SpreadMethod::Pad => qt::Spread::Pad,
usvg::SpreadMethod::Reflect => qt::Spread::Reflect,
usvg::SpreadMethod::Repeat => qt::Spread::Repeat,
};
grad.set_spread(spread_method);
for stop in &g.stops {
grad.set_color_at(
stop.offset.value(),
stop.color.red,
stop.color.green,
stop.color.blue,
(stop.opacity.value() * opacity.value() * 255.0) as u8,
);
}
}
fn apply_ts(
g: &usvg::BaseGradient,
bbox: Rect,
brush: &mut qt::Brush,
) {
// We don't use `QGradient::setCoordinateMode` because it works incorrectly.
//
// See QTBUG-67995
if g.units == usvg::Units::ObjectBoundingBox {
let mut ts = usvg::Transform::from_bbox(bbox);
ts.append(&g.transform);
brush.set_transform(ts.to_native());
} else {
brush.set_transform(g.transform.to_native());
}
} | // This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at http://mozilla.org/MPL/2.0/.
// external | random_line_split | |
lint-for-crate.rs | // force-host
#![feature(rustc_private)]
#![feature(box_syntax)]
extern crate rustc_driver;
extern crate rustc_hir;
#[macro_use]
extern crate rustc_lint;
#[macro_use]
extern crate rustc_session;
extern crate rustc_span;
extern crate rustc_ast;
use rustc_driver::plugin::Registry;
use rustc_lint::{LateContext, LateLintPass, LintArray, LintContext, LintPass};
use rustc_span::symbol::Symbol;
use rustc_ast::attr;
declare_lint! {
CRATE_NOT_OKAY,
Warn,
"crate not marked with #![crate_okay]"
}
declare_lint_pass!(Pass => [CRATE_NOT_OKAY]);
impl<'tcx> LateLintPass<'tcx> for Pass {
fn check_crate(&mut self, cx: &LateContext, krate: &rustc_hir::Crate) {
let attrs = cx.tcx.hir().attrs(rustc_hir::CRATE_HIR_ID);
if !cx.sess().contains_name(attrs, Symbol::intern("crate_okay")) {
cx.lint(CRATE_NOT_OKAY, |lint| {
lint.build("crate is not marked with #![crate_okay]")
.set_span(krate.module().inner)
.emit()
});
}
}
}
#[no_mangle]
fn | (reg: &mut Registry) {
reg.lint_store.register_lints(&[&CRATE_NOT_OKAY]);
reg.lint_store.register_late_pass(|| box Pass);
}
| __rustc_plugin_registrar | identifier_name |
lint-for-crate.rs | // force-host
#![feature(rustc_private)]
#![feature(box_syntax)]
extern crate rustc_driver;
extern crate rustc_hir; | extern crate rustc_session;
extern crate rustc_span;
extern crate rustc_ast;
use rustc_driver::plugin::Registry;
use rustc_lint::{LateContext, LateLintPass, LintArray, LintContext, LintPass};
use rustc_span::symbol::Symbol;
use rustc_ast::attr;
declare_lint! {
CRATE_NOT_OKAY,
Warn,
"crate not marked with #![crate_okay]"
}
declare_lint_pass!(Pass => [CRATE_NOT_OKAY]);
impl<'tcx> LateLintPass<'tcx> for Pass {
fn check_crate(&mut self, cx: &LateContext, krate: &rustc_hir::Crate) {
let attrs = cx.tcx.hir().attrs(rustc_hir::CRATE_HIR_ID);
if !cx.sess().contains_name(attrs, Symbol::intern("crate_okay")) {
cx.lint(CRATE_NOT_OKAY, |lint| {
lint.build("crate is not marked with #![crate_okay]")
.set_span(krate.module().inner)
.emit()
});
}
}
}
#[no_mangle]
fn __rustc_plugin_registrar(reg: &mut Registry) {
reg.lint_store.register_lints(&[&CRATE_NOT_OKAY]);
reg.lint_store.register_late_pass(|| box Pass);
} | #[macro_use]
extern crate rustc_lint;
#[macro_use] | random_line_split |
lint-for-crate.rs | // force-host
#![feature(rustc_private)]
#![feature(box_syntax)]
extern crate rustc_driver;
extern crate rustc_hir;
#[macro_use]
extern crate rustc_lint;
#[macro_use]
extern crate rustc_session;
extern crate rustc_span;
extern crate rustc_ast;
use rustc_driver::plugin::Registry;
use rustc_lint::{LateContext, LateLintPass, LintArray, LintContext, LintPass};
use rustc_span::symbol::Symbol;
use rustc_ast::attr;
declare_lint! {
CRATE_NOT_OKAY,
Warn,
"crate not marked with #![crate_okay]"
}
declare_lint_pass!(Pass => [CRATE_NOT_OKAY]);
impl<'tcx> LateLintPass<'tcx> for Pass {
fn check_crate(&mut self, cx: &LateContext, krate: &rustc_hir::Crate) {
let attrs = cx.tcx.hir().attrs(rustc_hir::CRATE_HIR_ID);
if !cx.sess().contains_name(attrs, Symbol::intern("crate_okay")) |
}
}
#[no_mangle]
fn __rustc_plugin_registrar(reg: &mut Registry) {
reg.lint_store.register_lints(&[&CRATE_NOT_OKAY]);
reg.lint_store.register_late_pass(|| box Pass);
}
| {
cx.lint(CRATE_NOT_OKAY, |lint| {
lint.build("crate is not marked with #![crate_okay]")
.set_span(krate.module().inner)
.emit()
});
} | conditional_block |
lint-for-crate.rs | // force-host
#![feature(rustc_private)]
#![feature(box_syntax)]
extern crate rustc_driver;
extern crate rustc_hir;
#[macro_use]
extern crate rustc_lint;
#[macro_use]
extern crate rustc_session;
extern crate rustc_span;
extern crate rustc_ast;
use rustc_driver::plugin::Registry;
use rustc_lint::{LateContext, LateLintPass, LintArray, LintContext, LintPass};
use rustc_span::symbol::Symbol;
use rustc_ast::attr;
declare_lint! {
CRATE_NOT_OKAY,
Warn,
"crate not marked with #![crate_okay]"
}
declare_lint_pass!(Pass => [CRATE_NOT_OKAY]);
impl<'tcx> LateLintPass<'tcx> for Pass {
fn check_crate(&mut self, cx: &LateContext, krate: &rustc_hir::Crate) {
let attrs = cx.tcx.hir().attrs(rustc_hir::CRATE_HIR_ID);
if !cx.sess().contains_name(attrs, Symbol::intern("crate_okay")) {
cx.lint(CRATE_NOT_OKAY, |lint| {
lint.build("crate is not marked with #![crate_okay]")
.set_span(krate.module().inner)
.emit()
});
}
}
}
#[no_mangle]
fn __rustc_plugin_registrar(reg: &mut Registry) | {
reg.lint_store.register_lints(&[&CRATE_NOT_OKAY]);
reg.lint_store.register_late_pass(|| box Pass);
} | identifier_body | |
ascribe_user_type.rs | // Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use infer::canonical::{Canonical, Canonicalized, CanonicalizedQueryResponse, QueryResponse};
use traits::query::Fallible;
use hir::def_id::DefId;
use mir::ProjectionKind;
use ty::{self, ParamEnvAnd, Ty, TyCtxt};
use ty::subst::UserSubsts;
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
pub struct AscribeUserType<'tcx> {
pub mir_ty: Ty<'tcx>,
pub variance: ty::Variance,
pub def_id: DefId,
pub user_substs: UserSubsts<'tcx>,
pub projs: &'tcx ty::List<ProjectionKind<'tcx>>,
}
impl<'tcx> AscribeUserType<'tcx> {
pub fn new(
mir_ty: Ty<'tcx>,
variance: ty::Variance,
def_id: DefId,
user_substs: UserSubsts<'tcx>,
projs: &'tcx ty::List<ProjectionKind<'tcx>>,
) -> Self {
AscribeUserType { mir_ty, variance, def_id, user_substs, projs }
}
}
impl<'gcx: 'tcx, 'tcx> super::QueryTypeOp<'gcx, 'tcx> for AscribeUserType<'tcx> {
type QueryResponse = ();
fn try_fast_path(
_tcx: TyCtxt<'_, 'gcx, 'tcx>,
_key: &ParamEnvAnd<'tcx, Self>,
) -> Option<Self::QueryResponse> |
fn perform_query(
tcx: TyCtxt<'_, 'gcx, 'tcx>,
canonicalized: Canonicalized<'gcx, ParamEnvAnd<'tcx, Self>>,
) -> Fallible<CanonicalizedQueryResponse<'gcx, ()>> {
tcx.type_op_ascribe_user_type(canonicalized)
}
fn shrink_to_tcx_lifetime(
v: &'a CanonicalizedQueryResponse<'gcx, ()>,
) -> &'a Canonical<'tcx, QueryResponse<'tcx, ()>> {
v
}
}
BraceStructTypeFoldableImpl! {
impl<'tcx> TypeFoldable<'tcx> for AscribeUserType<'tcx> {
mir_ty, variance, def_id, user_substs, projs
}
}
BraceStructLiftImpl! {
impl<'a, 'tcx> Lift<'tcx> for AscribeUserType<'a> {
type Lifted = AscribeUserType<'tcx>;
mir_ty, variance, def_id, user_substs, projs
}
}
impl_stable_hash_for! {
struct AscribeUserType<'tcx> {
mir_ty, variance, def_id, user_substs, projs
}
}
| {
None
} | identifier_body |
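The `QueryTypeOp` impl above pairs a cheap `try_fast_path` (returning `None` to mean "no shortcut") with an expensive `perform_query` fallback. A self-contained sketch of that shape — the trait and type names here are illustrative, not rustc's real API:

```rust
// Sketch of the fast-path-then-query pattern (assumed simplification of the
// rustc trait above: no canonicalization, no lifetimes).
trait QueryOp {
    type Output;

    // Cheap check that may answer without running the full query.
    fn try_fast_path(&self) -> Option<Self::Output>;

    // The expensive fallback, run only when the fast path declines.
    fn perform_query(&self) -> Self::Output;

    fn run(&self) -> Self::Output {
        self.try_fast_path().unwrap_or_else(|| self.perform_query())
    }
}

struct IsZero(i64);

impl QueryOp for IsZero {
    type Output = bool;

    fn try_fast_path(&self) -> Option<bool> {
        // Only short-circuit when the answer is trivially known.
        if self.0 == 0 { Some(true) } else { None }
    }

    fn perform_query(&self) -> bool {
        self.0 == 0
    }
}

fn main() {
    assert!(IsZero(0).run());   // answered by the fast path
    assert!(!IsZero(7).run());  // falls through to perform_query
}
```

`AscribeUserType` above is the degenerate case: its `try_fast_path` always returns `None`, so every call goes through the query.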
ascribe_user_type.rs | // Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use infer::canonical::{Canonical, Canonicalized, CanonicalizedQueryResponse, QueryResponse};
use traits::query::Fallible;
use hir::def_id::DefId;
use mir::ProjectionKind;
use ty::{self, ParamEnvAnd, Ty, TyCtxt};
use ty::subst::UserSubsts;
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
pub struct AscribeUserType<'tcx> {
pub mir_ty: Ty<'tcx>,
pub variance: ty::Variance,
pub def_id: DefId,
pub user_substs: UserSubsts<'tcx>,
pub projs: &'tcx ty::List<ProjectionKind<'tcx>>,
}
impl<'tcx> AscribeUserType<'tcx> {
pub fn new(
mir_ty: Ty<'tcx>,
variance: ty::Variance,
def_id: DefId,
user_substs: UserSubsts<'tcx>,
projs: &'tcx ty::List<ProjectionKind<'tcx>>,
) -> Self {
AscribeUserType { mir_ty, variance, def_id, user_substs, projs }
}
}
impl<'gcx: 'tcx, 'tcx> super::QueryTypeOp<'gcx, 'tcx> for AscribeUserType<'tcx> {
type QueryResponse = ();
fn | (
_tcx: TyCtxt<'_, 'gcx, 'tcx>,
_key: &ParamEnvAnd<'tcx, Self>,
) -> Option<Self::QueryResponse> {
None
}
fn perform_query(
tcx: TyCtxt<'_, 'gcx, 'tcx>,
canonicalized: Canonicalized<'gcx, ParamEnvAnd<'tcx, Self>>,
) -> Fallible<CanonicalizedQueryResponse<'gcx, ()>> {
tcx.type_op_ascribe_user_type(canonicalized)
}
fn shrink_to_tcx_lifetime(
v: &'a CanonicalizedQueryResponse<'gcx, ()>,
) -> &'a Canonical<'tcx, QueryResponse<'tcx, ()>> {
v
}
}
BraceStructTypeFoldableImpl! {
impl<'tcx> TypeFoldable<'tcx> for AscribeUserType<'tcx> {
mir_ty, variance, def_id, user_substs, projs
}
}
BraceStructLiftImpl! {
impl<'a, 'tcx> Lift<'tcx> for AscribeUserType<'a> {
type Lifted = AscribeUserType<'tcx>;
mir_ty, variance, def_id, user_substs, projs
}
}
impl_stable_hash_for! {
struct AscribeUserType<'tcx> {
mir_ty, variance, def_id, user_substs, projs
}
}
| try_fast_path | identifier_name |
ascribe_user_type.rs | // Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use infer::canonical::{Canonical, Canonicalized, CanonicalizedQueryResponse, QueryResponse};
use traits::query::Fallible;
use hir::def_id::DefId;
use mir::ProjectionKind;
use ty::{self, ParamEnvAnd, Ty, TyCtxt};
use ty::subst::UserSubsts;
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
pub struct AscribeUserType<'tcx> {
pub mir_ty: Ty<'tcx>,
pub variance: ty::Variance,
pub def_id: DefId,
pub user_substs: UserSubsts<'tcx>,
pub projs: &'tcx ty::List<ProjectionKind<'tcx>>,
}
impl<'tcx> AscribeUserType<'tcx> {
pub fn new(
mir_ty: Ty<'tcx>,
variance: ty::Variance,
def_id: DefId,
user_substs: UserSubsts<'tcx>,
projs: &'tcx ty::List<ProjectionKind<'tcx>>,
) -> Self {
AscribeUserType { mir_ty, variance, def_id, user_substs, projs }
}
}
impl<'gcx: 'tcx, 'tcx> super::QueryTypeOp<'gcx, 'tcx> for AscribeUserType<'tcx> {
type QueryResponse = ();
fn try_fast_path(
_tcx: TyCtxt<'_, 'gcx, 'tcx>,
_key: &ParamEnvAnd<'tcx, Self>,
) -> Option<Self::QueryResponse> {
None
}
fn perform_query(
tcx: TyCtxt<'_, 'gcx, 'tcx>,
canonicalized: Canonicalized<'gcx, ParamEnvAnd<'tcx, Self>>,
) -> Fallible<CanonicalizedQueryResponse<'gcx, ()>> {
tcx.type_op_ascribe_user_type(canonicalized)
}
fn shrink_to_tcx_lifetime(
v: &'a CanonicalizedQueryResponse<'gcx, ()>,
) -> &'a Canonical<'tcx, QueryResponse<'tcx, ()>> {
v
}
}
BraceStructTypeFoldableImpl! {
impl<'tcx> TypeFoldable<'tcx> for AscribeUserType<'tcx> {
mir_ty, variance, def_id, user_substs, projs | }
}
BraceStructLiftImpl! {
impl<'a, 'tcx> Lift<'tcx> for AscribeUserType<'a> {
type Lifted = AscribeUserType<'tcx>;
mir_ty, variance, def_id, user_substs, projs
}
}
impl_stable_hash_for! {
struct AscribeUserType<'tcx> {
mir_ty, variance, def_id, user_substs, projs
}
} | random_line_split | |
coerce-unify.rs | // run-pass
// Check that coercions can unify if-else, match arms and array elements.
// Try to construct if-else chains, matches and arrays out of given expressions.
macro_rules! check {
($last:expr $(, $rest:expr)+) => {
// Last expression comes first because of whacky ifs and matches.
let _ = $(if false { $rest })else+ else { $last };
let _ = match 0 { $(_ if false => $rest,)+ _ => $last };
let _ = [$($rest,)+ $last];
}
}
// Check all non-uniform cases of 2 and 3 expressions of 2 types.
macro_rules! check2 {
($a:expr, $b:expr) => {
check!($a, $b);
check!($b, $a);
check!($a, $a, $b);
check!($a, $b, $a);
check!($a, $b, $b);
check!($b, $a, $a);
check!($b, $a, $b);
check!($b, $b, $a);
}
}
// Check all non-uniform cases of 2 and 3 expressions of 3 types.
macro_rules! check3 {
($a:expr, $b:expr, $c:expr) => {
// Delegate to check2 for cases where a type repeats.
check2!($a, $b);
check2!($b, $c);
check2!($a, $c);
// Check the remaining cases, i.e., permutations of ($a, $b, $c).
check!($a, $b, $c);
check!($a, $c, $b);
check!($b, $a, $c);
check!($b, $c, $a);
check!($c, $a, $b);
check!($c, $b, $a);
}
}
use std::mem::size_of;
fn foo() {}
fn bar() {}
pub fn main() | {
check3!(foo, bar, foo as fn());
check3!(size_of::<u8>, size_of::<u16>, size_of::<usize> as fn() -> usize);
let s = String::from("bar");
check2!("foo", &s);
let a = [1, 2, 3];
let v = vec![1, 2, 3];
check2!(&a[..], &v);
// Make sure in-array coercion still works.
let _ = [("a", Default::default()), (Default::default(), "b"), (&s, &s)];
} | identifier_body | |
coerce-unify.rs | // run-pass
// Check that coercions can unify if-else, match arms and array elements.
// Try to construct if-else chains, matches and arrays out of given expressions.
macro_rules! check {
($last:expr $(, $rest:expr)+) => {
// Last expression comes first because of whacky ifs and matches.
let _ = $(if false { $rest })else+ else { $last };
let _ = match 0 { $(_ if false => $rest,)+ _ => $last };
let _ = [$($rest,)+ $last];
}
}
// Check all non-uniform cases of 2 and 3 expressions of 2 types.
macro_rules! check2 {
($a:expr, $b:expr) => {
check!($a, $b);
check!($b, $a);
check!($a, $a, $b);
check!($a, $b, $a);
check!($a, $b, $b);
check!($b, $a, $a);
check!($b, $a, $b);
check!($b, $b, $a);
}
}
// Check all non-uniform cases of 2 and 3 expressions of 3 types.
macro_rules! check3 {
($a:expr, $b:expr, $c:expr) => {
// Delegate to check2 for cases where a type repeats.
check2!($a, $b);
check2!($b, $c);
check2!($a, $c);
// Check the remaining cases, i.e., permutations of ($a, $b, $c).
check!($a, $b, $c);
check!($a, $c, $b);
check!($b, $a, $c);
check!($b, $c, $a);
check!($c, $a, $b);
check!($c, $b, $a);
}
}
use std::mem::size_of;
fn foo() {}
fn | () {}
pub fn main() {
check3!(foo, bar, foo as fn());
check3!(size_of::<u8>, size_of::<u16>, size_of::<usize> as fn() -> usize);
let s = String::from("bar");
check2!("foo", &s);
let a = [1, 2, 3];
let v = vec![1, 2, 3];
check2!(&a[..], &v);
// Make sure in-array coercion still works.
let _ = [("a", Default::default()), (Default::default(), "b"), (&s, &s)];
}
| bar | identifier_name |
coerce-unify.rs | // run-pass
// Check that coercions can unify if-else, match arms and array elements.
// Try to construct if-else chains, matches and arrays out of given expressions.
macro_rules! check {
($last:expr $(, $rest:expr)+) => {
// Last expression comes first because of whacky ifs and matches.
let _ = $(if false { $rest })else+ else { $last };
let _ = match 0 { $(_ if false => $rest,)+ _ => $last };
let _ = [$($rest,)+ $last];
}
}
// Check all non-uniform cases of 2 and 3 expressions of 2 types.
macro_rules! check2 {
($a:expr, $b:expr) => {
check!($a, $b);
check!($b, $a);
check!($a, $a, $b);
check!($a, $b, $a);
check!($a, $b, $b);
check!($b, $a, $a);
check!($b, $a, $b);
check!($b, $b, $a);
}
} |
// Check all non-uniform cases of 2 and 3 expressions of 3 types.
macro_rules! check3 {
($a:expr, $b:expr, $c:expr) => {
// Delegate to check2 for cases where a type repeats.
check2!($a, $b);
check2!($b, $c);
check2!($a, $c);
// Check the remaining cases, i.e., permutations of ($a, $b, $c).
check!($a, $b, $c);
check!($a, $c, $b);
check!($b, $a, $c);
check!($b, $c, $a);
check!($c, $a, $b);
check!($c, $b, $a);
}
}
use std::mem::size_of;
fn foo() {}
fn bar() {}
pub fn main() {
check3!(foo, bar, foo as fn());
check3!(size_of::<u8>, size_of::<u16>, size_of::<usize> as fn() -> usize);
let s = String::from("bar");
check2!("foo", &s);
let a = [1, 2, 3];
let v = vec![1, 2, 3];
check2!(&a[..], &v);
// Make sure in-array coercion still works.
let _ = [("a", Default::default()), (Default::default(), "b"), (&s, &s)];
} | random_line_split | |
basic-types-mut-globals.rs | // Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Caveats - gdb prints any 8-bit value (meaning rust I8 and u8 values)
// as its numerical value along with its associated ASCII char, there
// doesn't seem to be any way around this. Also, gdb doesn't know
// about UTF-32 character encoding and will print a rust char as only
// its numerical value.
// ignore-win32: FIXME #13256
// ignore-android: FIXME(#10381)
// compile-flags:-g
// debugger:rbreak zzz
// debugger:run
// debugger:finish
// Check initializers
// debugger:print 'basic-types-mut-globals::B'
// check:$1 = false
// debugger:print 'basic-types-mut-globals::I'
// check:$2 = -1
// debugger:print 'basic-types-mut-globals::C'
// check:$3 = 97
// debugger:print/d 'basic-types-mut-globals::I8'
// check:$4 = 68
// debugger:print 'basic-types-mut-globals::I16'
// check:$5 = -16
// debugger:print 'basic-types-mut-globals::I32'
// check:$6 = -32
// debugger:print 'basic-types-mut-globals::I64'
// check:$7 = -64
// debugger:print 'basic-types-mut-globals::U'
// check:$8 = 1
// debugger:print/d 'basic-types-mut-globals::U8'
// check:$9 = 100
// debugger:print 'basic-types-mut-globals::U16'
// check:$10 = 16
// debugger:print 'basic-types-mut-globals::U32'
// check:$11 = 32
// debugger:print 'basic-types-mut-globals::U64'
// check:$12 = 64
// debugger:print 'basic-types-mut-globals::F32'
// check:$13 = 2.5
// debugger:print 'basic-types-mut-globals::F64'
// check:$14 = 3.5
// debugger:continue
// Check new values
// debugger:print 'basic-types-mut-globals'::B
// check:$15 = true
// debugger:print 'basic-types-mut-globals'::I
// check:$16 = 2
// debugger:print 'basic-types-mut-globals'::C
// check:$17 = 102
// debugger:print/d 'basic-types-mut-globals'::I8
// check:$18 = 78
// debugger:print 'basic-types-mut-globals'::I16
// check:$19 = -26
// debugger:print 'basic-types-mut-globals'::I32
// check:$20 = -12
// debugger:print 'basic-types-mut-globals'::I64
// check:$21 = -54
// debugger:print 'basic-types-mut-globals'::U
// check:$22 = 5
// debugger:print/d 'basic-types-mut-globals'::U8
// check:$23 = 20
// debugger:print 'basic-types-mut-globals'::U16
// check:$24 = 32
// debugger:print 'basic-types-mut-globals'::U32
// check:$25 = 16
// debugger:print 'basic-types-mut-globals'::U64
// check:$26 = 128
// debugger:print 'basic-types-mut-globals'::F32
// check:$27 = 5.75
// debugger:print 'basic-types-mut-globals'::F64
// check:$28 = 9.25
// debugger:detach
// debugger:quit
#![allow(unused_variable)]
static mut B: bool = false;
static mut I: int = -1;
static mut C: char = 'a';
static mut I8: i8 = 68;
static mut I16: i16 = -16;
static mut I32: i32 = -32;
static mut I64: i64 = -64;
static mut U: uint = 1;
static mut U8: u8 = 100;
static mut U16: u16 = 16;
static mut U32: u32 = 32;
static mut U64: u64 = 64;
static mut F32: f32 = 2.5;
static mut F64: f64 = 3.5;
fn | () {
_zzz();
unsafe {
B = true;
I = 2;
C = 'f';
I8 = 78;
I16 = -26;
I32 = -12;
I64 = -54;
U = 5;
U8 = 20;
U16 = 32;
U32 = 16;
U64 = 128;
F32 = 5.75;
F64 = 9.25;
}
_zzz();
}
fn _zzz() {()}
| main | identifier_name |
basic-types-mut-globals.rs | // Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Caveats - gdb prints any 8-bit value (meaning rust I8 and u8 values)
// as its numerical value along with its associated ASCII char, there
// doesn't seem to be any way around this. Also, gdb doesn't know
// about UTF-32 character encoding and will print a rust char as only
// its numerical value.
// ignore-win32: FIXME #13256
// ignore-android: FIXME(#10381)
// compile-flags:-g
// debugger:rbreak zzz
// debugger:run
// debugger:finish
// Check initializers
// debugger:print 'basic-types-mut-globals::B'
// check:$1 = false
// debugger:print 'basic-types-mut-globals::I'
// check:$2 = -1
// debugger:print 'basic-types-mut-globals::C'
// check:$3 = 97
// debugger:print/d 'basic-types-mut-globals::I8'
// check:$4 = 68
// debugger:print 'basic-types-mut-globals::I16'
// check:$5 = -16
// debugger:print 'basic-types-mut-globals::I32'
// check:$6 = -32
// debugger:print 'basic-types-mut-globals::I64'
// check:$7 = -64
// debugger:print 'basic-types-mut-globals::U'
// check:$8 = 1
// debugger:print/d 'basic-types-mut-globals::U8'
// check:$9 = 100
// debugger:print 'basic-types-mut-globals::U16'
// check:$10 = 16
// debugger:print 'basic-types-mut-globals::U32'
// check:$11 = 32
// debugger:print 'basic-types-mut-globals::U64'
// check:$12 = 64
// debugger:print 'basic-types-mut-globals::F32'
// check:$13 = 2.5
// debugger:print 'basic-types-mut-globals::F64'
// check:$14 = 3.5
// debugger:continue
// Check new values
// debugger:print 'basic-types-mut-globals'::B
// check:$15 = true
// debugger:print 'basic-types-mut-globals'::I
// check:$16 = 2
// debugger:print 'basic-types-mut-globals'::C
// check:$17 = 102
// debugger:print/d 'basic-types-mut-globals'::I8
// check:$18 = 78
// debugger:print 'basic-types-mut-globals'::I16
// check:$19 = -26
// debugger:print 'basic-types-mut-globals'::I32
// check:$20 = -12
// debugger:print 'basic-types-mut-globals'::I64
// check:$21 = -54
// debugger:print 'basic-types-mut-globals'::U
// check:$22 = 5
// debugger:print/d 'basic-types-mut-globals'::U8
// check:$23 = 20
// debugger:print 'basic-types-mut-globals'::U16
// check:$24 = 32
// debugger:print 'basic-types-mut-globals'::U32
// check:$25 = 16
// debugger:print 'basic-types-mut-globals'::U64
// check:$26 = 128
// debugger:print 'basic-types-mut-globals'::F32
// check:$27 = 5.75
// debugger:print 'basic-types-mut-globals'::F64
// check:$28 = 9.25
// debugger:detach
// debugger:quit
#![allow(unused_variable)]
static mut B: bool = false;
static mut I: int = -1;
static mut C: char = 'a';
static mut I8: i8 = 68;
static mut I16: i16 = -16;
static mut I32: i32 = -32;
static mut I64: i64 = -64;
static mut U: uint = 1;
static mut U8: u8 = 100;
static mut U16: u16 = 16;
static mut U32: u32 = 32;
static mut U64: u64 = 64;
static mut F32: f32 = 2.5;
static mut F64: f64 = 3.5;
fn main() {
_zzz();
unsafe {
B = true;
I = 2;
C = 'f';
I8 = 78;
I16 = -26;
I32 = -12;
I64 = -54;
U = 5;
U8 = 20;
U16 = 32;
U32 = 16;
U64 = 128;
F32 = 5.75;
F64 = 9.25;
}
_zzz();
}
fn _zzz() | {()} | identifier_body | |
linear_transformation.rs | use crate::{Point, Transformation, Vector};
#[derive(Clone, Copy, Debug, PartialEq)]
#[repr(C)]
pub struct LinearTransformation {
pub x: Vector,
pub y: Vector,
}
impl LinearTransformation {
pub fn new(x: Vector, y: Vector) -> LinearTransformation {
LinearTransformation { x, y }
}
pub fn identity() -> LinearTransformation {
LinearTransformation::new(Vector::new(1.0, 0.0), Vector::new(0.0, 1.0))
}
pub fn | (v: Vector) -> LinearTransformation {
LinearTransformation::new(Vector::new(v.x, 0.0), Vector::new(0.0, v.y))
}
pub fn uniform_scaling(k: f32) -> LinearTransformation {
LinearTransformation::scaling(Vector::new(k, k))
}
pub fn scale(self, v: Vector) -> LinearTransformation {
LinearTransformation::new(self.x * v.x, self.y * v.y)
}
pub fn uniform_scale(self, k: f32) -> LinearTransformation {
LinearTransformation::new(self.x * k, self.y * k)
}
pub fn compose(self, other: LinearTransformation) -> LinearTransformation {
LinearTransformation::new(
self.transform_vector(other.x),
self.transform_vector(other.y),
)
}
}
impl Transformation for LinearTransformation {
fn transform_point(&self, p: Point) -> Point {
(self.x * p.x + self.y * p.y).to_point()
}
fn transform_vector(&self, v: Vector) -> Vector {
self.x * v.x + self.y * v.y
}
}
| scaling | identifier_name |
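The `compose` method above builds `self ∘ other` by transforming `other`'s basis vectors, so `other` is applied first. A self-contained sketch of the same column-vector convention using bare arrays (all names here are illustrative, not the crate's API):

```rust
// Columns of a 2x2 matrix, mirroring the `x`/`y` basis-vector fields above.
type Vec2 = [f32; 2];
type Mat2 = [Vec2; 2];

// v.x * col_x + v.y * col_y, i.e. the `transform_vector` formula.
fn apply(m: Mat2, v: Vec2) -> Vec2 {
    [
        m[0][0] * v[0] + m[1][0] * v[1],
        m[0][1] * v[0] + m[1][1] * v[1],
    ]
}

// `compose`: transform the other matrix's columns, so `b` acts first.
fn compose(a: Mat2, b: Mat2) -> Mat2 {
    [apply(a, b[0]), apply(a, b[1])]
}

fn main() {
    let scale2: Mat2 = [[2.0, 0.0], [0.0, 2.0]];
    let scale3: Mat2 = [[3.0, 0.0], [0.0, 3.0]];
    // Composing two scalings multiplies the factors: 2 * 3 = 6.
    let scale6 = compose(scale2, scale3);
    assert_eq!(apply(scale6, [1.0, 1.0]), [6.0, 6.0]);
}
```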
linear_transformation.rs | use crate::{Point, Transformation, Vector};
#[derive(Clone, Copy, Debug, PartialEq)]
#[repr(C)]
pub struct LinearTransformation {
pub x: Vector,
pub y: Vector,
}
impl LinearTransformation {
pub fn new(x: Vector, y: Vector) -> LinearTransformation {
LinearTransformation { x, y }
}
pub fn identity() -> LinearTransformation {
LinearTransformation::new(Vector::new(1.0, 0.0), Vector::new(0.0, 1.0))
}
pub fn scaling(v: Vector) -> LinearTransformation {
LinearTransformation::new(Vector::new(v.x, 0.0), Vector::new(0.0, v.y))
}
pub fn uniform_scaling(k: f32) -> LinearTransformation {
LinearTransformation::scaling(Vector::new(k, k))
}
pub fn scale(self, v: Vector) -> LinearTransformation {
LinearTransformation::new(self.x * v.x, self.y * v.y)
}
pub fn uniform_scale(self, k: f32) -> LinearTransformation {
LinearTransformation::new(self.x * k, self.y * k)
}
| LinearTransformation::new(
self.transform_vector(other.x),
self.transform_vector(other.y),
)
}
}
impl Transformation for LinearTransformation {
fn transform_point(&self, p: Point) -> Point {
(self.x * p.x + self.y * p.y).to_point()
}
fn transform_vector(&self, v: Vector) -> Vector {
self.x * v.x + self.y * v.y
}
} | pub fn compose(self, other: LinearTransformation) -> LinearTransformation { | random_line_split |
linear_transformation.rs | use crate::{Point, Transformation, Vector};
#[derive(Clone, Copy, Debug, PartialEq)]
#[repr(C)]
pub struct LinearTransformation {
pub x: Vector,
pub y: Vector,
}
impl LinearTransformation {
pub fn new(x: Vector, y: Vector) -> LinearTransformation {
LinearTransformation { x, y }
}
pub fn identity() -> LinearTransformation |
pub fn scaling(v: Vector) -> LinearTransformation {
LinearTransformation::new(Vector::new(v.x, 0.0), Vector::new(0.0, v.y))
}
pub fn uniform_scaling(k: f32) -> LinearTransformation {
LinearTransformation::scaling(Vector::new(k, k))
}
pub fn scale(self, v: Vector) -> LinearTransformation {
LinearTransformation::new(self.x * v.x, self.y * v.y)
}
pub fn uniform_scale(self, k: f32) -> LinearTransformation {
LinearTransformation::new(self.x * k, self.y * k)
}
pub fn compose(self, other: LinearTransformation) -> LinearTransformation {
LinearTransformation::new(
self.transform_vector(other.x),
self.transform_vector(other.y),
)
}
}
impl Transformation for LinearTransformation {
fn transform_point(&self, p: Point) -> Point {
(self.x * p.x + self.y * p.y).to_point()
}
fn transform_vector(&self, v: Vector) -> Vector {
self.x * v.x + self.y * v.y
}
}
| {
LinearTransformation::new(Vector::new(1.0, 0.0), Vector::new(0.0, 1.0))
} | identifier_body |
ball_ball.rs | use std::marker::PhantomData;
use na::Translate;
use na;
use math::{Scalar, Point, Vect};
use entities::shape::Ball;
use entities::inspection::Repr;
use queries::geometry::Contact;
use queries::geometry::contacts_internal;
use narrow_phase::{CollisionDetector, CollisionDispatcher};
/// Collision detector between two balls.
pub struct BallBall<P: Point, M> {
prediction: <P::Vect as Vect>::Scalar,
contact: Option<Contact<P>>,
mat_type: PhantomData<M> // FIXME: can we avoid this (using a generalized where clause?)
}
impl<P: Point, M> Clone for BallBall<P, M> {
fn clone(&self) -> BallBall<P, M> {
BallBall {
prediction: self.prediction.clone(),
contact: self.contact.clone(),
mat_type: PhantomData
}
}
}
impl<P: Point, M> BallBall<P, M> {
/// Creates a new persistent collision detector between two balls.
#[inline]
pub fn new(prediction: <P::Vect as Vect>::Scalar) -> BallBall<P, M> {
BallBall {
prediction: prediction,
contact: None,
mat_type: PhantomData
}
}
}
impl<P, M> CollisionDetector<P, M> for BallBall<P, M>
where P: Point,
M: 'static + Translate<P> {
fn update(&mut self,
_: &CollisionDispatcher<P, M>,
ma: &M,
a: &Repr<P, M>,
mb: &M,
b: &Repr<P, M>)
-> bool {
let ra = a.repr();
let rb = b.repr();
if let (Some(a), Some(b)) = (ra.downcast_ref::<Ball<<P::Vect as Vect>::Scalar>>(),
rb.downcast_ref::<Ball<<P::Vect as Vect>::Scalar>>()) {
self.contact = contacts_internal::ball_against_ball(
&ma.translate(&na::orig()),
a,
&mb.translate(&na::orig()),
b,
self.prediction);
true
}
else {
false
}
}
#[inline]
fn | (&self) -> usize {
match self.contact {
None => 0,
Some(_) => 1
}
}
#[inline]
fn colls(&self, out_colls: &mut Vec<Contact<P>>) {
match self.contact {
Some(ref c) => out_colls.push(c.clone()),
None => ()
}
}
}
| num_colls | identifier_name |
ball_ball.rs | use std::marker::PhantomData;
use na::Translate;
use na;
use math::{Scalar, Point, Vect};
use entities::shape::Ball;
use entities::inspection::Repr;
use queries::geometry::Contact;
use queries::geometry::contacts_internal;
use narrow_phase::{CollisionDetector, CollisionDispatcher};
/// Collision detector between two balls.
pub struct BallBall<P: Point, M> {
prediction: <P::Vect as Vect>::Scalar,
contact: Option<Contact<P>>,
mat_type: PhantomData<M> // FIXME: can we avoid this (using a generalized where clause?)
}
impl<P: Point, M> Clone for BallBall<P, M> {
fn clone(&self) -> BallBall<P, M> {
BallBall {
prediction: self.prediction.clone(),
contact: self.contact.clone(),
mat_type: PhantomData
}
}
}
impl<P: Point, M> BallBall<P, M> {
/// Creates a new persistent collision detector between two balls. | contact: None,
mat_type: PhantomData
}
}
}
impl<P, M> CollisionDetector<P, M> for BallBall<P, M>
where P: Point,
M: 'static + Translate<P> {
fn update(&mut self,
_: &CollisionDispatcher<P, M>,
ma: &M,
a: &Repr<P, M>,
mb: &M,
b: &Repr<P, M>)
-> bool {
let ra = a.repr();
let rb = b.repr();
if let (Some(a), Some(b)) = (ra.downcast_ref::<Ball<<P::Vect as Vect>::Scalar>>(),
rb.downcast_ref::<Ball<<P::Vect as Vect>::Scalar>>()) {
self.contact = contacts_internal::ball_against_ball(
&ma.translate(&na::orig()),
a,
&mb.translate(&na::orig()),
b,
self.prediction);
true
}
else {
false
}
}
#[inline]
fn num_colls(&self) -> usize {
match self.contact {
None => 0,
Some(_) => 1
}
}
#[inline]
fn colls(&self, out_colls: &mut Vec<Contact<P>>) {
match self.contact {
Some(ref c) => out_colls.push(c.clone()),
None => ()
}
}
} | #[inline]
pub fn new(prediction: <P::Vect as Vect>::Scalar) -> BallBall<P, M> {
BallBall {
prediction: prediction, | random_line_split |
ball_ball.rs | use std::marker::PhantomData;
use na::Translate;
use na;
use math::{Scalar, Point, Vect};
use entities::shape::Ball;
use entities::inspection::Repr;
use queries::geometry::Contact;
use queries::geometry::contacts_internal;
use narrow_phase::{CollisionDetector, CollisionDispatcher};
/// Collision detector between two balls.
pub struct BallBall<P: Point, M> {
prediction: <P::Vect as Vect>::Scalar,
contact: Option<Contact<P>>,
mat_type: PhantomData<M> // FIXME: can we avoid this (using a generalized where clause?)
}
impl<P: Point, M> Clone for BallBall<P, M> {
fn clone(&self) -> BallBall<P, M> {
BallBall {
prediction: self.prediction.clone(),
contact: self.contact.clone(),
mat_type: PhantomData
}
}
}
impl<P: Point, M> BallBall<P, M> {
/// Creates a new persistent collision detector between two balls.
#[inline]
pub fn new(prediction: <P::Vect as Vect>::Scalar) -> BallBall<P, M> {
BallBall {
prediction: prediction,
contact: None,
mat_type: PhantomData
}
}
}
impl<P, M> CollisionDetector<P, M> for BallBall<P, M>
where P: Point,
M: 'static + Translate<P> {
fn update(&mut self,
_: &CollisionDispatcher<P, M>,
ma: &M,
a: &Repr<P, M>,
mb: &M,
b: &Repr<P, M>)
-> bool {
let ra = a.repr();
let rb = b.repr();
if let (Some(a), Some(b)) = (ra.downcast_ref::<Ball<<P::Vect as Vect>::Scalar>>(),
rb.downcast_ref::<Ball<<P::Vect as Vect>::Scalar>>()) {
self.contact = contacts_internal::ball_against_ball(
&ma.translate(&na::orig()),
a,
&mb.translate(&na::orig()),
b,
self.prediction);
true
}
else {
false
}
}
#[inline]
fn num_colls(&self) -> usize |
#[inline]
fn colls(&self, out_colls: &mut Vec<Contact<P>>) {
match self.contact {
Some(ref c) => out_colls.push(c.clone()),
None => ()
}
}
}
| {
match self.contact {
None => 0,
Some(_) => 1
}
} | identifier_body |
lib.rs | // Copyright (c) 2016 Herman J. Radtke III <herman@hermanradtke.com>
//
// This file is part of carp-rs.
//
// carp-rs is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// carp-rs is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with carp-rs. If not, see <http://www.gnu.org/licenses/>.
extern crate libc;
#[macro_use] | extern crate log;
#[cfg(unix)]
extern crate nix;
extern crate rand;
extern crate pcap;
extern crate crypto;
extern crate byteorder;
use std::result;
pub mod net;
mod error;
mod node;
pub mod advert;
pub mod ip_carp;
pub mod config;
pub mod carp;
pub type Result<T> = result::Result<T, error::Error>; | random_line_split | |
intrinsic.rs | use super::BackendTypes;
use crate::mir::operand::OperandRef;
use rustc_middle::ty::{self, Ty};
use rustc_span::Span;
use rustc_target::abi::call::FnAbi;
| pub trait IntrinsicCallMethods<'tcx>: BackendTypes {
/// Remember to add all intrinsics here, in `compiler/rustc_typeck/src/check/mod.rs`,
/// and in `library/core/src/intrinsics.rs`; if you need access to any LLVM intrinsics,
/// add them to `compiler/rustc_codegen_llvm/src/context.rs`.
fn codegen_intrinsic_call(
&mut self,
instance: ty::Instance<'tcx>,
fn_abi: &FnAbi<'tcx, Ty<'tcx>>,
args: &[OperandRef<'tcx, Self::Value>],
llresult: Self::Value,
span: Span,
);
fn abort(&mut self);
fn assume(&mut self, val: Self::Value);
fn expect(&mut self, cond: Self::Value, expected: bool) -> Self::Value;
/// Trait method used to test whether a given pointer is associated with a type identifier.
fn type_test(&mut self, pointer: Self::Value, typeid: Self::Value) -> Self::Value;
/// Trait method used to inject `va_start` on the "spoofed" `VaListImpl` in
/// Rust defined C-variadic functions.
fn va_start(&mut self, val: Self::Value) -> Self::Value;
/// Trait method used to inject `va_end` on the "spoofed" `VaListImpl` before
/// Rust defined C-variadic functions return.
fn va_end(&mut self, val: Self::Value) -> Self::Value;
} | random_line_split | |
glyph.rs | _cmp(&self, other: &DetailedGlyphRecord) -> Option<Ordering> {
self.entry_offset.partial_cmp(&other.entry_offset)
}
}
impl Ord for DetailedGlyphRecord {
fn cmp(&self, other: &DetailedGlyphRecord) -> Ordering {
self.entry_offset.cmp(&other.entry_offset)
}
}
// Manages the lookup table for detailed glyphs. Sorting is deferred
// until a lookup is actually performed; this matches the expected
// usage pattern of setting/appending all the detailed glyphs, and
// then querying without setting.
#[derive(Clone, Deserialize, Serialize)]
struct DetailedGlyphStore {
// TODO(pcwalton): Allocation of this buffer is expensive. Consider a small-vector
// optimization.
detail_buffer: Vec<DetailedGlyph>,
// TODO(pcwalton): Allocation of this buffer is expensive. Consider a small-vector
// optimization.
detail_lookup: Vec<DetailedGlyphRecord>,
lookup_is_sorted: bool,
}
impl<'a> DetailedGlyphStore {
fn new() -> DetailedGlyphStore {
DetailedGlyphStore {
detail_buffer: vec![], // TODO: default size?
detail_lookup: vec![],
lookup_is_sorted: false,
}
}
fn add_detailed_glyphs_for_entry(&mut self, entry_offset: ByteIndex, glyphs: &[DetailedGlyph]) {
let entry = DetailedGlyphRecord {
entry_offset: entry_offset,
detail_offset: self.detail_buffer.len(),
};
debug!(
"Adding entry[off={:?}] for detailed glyphs: {:?}",
entry_offset, glyphs
);
debug_assert!(!self.detail_lookup.contains(&entry));
self.detail_lookup.push(entry);
self.detail_buffer.extend_from_slice(glyphs);
self.lookup_is_sorted = false;
}
fn detailed_glyphs_for_entry(
&'a self,
entry_offset: ByteIndex,
count: u16,
) -> &'a [DetailedGlyph] {
debug!(
"Requesting detailed glyphs[n={}] for entry[off={:?}]",
count, entry_offset
);
// FIXME: Is this right? --pcwalton
// TODO: should fix this somewhere else
if count == 0 {
return &self.detail_buffer[0..0];
}
assert!((count as usize) <= self.detail_buffer.len());
assert!(self.lookup_is_sorted);
let key = DetailedGlyphRecord {
entry_offset: entry_offset,
detail_offset: 0, // unused
};
let i = self
.detail_lookup
.binary_search(&key)
.expect("Invalid index not found in detailed glyph lookup table!");
let main_detail_offset = self.detail_lookup[i].detail_offset;
assert!(main_detail_offset + (count as usize) <= self.detail_buffer.len());
// return a slice into the buffer
&self.detail_buffer[main_detail_offset..main_detail_offset + count as usize]
}
fn detailed_glyph_with_index(
&'a self,
entry_offset: ByteIndex,
detail_offset: u16,
) -> &'a DetailedGlyph {
assert!((detail_offset as usize) <= self.detail_buffer.len());
assert!(self.lookup_is_sorted);
let key = DetailedGlyphRecord {
entry_offset: entry_offset,
detail_offset: 0, // unused
};
let i = self
.detail_lookup
.binary_search(&key)
.expect("Invalid index not found in detailed glyph lookup table!");
let main_detail_offset = self.detail_lookup[i].detail_offset;
assert!(main_detail_offset + (detail_offset as usize) < self.detail_buffer.len());
&self.detail_buffer[main_detail_offset + (detail_offset as usize)]
}
fn ensure_sorted(&mut self) {
if self.lookup_is_sorted {
return;
}
// Sorting a unique vector is surprisingly hard. The following
// code is a good argument for using DVecs, but they require
// immutable locations thus don't play well with freezing.
// Thar be dragons here. You have been warned. (Tips accepted.)
let mut unsorted_records: Vec<DetailedGlyphRecord> = vec![];
mem::swap(&mut self.detail_lookup, &mut unsorted_records);
let mut mut_records: Vec<DetailedGlyphRecord> = unsorted_records;
mut_records.sort_by(|a, b| {
if a < b {
Ordering::Less
} else {
Ordering::Greater
}
});
let mut sorted_records = mut_records;
mem::swap(&mut self.detail_lookup, &mut sorted_records);
self.lookup_is_sorted = true;
}
}
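The defer-sort-then-binary-search pattern used by `DetailedGlyphStore` (append records unsorted, flip a dirty flag, sort once before the first lookup) can be sketched in isolation. The names below (`Record`, `Lookup`) are illustrative only and not part of this file's API:

```rust
// Minimal sketch of the deferred-sort lookup table: inserts are cheap
// and unordered; the table is sorted exactly once before querying.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Record {
    key: u32,     // compared first, like entry_offset
    payload: u32, // like detail_offset; ignored when searching
}

struct Lookup {
    records: Vec<Record>,
    sorted: bool,
}

impl Lookup {
    fn new() -> Lookup {
        Lookup { records: vec![], sorted: false }
    }

    fn add(&mut self, key: u32, payload: u32) {
        self.records.push(Record { key, payload });
        self.sorted = false; // any insert invalidates the sort
    }

    fn ensure_sorted(&mut self) {
        if !self.sorted {
            self.records.sort();
            self.sorted = true;
        }
    }

    fn get(&self, key: u32) -> Option<u32> {
        assert!(self.sorted, "must call ensure_sorted() before lookups");
        self.records
            .binary_search_by(|r| r.key.cmp(&key))
            .ok()
            .map(|i| self.records[i].payload)
    }
}
```

This matches the expected usage pattern noted in the comments: all writes happen during shaping, all reads afterwards, so sorting on every insert would be wasted work.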
// This struct is used by GlyphStore clients to provide new glyph data.
// It should be allocated on the stack and passed by reference to GlyphStore.
#[derive(Clone, Copy)]
pub struct GlyphData {
id: GlyphId,
advance: Au,
offset: Point2D<Au>,
cluster_start: bool,
ligature_start: bool,
}
impl GlyphData {
/// Creates a new entry for one glyph.
pub fn new(
id: GlyphId,
advance: Au,
offset: Option<Point2D<Au>>,
cluster_start: bool,
ligature_start: bool,
) -> GlyphData {
GlyphData {
id: id,
advance: advance,
offset: offset.unwrap_or(Point2D::zero()),
cluster_start: cluster_start,
ligature_start: ligature_start,
}
}
}
// This enum is a proxy that's provided to GlyphStore clients when iterating
// through glyphs (either for a particular TextRun offset, or all glyphs).
// Rather than eagerly assembling and copying glyph data, it only retrieves
// values as they are needed from the GlyphStore, using provided offsets.
#[derive(Clone, Copy)]
pub enum GlyphInfo<'a> {
Simple(&'a GlyphStore, ByteIndex),
Detail(&'a GlyphStore, ByteIndex, u16),
}
impl<'a> GlyphInfo<'a> {
pub fn id(self) -> GlyphId {
match self {
GlyphInfo::Simple(store, entry_i) => store.entry_buffer[entry_i.to_usize()].id(),
GlyphInfo::Detail(store, entry_i, detail_j) => {
store
.detail_store
.detailed_glyph_with_index(entry_i, detail_j)
.id
},
}
}
#[inline(always)]
// FIXME: Resolution conflicts with IteratorUtil trait so adding trailing _
pub fn advance(self) -> Au {
match self {
GlyphInfo::Simple(store, entry_i) => store.entry_buffer[entry_i.to_usize()].advance(),
GlyphInfo::Detail(store, entry_i, detail_j) => {
store
.detail_store
.detailed_glyph_with_index(entry_i, detail_j)
.advance
},
}
}
#[inline]
pub fn offset(self) -> Option<Point2D<Au>> {
match self {
GlyphInfo::Simple(_, _) => None,
GlyphInfo::Detail(store, entry_i, detail_j) => Some(
store
.detail_store
.detailed_glyph_with_index(entry_i, detail_j)
.offset,
),
}
}
pub fn char_is_space(self) -> bool {
let (store, entry_i) = match self {
GlyphInfo::Simple(store, entry_i) => (store, entry_i),
GlyphInfo::Detail(store, entry_i, _) => (store, entry_i),
};
store.char_is_space(entry_i)
}
}
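The proxy idea behind `GlyphInfo` — hand out a small `Copy` value that borrows the store and fetches fields on demand, instead of eagerly assembling a struct per glyph — can be sketched generically. `Store` and `Info` here are illustrative names, not types from this file:

```rust
// A store of packed values plus a lightweight Copy proxy that reads
// fields lazily, mirroring how GlyphInfo defers to GlyphStore.
struct Store {
    values: Vec<u32>,
}

#[derive(Clone, Copy)]
enum Info<'a> {
    Simple(&'a Store, usize),
}

impl<'a> Info<'a> {
    // No data is copied until a field is actually requested.
    fn value(self) -> u32 {
        match self {
            Info::Simple(store, i) => store.values[i],
        }
    }
}
```

Because the proxy is only a reference plus an index, iterating a long run hands out cheap values and touches the backing buffers only for the fields the caller actually reads.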
/// Stores the glyph data belonging to a text run.
///
/// Simple glyphs are stored inline in the `entry_buffer`, detailed glyphs are
/// stored as pointers into the `detail_store`.
///
/// ~~~ascii
/// +- GlyphStore --------------------------------+
/// | +---+---+---+---+---+---+---+ |
/// | entry_buffer: | | s | | s | | s | s | | d = detailed
/// | +-|-+---+-|-+---+-|-+---+---+ | s = simple
/// | | | | |
/// | | +---+-------+ |
/// | | | |
/// | +-V-+-V-+ |
/// | detail_store: | d | d | |
/// | +---+---+ |
/// +---------------------------------------------+
/// ~~~
#[derive(Clone, Deserialize, Serialize)]
pub struct GlyphStore {
// TODO(pcwalton): Allocation of this buffer is expensive. Consider a small-vector
// optimization.
/// A buffer of glyphs within the text run, in the order in which they
/// appear in the input text.
/// Any changes will also need to be reflected in
/// transmute_entry_buffer_to_u32_buffer().
entry_buffer: Vec<GlyphEntry>,
/// A store of the detailed glyph data. Detailed glyphs contained in the
/// `entry_buffer` point to locations in this data structure.
detail_store: DetailedGlyphStore,
/// A cache of the advance of the entire glyph store.
total_advance: Au,
/// A cache of the number of spaces in the entire glyph store.
total_spaces: i32,
/// Used to check if fast path should be used in glyph iteration.
has_detailed_glyphs: bool,
is_whitespace: bool,
is_rtl: bool,
}
impl<'a> GlyphStore {
/// Initializes the glyph store, but doesn't actually shape anything.
///
/// Use the `add_*` methods to store glyph data.
pub fn new(length: usize, is_whitespace: bool, is_rtl: bool) -> GlyphStore {
assert!(length > 0);
GlyphStore {
entry_buffer: vec![GlyphEntry::initial(); length],
detail_store: DetailedGlyphStore::new(),
total_advance: Au(0),
total_spaces: 0,
has_detailed_glyphs: false,
is_whitespace: is_whitespace,
is_rtl: is_rtl,
}
}
#[inline]
pub fn len(&self) -> ByteIndex {
ByteIndex(self.entry_buffer.len() as isize)
}
#[inline]
pub fn is_whitespace(&self) -> bool {
self.is_whitespace
}
pub fn finalize_changes(&mut self) {
self.detail_store.ensure_sorted();
self.cache_total_advance_and_spaces()
}
#[inline(never)]
fn cache_total_advance_and_spaces(&mut self) {
let mut total_advance = Au(0);
let mut total_spaces = 0;
for glyph in self.iter_glyphs_for_byte_range(&Range::new(ByteIndex(0), self.len())) {
total_advance = total_advance + glyph.advance();
if glyph.char_is_space() {
total_spaces += 1;
}
}
self.total_advance = total_advance;
self.total_spaces = total_spaces;
}
/// Adds a single glyph.
pub fn add_glyph_for_byte_index(&mut self, i: ByteIndex, character: char, data: &GlyphData) {
let glyph_is_compressible = is_simple_glyph_id(data.id) &&
is_simple_advance(data.advance) &&
data.offset == Point2D::zero() &&
data.cluster_start; // others are stored in detail buffer
debug_assert!(data.ligature_start); // can't compress ligature continuation glyphs.
debug_assert!(i < self.len());
let mut entry = if glyph_is_compressible {
GlyphEntry::simple(data.id, data.advance)
} else {
let glyph = &[DetailedGlyph::new(data.id, data.advance, data.offset)];
self.has_detailed_glyphs = true;
self.detail_store.add_detailed_glyphs_for_entry(i, glyph);
GlyphEntry::complex(data.cluster_start, data.ligature_start, 1)
};
if character == ' ' {
entry.set_char_is_space()
}
self.entry_buffer[i.to_usize()] = entry;
}
pub fn add_glyphs_for_byte_index(&mut self, i: ByteIndex, data_for_glyphs: &[GlyphData]) {
assert!(i < self.len());
assert!(data_for_glyphs.len() > 0);
let glyph_count = data_for_glyphs.len();
let first_glyph_data = data_for_glyphs[0];
let glyphs_vec: Vec<DetailedGlyph> = (0..glyph_count)
.map(|i| {
DetailedGlyph::new(
data_for_glyphs[i].id,
data_for_glyphs[i].advance,
data_for_glyphs[i].offset,
)
})
.collect();
self.has_detailed_glyphs = true;
self.detail_store
.add_detailed_glyphs_for_entry(i, &glyphs_vec);
let entry = GlyphEntry::complex(
first_glyph_data.cluster_start,
first_glyph_data.ligature_start,
glyph_count,
);
debug!(
"Adding multiple glyphs[idx={:?}, count={}]: {:?}",
i, glyph_count, entry
);
self.entry_buffer[i.to_usize()] = entry;
}
#[inline]
pub fn iter_glyphs_for_byte_range(&'a self, range: &Range<ByteIndex>) -> GlyphIterator<'a> {
if range.begin() >= self.len() {
panic!("iter_glyphs_for_range: range.begin beyond length!");
}
if range.end() > self.len() {
panic!("iter_glyphs_for_range: range.end beyond length!");
}
GlyphIterator {
store: self,
byte_index: if self.is_rtl {
range.end()
} else {
range.begin() - ByteIndex(1)
},
byte_range: *range,
glyph_range: None,
}
}
// Scan the glyphs for a given range until we reach a given advance. Returns the index
// and advance of the glyph in the range at the given advance, if reached. Otherwise, returns
// the number of glyphs and the advance for the given range.
#[inline]
pub fn range_index_of_advance(
&self,
range: &Range<ByteIndex>,
advance: Au,
extra_word_spacing: Au,
) -> (usize, Au) {
let mut index = 0;
let mut current_advance = Au(0);
for glyph in self.iter_glyphs_for_byte_range(range) {
if glyph.char_is_space() {
current_advance += glyph.advance() + extra_word_spacing
} else {
current_advance += glyph.advance()
}
if current_advance > advance {
break;
}
index += 1;
}
(index, current_advance)
}
#[inline]
pub fn advance_for_byte_range(&self, range: &Range<ByteIndex>, extra_word_spacing: Au) -> Au {
if range.begin() == ByteIndex(0) && range.end() == self.len() {
self.total_advance + extra_word_spacing * self.total_spaces
} else if !self.has_detailed_glyphs {
self.advance_for_byte_range_simple_glyphs(range, extra_word_spacing)
} else {
self.advance_for_byte_range_slow_path(range, extra_word_spacing)
}
}
#[inline]
pub fn advance_for_byte_range_slow_path(
&self,
range: &Range<ByteIndex>,
extra_word_spacing: Au,
) -> Au {
self.iter_glyphs_for_byte_range(range)
.fold(Au(0), |advance, glyph| {
if glyph.char_is_space() {
advance + glyph.advance() + extra_word_spacing
} else {
advance + glyph.advance()
}
})
}
#[inline]
#[cfg(feature = "unstable")]
#[cfg(any(target_feature = "sse2", target_feature = "neon"))]
fn advance_for_byte_range_simple_glyphs(
&self,
range: &Range<ByteIndex>,
extra_word_spacing: Au,
) -> Au {
let advance_mask = u32x4::splat(GLYPH_ADVANCE_MASK);
let space_flag_mask = u32x4::splat(FLAG_CHAR_IS_SPACE);
let mut simd_advance = u32x4::splat(0);
let mut simd_spaces = u32x4::splat(0);
let begin = range.begin().to_usize();
let len = range.length().to_usize();
let num_simd_iterations = len / 4;
let leftover_entries = range.end().to_usize() - (len - num_simd_iterations * 4);
let buf = self.transmute_entry_buffer_to_u32_buffer();
for i in 0..num_simd_iterations {
let offset = begin + i * 4;
let v = u32x4::load_unaligned(&buf[offset..]);
let advance = (v & advance_mask) >> GLYPH_ADVANCE_SHIFT;
let spaces = (v & space_flag_mask) >> FLAG_CHAR_IS_SPACE_SHIFT;
simd_advance = simd_advance + advance;
simd_spaces = simd_spaces + spaces;
}
let advance = (simd_advance.extract(0) +
simd_advance.extract(1) +
simd_advance.extract(2) +
simd_advance.extract(3)) as i32;
let spaces = (simd_spaces.extract(0) +
simd_spaces.extract(1) +
simd_spaces.extract(2) +
simd_spaces.extract(3)) as i32;
let mut leftover_advance = Au(0);
let mut leftover_spaces = 0;
for i in leftover_entries..range.end().to_usize() {
leftover_advance = leftover_advance + self.entry_buffer[i].advance();
if self.entry_buffer[i].char_is_space() {
leftover_spaces += 1;
}
}
Au::new(advance) + leftover_advance + extra_word_spacing * (spaces + leftover_spaces)
}
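The SIMD loop above sums advances and space flags four packed entries at a time; the same mask-and-shift accumulation can be written in scalar form, which is what the fast path computes per lane. The mask constants are the ones defined in this file; the function name is illustrative:

```rust
// Field layout of a packed simple-glyph entry, as defined in glyph.rs.
const GLYPH_ADVANCE_MASK: u32 = 0x3FFF0000;
const GLYPH_ADVANCE_SHIFT: u32 = 16;
const FLAG_CHAR_IS_SPACE: u32 = 0x40000000;
const FLAG_CHAR_IS_SPACE_SHIFT: u32 = 30;

// Scalar equivalent of the SIMD fast path: one pass that extracts the
// advance field and the space flag from each packed u32 entry.
fn total_advance_and_spaces(entries: &[u32]) -> (u32, u32) {
    let mut advance = 0;
    let mut spaces = 0;
    for &e in entries {
        advance += (e & GLYPH_ADVANCE_MASK) >> GLYPH_ADVANCE_SHIFT;
        spaces += (e & FLAG_CHAR_IS_SPACE) >> FLAG_CHAR_IS_SPACE_SHIFT; // 1 if set
    }
    (advance, spaces)
}
```

The SIMD version does exactly this per 32-bit lane and then horizontally sums the four lane accumulators with `extract`.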
/// When SIMD isn't available, fallback to the slow path.
#[inline]
#[cfg(not(all(
feature = "unstable",
any(target_feature = "sse2", target_feature = "neon")
)))]
fn advance_for_byte_range_simple_glyphs(
&self,
range: &Range<ByteIndex>,
extra_word_spacing: Au,
) -> Au {
self.advance_for_byte_range_slow_path(range, extra_word_spacing)
}
/// Used for SIMD.
#[inline]
#[cfg(feature = "unstable")]
#[cfg(any(target_feature = "sse2", target_feature = "neon"))]
#[allow(unsafe_code)]
fn transmute_entry_buffer_to_u32_buffer(&self) -> &[u32] {
// Statically assert identical sizes
let _ = mem::transmute::<GlyphEntry, u32>;
unsafe { mem::transmute::<&[GlyphEntry], &[u32]>(self.entry_buffer.as_slice()) }
}
pub fn char_is_space(&self, i: ByteIndex) -> bool {
assert!(i < self.len());
self.entry_buffer[i.to_usize()].char_is_space()
}
pub fn space_count_in_range(&self, range: &Range<ByteIndex>) -> u32 {
let mut spaces = 0;
for index in range.each_index() {
if self.char_is_space(index) {
spaces += 1
}
}
spaces
}
}
fn is_simple_glyph_id(id: GlyphId) -> bool {
((id as u32) & GLYPH_ID_MASK) == id
}
fn is_simple_advance(advance: Au) -> bool {
advance >= Au(0) && {
let unsigned_au = advance.0 as u32;
(unsigned_au & (GLYPH_ADVANCE_MASK >> GLYPH_ADVANCE_SHIFT)) == unsigned_au
}
}
pub type DetailedGlyphCount = u16;
// Getters and setters for GlyphEntry. Setter methods are functional,
// because GlyphEntry is immutable and only a u32 in size.
impl GlyphEntry {
#[inline(always)]
fn advance(&self) -> Au {
Au::new(((self.value & GLYPH_ADVANCE_MASK) >> GLYPH_ADVANCE_SHIFT) as i32)
}
#[inline]
fn id(&self) -> GlyphId {
self.value & GLYPH_ID_MASK
}
/// True if original char was normal (U+0020) space. Other chars may
/// map to space glyph, but this does not account for them.
fn char_is_space(&self) -> bool {
self.has_flag(FLAG_CHAR_IS_SPACE)
}
#[inline(always)]
fn set_char_is_space(&mut self) {
self.value |= FLAG_CHAR_IS_SPACE;
}
fn glyph_count(&self) -> u16 {
assert!(!self.is_simple());
(self.value & GLYPH_COUNT_MASK) as u16
}
#[inline(always)]
fn is_simple(&self) -> bool {
self.has_flag(FLAG_IS_SIMPLE_GLYPH)
}
#[inline(always)]
fn has_flag(&self, flag: u32) -> bool {
(self.value & flag) != 0
}
}
// Stores data for a detailed glyph, in the case that several glyphs
// correspond to one character, or the glyph's data couldn't be packed.
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
struct DetailedGlyph {
id: GlyphId,
// glyph's advance, in the text's direction (LTR or RTL)
advance: Au,
// glyph's offset from the font's em-box (from top-left)
offset: Point2D<Au>,
}
impl DetailedGlyph {
fn new(id: GlyphId, advance: Au, offset: Point2D<Au>) -> DetailedGlyph {
DetailedGlyph {
id: id,
advance: advance,
offset: offset,
}
}
}
#[derive(Clone, Copy, Debug, Deserialize, Eq, PartialEq, Serialize)]
struct DetailedGlyphRecord {
// source string offset/GlyphEntry offset in the TextRun
entry_offset: ByteIndex,
// offset into the detailed glyphs buffer
detail_offset: usize,
}
impl PartialOrd for DetailedGlyphRecord {
fn partial_cmp(&self, other: &DetailedGlyphRecord) -> Option<Ordering> {
self.entry_offset.partial_cmp(&other.entry_offset)
}
}
impl Ord for DetailedGlyphRecord {
fn cmp(&self, other: &DetailedGlyphRecord) -> Ordering {
self.entry_offset.cmp(&other.entry_offset)
}
}
// Manages the lookup table for detailed glyphs. Sorting is deferred
// until a lookup is actually performed; this matches the expected
// usage pattern of setting/appending all the detailed glyphs, and
// then querying without setting.
#[derive(Clone, Deserialize, Serialize)]
struct DetailedGlyphStore {
// TODO(pcwalton): Allocation of this buffer is expensive. Consider a small-vector
// optimization.
detail_buffer: Vec<DetailedGlyph>,
// TODO(pcwalton): Allocation of this buffer is expensive. Consider a small-vector
// optimization.
detail_lookup: Vec<DetailedGlyphRecord>,
lookup_is_sorted: bool,
}
impl<'a> DetailedGlyphStore {
fn new() -> DetailedGlyphStore {
DetailedGlyphStore {
detail_buffer: vec![], // TODO: default size?
detail_lookup: vec![],
lookup_is_sorted: false,
}
}
fn add_detailed_glyphs_for_entry(&mut self, entry_offset: ByteIndex, glyphs: &[DetailedGlyph]) {
let entry = DetailedGlyphRecord {
entry_offset: entry_offset,
detail_offset: self.detail_buffer.len(),
};
debug!(
"Adding entry[off={:?}] for detailed glyphs: {:?}",
entry_offset, glyphs
);
debug_assert!(!self.detail_lookup.contains(&entry));
self.detail_lookup.push(entry);
self.detail_buffer.extend_from_slice(glyphs);
self.lookup_is_sorted = false;
}
fn detailed_glyphs_for_entry(
&'a self,
entry_offset: ByteIndex,
count: u16,
) -> &'a [DetailedGlyph] {
debug!(
"Requesting detailed glyphs[n={}] for entry[off={:?}]",
count, entry_offset
);
// FIXME: Is this right? --pcwalton
// TODO: should fix this somewhere else
if count == 0 {
return &self.detail_buffer[0..0];
}
assert!((count as usize) <= self.detail_buffer.len());
assert!(self.lookup_is_sorted);
let key = DetailedGlyphRecord {
entry_offset: entry_offset,
detail_offset: 0, // unused
};
let i = self
.detail_lookup
.binary_search(&key)
.expect("Invalid index not found in detailed glyph lookup table!");
let main_detail_offset = self.detail_lookup[i].detail_offset;
assert!(main_detail_offset + (count as usize) <= self.detail_buffer.len());
// return a slice into the buffer
&self.detail_buffer[main_detail_offset..main_detail_offset + count as usize]
}
fn detailed_glyph_with_index(
&'a self,
entry_offset: ByteIndex,
detail_offset: u16,
) -> &'a DetailedGlyph {
assert!((detail_offset as usize) <= self.detail_buffer.len());
assert!(self.lookup_is_sorted);
let key = DetailedGlyphRecord {
entry_offset: entry_offset,
detail_offset: 0, // unused
};
let i = self
.detail_lookup
.binary_search(&key)
.expect("Invalid index not found in detailed glyph lookup table!");
let main_detail_offset = self.detail_lookup[i].detail_offset;
assert!(main_detail_offset + (detail_offset as usize) < self.detail_buffer.len());
&self.detail_buffer[main_detail_offset + (detail_offset as usize)]
}
fn ensure_sorted(&mut self) {
if self.lookup_is_sorted {
return;
}
// Sorting a unique vector is surprisingly hard. The following
// code is a good argument for using DVecs, but they require
// immutable locations thus don't play well with freezing.
// Thar be dragons here. You have been warned. (Tips accepted.)
let mut unsorted_records: Vec<DetailedGlyphRecord> = vec![];
mem::swap(&mut self.detail_lookup, &mut unsorted_records);
let mut mut_records: Vec<DetailedGlyphRecord> = unsorted_records;
mut_records.sort_by(|a, b| {
if a < b {
Ordering::Less
} else {
Ordering::Greater
}
});
let mut sorted_records = mut_records;
mem::swap(&mut self.detail_lookup, &mut sorted_records);
self.lookup_is_sorted = true;
}
}
// This struct is used by GlyphStore clients to provide new glyph data.
// It should be allocated on the stack and passed by reference to GlyphStore.
#[derive(Clone, Copy)]
pub struct GlyphData {
id: GlyphId,
advance: Au,
offset: Point2D<Au>,
cluster_start: bool,
ligature_start: bool,
}
impl GlyphData {
/// Creates a new entry for one glyph.
pub fn new(
id: GlyphId,
advance: Au,
offset: Option<Point2D<Au>>,
cluster_start: bool,
ligature_start: bool,
) -> GlyphData {
GlyphData {
id: id,
advance: advance,
offset: offset.unwrap_or(Point2D::zero()),
cluster_start: cluster_start,
ligature_start: ligature_start,
}
}
}
// This enum is a proxy that's provided to GlyphStore clients when iterating
// through glyphs (either for a particular TextRun offset, or all glyphs).
// Rather than eagerly assembling and copying glyph data, it only retrieves
// values as they are needed from the GlyphStore, using provided offsets.
#[derive(Clone, Copy)]
pub enum GlyphInfo<'a> {
Simple(&'a GlyphStore, ByteIndex),
Detail(&'a GlyphStore, ByteIndex, u16),
}
impl<'a> GlyphInfo<'a> {
pub fn id(self) -> GlyphId {
match self {
GlyphInfo::Simple(store, entry_i) => store.entry_buffer[entry_i.to_usize()].id(),
GlyphInfo::Detail(store, entry_i, detail_j) => {
store
.detail_store
.detailed_glyph_with_index(entry_i, detail_j)
.id
},
}
}
#[inline(always)]
// FIXME: Resolution conflicts with IteratorUtil trait so adding trailing _
pub fn advance(self) -> Au {
match self {
GlyphInfo::Simple(store, entry_i) => store.entry_buffer[entry_i.to_usize()].advance(),
GlyphInfo::Detail(store, entry_i, detail_j) => {
store
.detail_store
.detailed_glyph_with_index(entry_i, detail_j)
.advance
},
}
}
#[inline]
pub fn offset(self) -> Option<Point2D<Au>> {
match self {
GlyphInfo::Simple(_, _) => None,
GlyphInfo::Detail(store, entry_i, detail_j) => Some(
store
.detail_store
.detailed_glyph_with_index(entry_i, detail_j)
.offset,
),
}
}
pub fn char_is_space(self) -> bool {
let (store, entry_i) = match self {
GlyphInfo::Simple(store, entry_i) => (store, entry_i),
GlyphInfo::Detail(store, entry_i, _) => (store, entry_i),
};
store.char_is_space(entry_i)
}
}
/// Stores the glyph data belonging to a text run.
///
/// Simple glyphs are stored inline in the `entry_buffer`, detailed glyphs are
/// stored as pointers into the `detail_store`.
///
/// ~~~ascii
/// +- GlyphStore --------------------------------+
/// | +---+---+---+---+---+---+---+ |
/// | entry_buffer: | | s | | s | | s | s | | d = detailed
/// | +-|-+---+-|-+---+-|-+---+---+ | s = simple
/// | | | | |
/// | | +---+-------+ |
/// | | | |
/// | +-V-+-V-+ |
/// | detail_store: | d | d | |
/// | +---+---+ |
/// +---------------------------------------------+
/// ~~~
#[derive(Clone, Deserialize, Serialize)]
pub struct GlyphStore {
// TODO(pcwalton): Allocation of this buffer is expensive. Consider a small-vector
// optimization.
/// A buffer of glyphs within the text run, in the order in which they
/// appear in the input text.
/// Any changes will also need to be reflected in
/// transmute_entry_buffer_to_u32_buffer().
entry_buffer: Vec<GlyphEntry>,
/// A store of the detailed glyph data. Detailed glyphs contained in the
/// `entry_buffer` point to locations in this data structure.
detail_store: DetailedGlyphStore,
/// A cache of the advance of the entire glyph store.
total_advance: Au,
/// A cache of the number of spaces in the entire glyph store.
total_spaces: i32,
/// Used to check if fast path should be used in glyph iteration.
has_detailed_glyphs: bool,
is_whitespace: bool,
is_rtl: bool,
}
impl<'a> GlyphStore {
/// Initializes the glyph store, but doesn't actually shape anything.
///
/// Use the `add_*` methods to store glyph data.
pub fn new(length: usize, is_whitespace: bool, is_rtl: bool) -> GlyphStore {
assert!(length > 0);
GlyphStore {
entry_buffer: vec![GlyphEntry::initial(); length],
detail_store: DetailedGlyphStore::new(),
total_advance: Au(0),
total_spaces: 0,
has_detailed_glyphs: false,
is_whitespace: is_whitespace,
is_rtl: is_rtl,
}
}
#[inline]
pub fn len(&self) -> ByteIndex {
ByteIndex(self.entry_buffer.len() as isize)
}
#[inline]
pub fn is_whitespace(&self) -> bool {
self.is_whitespace
}
pub fn finalize_changes(&mut self) {
self.detail_store.ensure_sorted();
self.cache_total_advance_and_spaces()
}
#[inline(never)]
fn cache_total_advance_and_spaces(&mut self) {
let mut total_advance = Au(0);
let mut total_spaces = 0;
for glyph in self.iter_glyphs_for_byte_range(&Range::new(ByteIndex(0), self.len())) {
total_advance = total_advance + glyph.advance();
if glyph.char_is_space() {
total_spaces += 1;
}
}
self.total_advance = total_advance;
self.total_spaces = total_spaces;
}
/// Adds a single glyph.
pub fn add_glyph_for_byte_index(&mut self, i: ByteIndex, character: char, data: &GlyphData) {
let glyph_is_compressible = is_simple_glyph_id(data.id) &&
is_simple_advance(data.advance) &&
data.offset == Point2D::zero() &&
data.cluster_start; // others are stored in detail buffer
debug_assert!(data.ligature_start); // can't compress ligature continuation glyphs.
debug_assert!(i < self.len());
let mut entry = if glyph_is_compressible {
GlyphEntry::simple(data.id, data.advance)
} else {
let glyph = &[DetailedGlyph::new(data.id, data.advance, data.offset)];
self.has_detailed_glyphs = true;
self.detail_store.add_detailed_glyphs_for_entry(i, glyph);
GlyphEntry::complex(data.cluster_start, data.ligature_start, 1)
};
if character == ' ' {
entry.set_char_is_space()
}
self.entry_buffer[i.to_usize()] = entry;
}
pub fn add_glyphs_for_byte_index(&mut self, i: ByteIndex, data_for_glyphs: &[GlyphData]) {
assert!(i < self.len());
assert!(data_for_glyphs.len() > 0);
let glyph_count = data_for_glyphs.len();
let first_glyph_data = data_for_glyphs[0];
let glyphs_vec: Vec<DetailedGlyph> = (0..glyph_count)
.map(|i| {
DetailedGlyph::new(
data_for_glyphs[i].id,
data_for_glyphs[i].advance,
data_for_glyphs[i].offset,
)
})
.collect();
self.has_detailed_glyphs = true;
self.detail_store
.add_detailed_glyphs_for_entry(i, &glyphs_vec);
let entry = GlyphEntry::complex(
first_glyph_data.cluster_start,
first_glyph_data.ligature_start,
glyph_count,
);
debug!(
"Adding multiple glyphs[idx={:?}, count={}]: {:?}",
i, glyph_count, entry
);
self.entry_buffer[i.to_usize()] = entry;
}
#[inline]
pub fn iter_glyphs_for_byte_range(&'a self, range: &Range<ByteIndex>) -> GlyphIterator<'a> {
if range.begin() >= self.len() {
panic!("iter_glyphs_for_range: range.begin beyond length!");
}
if range.end() > self.len() {
panic!("iter_glyphs_for_range: range.end beyond length!");
}
GlyphIterator {
store: self,
byte_index: if self.is_rtl {
range.end()
} else {
range.begin() - ByteIndex(1)
},
byte_range: *range,
glyph_range: None,
}
}
// Scan the glyphs for a given range until we reach a given advance. Returns the index
// and advance of the glyph in the range at the given advance, if reached. Otherwise, returns
// the number of glyphs and the advance for the given range.
#[inline]
pub fn range_index_of_advance(
&self,
range: &Range<ByteIndex>,
advance: Au,
extra_word_spacing: Au,
) -> (usize, Au) {
let mut index = 0;
let mut current_advance = Au(0);
for glyph in self.iter_glyphs_for_byte_range(range) {
if glyph.char_is_space() {
current_advance += glyph.advance() + extra_word_spacing
} else {
current_advance += glyph.advance()
}
if current_advance > advance {
break;
}
index += 1;
}
(index, current_advance)
}
#[inline]
pub fn advance_for_byte_range(&self, range: &Range<ByteIndex>, extra_word_spacing: Au) -> Au {
if range.begin() == ByteIndex(0) && range.end() == self.len() {
self.total_advance + extra_word_spacing * self.total_spaces
} else if !self.has_detailed_glyphs {
self.advance_for_byte_range_simple_glyphs(range, extra_word_spacing)
} else {
self.advance_for_byte_range_slow_path(range, extra_word_spacing)
}
}
#[inline]
pub fn advance_for_byte_range_slow_path(
&self,
range: &Range<ByteIndex>,
extra_word_spacing: Au,
) -> Au {
self.iter_glyphs_for_byte_range(range)
.fold(Au(0), |advance, glyph| {
if glyph.char_is_space() {
advance + glyph.advance() + extra_word_spacing
} else {
advance + glyph.advance()
}
})
}
#[inline]
#[cfg(feature = "unstable")]
#[cfg(any(target_feature = "sse2", target_feature = "neon"))]
fn advance_for_byte_range_simple_glyphs(
&self,
range: &Range<ByteIndex>,
extra_word_spacing: Au,
) -> Au {
let advance_mask = u32x4::splat(GLYPH_ADVANCE_MASK);
let space_flag_mask = u32x4::splat(FLAG_CHAR_IS_SPACE);
let mut simd_advance = u32x4::splat(0);
let mut simd_spaces = u32x4::splat(0);
let begin = range.begin().to_usize();
let len = range.length().to_usize();
let num_simd_iterations = len / 4;
let leftover_entries = range.end().to_usize() - (len - num_simd_iterations * 4);
let buf = self.transmute_entry_buffer_to_u32_buffer();
for i in 0..num_simd_iterations {
let offset = begin + i * 4;
let v = u32x4::load_unaligned(&buf[offset..]);
let advance = (v & advance_mask) >> GLYPH_ADVANCE_SHIFT;
let spaces = (v & space_flag_mask) >> FLAG_CHAR_IS_SPACE_SHIFT;
simd_advance = simd_advance + advance;
simd_spaces = simd_spaces + spaces;
}
let advance = (simd_advance.extract(0) +
simd_advance.extract(1) +
simd_advance.extract(2) +
simd_advance.extract(3)) as i32;
}
fn is_initial(&self) -> bool {
*self == GlyphEntry::initial()
}
}
/// The id of a particular glyph within a font
pub type GlyphId = u32;
// TODO: make this more type-safe.
const FLAG_CHAR_IS_SPACE: u32 = 0x40000000;
#[cfg(feature = "unstable")]
#[cfg(any(target_feature = "sse2", target_feature = "neon"))]
const FLAG_CHAR_IS_SPACE_SHIFT: u32 = 30;
const FLAG_IS_SIMPLE_GLYPH: u32 = 0x80000000;
// glyph advance; in Au's.
const GLYPH_ADVANCE_MASK: u32 = 0x3FFF0000;
const GLYPH_ADVANCE_SHIFT: u32 = 16;
const GLYPH_ID_MASK: u32 = 0x0000FFFF;
// Non-simple glyphs (more than one glyph per char; missing glyph,
// newline, tab, large advance, or nonzero x/y offsets) may have one
// or more detailed glyphs associated with them. They are stored in a
// side array so that there is a 1:1 mapping of GlyphEntry to
// unicode char.
// The number of detailed glyphs for this char.
const GLYPH_COUNT_MASK: u32 = 0x0000FFFF;
fn is_simple_glyph_id(id: GlyphId) -> bool {
((id as u32) & GLYPH_ID_MASK) == id
}
fn is_simple_advance(advance: Au) -> bool {
advance >= Au(0) && {
let unsigned_au = advance.0 as u32;
(unsigned_au & (GLYPH_ADVANCE_MASK >> GLYPH_ADVANCE_SHIFT)) == unsigned_au
}
}
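The masks above pack a simple glyph's id (low 16 bits), advance (bits 16–29), and flags (bits 30–31) into a single u32. A standalone sketch of the round trip, using the same mask values; the pack/unpack function names are illustrative:

```rust
const FLAG_IS_SIMPLE_GLYPH: u32 = 0x80000000;
const GLYPH_ADVANCE_MASK: u32 = 0x3FFF0000;
const GLYPH_ADVANCE_SHIFT: u32 = 16;
const GLYPH_ID_MASK: u32 = 0x0000FFFF;

// Pack a simple glyph entry; the caller must ensure both fields fit,
// which is what is_simple_glyph_id / is_simple_advance check.
fn pack_simple(id: u32, advance: u32) -> u32 {
    debug_assert!((id & GLYPH_ID_MASK) == id);
    debug_assert!(advance <= GLYPH_ADVANCE_MASK >> GLYPH_ADVANCE_SHIFT);
    FLAG_IS_SIMPLE_GLYPH | (advance << GLYPH_ADVANCE_SHIFT) | id
}

fn unpack_id(value: u32) -> u32 {
    value & GLYPH_ID_MASK
}

fn unpack_advance(value: u32) -> u32 {
    (value & GLYPH_ADVANCE_MASK) >> GLYPH_ADVANCE_SHIFT
}
```

Entries that fail either fit check fall back to the complex encoding, where the low bits hold a detailed-glyph count instead of an id.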
pub type DetailedGlyphCount = u16;
// Getters and setters for GlyphEntry. Setter methods are functional,
// because GlyphEntry is immutable and only a u32 in size.
impl GlyphEntry {
#[inline(always)]
fn advance(&self) -> Au {
Au::new(((self.value & GLYPH_ADVANCE_MASK) >> GLYPH_ADVANCE_SHIFT) as i32)
}
#[inline]
fn id(&self) -> GlyphId {
self.value & GLYPH_ID_MASK
}
/// True if original char was normal (U+0020) space. Other chars may
/// map to space glyph, but this does not account for them.
fn char_is_space(&self) -> bool {
self.has_flag(FLAG_CHAR_IS_SPACE)
}
#[inline(always)]
fn set_char_is_space(&mut self) {
self.value |= FLAG_CHAR_IS_SPACE;
}
fn glyph_count(&self) -> u16 {
assert!(!self.is_simple());
(self.value & GLYPH_COUNT_MASK) as u16
}
#[inline(always)]
fn is_simple(&self) -> bool {
self.has_flag(FLAG_IS_SIMPLE_GLYPH)
}
#[inline(always)]
fn has_flag(&self, flag: u32) -> bool {
(self.value & flag) != 0
}
}
// Stores data for a detailed glyph, in the case that several glyphs
// correspond to one character, or the glyph's data couldn't be packed.
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
struct DetailedGlyph {
id: GlyphId,
// glyph's advance, in the text's direction (LTR or RTL)
advance: Au,
// glyph's offset from the font's em-box (from top-left)
offset: Point2D<Au>,
}
impl DetailedGlyph {
fn new(id: GlyphId, advance: Au, offset: Point2D<Au>) -> DetailedGlyph {
DetailedGlyph {
id: id,
advance: advance,
offset: offset,
}
}
}
#[derive(Clone, Copy, Debug, Deserialize, Eq, PartialEq, Serialize)]
struct DetailedGlyphRecord {
// source string offset/GlyphEntry offset in the TextRun
entry_offset: ByteIndex,
// offset into the detailed glyphs buffer
detail_offset: usize,
}
impl PartialOrd for DetailedGlyphRecord {
fn partial_cmp(&self, other: &DetailedGlyphRecord) -> Option<Ordering> {
self.entry_offset.partial_cmp(&other.entry_offset)
}
}
impl Ord for DetailedGlyphRecord {
fn cmp(&self, other: &DetailedGlyphRecord) -> Ordering {
self.entry_offset.cmp(&other.entry_offset)
}
}
// Manages the lookup table for detailed glyphs. Sorting is deferred
// until a lookup is actually performed; this matches the expected
// usage pattern of setting/appending all the detailed glyphs, and
// then querying without setting.
#[derive(Clone, Deserialize, Serialize)]
struct DetailedGlyphStore {
// TODO(pcwalton): Allocation of this buffer is expensive. Consider a small-vector
// optimization.
detail_buffer: Vec<DetailedGlyph>,
// TODO(pcwalton): Allocation of this buffer is expensive. Consider a small-vector
// optimization.
detail_lookup: Vec<DetailedGlyphRecord>,
lookup_is_sorted: bool,
}
impl<'a> DetailedGlyphStore {
fn new() -> DetailedGlyphStore {
DetailedGlyphStore {
detail_buffer: vec![], // TODO: default size?
detail_lookup: vec![],
lookup_is_sorted: false,
}
}
fn add_detailed_glyphs_for_entry(&mut self, entry_offset: ByteIndex, glyphs: &[DetailedGlyph]) {
let entry = DetailedGlyphRecord {
entry_offset: entry_offset,
detail_offset: self.detail_buffer.len(),
};
debug!(
"Adding entry[off={:?}] for detailed glyphs: {:?}",
entry_offset, glyphs
);
debug_assert!(!self.detail_lookup.contains(&entry));
self.detail_lookup.push(entry);
self.detail_buffer.extend_from_slice(glyphs);
self.lookup_is_sorted = false;
}
fn detailed_glyphs_for_entry(
&'a self,
entry_offset: ByteIndex,
count: u16,
) -> &'a [DetailedGlyph] {
debug!(
"Requesting detailed glyphs[n={}] for entry[off={:?}]",
count, entry_offset
);
// FIXME: Is this right? --pcwalton
// TODO: should fix this somewhere else
if count == 0 {
return &self.detail_buffer[0..0];
}
assert!((count as usize) <= self.detail_buffer.len());
assert!(self.lookup_is_sorted);
let key = DetailedGlyphRecord {
entry_offset: entry_offset,
detail_offset: 0, // unused
};
let i = self
.detail_lookup
.binary_search(&key)
.expect("Invalid index not found in detailed glyph lookup table!");
let main_detail_offset = self.detail_lookup[i].detail_offset;
assert!(main_detail_offset + (count as usize) <= self.detail_buffer.len());
// return a slice into the buffer
&self.detail_buffer[main_detail_offset..main_detail_offset + count as usize]
}
fn detailed_glyph_with_index(
&'a self,
entry_offset: ByteIndex,
detail_offset: u16,
) -> &'a DetailedGlyph {
assert!((detail_offset as usize) <= self.detail_buffer.len());
assert!(self.lookup_is_sorted);
let key = DetailedGlyphRecord {
entry_offset: entry_offset,
detail_offset: 0, // unused
};
let i = self
.detail_lookup
.binary_search(&key)
.expect("Invalid index not found in detailed glyph lookup table!");
let main_detail_offset = self.detail_lookup[i].detail_offset;
assert!(main_detail_offset + (detail_offset as usize) < self.detail_buffer.len());
&self.detail_buffer[main_detail_offset + (detail_offset as usize)]
}
fn ensure_sorted(&mut self) {
if self.lookup_is_sorted {
return;
}
// Sorting a unique vector is surprisingly hard. The following
// code is a good argument for using DVecs, but they require
// immutable locations thus don't play well with freezing.
// Thar be dragons here. You have been warned. (Tips accepted.)
let mut unsorted_records: Vec<DetailedGlyphRecord> = vec![];
mem::swap(&mut self.detail_lookup, &mut unsorted_records);
let mut mut_records: Vec<DetailedGlyphRecord> = unsorted_records;
mut_records.sort_by(|a, b| {
if a < b {
Ordering::Less
} else {
Ordering::Greater
}
});
let mut sorted_records = mut_records;
mem::swap(&mut self.detail_lookup, &mut sorted_records);
self.lookup_is_sorted = true;
}
}
// This struct is used by GlyphStore clients to provide new glyph data.
// It should be allocated on the stack and passed by reference to GlyphStore.
#[derive(Clone, Copy)]
pub struct GlyphData {
id: GlyphId,
advance: Au,
offset: Point2D<Au>,
cluster_start: bool,
ligature_start: bool,
}
impl GlyphData {
/// Creates a new entry for one glyph.
pub fn new(
id: GlyphId,
advance: Au,
offset: Option<Point2D<Au>>,
cluster_start: bool,
ligature_start: bool,
) -> GlyphData {
GlyphData {
id: id,
advance: advance,
offset: offset.unwrap_or(Point2D::zero()),
cluster_start: cluster_start,
ligature_start: ligature_start,
}
}
}
// This enum is a proxy that's provided to GlyphStore clients when iterating
// through glyphs (either for a particular TextRun offset, or all glyphs).
// Rather than eagerly assembling and copying glyph data, it only retrieves
// values as they are needed from the GlyphStore, using provided offsets.
#[derive(Clone, Copy)]
pub enum GlyphInfo<'a> {
Simple(&'a GlyphStore, ByteIndex),
Detail(&'a GlyphStore, ByteIndex, u16),
}
impl<'a> GlyphInfo<'a> {
pub fn id(self) -> GlyphId {
match self {
GlyphInfo::Simple(store, entry_i) => store.entry_buffer[entry_i.to_usize()].id(),
GlyphInfo::Detail(store, entry_i, detail_j) => {
store
.detail_store
.detailed_glyph_with_index(entry_i, detail_j)
.id
},
}
}
#[inline(always)]
// FIXME: Resolution conflicts with IteratorUtil trait so adding trailing _
pub fn advance(self) -> Au {
match self {
GlyphInfo::Simple(store, entry_i) => store.entry_buffer[entry_i.to_usize()].advance(),
GlyphInfo::Detail(store, entry_i, detail_j) => {
store
.detail_store
.detailed_glyph_with_index(entry_i, detail_j)
.advance
},
}
}
#[inline]
pub fn offset(self) -> Option<Point2D<Au>> {
match self {
GlyphInfo::Simple(_, _) => None,
GlyphInfo::Detail(store, entry_i, detail_j) => Some(
store
.detail_store
.detailed_glyph_with_index(entry_i, detail_j)
.offset,
),
}
}
pub fn char_is_space(self) -> bool {
let (store, entry_i) = match self {
GlyphInfo::Simple(store, entry_i) => (store, entry_i),
GlyphInfo::Detail(store, entry_i, _) => (store, entry_i),
};
store.char_is_space(entry_i)
}
}
/// Stores the glyph data belonging to a text run.
///
/// Simple glyphs are stored inline in the `entry_buffer`, detailed glyphs are
/// stored as pointers into the `detail_store`.
///
/// ~~~ascii
/// +- GlyphStore --------------------------------+
/// | +---+---+---+---+---+---+---+ |
/// | entry_buffer: | | s | | s | | s | s | | d = detailed
/// | +-|-+---+-|-+---+-|-+---+---+ | s = simple
/// | | | | |
/// | | +---+-------+ |
/// | | | |
/// | +-V-+-V-+ |
/// | detail_store: | d | d | |
/// | +---+---+ |
/// +---------------------------------------------+
/// ~~~
#[derive(Clone, Deserialize, Serialize)]
pub struct GlyphStore {
// TODO(pcwalton): Allocation of this buffer is expensive. Consider a small-vector
// optimization.
/// A buffer of glyphs within the text run, in the order in which they
/// appear in the input text.
/// Any changes will also need to be reflected in
/// transmute_entry_buffer_to_u32_buffer().
entry_buffer: Vec<GlyphEntry>,
/// A store of the detailed glyph data. Detailed glyphs contained in the
/// `entry_buffer` point to locations in this data structure.
detail_store: DetailedGlyphStore,
/// A cache of the advance of the entire glyph store.
total_advance: Au,
/// A cache of the number of spaces in the entire glyph store.
total_spaces: i32,
/// Used to check if fast path should be used in glyph iteration.
has_detailed_glyphs: bool,
is_whitespace: bool,
is_rtl: bool,
}
impl<'a> GlyphStore {
/// Initializes the glyph store, but doesn't actually shape anything.
///
/// Use the `add_*` methods to store glyph data.
pub fn new(length: usize, is_whitespace: bool, is_rtl: bool) -> GlyphStore {
assert!(length > 0);
GlyphStore {
entry_buffer: vec![GlyphEntry::initial(); length],
detail_store: DetailedGlyphStore::new(),
total_advance: Au(0),
total_spaces: 0,
has_detailed_glyphs: false,
is_whitespace: is_whitespace,
is_rtl: is_rtl,
}
}
#[inline]
pub fn len(&self) -> ByteIndex {
ByteIndex(self.entry_buffer.len() as isize)
}
#[inline]
pub fn is_whitespace(&self) -> bool {
self.is_whitespace
}
pub fn finalize_changes(&mut self) {
self.detail_store.ensure_sorted();
self.cache_total_advance_and_spaces()
}
#[inline(never)]
fn cache_total_advance_and_spaces(&mut self) {
let mut total_advance = Au(0);
let mut total_spaces = 0;
for glyph in self.iter_glyphs_for_byte_range(&Range::new(ByteIndex(0), self.len())) {
total_advance = total_advance + glyph.advance();
if glyph.char_is_space() {
total_spaces += 1;
}
}
self.total_advance = total_advance;
self.total_spaces = total_spaces;
}
/// Adds a single glyph.
pub fn add_glyph_for_byte_index(&mut self, i: ByteIndex, character: char, data: &GlyphData) {
let glyph_is_compressible = is_simple_glyph_id(data.id) &&
is_simple_advance(data.advance) &&
data.offset == Point2D::zero() &&
data.cluster_start; // others are stored in detail buffer
debug_assert!(data.ligature_start); // can't compress ligature continuation glyphs.
debug_assert!(i < self.len());
let mut entry = if glyph_is_compressible {
GlyphEntry::simple(data.id, data.advance)
} else {
let glyph = &[DetailedGlyph::new(data.id, data.advance, data.offset)];
self.has_detailed_glyphs = true;
self.detail_store.add_detailed_glyphs_for_entry(i, glyph);
GlyphEntry::complex(data.cluster_start, data.ligature_start, 1)
};
if character == ' ' {
entry.set_char_is_space()
}
self.entry_buffer[i.to_usize()] = entry;
}
pub fn add_glyphs_for_byte_index(&mut self, i: ByteIndex, data_for_glyphs: &[GlyphData]) {
assert!(i < self.len());
assert!(data_for_glyphs.len() > 0);
let glyph_count = data_for_glyphs.len();
let first_glyph_data = data_for_glyphs[0];
let glyphs_vec: Vec<DetailedGlyph> = (0..glyph_count)
.map(|i| {
DetailedGlyph::new(
data_for_glyphs[i].id,
data_for_glyphs[i].advance,
data_for_glyphs[i].offset,
)
})
.collect();
self.has_detailed_glyphs = true;
self.detail_store
.add_detailed_glyphs_for_entry(i, &glyphs_vec);
let entry = GlyphEntry::complex(
first_glyph_data.cluster_start,
first_glyph_data.ligature_start,
glyph_count,
);
debug!(
"Adding multiple glyphs[idx={:?}, count={}]: {:?}",
i, glyph_count, entry
);
self.entry_buffer[i.to_usize()] = entry;
}
#[inline]
pub fn iter_glyphs_for_byte_range(&'a self, range: &Range<ByteIndex>) -> GlyphIterator<'a> {
if range.begin() >= self.len() {
panic!("iter_glyphs_for_range: range.begin beyond length!");
}
if range.end() > self.len() {
panic!("iter_glyphs_for_range: range.end beyond length!");
}
GlyphIterator {
store: self,
byte_index: if self.is_rtl {
range.end()
} else {
range.begin() - ByteIndex(1)
},
byte_range: *range,
glyph_range: None,
}
}
// Scan the glyphs for a given range until we reach a given advance. Returns the index
// and advance of the glyph in the range at the given advance, if reached. Otherwise,
// returns the number of glyphs and the advance for the given range.
#[inline]
pub fn range_index_of_advance(
&self,
range: &Range<ByteIndex>,
advance: Au,
extra_word_spacing: Au,
) -> (usize, Au) {
let mut index = 0;
let mut current_advance = Au(0);
for glyph in self.iter_glyphs_for_byte_range(range) {
if glyph.char_is_space() {
current_advance += glyph.advance() + extra_word_spacing
} else {
current_advance += glyph.advance()
}
if current_advance > advance {
break;
}
index += 1;
}
(index, current_advance)
}
#[inline]
pub fn advance_for_byte_range(&self, range: &Range<ByteIndex>, extra_word_spacing: Au) -> Au {
if range.begin() == ByteIndex(0) && range.end() == self.len() {
self.total_advance + extra_word_spacing * self.total_spaces
} else if !self.has_detailed_glyphs {
self.advance_for_byte_range_simple_glyphs(range, extra_word_spacing)
} else {
self.advance_for_byte_range_slow_path(range, extra_word_spacing)
}
}
#[inline]
pub fn advance_for_byte_range_slow_path(
&self,
range: &Range<ByteIndex>,
extra_word_spacing: Au,
) -> Au {
self.iter_glyphs_for_byte_range(range)
.fold(Au(0), |advance, glyph| {
if glyph.char_is_space() {
advance + glyph.advance() + extra_word_spacing
} else {
advance + glyph.advance()
}
})
}
#[inline]
#[cfg(feature = "unstable")]
#[cfg(any(target_feature = "sse2", target_feature = "neon"))]
fn advance_for_byte_range_simple_glyphs(
| {
assert!(glyph_count <= u16::MAX as usize);
debug!(
"creating complex glyph entry: starts_cluster={}, starts_ligature={}, \
glyph_count={}",
starts_cluster, starts_ligature, glyph_count
);
GlyphEntry::new(glyph_count as u32)
} | identifier_body | |
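The `GlyphEntry::complex` body above asserts `glyph_count <= u16::MAX` before building the entry from a `u32`, which implies the glyph count is packed into a bit field of the entry word. As a minimal standalone sketch (the mask and flag values here are hypothetical, not Servo's actual bit layout):

```rust
// Hypothetical layout: glyph count in the low 16 bits, flags above it.
const GLYPH_COUNT_MASK: u32 = 0x0000_FFFF;
const FLAG_IS_COMPLEX: u32 = 0x0001_0000;

fn pack_complex(glyph_count: usize) -> u32 {
    // Mirrors the assert in GlyphEntry::complex: the count must fit in 16 bits.
    assert!(glyph_count <= u16::MAX as usize);
    FLAG_IS_COMPLEX | (glyph_count as u32 & GLYPH_COUNT_MASK)
}

fn unpack_count(entry: u32) -> usize {
    (entry & GLYPH_COUNT_MASK) as usize
}

fn main() {
    let entry = pack_complex(3);
    assert_eq!(unpack_count(entry), 3);
    println!("entry = {:#010x}", entry);
}
```

The round trip through `pack_complex`/`unpack_count` is the property the real entry type relies on when iterating detailed glyphs.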
glyph.rs | )]
struct DetailedGlyph {
id: GlyphId,
// glyph's advance, in the text's direction (LTR or RTL)
advance: Au,
// glyph's offset from the font's em-box (from top-left)
offset: Point2D<Au>,
}
impl DetailedGlyph {
fn new(id: GlyphId, advance: Au, offset: Point2D<Au>) -> DetailedGlyph {
DetailedGlyph {
id: id,
advance: advance,
offset: offset,
}
}
}
#[derive(Clone, Copy, Debug, Deserialize, Eq, PartialEq, Serialize)]
struct DetailedGlyphRecord {
// source string offset/GlyphEntry offset in the TextRun
entry_offset: ByteIndex,
// offset into the detailed glyphs buffer
detail_offset: usize,
}
impl PartialOrd for DetailedGlyphRecord {
fn partial_cmp(&self, other: &DetailedGlyphRecord) -> Option<Ordering> {
self.entry_offset.partial_cmp(&other.entry_offset)
}
}
impl Ord for DetailedGlyphRecord {
fn cmp(&self, other: &DetailedGlyphRecord) -> Ordering {
self.entry_offset.cmp(&other.entry_offset)
}
}
// Manages the lookup table for detailed glyphs. Sorting is deferred
// until a lookup is actually performed; this matches the expected
// usage pattern of setting/appending all the detailed glyphs, and
// then querying without setting.
#[derive(Clone, Deserialize, Serialize)]
struct DetailedGlyphStore {
// TODO(pcwalton): Allocation of this buffer is expensive. Consider a small-vector
// optimization.
detail_buffer: Vec<DetailedGlyph>,
// TODO(pcwalton): Allocation of this buffer is expensive. Consider a small-vector
// optimization.
detail_lookup: Vec<DetailedGlyphRecord>,
lookup_is_sorted: bool,
}
impl<'a> DetailedGlyphStore {
fn new() -> DetailedGlyphStore {
DetailedGlyphStore {
detail_buffer: vec![], // TODO: default size?
detail_lookup: vec![],
lookup_is_sorted: false,
}
}
fn add_detailed_glyphs_for_entry(&mut self, entry_offset: ByteIndex, glyphs: &[DetailedGlyph]) {
let entry = DetailedGlyphRecord {
entry_offset: entry_offset,
detail_offset: self.detail_buffer.len(),
};
debug!(
"Adding entry[off={:?}] for detailed glyphs: {:?}",
entry_offset, glyphs
);
debug_assert!(!self.detail_lookup.contains(&entry));
self.detail_lookup.push(entry);
self.detail_buffer.extend_from_slice(glyphs);
self.lookup_is_sorted = false;
}
fn detailed_glyphs_for_entry(
&'a self,
entry_offset: ByteIndex,
count: u16,
) -> &'a [DetailedGlyph] {
debug!(
"Requesting detailed glyphs[n={}] for entry[off={:?}]",
count, entry_offset
);
// FIXME: Is this right? --pcwalton
// TODO: should fix this somewhere else
if count == 0 {
return &self.detail_buffer[0..0];
}
assert!((count as usize) <= self.detail_buffer.len());
assert!(self.lookup_is_sorted);
let key = DetailedGlyphRecord {
entry_offset: entry_offset,
detail_offset: 0, // unused
};
let i = self
.detail_lookup
.binary_search(&key)
.expect("Invalid index not found in detailed glyph lookup table!");
let main_detail_offset = self.detail_lookup[i].detail_offset;
assert!(main_detail_offset + (count as usize) <= self.detail_buffer.len());
// return a slice into the buffer
&self.detail_buffer[main_detail_offset..main_detail_offset + count as usize]
}
fn detailed_glyph_with_index(
&'a self,
entry_offset: ByteIndex,
detail_offset: u16,
) -> &'a DetailedGlyph {
assert!((detail_offset as usize) <= self.detail_buffer.len());
assert!(self.lookup_is_sorted);
let key = DetailedGlyphRecord {
entry_offset: entry_offset,
detail_offset: 0, // unused
};
let i = self
.detail_lookup
.binary_search(&key)
.expect("Invalid index not found in detailed glyph lookup table!");
let main_detail_offset = self.detail_lookup[i].detail_offset;
assert!(main_detail_offset + (detail_offset as usize) < self.detail_buffer.len());
&self.detail_buffer[main_detail_offset + (detail_offset as usize)]
}
fn ensure_sorted(&mut self) {
if self.lookup_is_sorted {
return;
}
// Sorting a unique vector is surprisingly hard. The following
// code is a good argument for using DVecs, but they require
// immutable locations thus don't play well with freezing.
// Thar be dragons here. You have been warned. (Tips accepted.)
let mut unsorted_records: Vec<DetailedGlyphRecord> = vec![];
mem::swap(&mut self.detail_lookup, &mut unsorted_records);
let mut mut_records: Vec<DetailedGlyphRecord> = unsorted_records;
mut_records.sort_by(|a, b| {
if a < b {
Ordering::Less
} else {
Ordering::Greater
}
});
let mut sorted_records = mut_records;
mem::swap(&mut self.detail_lookup, &mut sorted_records);
self.lookup_is_sorted = true;
}
}
// This struct is used by GlyphStore clients to provide new glyph data.
// It should be allocated on the stack and passed by reference to GlyphStore.
#[derive(Clone, Copy)]
pub struct GlyphData {
id: GlyphId,
advance: Au,
offset: Point2D<Au>,
cluster_start: bool,
ligature_start: bool,
}
impl GlyphData {
/// Creates a new entry for one glyph.
pub fn new(
id: GlyphId,
advance: Au,
offset: Option<Point2D<Au>>,
cluster_start: bool,
ligature_start: bool,
) -> GlyphData {
GlyphData {
id: id,
advance: advance,
offset: offset.unwrap_or(Point2D::zero()),
cluster_start: cluster_start,
ligature_start: ligature_start,
}
}
}
// This enum is a proxy that's provided to GlyphStore clients when iterating
// through glyphs (either for a particular TextRun offset, or all glyphs).
// Rather than eagerly assembling and copying glyph data, it only retrieves
// values as they are needed from the GlyphStore, using provided offsets.
#[derive(Clone, Copy)]
pub enum GlyphInfo<'a> {
Simple(&'a GlyphStore, ByteIndex),
Detail(&'a GlyphStore, ByteIndex, u16),
}
impl<'a> GlyphInfo<'a> {
pub fn id(self) -> GlyphId {
match self {
GlyphInfo::Simple(store, entry_i) => store.entry_buffer[entry_i.to_usize()].id(),
GlyphInfo::Detail(store, entry_i, detail_j) => {
store
.detail_store
.detailed_glyph_with_index(entry_i, detail_j)
.id
},
}
}
#[inline(always)]
// FIXME: Resolution conflicts with IteratorUtil trait so adding trailing _
pub fn advance(self) -> Au {
match self {
GlyphInfo::Simple(store, entry_i) => store.entry_buffer[entry_i.to_usize()].advance(),
GlyphInfo::Detail(store, entry_i, detail_j) => {
store
.detail_store
.detailed_glyph_with_index(entry_i, detail_j)
.advance
},
}
}
#[inline]
pub fn offset(self) -> Option<Point2D<Au>> {
match self {
GlyphInfo::Simple(_, _) => None,
GlyphInfo::Detail(store, entry_i, detail_j) => Some(
store
.detail_store
.detailed_glyph_with_index(entry_i, detail_j)
.offset,
),
}
}
pub fn char_is_space(self) -> bool {
let (store, entry_i) = match self {
GlyphInfo::Simple(store, entry_i) => (store, entry_i),
GlyphInfo::Detail(store, entry_i, _) => (store, entry_i),
};
store.char_is_space(entry_i)
}
}
/// Stores the glyph data belonging to a text run.
///
/// Simple glyphs are stored inline in the `entry_buffer`, detailed glyphs are
/// stored as pointers into the `detail_store`.
///
/// ~~~ascii
/// +- GlyphStore --------------------------------+
/// | +---+---+---+---+---+---+---+ |
/// | entry_buffer: | | s | | s | | s | s | | d = detailed
/// | +-|-+---+-|-+---+-|-+---+---+ | s = simple
/// | | | | |
/// | | +---+-------+ |
/// | | | |
/// | +-V-+-V-+ |
/// | detail_store: | d | d | |
/// | +---+---+ |
/// +---------------------------------------------+
/// ~~~
#[derive(Clone, Deserialize, Serialize)]
pub struct GlyphStore {
// TODO(pcwalton): Allocation of this buffer is expensive. Consider a small-vector
// optimization.
/// A buffer of glyphs within the text run, in the order in which they
/// appear in the input text.
/// Any changes will also need to be reflected in
/// transmute_entry_buffer_to_u32_buffer().
entry_buffer: Vec<GlyphEntry>,
/// A store of the detailed glyph data. Detailed glyphs contained in the
/// `entry_buffer` point to locations in this data structure.
detail_store: DetailedGlyphStore,
/// A cache of the advance of the entire glyph store.
total_advance: Au,
/// A cache of the number of spaces in the entire glyph store.
total_spaces: i32,
/// Used to check if fast path should be used in glyph iteration.
has_detailed_glyphs: bool,
is_whitespace: bool,
is_rtl: bool,
}
impl<'a> GlyphStore {
/// Initializes the glyph store, but doesn't actually shape anything.
///
/// Use the `add_*` methods to store glyph data.
pub fn new(length: usize, is_whitespace: bool, is_rtl: bool) -> GlyphStore {
assert!(length > 0);
GlyphStore {
entry_buffer: vec![GlyphEntry::initial(); length],
detail_store: DetailedGlyphStore::new(),
total_advance: Au(0),
total_spaces: 0,
has_detailed_glyphs: false,
is_whitespace: is_whitespace,
is_rtl: is_rtl,
}
}
#[inline]
pub fn len(&self) -> ByteIndex {
ByteIndex(self.entry_buffer.len() as isize)
}
#[inline]
pub fn is_whitespace(&self) -> bool {
self.is_whitespace
}
pub fn finalize_changes(&mut self) {
self.detail_store.ensure_sorted();
self.cache_total_advance_and_spaces()
}
#[inline(never)]
fn cache_total_advance_and_spaces(&mut self) {
let mut total_advance = Au(0);
let mut total_spaces = 0;
for glyph in self.iter_glyphs_for_byte_range(&Range::new(ByteIndex(0), self.len())) {
total_advance = total_advance + glyph.advance();
if glyph.char_is_space() {
total_spaces += 1;
}
}
self.total_advance = total_advance;
self.total_spaces = total_spaces;
}
/// Adds a single glyph.
pub fn add_glyph_for_byte_index(&mut self, i: ByteIndex, character: char, data: &GlyphData) {
let glyph_is_compressible = is_simple_glyph_id(data.id) &&
is_simple_advance(data.advance) &&
data.offset == Point2D::zero() &&
data.cluster_start; // others are stored in detail buffer
debug_assert!(data.ligature_start); // can't compress ligature continuation glyphs.
debug_assert!(i < self.len());
let mut entry = if glyph_is_compressible {
GlyphEntry::simple(data.id, data.advance)
} else {
let glyph = &[DetailedGlyph::new(data.id, data.advance, data.offset)];
self.has_detailed_glyphs = true;
self.detail_store.add_detailed_glyphs_for_entry(i, glyph);
GlyphEntry::complex(data.cluster_start, data.ligature_start, 1)
};
if character == ' ' {
entry.set_char_is_space()
}
self.entry_buffer[i.to_usize()] = entry;
}
pub fn add_glyphs_for_byte_index(&mut self, i: ByteIndex, data_for_glyphs: &[GlyphData]) {
assert!(i < self.len());
assert!(data_for_glyphs.len() > 0);
let glyph_count = data_for_glyphs.len();
let first_glyph_data = data_for_glyphs[0];
let glyphs_vec: Vec<DetailedGlyph> = (0..glyph_count)
.map(|i| {
DetailedGlyph::new(
data_for_glyphs[i].id,
data_for_glyphs[i].advance,
data_for_glyphs[i].offset,
)
})
.collect();
self.has_detailed_glyphs = true;
self.detail_store
.add_detailed_glyphs_for_entry(i, &glyphs_vec);
let entry = GlyphEntry::complex(
first_glyph_data.cluster_start,
first_glyph_data.ligature_start,
glyph_count,
);
debug!(
"Adding multiple glyphs[idx={:?}, count={}]: {:?}",
i, glyph_count, entry
);
self.entry_buffer[i.to_usize()] = entry;
}
#[inline]
pub fn iter_glyphs_for_byte_range(&'a self, range: &Range<ByteIndex>) -> GlyphIterator<'a> {
if range.begin() >= self.len() {
panic!("iter_glyphs_for_range: range.begin beyond length!");
}
if range.end() > self.len() {
panic!("iter_glyphs_for_range: range.end beyond length!");
}
GlyphIterator {
store: self,
byte_index: if self.is_rtl {
range.end()
} else {
range.begin() - ByteIndex(1)
},
byte_range: *range,
glyph_range: None,
}
}
// Scan the glyphs for a given range until we reach a given advance. Returns the index
// and advance of the glyph in the range at the given advance, if reached. Otherwise,
// returns the number of glyphs and the advance for the given range.
#[inline]
pub fn range_index_of_advance(
&self,
range: &Range<ByteIndex>,
advance: Au,
extra_word_spacing: Au,
) -> (usize, Au) {
let mut index = 0;
let mut current_advance = Au(0);
for glyph in self.iter_glyphs_for_byte_range(range) {
if glyph.char_is_space() {
current_advance += glyph.advance() + extra_word_spacing
} else {
current_advance += glyph.advance()
}
if current_advance > advance {
break;
}
index += 1;
}
(index, current_advance)
}
#[inline]
pub fn advance_for_byte_range(&self, range: &Range<ByteIndex>, extra_word_spacing: Au) -> Au {
if range.begin() == ByteIndex(0) && range.end() == self.len() {
self.total_advance + extra_word_spacing * self.total_spaces
} else if !self.has_detailed_glyphs {
self.advance_for_byte_range_simple_glyphs(range, extra_word_spacing)
} else {
self.advance_for_byte_range_slow_path(range, extra_word_spacing)
}
}
#[inline]
pub fn advance_for_byte_range_slow_path(
&self,
range: &Range<ByteIndex>,
extra_word_spacing: Au,
) -> Au {
self.iter_glyphs_for_byte_range(range)
.fold(Au(0), |advance, glyph| {
if glyph.char_is_space() {
advance + glyph.advance() + extra_word_spacing
} else {
advance + glyph.advance()
}
})
}
#[inline]
#[cfg(feature = "unstable")]
#[cfg(any(target_feature = "sse2", target_feature = "neon"))]
fn advance_for_byte_range_simple_glyphs(
&self,
range: &Range<ByteIndex>,
extra_word_spacing: Au,
) -> Au {
let advance_mask = u32x4::splat(GLYPH_ADVANCE_MASK);
let space_flag_mask = u32x4::splat(FLAG_CHAR_IS_SPACE);
let mut simd_advance = u32x4::splat(0);
let mut simd_spaces = u32x4::splat(0);
let begin = range.begin().to_usize();
let len = range.length().to_usize();
let num_simd_iterations = len / 4;
let leftover_entries = range.end().to_usize() - (len - num_simd_iterations * 4);
let buf = self.transmute_entry_buffer_to_u32_buffer();
for i in 0..num_simd_iterations {
let offset = begin + i * 4;
let v = u32x4::load_unaligned(&buf[offset..]);
let advance = (v & advance_mask) >> GLYPH_ADVANCE_SHIFT;
let spaces = (v & space_flag_mask) >> FLAG_CHAR_IS_SPACE_SHIFT;
simd_advance = simd_advance + advance;
simd_spaces = simd_spaces + spaces;
}
let advance = (simd_advance.extract(0) +
simd_advance.extract(1) +
simd_advance.extract(2) +
simd_advance.extract(3)) as i32;
let spaces = (simd_spaces.extract(0) +
simd_spaces.extract(1) +
simd_spaces.extract(2) +
simd_spaces.extract(3)) as i32;
let mut leftover_advance = Au(0);
let mut leftover_spaces = 0;
for i in leftover_entries..range.end().to_usize() {
leftover_advance = leftover_advance + self.entry_buffer[i].advance();
if self.entry_buffer[i].char_is_space() {
leftover_spaces += 1;
}
}
Au::new(advance) + leftover_advance + extra_word_spacing * (spaces + leftover_spaces)
}
/// When SIMD isn't available, fallback to the slow path.
#[inline]
#[cfg(not(all(
feature = "unstable",
any(target_feature = "sse2", target_feature = "neon")
)))]
fn advance_for_byte_range_simple_glyphs(
&self,
range: &Range<ByteIndex>,
extra_word_spacing: Au,
) -> Au {
self.advance_for_byte_range_slow_path(range, extra_word_spacing)
}
/// Used for SIMD.
#[inline]
#[cfg(feature = "unstable")]
#[cfg(any(target_feature = "sse2", target_feature = "neon"))]
#[allow(unsafe_code)]
fn | transmute_entry_buffer_to_u32_buffer | identifier_name | |
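The detail-store lookups above build a probe key with `detail_offset: 0, // unused` and binary-search on it; that works because `DetailedGlyphRecord`'s `Ord`/`PartialOrd` impls compare `entry_offset` only. A simplified, self-contained sketch of the same scheme (plain `isize` offsets standing in for `ByteIndex`):

```rust
// Records sorted by entry_offset; detail_offset is payload, never compared.
#[derive(Debug)]
struct Record {
    entry_offset: isize,
    detail_offset: usize,
}

fn find_detail_offset(records: &[Record], entry_offset: isize) -> Option<usize> {
    // binary_search_by_key compares entry_offset alone, mirroring the
    // Ord impl on DetailedGlyphRecord.
    records
        .binary_search_by_key(&entry_offset, |r| r.entry_offset)
        .ok()
        .map(|i| records[i].detail_offset)
}

fn main() {
    let records = vec![
        Record { entry_offset: 0, detail_offset: 0 },
        Record { entry_offset: 4, detail_offset: 2 },
        Record { entry_offset: 9, detail_offset: 5 },
    ];
    assert_eq!(find_detail_offset(&records, 4), Some(2));
    assert_eq!(find_detail_offset(&records, 7), None);
}
```

This is also why `ensure_sorted` must run before any lookup: `binary_search` on an unsorted vector would return arbitrary results, hence the `assert!(self.lookup_is_sorted)` guards.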
string.rs | // Copyright 2015-2017 Daniel P. Clark & array_tool Developers
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
/// A grapheme iterator that produces the bytes for each grapheme.
#[derive(Debug)]
pub struct GraphemeBytesIter<'a> {
source: &'a str,
offset: usize,
grapheme_count: usize,
}
impl<'a> GraphemeBytesIter<'a> {
/// Creates a new grapheme iterator from a string source.
pub fn new(source: &'a str) -> GraphemeBytesIter<'a> {
GraphemeBytesIter {
source: source,
offset: 0,
grapheme_count: 0,
}
}
}
impl<'a> Iterator for GraphemeBytesIter<'a> {
type Item = &'a [u8];
fn next(&mut self) -> Option<&'a [u8]> {
let mut result: Option<&[u8]> = None;
let mut idx = self.offset;
for _ in self.offset..self.source.len() {
idx += 1;
if self.offset < self.source.len() {
if self.source.is_char_boundary(idx) {
let slice: &[u8] = self.source[self.offset..idx].as_bytes();
self.grapheme_count += 1;
self.offset = idx;
result = Some(slice);
break
}
}
}
result
}
}
impl<'a> ExactSizeIterator for GraphemeBytesIter<'a> {
fn len(&self) -> usize {
self.source.chars().count()
}
}
/// ToGraphemeBytesIter - create an iterator to return bytes for each grapheme in a string.
pub trait ToGraphemeBytesIter<'a> {
/// Returns a GraphemeBytesIter which you may iterate over.
///
/// # Example
/// ```
/// use array_tool::string::ToGraphemeBytesIter;
///
/// let string = "a sβd fΓ©Z";
/// let mut graphemes = string.grapheme_bytes_iter();
/// graphemes.skip(3).next();
/// ```
///
/// # Output
/// ```text
/// [226, 128, 148]
/// ```
fn grapheme_bytes_iter(&'a self) -> GraphemeBytesIter<'a>;
}
impl<'a> ToGraphemeBytesIter<'a> for str {
fn grapheme_bytes_iter(&'a self) -> GraphemeBytesIter<'a> {
GraphemeBytesIter::new(&self)
}
}
/// Squeeze - squeezes duplicate characters down to one each
pub trait Squeeze {
/// # Example
/// ```
/// use array_tool::string::Squeeze;
///
/// "yellow moon".squeeze("");
/// ```
///
/// # Output
/// ```text
/// "yelow mon"
/// ```
fn squeeze(&self, targets: &'static str) -> String;
}
impl Squeeze for str {
fn squeeze(&self, targets: &'static str) -> String {
let mut output = Vec::<u8>::with_capacity(self.len());
let everything: bool = targets.is_empty();
let chars = targets.grapheme_bytes_iter().collect::<Vec<&[u8]>>();
let mut last: &[u8] = &[0];
for character in self.grapheme_bytes_iter() {
if last != character {
output.extend_from_slice(character);
} else if !(everything || chars.contains(&character)) {
output.extend_from_slice(character);
}
last = character;
}
String::from_utf8(output).expect("squeeze failed to render String!")
}
}
/// Justify - expand line to given width.
pub trait Justify {
/// # Example
/// ```
/// use array_tool::string::Justify;
///
/// "asd asdf asd".justify_line(14);
/// ```
///
/// # Output
/// ```text
/// "asd asdf asd"
/// ```
fn justify_line(&self, width: usize) -> String;
}
impl Justify for str {
fn justify_line(&self, width: usize) -> String {
if self.is_empty() { return format!("{}", self) };
let trimmed = self.trim();
let len = trimmed.chars().count();
if len >= width { return self.to_string(); };
let difference = width - len;
let iter = trimmed.split_whitespace();
let spaces = iter.count() - 1;
let mut iter = trimmed.split_whitespace().peekable();
if spaces == 0 { return self.to_string(); }
let mut obj = String::with_capacity(trimmed.len() + spaces);
let div = difference / spaces;
let mut remainder = difference % spaces;
while let Some(x) = iter.next() {
obj.push_str( x );
let val = if remainder > 0 {
remainder = remainder - 1;
div + 1
} else { div };
for _ in 0..val+1 {
if let Some(_) = iter.peek() { // Don't add spaces if last word
obj.push_str( " " );
}
}
}
obj
}
}
/// Substitute string character for each index given.
pub trait SubstMarks {
/// # Example
/// ```
/// use array_tool::string::SubstMarks;
///
/// "asdf asdf asdf".subst_marks(vec![0,5,8], "Z");
/// ```
///
/// # Output
/// ```text
/// "Zsdf ZsdZ asdf"
/// ```
fn subst_marks(&self, marks: Vec<usize>, chr: &'static str) -> String;
}
impl SubstMarks for str {
fn subst_marks(&self, marks: Vec<usize>, chr: &'static str) -> String {
let mut output = Vec::<u8>::with_capacity(self.len());
let mut count = 0;
let mut last = 0;
for i in 0..self.len() {
let idx = i + 1;
if self.is_char_boundary(idx) {
if marks.contains(&count) {
count += 1;
last = idx;
output.extend_from_slice(chr.as_bytes());
continue
}
let slice: &[u8] = self[last..idx].as_bytes();
output.extend_from_slice(slice);
count += 1;
last = idx
}
}
String::from_utf8(output).expect("subst_marks failed to render String!")
}
}
/// After whitespace
pub trait AfterWhitespace {
/// Given offset method will seek from there to end of string to find the first
/// non white space. Resulting value is counted from offset.
///
/// # Example
/// ```
/// use array_tool::string::AfterWhitespace;
///
/// assert_eq!(
/// "asdf asdf asdf".seek_end_of_whitespace(6),
/// Some(9)
/// );
/// ```
fn seek_end_of_whitespace(&self, offset: usize) -> Option<usize>;
}
impl AfterWhitespace for str {
fn seek_end_of_whitespace(&self, offset: usize) -> Option<usize> {
if self.len() < offset { return None; };
let mut seeker = self[offset..self.len()].chars();
let mut val = None;
let mut indx = 0;
while let Some(x) = seeker.next() {
if x.ne(&" ".chars().next().unwrap()) {
val = Some(indx);
break;
}
indx += 1;
}
val
}
}
/// Word wrapping
pub trait WordWrap {
/// White space is treated as valid content and new lines will only be swapped in for
/// the last white space character at the end of the given width. White space may reach beyond
/// the width you've provided. You will need to trim end of lines in your own output (e.g.
/// splitting string at each new line and printing the line with trim_right). Or just trust
/// that lines that are beyond the width are just white space and only print the width -
/// ignoring tailing white space.
///
/// # Example
/// ```
/// use array_tool::string::WordWrap;
///
/// "asd asdf asd".word_wrap(8);
/// ```
///
/// # Output
/// ```text
/// "asd asdf\nasd"
/// ```
fn word_wrap(&self, width: usize) -> String;
}
// No need to worry about character encoding since we're only checking for the
// space and new line characters.
impl WordWrap for &'static str {
fn word_wrap(&self, width: usize) -> String {
let mut markers = vec![];
fn wordwrap(t: &'static str, chunk: usize, offset: usize, mrkrs: &mut Vec<usize>) -> String {
match t[offset..*vec![offset+chunk,t.len()].iter().min().unwrap()].rfind("\n") {
None => {
match t[offset..*vec![offset+chunk,t.len()].iter().min().unwrap()].rfind(" ") {
Some(x) => {
let mut eows = x; // end of white space
if offset+chunk < t.len() { // check if white space continues
match t.seek_end_of_whitespace(offset+x) {
Some(a) => {
if a.ne(&0) {
eows = x+a-1;
}
},
None => {},
}
}
if offset+chunk < t.len() { // safe to seek ahead by 1 or not end of string
if !["\n".chars().next().unwrap(), " ".chars().next().unwrap()].contains(
&t[offset+eows+1..offset+eows+2].chars().next().unwrap()
) {
mrkrs.push(offset+eows)
}
};
wordwrap(t, chunk, offset+eows+1, mrkrs)
},
None => {
if offset+chunk < t.len() { / | se {
use string::SubstMarks;
return t.subst_marks(mrkrs.to_vec(), "\n")
}
},
}
},
Some(x) => {
wordwrap(t, chunk, offset+x+1, mrkrs)
},
}
};
wordwrap(self, width+1, 0, &mut markers)
}
}
| / String may continue
wordwrap(t, chunk, offset+1, mrkrs) // Recurse + 1 until next space
} el | conditional_block |
string.rs | // Copyright 2015-2017 Daniel P. Clark & array_tool Developers
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
/// A grapheme iterator that produces the bytes for each grapheme.
#[derive(Debug)]
pub struct GraphemeBytesIter<'a> {
source: &'a str,
offset: usize,
grapheme_count: usize,
}
impl<'a> GraphemeBytesIter<'a> {
/// Creates a new grapheme iterator from a string source.
pub fn new(source: &'a str) -> GraphemeBytesIter<'a> { | }
}
}
impl<'a> Iterator for GraphemeBytesIter<'a> {
type Item = &'a [u8];
fn next(&mut self) -> Option<&'a [u8]> {
let mut result: Option<&[u8]> = None;
let mut idx = self.offset;
for _ in self.offset..self.source.len() {
idx += 1;
if self.offset < self.source.len() {
if self.source.is_char_boundary(idx) {
let slice: &[u8] = self.source[self.offset..idx].as_bytes();
self.grapheme_count += 1;
self.offset = idx;
result = Some(slice);
break
}
}
}
result
}
}
impl<'a> ExactSizeIterator for GraphemeBytesIter<'a> {
fn len(&self) -> usize {
self.source.chars().count()
}
}
/// ToGraphemeBytesIter - create an iterator to return bytes for each grapheme in a string.
pub trait ToGraphemeBytesIter<'a> {
/// Returns a GraphemeBytesIter which you may iterate over.
///
/// # Example
/// ```
/// use array_tool::string::ToGraphemeBytesIter;
///
    /// let string = "a s—d féZ";
/// let mut graphemes = string.grapheme_bytes_iter();
/// graphemes.skip(3).next();
/// ```
///
/// # Output
/// ```text
/// [226, 128, 148]
/// ```
fn grapheme_bytes_iter(&'a self) -> GraphemeBytesIter<'a>;
}
impl<'a> ToGraphemeBytesIter<'a> for str {
fn grapheme_bytes_iter(&'a self) -> GraphemeBytesIter<'a> {
GraphemeBytesIter::new(&self)
}
}
/// Squeeze - squeezes duplicate characters down to one each
pub trait Squeeze {
/// # Example
/// ```
/// use array_tool::string::Squeeze;
///
/// "yellow moon".squeeze("");
/// ```
///
/// # Output
/// ```text
/// "yelow mon"
/// ```
fn squeeze(&self, targets: &'static str) -> String;
}
impl Squeeze for str {
fn squeeze(&self, targets: &'static str) -> String {
let mut output = Vec::<u8>::with_capacity(self.len());
let everything: bool = targets.is_empty();
let chars = targets.grapheme_bytes_iter().collect::<Vec<&[u8]>>();
let mut last: &[u8] = &[0];
for character in self.grapheme_bytes_iter() {
        if last != character {
output.extend_from_slice(character);
        } else if !(everything || chars.contains(&character)) {
output.extend_from_slice(character);
}
last = character;
}
String::from_utf8(output).expect("squeeze failed to render String!")
}
}
/// Justify - expand line to given width.
pub trait Justify {
/// # Example
/// ```
/// use array_tool::string::Justify;
///
/// "asd asdf asd".justify_line(14);
/// ```
///
/// # Output
/// ```text
    /// "asd  asdf  asd"
/// ```
fn justify_line(&self, width: usize) -> String;
}
impl Justify for str {
fn justify_line(&self, width: usize) -> String {
if self.is_empty() { return format!("{}", self) };
let trimmed = self.trim() ;
let len = trimmed.chars().count();
if len >= width { return self.to_string(); };
let difference = width - len;
let iter = trimmed.split_whitespace();
let spaces = iter.count() - 1;
let mut iter = trimmed.split_whitespace().peekable();
if spaces == 0 { return self.to_string(); }
let mut obj = String::with_capacity(trimmed.len() + spaces);
let div = difference / spaces;
let mut remainder = difference % spaces;
while let Some(x) = iter.next() {
obj.push_str( x );
let val = if remainder > 0 {
remainder = remainder - 1;
div + 1
} else { div };
for _ in 0..val+1 {
if let Some(_) = iter.peek() { // Don't add spaces if last word
obj.push_str( " " );
}
}
}
obj
}
}
/// Substitute string character for each index given.
pub trait SubstMarks {
/// # Example
/// ```
/// use array_tool::string::SubstMarks;
///
/// "asdf asdf asdf".subst_marks(vec![0,5,8], "Z");
/// ```
///
/// # Output
/// ```text
/// "Zsdf ZsdZ asdf"
/// ```
fn subst_marks(&self, marks: Vec<usize>, chr: &'static str) -> String;
}
impl SubstMarks for str {
fn subst_marks(&self, marks: Vec<usize>, chr: &'static str) -> String {
let mut output = Vec::<u8>::with_capacity(self.len());
let mut count = 0;
let mut last = 0;
for i in 0..self.len() {
let idx = i + 1;
if self.is_char_boundary(idx) {
if marks.contains(&count) {
count += 1;
last = idx;
output.extend_from_slice(chr.as_bytes());
continue
}
let slice: &[u8] = self[last..idx].as_bytes();
output.extend_from_slice(slice);
count += 1;
last = idx
}
}
String::from_utf8(output).expect("subst_marks failed to render String!")
}
}
/// After whitespace
pub trait AfterWhitespace {
    /// Given an offset, this method seeks from there to the end of the string to find
    /// the first non-white-space character. The resulting value is counted from the offset.
///
/// # Example
/// ```
/// use array_tool::string::AfterWhitespace;
///
/// assert_eq!(
/// "asdf asdf asdf".seek_end_of_whitespace(6),
/// Some(9)
/// );
/// ```
fn seek_end_of_whitespace(&self, offset: usize) -> Option<usize>;
}
impl AfterWhitespace for str {
fn seek_end_of_whitespace(&self, offset: usize) -> Option<usize> {
if self.len() < offset { return None; };
let mut seeker = self[offset..self.len()].chars();
let mut val = None;
let mut indx = 0;
while let Some(x) = seeker.next() {
if x.ne(&" ".chars().next().unwrap()) {
val = Some(indx);
break;
}
indx += 1;
}
val
}
}
/// Word wrapping
pub trait WordWrap {
/// White space is treated as valid content and new lines will only be swapped in for
/// the last white space character at the end of the given width. White space may reach beyond
/// the width you've provided. You will need to trim end of lines in your own output (e.g.
/// splitting string at each new line and printing the line with trim_right). Or just trust
/// that lines that are beyond the width are just white space and only print the width -
    /// ignoring trailing white space.
///
/// # Example
/// ```
/// use array_tool::string::WordWrap;
///
/// "asd asdf asd".word_wrap(8);
/// ```
///
/// # Output
/// ```text
/// "asd asdf\nasd"
/// ```
fn word_wrap(&self, width: usize) -> String;
}
// No need to worry about character encoding since we're only checking for the
// space and new line characters.
impl WordWrap for &'static str {
fn word_wrap(&self, width: usize) -> String {
let mut markers = vec![];
fn wordwrap(t: &'static str, chunk: usize, offset: usize, mrkrs: &mut Vec<usize>) -> String {
match t[offset..*vec![offset+chunk,t.len()].iter().min().unwrap()].rfind("\n") {
None => {
match t[offset..*vec![offset+chunk,t.len()].iter().min().unwrap()].rfind(" ") {
Some(x) => {
let mut eows = x; // end of white space
if offset+chunk < t.len() { // check if white space continues
match t.seek_end_of_whitespace(offset+x) {
Some(a) => {
if a.ne(&0) {
eows = x+a-1;
}
},
None => {},
}
}
if offset+chunk < t.len() { // safe to seek ahead by 1 or not end of string
            if !["\n".chars().next().unwrap(), " ".chars().next().unwrap()].contains(
&t[offset+eows+1..offset+eows+2].chars().next().unwrap()
) {
mrkrs.push(offset+eows)
}
};
wordwrap(t, chunk, offset+eows+1, mrkrs)
},
None => {
if offset+chunk < t.len() { // String may continue
wordwrap(t, chunk, offset+1, mrkrs) // Recurse + 1 until next space
} else {
use string::SubstMarks;
return t.subst_marks(mrkrs.to_vec(), "\n")
}
},
}
},
Some(x) => {
wordwrap(t, chunk, offset+x+1, mrkrs)
},
}
};
wordwrap(self, width+1, 0, &mut markers)
}
} | GraphemeBytesIter {
source: source,
offset: 0,
grapheme_count: 0, | random_line_split |
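The `GraphemeBytesIter` in the rows above advances one `is_char_boundary` at a time and yields the raw UTF-8 bytes of each code point (despite the "grapheme" naming, the boundary test is per code point). A sketch of the same slicing idea, assuming a simplified helper rather than the crate's iterator type:

```rust
// Sketch of what `GraphemeBytesIter` yields: for each char boundary, the
// UTF-8 bytes of that code point, as a byte slice borrowed from the source.
fn char_bytes(s: &str) -> Vec<&[u8]> {
    let mut out = Vec::new();
    let mut start = 0;
    for idx in 1..=s.len() {
        if s.is_char_boundary(idx) {
            out.push(s[start..idx].as_bytes());
            start = idx;
        }
    }
    out
}

fn main() {
    let pieces = char_bytes("a s—d féZ");
    // The 4th item is the em dash, whose UTF-8 encoding is [226, 128, 148] —
    // the exact output shown in the trait's doc example.
    assert_eq!(pieces[3], &[226u8, 128, 148][..]);
}
```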
string.rs | // Copyright 2015-2017 Daniel P. Clark & array_tool Developers
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
/// A grapheme iterator that produces the bytes for each grapheme.
#[derive(Debug)]
pub struct GraphemeBytesIter<'a> {
source: &'a str,
offset: usize,
grapheme_count: usize,
}
impl<'a> GraphemeBytesIter<'a> {
/// Creates a new grapheme iterator from a string source.
pub fn new(source: &'a str) -> GraphemeBytesIter<'a> {
GraphemeBytesIter {
source: source,
offset: 0,
grapheme_count: 0,
}
}
}
impl<'a> Iterator for GraphemeBytesIter<'a> {
type Item = &'a [u8];
fn next(&mut self) -> Option<&'a [u8]> {
let mut result: Option<&[u8]> = None;
let mut idx = self.offset;
for _ in self.offset..self.source.len() {
idx += 1;
if self.offset < self.source.len() {
if self.source.is_char_boundary(idx) {
let slice: &[u8] = self.source[self.offset..idx].as_bytes();
self.grapheme_count += 1;
self.offset = idx;
result = Some(slice);
break
}
}
}
result
}
}
impl<'a> ExactSizeIterator for GraphemeBytesIter<'a> {
fn len(&self) -> usize {
self.source.chars().count()
}
}
/// ToGraphemeBytesIter - create an iterator to return bytes for each grapheme in a string.
pub trait ToGraphemeBytesIter<'a> {
/// Returns a GraphemeBytesIter which you may iterate over.
///
/// # Example
/// ```
/// use array_tool::string::ToGraphemeBytesIter;
///
    /// let string = "a s—d féZ";
/// let mut graphemes = string.grapheme_bytes_iter();
/// graphemes.skip(3).next();
/// ```
///
/// # Output
/// ```text
/// [226, 128, 148]
/// ```
fn grapheme_bytes_iter(&'a self) -> GraphemeBytesIter<'a>;
}
impl<'a> ToGraphemeBytesIter<'a> for str {
fn grapheme_bytes_iter(&'a self) -> GraphemeBytesIter<'a> {
GraphemeBytesIter::new(&self)
}
}
/// Squeeze - squeezes duplicate characters down to one each
pub trait Squeeze {
/// # Example
/// ```
/// use array_tool::string::Squeeze;
///
/// "yellow moon".squeeze("");
/// ```
///
/// # Output
/// ```text
/// "yelow mon"
/// ```
fn squeeze(&self, targets: &'static str) -> String;
}
impl Squeeze for str {
fn squeeze(&self, targets: &'static str) -> String {
let mut output = Vec::<u8>::with_capacity(self.len());
let everything: bool = targets.is_empty();
let chars = targets.grapheme_bytes_iter().collect::<Vec<&[u8]>>();
let mut last: &[u8] = &[0];
for character in self.grapheme_bytes_iter() {
        if last != character {
output.extend_from_slice(character);
        } else if !(everything || chars.contains(&character)) {
output.extend_from_slice(character);
}
last = character;
}
String::from_utf8(output).expect("squeeze failed to render String!")
}
}
/// Justify - expand line to given width.
pub trait Justify {
/// # Example
/// ```
/// use array_tool::string::Justify;
///
/// "asd asdf asd".justify_line(14);
/// ```
///
/// # Output
/// ```text
    /// "asd  asdf  asd"
/// ```
fn justify_line(&self, width: usize) -> String;
}
impl Justify for str {
fn justify_line(&self, width: usize) -> String {
if self.is_empty() { return format!("{}", self) };
let trimmed = self.trim() ;
let len = trimmed.chars().count();
if len >= width { return self.to_string(); };
let difference = width - len;
let iter = trimmed.split_whitespace();
let spaces = iter.count() - 1;
let mut iter = trimmed.split_whitespace().peekable();
if spaces == 0 { return self.to_string(); }
let mut obj = String::with_capacity(trimmed.len() + spaces);
let div = difference / spaces;
let mut remainder = difference % spaces;
while let Some(x) = iter.next() {
obj.push_str( x );
let val = if remainder > 0 {
remainder = remainder - 1;
div + 1
} else { div };
for _ in 0..val+1 {
if let Some(_) = iter.peek() { // Don't add spaces if last word
obj.push_str( " " );
}
}
}
obj
}
}
/// Substitute string character for each index given.
pub trait SubstMarks {
/// # Example
/// ```
/// use array_tool::string::SubstMarks;
///
/// "asdf asdf asdf".subst_marks(vec![0,5,8], "Z");
/// ```
///
/// # Output
/// ```text
/// "Zsdf ZsdZ asdf"
/// ```
fn subst_marks(&self, marks: Vec<usize>, chr: &'static str) -> String;
}
impl SubstMarks for str {
fn sub | elf, marks: Vec<usize>, chr: &'static str) -> String {
let mut output = Vec::<u8>::with_capacity(self.len());
let mut count = 0;
let mut last = 0;
for i in 0..self.len() {
let idx = i + 1;
if self.is_char_boundary(idx) {
if marks.contains(&count) {
count += 1;
last = idx;
output.extend_from_slice(chr.as_bytes());
continue
}
let slice: &[u8] = self[last..idx].as_bytes();
output.extend_from_slice(slice);
count += 1;
last = idx
}
}
String::from_utf8(output).expect("subst_marks failed to render String!")
}
}
/// After whitespace
pub trait AfterWhitespace {
    /// Given an offset, this method seeks from there to the end of the string to find
    /// the first non-white-space character. The resulting value is counted from the offset.
///
/// # Example
/// ```
/// use array_tool::string::AfterWhitespace;
///
/// assert_eq!(
/// "asdf asdf asdf".seek_end_of_whitespace(6),
/// Some(9)
/// );
/// ```
fn seek_end_of_whitespace(&self, offset: usize) -> Option<usize>;
}
impl AfterWhitespace for str {
fn seek_end_of_whitespace(&self, offset: usize) -> Option<usize> {
if self.len() < offset { return None; };
let mut seeker = self[offset..self.len()].chars();
let mut val = None;
let mut indx = 0;
while let Some(x) = seeker.next() {
if x.ne(&" ".chars().next().unwrap()) {
val = Some(indx);
break;
}
indx += 1;
}
val
}
}
/// Word wrapping
pub trait WordWrap {
/// White space is treated as valid content and new lines will only be swapped in for
/// the last white space character at the end of the given width. White space may reach beyond
/// the width you've provided. You will need to trim end of lines in your own output (e.g.
/// splitting string at each new line and printing the line with trim_right). Or just trust
/// that lines that are beyond the width are just white space and only print the width -
    /// ignoring trailing white space.
///
/// # Example
/// ```
/// use array_tool::string::WordWrap;
///
/// "asd asdf asd".word_wrap(8);
/// ```
///
/// # Output
/// ```text
/// "asd asdf\nasd"
/// ```
fn word_wrap(&self, width: usize) -> String;
}
// No need to worry about character encoding since we're only checking for the
// space and new line characters.
impl WordWrap for &'static str {
fn word_wrap(&self, width: usize) -> String {
let mut markers = vec![];
fn wordwrap(t: &'static str, chunk: usize, offset: usize, mrkrs: &mut Vec<usize>) -> String {
match t[offset..*vec![offset+chunk,t.len()].iter().min().unwrap()].rfind("\n") {
None => {
match t[offset..*vec![offset+chunk,t.len()].iter().min().unwrap()].rfind(" ") {
Some(x) => {
let mut eows = x; // end of white space
if offset+chunk < t.len() { // check if white space continues
match t.seek_end_of_whitespace(offset+x) {
Some(a) => {
if a.ne(&0) {
eows = x+a-1;
}
},
None => {},
}
}
if offset+chunk < t.len() { // safe to seek ahead by 1 or not end of string
            if !["\n".chars().next().unwrap(), " ".chars().next().unwrap()].contains(
&t[offset+eows+1..offset+eows+2].chars().next().unwrap()
) {
mrkrs.push(offset+eows)
}
};
wordwrap(t, chunk, offset+eows+1, mrkrs)
},
None => {
if offset+chunk < t.len() { // String may continue
wordwrap(t, chunk, offset+1, mrkrs) // Recurse + 1 until next space
} else {
use string::SubstMarks;
return t.subst_marks(mrkrs.to_vec(), "\n")
}
},
}
},
Some(x) => {
wordwrap(t, chunk, offset+x+1, mrkrs)
},
}
};
wordwrap(self, width+1, 0, &mut markers)
}
}
| st_marks(&s | identifier_name |
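The `justify_line` implementation repeated above pads each gap between words with `difference / spaces` extra spaces and hands the `difference % spaces` remainder out one space at a time, left to right. A standalone sketch of that distribution (a hypothetical helper under those assumptions, not the crate's code):

```rust
// Sketch of the justify distribution: split the missing width evenly across
// the gaps, giving the leftmost gaps one extra space each for the remainder.
fn justify_line(s: &str, width: usize) -> String {
    let trimmed = s.trim();
    let words: Vec<&str> = trimmed.split_whitespace().collect();
    let len = trimmed.chars().count();
    if words.len() < 2 || len >= width {
        return s.to_string();
    }
    let gaps = words.len() - 1;
    let difference = width - len;
    let div = difference / gaps;
    let mut remainder = difference % gaps;
    let mut out = String::with_capacity(width);
    for (i, word) in words.iter().enumerate() {
        out.push_str(word);
        if i < gaps {
            let extra = if remainder > 0 { remainder -= 1; div + 1 } else { div };
            // Each gap keeps its original single space plus the extra padding.
            for _ in 0..extra + 1 {
                out.push(' ');
            }
        }
    }
    out
}

fn main() {
    // 12 visible chars stretched to width 14: each of the two gaps widens to
    // two spaces.
    assert_eq!(justify_line("asd asdf asd", 14), "asd  asdf  asd");
}
```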
string.rs | // Copyright 2015-2017 Daniel P. Clark & array_tool Developers
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
/// A grapheme iterator that produces the bytes for each grapheme.
#[derive(Debug)]
pub struct GraphemeBytesIter<'a> {
source: &'a str,
offset: usize,
grapheme_count: usize,
}
impl<'a> GraphemeBytesIter<'a> {
/// Creates a new grapheme iterator from a string source.
pub fn new(source: &'a str) -> GraphemeBytesIter<'a> {
GraphemeBytesIter {
source: source,
offset: 0,
grapheme_count: 0,
}
}
}
impl<'a> Iterator for GraphemeBytesIter<'a> {
type Item = &'a [u8];
fn next(&mut self) -> Option<&'a [u8]> {
let mut result: Option<&[u8]> = None;
let mut idx = self.offset;
for _ in self.offset..self.source.len() {
idx += 1;
if self.offset < self.source.len() {
if self.source.is_char_boundary(idx) {
let slice: &[u8] = self.source[self.offset..idx].as_bytes();
self.grapheme_count += 1;
self.offset = idx;
result = Some(slice);
break
}
}
}
result
}
}
impl<'a> ExactSizeIterator for GraphemeBytesIter<'a> {
fn len(&self) -> usize {
self.source.chars().count()
}
}
/// ToGraphemeBytesIter - create an iterator to return bytes for each grapheme in a string.
pub trait ToGraphemeBytesIter<'a> {
/// Returns a GraphemeBytesIter which you may iterate over.
///
/// # Example
/// ```
/// use array_tool::string::ToGraphemeBytesIter;
///
    /// let string = "a s—d féZ";
/// let mut graphemes = string.grapheme_bytes_iter();
/// graphemes.skip(3).next();
/// ```
///
/// # Output
/// ```text
/// [226, 128, 148]
/// ```
fn grapheme_bytes_iter(&'a self) -> GraphemeBytesIter<'a>;
}
impl<'a> ToGraphemeBytesIter<'a> for str {
fn grapheme_bytes_iter(&'a self) -> GraphemeBytesIter<'a> {
GraphemeBytesIter::new(&self)
}
}
/// Squeeze - squeezes duplicate characters down to one each
pub trait Squeeze {
/// # Example
/// ```
/// use array_tool::string::Squeeze;
///
/// "yellow moon".squeeze("");
/// ```
///
/// # Output
/// ```text
/// "yelow mon"
/// ```
fn squeeze(&self, targets: &'static str) -> String;
}
impl Squeeze for str {
fn squeeze(&self, targets: &'static str) -> String {
let mut output = Vec::<u8>::with_capacity(self.len());
let everything: bool = targets.is_empty();
let chars = targets.grapheme_bytes_iter().collect::<Vec<&[u8]>>();
let mut last: &[u8] = &[0];
for character in self.grapheme_bytes_iter() {
        if last != character {
output.extend_from_slice(character);
        } else if !(everything || chars.contains(&character)) {
output.extend_from_slice(character);
}
last = character;
}
String::from_utf8(output).expect("squeeze failed to render String!")
}
}
/// Justify - expand line to given width.
pub trait Justify {
/// # Example
/// ```
/// use array_tool::string::Justify;
///
/// "asd asdf asd".justify_line(14);
/// ```
///
/// # Output
/// ```text
    /// "asd  asdf  asd"
/// ```
fn justify_line(&self, width: usize) -> String;
}
impl Justify for str {
fn justify_line(&self, width: usize) -> String {
if self.is_empty() { return format!("{}", self) };
let trimmed = self.trim() ;
let len = trimmed.chars().count();
if len >= width { return self.to_string(); };
let difference = width - len;
let iter = trimmed.split_whitespace();
let spaces = iter.count() - 1;
let mut iter = trimmed.split_whitespace().peekable();
if spaces == 0 { return self.to_string(); }
let mut obj = String::with_capacity(trimmed.len() + spaces);
let div = difference / spaces;
let mut remainder = difference % spaces;
while let Some(x) = iter.next() {
obj.push_str( x );
let val = if remainder > 0 {
remainder = remainder - 1;
div + 1
} else { div };
for _ in 0..val+1 {
if let Some(_) = iter.peek() { // Don't add spaces if last word
obj.push_str( " " );
}
}
}
obj
}
}
/// Substitute string character for each index given.
pub trait SubstMarks {
/// # Example
/// ```
/// use array_tool::string::SubstMarks;
///
/// "asdf asdf asdf".subst_marks(vec![0,5,8], "Z");
/// ```
///
/// # Output
/// ```text
/// "Zsdf ZsdZ asdf"
/// ```
fn subst_marks(&self, marks: Vec<usize>, chr: &'static str) -> String;
}
impl SubstMarks for str {
fn subst_marks(&self, marks: Vec<usize>, chr: &'static str) -> String {
let mut output = Vec::<u8>::with_capacity(self.len());
let mut count = 0;
let mut last = 0;
for i in 0..self.len() {
let idx = i + 1;
if self.is_char_boundary(idx) {
if marks.contains(&count) {
count += 1;
last = idx;
output.extend_from_slice(chr.as_bytes());
continue
}
let slice: &[u8] = self[last..idx].as_bytes();
output.extend_from_slice(slice);
count += 1;
last = idx
}
}
String::from_utf8(output).expect("subst_marks failed to render String!")
}
}
/// After whitespace
pub trait AfterWhitespace {
    /// Given an offset, this method seeks from there to the end of the string to find
    /// the first non-white-space character. The resulting value is counted from the offset.
///
/// # Example
/// ```
/// use array_tool::string::AfterWhitespace;
///
/// assert_eq!(
/// "asdf asdf asdf".seek_end_of_whitespace(6),
/// Some(9)
/// );
/// ```
fn seek_end_of_whitespace(&self, offset: usize) -> Option<usize>;
}
impl AfterWhitespace for str {
fn seek_end_of_whitespace(&self, offset: usize) -> Option<usize> {
if self.len() < offset { return None; };
let mut seeker = self[offset..self.len()].chars();
let mut val = None;
let mut indx = 0;
while let Some(x) = seeker.next() {
if x.ne(&" ".chars().next().unwrap()) {
val = Some(indx);
break;
}
indx += 1;
}
val
}
}
/// Word wrapping
pub trait WordWrap {
/// White space is treated as valid content and new lines will only be swapped in for
/// the last white space character at the end of the given width. White space may reach beyond
/// the width you've provided. You will need to trim end of lines in your own output (e.g.
/// splitting string at each new line and printing the line with trim_right). Or just trust
/// that lines that are beyond the width are just white space and only print the width -
    /// ignoring trailing white space.
///
/// # Example
/// ```
/// use array_tool::string::WordWrap;
///
/// "asd asdf asd".word_wrap(8);
/// ```
///
/// # Output
/// ```text
/// "asd asdf\nasd"
/// ```
fn word_wrap(&self, width: usize) -> String;
}
// No need to worry about character encoding since we're only checking for the
// space and new line characters.
impl WordWrap for &'static str {
fn word_wrap(&self, width: usize) -> String {
let mut markers = vec![];
fn wordwrap(t: &'static str, chunk: usize, offset: usize, mrkrs: &mut Vec<usize>) -> String {
| }
};
wordwrap(t, chunk, offset+eows+1, mrkrs)
},
None => {
if offset+chunk < t.len() { // String may continue
wordwrap(t, chunk, offset+1, mrkrs) // Recurse + 1 until next space
} else {
use string::SubstMarks;
return t.subst_marks(mrkrs.to_vec(), "\n")
}
},
}
},
Some(x) => {
wordwrap(t, chunk, offset+x+1, mrkrs)
},
}
};
wordwrap(self, width+1, 0, &mut markers)
}
}
| match t[offset..*vec![offset+chunk,t.len()].iter().min().unwrap()].rfind("\n") {
None => {
match t[offset..*vec![offset+chunk,t.len()].iter().min().unwrap()].rfind(" ") {
Some(x) => {
let mut eows = x; // end of white space
if offset+chunk < t.len() { // check if white space continues
match t.seek_end_of_whitespace(offset+x) {
Some(a) => {
if a.ne(&0) {
eows = x+a-1;
}
},
None => {},
}
}
if offset+chunk < t.len() { // safe to seek ahead by 1 or not end of string
if !["\n".chars().next().unwrap(), " ".chars().next().unwrap()].contains(
&t[offset+eows+1..offset+eows+2].chars().next().unwrap()
) {
mrkrs.push(offset+eows) | identifier_body |
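The `squeeze` rule in the rows above only drops a character when it repeats its immediate predecessor and is either in the target set or the target set is empty. A simplified `char`-based sketch of that rule (the crate itself compares grapheme byte slices):

```rust
// Sketch of `squeeze`: collapse runs of identical adjacent characters to
// one, for every character when `targets` is empty, otherwise only for
// characters listed in `targets`.
fn squeeze(s: &str, targets: &str) -> String {
    let everything = targets.is_empty();
    let mut out = String::with_capacity(s.len());
    let mut last: Option<char> = None;
    for c in s.chars() {
        let duplicate = last == Some(c);
        // Keep the char unless it duplicates its predecessor AND is squeezable.
        if !duplicate || !(everything || targets.contains(c)) {
            out.push(c);
        }
        last = Some(c);
    }
    out
}

fn main() {
    assert_eq!(squeeze("yellow moon", ""), "yelow mon"); // squeeze everything
    assert_eq!(squeeze("yellow moon", "o"), "yellow mon"); // only 'o' runs
}
```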
issue-4448.rs | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// pretty-expanded FIXME #23616
use std::sync::mpsc::channel;
use std::thread;
pub fn | () {
let (tx, rx) = channel::<&'static str>();
let t = thread::spawn(move|| {
assert_eq!(rx.recv().unwrap(), "hello, world");
});
tx.send("hello, world").unwrap();
t.join().ok().unwrap();
}
| main | identifier_name |
issue-4448.rs | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// pretty-expanded FIXME #23616
use std::sync::mpsc::channel;
use std::thread;
pub fn main() | {
let (tx, rx) = channel::<&'static str>();
let t = thread::spawn(move|| {
assert_eq!(rx.recv().unwrap(), "hello, world");
});
tx.send("hello, world").unwrap();
t.join().ok().unwrap();
} | identifier_body | |
prelude.rs | // Copyright 2018 Developers of the Rand project.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Convenience re-export of common members
//!
//! Like the standard library's prelude, this module simplifies importing of | //! ```
//! use rand::prelude::*;
//! # let mut r = StdRng::from_rng(thread_rng()).unwrap();
//! # let _: f32 = r.gen();
//! ```
#[doc(no_inline)] pub use crate::distributions::Distribution;
#[cfg(feature = "small_rng")]
#[doc(no_inline)]
pub use crate::rngs::SmallRng;
#[cfg(feature = "std_rng")]
#[doc(no_inline)] pub use crate::rngs::StdRng;
#[doc(no_inline)]
#[cfg(all(feature = "std", feature = "std_rng"))]
pub use crate::rngs::ThreadRng;
#[doc(no_inline)] pub use crate::seq::{IteratorRandom, SliceRandom};
#[doc(no_inline)]
#[cfg(all(feature = "std", feature = "std_rng"))]
pub use crate::{random, thread_rng};
#[doc(no_inline)] pub use crate::{CryptoRng, Rng, RngCore, SeedableRng}; | //! common items. Unlike the standard prelude, the contents of this module must
//! be imported manually:
//! | random_line_split |
test_create_embed.rs | #[macro_use] extern crate serde_json;
extern crate serenity;
use serde_json::Value;
use serenity::model::{Embed, EmbedField, EmbedImage};
use serenity::utils::builder::CreateEmbed;
use serenity::utils::Colour;
#[test]
fn | () {
let embed = Embed {
author: None,
colour: Colour::new(0xFF0011),
description: Some("This is a test description".to_owned()),
fields: vec![
EmbedField {
inline: false,
name: "a".to_owned(),
value: "b".to_owned(),
},
EmbedField {
inline: true,
name: "c".to_owned(),
value: "z".to_owned(),
},
],
image: Some(EmbedImage {
height: 213,
proxy_url: "a".to_owned(),
url: "https://i.imgur.com/XfWpfCV.gif".to_owned(),
width: 224,
}),
kind: "rich".to_owned(),
provider: None,
thumbnail: None,
timestamp: None,
title: Some("hakase".to_owned()),
url: Some("https://i.imgur.com/XfWpfCV.gif".to_owned()),
video: None,
};
let builder = CreateEmbed::from(embed)
.colour(0xFF0011)
.description("This is a hakase description")
.image("https://i.imgur.com/XfWpfCV.gif")
.title("still a hakase")
.url("https://i.imgur.com/XfWpfCV.gif");
let built = Value::Object(builder.0);
let obj = json!({
"color": 0xFF0011,
"description": "This is a hakase description",
"title": "still a hakase",
"type": "rich",
"url": "https://i.imgur.com/XfWpfCV.gif",
"fields": [
{
"inline": false,
"name": "a",
"value": "b",
},
{
"inline": true,
"name": "c",
"value": "z",
},
],
"image": {
"url": "https://i.imgur.com/XfWpfCV.gif",
},
});
assert_eq!(built, obj);
}
| test_from_embed | identifier_name |
test_create_embed.rs | #[macro_use] extern crate serde_json;
extern crate serenity;
use serde_json::Value;
use serenity::model::{Embed, EmbedField, EmbedImage};
use serenity::utils::builder::CreateEmbed;
use serenity::utils::Colour;
#[test]
fn test_from_embed() {
let embed = Embed {
author: None,
colour: Colour::new(0xFF0011),
description: Some("This is a test description".to_owned()),
fields: vec![
EmbedField {
inline: false,
name: "a".to_owned(),
value: "b".to_owned(),
},
EmbedField {
inline: true,
name: "c".to_owned(),
value: "z".to_owned(),
},
],
image: Some(EmbedImage {
height: 213,
proxy_url: "a".to_owned(),
url: "https://i.imgur.com/XfWpfCV.gif".to_owned(),
width: 224,
}),
kind: "rich".to_owned(),
provider: None,
thumbnail: None,
timestamp: None,
title: Some("hakase".to_owned()),
url: Some("https://i.imgur.com/XfWpfCV.gif".to_owned()),
video: None,
};
let builder = CreateEmbed::from(embed)
.colour(0xFF0011)
.description("This is a hakase description")
.image("https://i.imgur.com/XfWpfCV.gif")
.title("still a hakase")
.url("https://i.imgur.com/XfWpfCV.gif");
let built = Value::Object(builder.0);
let obj = json!({
"color": 0xFF0011,
"description": "This is a hakase description",
"title": "still a hakase",
"type": "rich",
"url": "https://i.imgur.com/XfWpfCV.gif",
"fields": [
{
"inline": false,
"name": "a",
"value": "b",
},
{
"inline": true,
"name": "c",
"value": "z",
},
],
"image": {
"url": "https://i.imgur.com/XfWpfCV.gif",
},
});
assert_eq!(built, obj); | } | random_line_split | |
test_create_embed.rs | #[macro_use] extern crate serde_json;
extern crate serenity;
use serde_json::Value;
use serenity::model::{Embed, EmbedField, EmbedImage};
use serenity::utils::builder::CreateEmbed;
use serenity::utils::Colour;
#[test]
fn test_from_embed() | url: "https://i.imgur.com/XfWpfCV.gif".to_owned(),
width: 224,
}),
kind: "rich".to_owned(),
provider: None,
thumbnail: None,
timestamp: None,
title: Some("hakase".to_owned()),
url: Some("https://i.imgur.com/XfWpfCV.gif".to_owned()),
video: None,
};
let builder = CreateEmbed::from(embed)
.colour(0xFF0011)
.description("This is a hakase description")
.image("https://i.imgur.com/XfWpfCV.gif")
.title("still a hakase")
.url("https://i.imgur.com/XfWpfCV.gif");
let built = Value::Object(builder.0);
let obj = json!({
"color": 0xFF0011,
"description": "This is a hakase description",
"title": "still a hakase",
"type": "rich",
"url": "https://i.imgur.com/XfWpfCV.gif",
"fields": [
{
"inline": false,
"name": "a",
"value": "b",
},
{
"inline": true,
"name": "c",
"value": "z",
},
],
"image": {
"url": "https://i.imgur.com/XfWpfCV.gif",
},
});
assert_eq!(built, obj);
}
| {
let embed = Embed {
author: None,
colour: Colour::new(0xFF0011),
description: Some("This is a test description".to_owned()),
fields: vec![
EmbedField {
inline: false,
name: "a".to_owned(),
value: "b".to_owned(),
},
EmbedField {
inline: true,
name: "c".to_owned(),
value: "z".to_owned(),
},
],
image: Some(EmbedImage {
height: 213,
proxy_url: "a".to_owned(), | identifier_body |
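The final row below (`mod.rs`) implements NES PPU VRAM, where the sprite-palette entries at $3F10/$3F14/$3F18/$3F1C are treated as aliases of the background entries $3F00/$3F04/$3F08/$3F0C. A sketch of just that index step, following the `match addr & 0xf` arms used in `read_palette` and `write_ppu_data`:

```rust
// Sketch of the palette index mapping: mask the address into the 32-byte
// palette window, then fold the mirrored sprite entries back onto the
// background entries.
fn palette_index(addr: u16) -> usize {
    let addr = (addr & 0x1f) as usize;
    match addr & 0xf {
        0x0 => 0x0,
        0x4 => 0x4,
        0x8 => 0x8,
        0xc => 0xc,
        _ => addr,
    }
}

fn main() {
    assert_eq!(palette_index(0x3f10), 0x00); // sprite backdrop mirrors background
    assert_eq!(palette_index(0x3f14), 0x04);
    assert_eq!(palette_index(0x3f15), 0x15); // non-mirrored entries pass through
}
```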
mod.rs | #[cfg(test)]
pub mod mocks;
#[cfg(test)]
mod spec_tests;
use crate::{
cart::Cart,
ppu::{control_register::IncrementAmount, write_latch::LatchState},
};
use std::cell::Cell;
pub trait IVram: Default {
fn write_ppu_addr(&self, latch_state: LatchState);
fn write_ppu_data<C: Cart>(&mut self, val: u8, inc_amount: IncrementAmount, cart: &mut C);
fn read_ppu_data<C: Cart>(&self, inc_amount: IncrementAmount, cart: &C) -> u8;
fn ppu_data<C: Cart>(&self, cart: &C) -> u8;
fn read<C: Cart>(&self, addr: u16, cart: &C) -> u8;
fn read_palette(&self, addr: u16) -> u8;
fn addr(&self) -> u16;
fn scroll_write(&self, latch_state: LatchState);
fn control_write(&self, val: u8);
fn coarse_x_increment(&self);
fn fine_y_increment(&self);
fn copy_horizontal_pos_to_addr(&self);
fn copy_vertical_pos_to_addr(&self);
fn fine_x(&self) -> u8;
}
pub struct Vram {
address: Cell<u16>,
name_tables: [u8; 0x1000],
palette: [u8; 0x20],
ppu_data_buffer: Cell<u8>,
t: Cell<u16>,
fine_x: Cell<u8>,
}
impl Default for Vram {
fn default() -> Self {
Vram {
address: Cell::new(0),
name_tables: [0; 0x1000],
palette: [0; 0x20],
ppu_data_buffer: Cell::new(0),
t: Cell::new(0),
fine_x: Cell::new(0),
}
}
}
impl IVram for Vram {
fn | (&self, latch_state: LatchState) {
// Addresses greater than 0x3fff are mirrored down
match latch_state {
LatchState::FirstWrite(val) => {
// t:..FEDCBA........ = d:..FEDCBA
// t:.X.............. = 0
let t = self.t.get() & 0b1000_0000_1111_1111;
self.t.set(((u16::from(val) & 0b0011_1111) << 8) | t)
}
LatchState::SecondWrite(val) => {
// t:....... HGFEDCBA = d: HGFEDCBA
// v = t
let t = u16::from(val) | (self.t.get() & 0b0111_1111_0000_0000);
self.t.set(t);
self.address.set(t);
}
}
}
fn write_ppu_data<C: Cart>(&mut self, val: u8, inc_amount: IncrementAmount, cart: &mut C) {
let addr = self.address.get();
if addr < 0x2000 {
cart.write_chr(addr, val);
} else if addr < 0x3f00 {
self.name_tables[addr as usize & 0x0fff] = val;
} else if addr < 0x4000 {
let addr = addr as usize & 0x1f;
// Certain sprite addresses are mirrored back into background addresses
let addr = match addr & 0xf {
0x0 => 0x0,
0x4 => 0x4,
0x8 => 0x8,
0xc => 0xc,
_ => addr,
};
self.palette[addr] = val;
}
match inc_amount {
IncrementAmount::One => self.address.set(self.address.get() + 1),
IncrementAmount::ThirtyTwo => self.address.set(self.address.get() + 32),
}
}
fn read_ppu_data<C: Cart>(&self, inc_amount: IncrementAmount, cart: &C) -> u8 {
let val = self.ppu_data(cart);
match inc_amount {
IncrementAmount::One => self.address.set(self.address.get() + 1),
IncrementAmount::ThirtyTwo => self.address.set(self.address.get() + 32),
}
val
}
fn ppu_data<C: Cart>(&self, cart: &C) -> u8 {
let addr = self.address.get();
let val = self.read(addr, cart);
// When reading while the VRAM address is in the range 0-$3EFF (i.e., before the palettes),
// the read will return the contents of an internal read buffer. This internal buffer is
// updated only when reading PPUDATA, and so is preserved across frames. After the CPU reads
// and gets the contents of the internal buffer, the PPU will immediately update the
// internal buffer with the byte at the current VRAM address
if addr < 0x3f00 {
let buffered_val = self.ppu_data_buffer.get();
self.ppu_data_buffer.set(val);
buffered_val
} else {
val
}
}
fn read<C: Cart>(&self, addr: u16, cart: &C) -> u8 {
if addr < 0x2000 {
cart.read_chr(addr)
} else if addr < 0x3f00 {
self.name_tables[addr as usize & 0x0fff]
} else if addr < 0x4000 {
let addr = addr & 0x1f;
self.read_palette(addr)
} else {
panic!("Invalid vram read")
}
}
fn read_palette(&self, addr: u16) -> u8 {
// Certain sprite addresses are mirrored back into background addresses
let addr = match addr & 0xf {
0x0 => 0x0,
0x4 => 0x4,
0x8 => 0x8,
0xc => 0xc,
_ => addr,
};
self.palette[addr as usize]
}
fn addr(&self) -> u16 {
self.address.get()
}
fn scroll_write(&self, latch_state: LatchState) {
match latch_state {
LatchState::FirstWrite(val) => {
// t:..........HGFED = d: HGFED...
let t = self.t.get() & 0b_1111_1111_1110_0000;
self.t.set(((u16::from(val) & 0b_1111_1000) >> 3) | t);
//x: CBA = d:.....CBA
self.fine_x.set(val & 0b_0000_0111);
}
LatchState::SecondWrite(val) => {
// t: CBA..HG FED..... = d: HGFEDCBA
let t = self.t.get() & 0b_0000_1100_0001_1111;
let cba_mask = (u16::from(val) & 0b_0000_0111) << 12;
let hgfed_mask = (u16::from(val) & 0b_1111_1000) << 2;
self.t.set((cba_mask | hgfed_mask) | t);
}
}
}
fn control_write(&self, val: u8) {
// t:...BA.......... = d:......BA
let t = self.t.get() & 0b0111_0011_1111_1111;
let new_t = ((u16::from(val) & 0b0011) << 10) | t;
self.t.set(new_t);
}
fn coarse_x_increment(&self) {
let v = self.address.get();
// The coarse X component of v needs to be incremented when the next tile is reached. Bits
// 0-4 are incremented, with overflow toggling bit 10. This means that bits 0-4 count from 0
// to 31 across a single nametable, and bit 10 selects the current nametable horizontally.
let v = if v & 0x001F == 31 {
// set coarse X = 0 and switch horizontal nametable
(v & !0x001F) ^ 0x0400
} else {
// increment coarse X
v + 1
};
self.address.set(v);
}
fn fine_y_increment(&self) {
let v = self.address.get();
let v = if v & 0x7000 != 0x7000 {
// if fine Y < 7, increment fine Y
v + 0x1000
} else {
// fine Y = 7: reset fine Y to 0
let v = v & !0x7000;
// let y = coarse Y
let mut y = (v & 0x03E0) >> 5;
let v = if y == 29 {
// set coarse Y to 0
y = 0;
// switch vertical nametable
v ^ 0x0800
} else if y == 31 {
// set coarse Y = 0, nametable not switched
y = 0;
v
} else {
// increment coarse Y
y += 1;
v
};
// put coarse Y back into v
(v & !0x03E0) | (y << 5)
};
self.address.set(v);
}
fn copy_horizontal_pos_to_addr(&self) {
// At dot 257 of each scanline, if rendering is enabled, the PPU copies all bits related to
// horizontal position from t to v:
// v:....F.....EDCBA = t:....F.....EDCBA
let v = self.address.get() & 0b0111_1011_1110_0000;
self.address.set((self.t.get() & 0b0000_0100_0001_1111) | v)
}
fn copy_vertical_pos_to_addr(&self) {
// During dots 280 to 304 of the pre-render scanline (end of vblank), if rendering is
// enabled, at the end of vblank, shortly after the horizontal bits are copied from t to v
// at dot 257, the PPU will repeatedly copy the vertical bits from t to v from dots 280 to
// 304, completing the full initialization of v from t:
// v: IHGF.ED CBA..... = t: IHGF.ED CBA.....
let v = self.address.get() & 0b0000_0100_0001_1111;
self.address.set((self.t.get() & 0b0111_1011_1110_0000) | v)
}
fn fine_x(&self) -> u8 {
self.fine_x.get()
}
}
| write_ppu_addr | identifier_name |
mod.rs | #[cfg(test)]
pub mod mocks;
#[cfg(test)]
mod spec_tests;
use crate::{
cart::Cart,
ppu::{control_register::IncrementAmount, write_latch::LatchState},
};
use std::cell::Cell;
pub trait IVram: Default {
fn write_ppu_addr(&self, latch_state: LatchState);
fn write_ppu_data<C: Cart>(&mut self, val: u8, inc_amount: IncrementAmount, cart: &mut C);
fn read_ppu_data<C: Cart>(&self, inc_amount: IncrementAmount, cart: &C) -> u8;
fn ppu_data<C: Cart>(&self, cart: &C) -> u8;
fn read<C: Cart>(&self, addr: u16, cart: &C) -> u8;
fn read_palette(&self, addr: u16) -> u8;
fn addr(&self) -> u16;
fn scroll_write(&self, latch_state: LatchState);
fn control_write(&self, val: u8);
fn coarse_x_increment(&self);
fn fine_y_increment(&self);
fn copy_horizontal_pos_to_addr(&self);
fn copy_vertical_pos_to_addr(&self);
fn fine_x(&self) -> u8;
}
pub struct Vram {
address: Cell<u16>,
name_tables: [u8; 0x1000],
palette: [u8; 0x20],
ppu_data_buffer: Cell<u8>,
t: Cell<u16>,
fine_x: Cell<u8>,
}
impl Default for Vram {
fn default() -> Self {
Vram {
address: Cell::new(0),
name_tables: [0; 0x1000],
palette: [0; 0x20],
ppu_data_buffer: Cell::new(0),
t: Cell::new(0),
fine_x: Cell::new(0),
}
}
}
impl IVram for Vram {
fn write_ppu_addr(&self, latch_state: LatchState) {
// Addresses greater than 0x3fff are mirrored down
match latch_state {
LatchState::FirstWrite(val) => {
// t:..FEDCBA........ = d:..FEDCBA
// t:.X.............. = 0
let t = self.t.get() & 0b1000_0000_1111_1111;
self.t.set(((u16::from(val) & 0b0011_1111) << 8) | t)
}
LatchState::SecondWrite(val) => {
// t:....... HGFEDCBA = d: HGFEDCBA
// v = t
let t = u16::from(val) | (self.t.get() & 0b0111_1111_0000_0000);
self.t.set(t);
self.address.set(t);
}
}
}
fn write_ppu_data<C: Cart>(&mut self, val: u8, inc_amount: IncrementAmount, cart: &mut C) {
let addr = self.address.get();
if addr < 0x2000 {
cart.write_chr(addr, val);
} else if addr < 0x3f00 {
self.name_tables[addr as usize & 0x0fff] = val;
} else if addr < 0x4000 {
let addr = addr as usize & 0x1f;
// Certain sprite addresses are mirrored back into background addresses
let addr = match addr & 0xf {
0x0 => 0x0,
0x4 => 0x4,
0x8 => 0x8,
0xc => 0xc,
_ => addr,
};
self.palette[addr] = val;
}
match inc_amount {
IncrementAmount::One => self.address.set(self.address.get() + 1),
IncrementAmount::ThirtyTwo => self.address.set(self.address.get() + 32),
} | IncrementAmount::One => self.address.set(self.address.get() + 1),
IncrementAmount::ThirtyTwo => self.address.set(self.address.get() + 32),
}
val
}
fn ppu_data<C: Cart>(&self, cart: &C) -> u8 {
let addr = self.address.get();
let val = self.read(addr, cart);
// When reading while the VRAM address is in the range 0-$3EFF (i.e., before the palettes),
// the read will return the contents of an internal read buffer. This internal buffer is
// updated only when reading PPUDATA, and so is preserved across frames. After the CPU reads
// and gets the contents of the internal buffer, the PPU will immediately update the
// internal buffer with the byte at the current VRAM address
if addr < 0x3f00 {
let buffered_val = self.ppu_data_buffer.get();
self.ppu_data_buffer.set(val);
buffered_val
} else {
val
}
}
fn read<C: Cart>(&self, addr: u16, cart: &C) -> u8 {
if addr < 0x2000 {
cart.read_chr(addr)
} else if addr < 0x3f00 {
self.name_tables[addr as usize & 0x0fff]
} else if addr < 0x4000 {
let addr = addr & 0x1f;
self.read_palette(addr)
} else {
panic!("Invalid vram read")
}
}
fn read_palette(&self, addr: u16) -> u8 {
// Certain sprite addresses are mirrored back into background addresses
let addr = match addr & 0xf {
0x0 => 0x0,
0x4 => 0x4,
0x8 => 0x8,
0xc => 0xc,
_ => addr,
};
self.palette[addr as usize]
}
fn addr(&self) -> u16 {
self.address.get()
}
fn scroll_write(&self, latch_state: LatchState) {
match latch_state {
LatchState::FirstWrite(val) => {
// t:..........HGFED = d: HGFED...
let t = self.t.get() & 0b_1111_1111_1110_0000;
self.t.set(((u16::from(val) & 0b_1111_1000) >> 3) | t);
//x: CBA = d:.....CBA
self.fine_x.set(val & 0b_0000_0111);
}
LatchState::SecondWrite(val) => {
// t: CBA..HG FED..... = d: HGFEDCBA
let t = self.t.get() & 0b_0000_1100_0001_1111;
let cba_mask = (u16::from(val) & 0b_0000_0111) << 12;
let hgfed_mask = (u16::from(val) & 0b_1111_1000) << 2;
self.t.set((cba_mask | hgfed_mask) | t);
}
}
}
fn control_write(&self, val: u8) {
// t:...BA.......... = d:......BA
let t = self.t.get() & 0b0111_0011_1111_1111;
let new_t = ((u16::from(val) & 0b0011) << 10) | t;
self.t.set(new_t);
}
fn coarse_x_increment(&self) {
let v = self.address.get();
// The coarse X component of v needs to be incremented when the next tile is reached. Bits
// 0-4 are incremented, with overflow toggling bit 10. This means that bits 0-4 count from 0
// to 31 across a single nametable, and bit 10 selects the current nametable horizontally.
let v = if v & 0x001F == 31 {
// set coarse X = 0 and switch horizontal nametable
(v & !0x001F) ^ 0x0400
} else {
// increment coarse X
v + 1
};
self.address.set(v);
}
fn fine_y_increment(&self) {
let v = self.address.get();
let v = if v & 0x7000 != 0x7000 {
// if fine Y < 7, increment fine Y
v + 0x1000
} else {
// fine Y = 7: reset fine Y to 0
let v = v & !0x7000;
// let y = coarse Y
let mut y = (v & 0x03E0) >> 5;
let v = if y == 29 {
// set coarse Y to 0
y = 0;
// switch vertical nametable
v ^ 0x0800
} else if y == 31 {
// set coarse Y = 0, nametable not switched
y = 0;
v
} else {
// increment coarse Y
y += 1;
v
};
// put coarse Y back into v
(v & !0x03E0) | (y << 5)
};
self.address.set(v);
}
fn copy_horizontal_pos_to_addr(&self) {
// At dot 257 of each scanline, if rendering is enabled, the PPU copies all bits related to
// horizontal position from t to v:
// v:....F.....EDCBA = t:....F.....EDCBA
let v = self.address.get() & 0b0111_1011_1110_0000;
self.address.set((self.t.get() & 0b0000_0100_0001_1111) | v)
}
fn copy_vertical_pos_to_addr(&self) {
// During dots 280 to 304 of the pre-render scanline (end of vblank), if rendering is
// enabled, at the end of vblank, shortly after the horizontal bits are copied from t to v
// at dot 257, the PPU will repeatedly copy the vertical bits from t to v from dots 280 to
// 304, completing the full initialization of v from t:
// v: IHGF.ED CBA..... = t: IHGF.ED CBA.....
let v = self.address.get() & 0b0000_0100_0001_1111;
self.address.set((self.t.get() & 0b0111_1011_1110_0000) | v)
}
fn fine_x(&self) -> u8 {
self.fine_x.get()
}
} | }
fn read_ppu_data<C: Cart>(&self, inc_amount: IncrementAmount, cart: &C) -> u8 {
let val = self.ppu_data(cart);
match inc_amount { | random_line_split |
mod.rs | #[cfg(test)]
pub mod mocks;
#[cfg(test)]
mod spec_tests;
use crate::{
cart::Cart,
ppu::{control_register::IncrementAmount, write_latch::LatchState},
};
use std::cell::Cell;
pub trait IVram: Default {
fn write_ppu_addr(&self, latch_state: LatchState);
fn write_ppu_data<C: Cart>(&mut self, val: u8, inc_amount: IncrementAmount, cart: &mut C);
fn read_ppu_data<C: Cart>(&self, inc_amount: IncrementAmount, cart: &C) -> u8;
fn ppu_data<C: Cart>(&self, cart: &C) -> u8;
fn read<C: Cart>(&self, addr: u16, cart: &C) -> u8;
fn read_palette(&self, addr: u16) -> u8;
fn addr(&self) -> u16;
fn scroll_write(&self, latch_state: LatchState);
fn control_write(&self, val: u8);
fn coarse_x_increment(&self);
fn fine_y_increment(&self);
fn copy_horizontal_pos_to_addr(&self);
fn copy_vertical_pos_to_addr(&self);
fn fine_x(&self) -> u8;
}
pub struct Vram {
address: Cell<u16>,
name_tables: [u8; 0x1000],
palette: [u8; 0x20],
ppu_data_buffer: Cell<u8>,
t: Cell<u16>,
fine_x: Cell<u8>,
}
impl Default for Vram {
fn default() -> Self {
Vram {
address: Cell::new(0),
name_tables: [0; 0x1000],
palette: [0; 0x20],
ppu_data_buffer: Cell::new(0),
t: Cell::new(0),
fine_x: Cell::new(0),
}
}
}
impl IVram for Vram {
fn write_ppu_addr(&self, latch_state: LatchState) {
// Addresses greater than 0x3fff are mirrored down
match latch_state {
LatchState::FirstWrite(val) => {
// t:..FEDCBA........ = d:..FEDCBA
// t:.X.............. = 0
let t = self.t.get() & 0b1000_0000_1111_1111;
self.t.set(((u16::from(val) & 0b0011_1111) << 8) | t)
}
LatchState::SecondWrite(val) => {
// t:....... HGFEDCBA = d: HGFEDCBA
// v = t
let t = u16::from(val) | (self.t.get() & 0b0111_1111_0000_0000);
self.t.set(t);
self.address.set(t);
}
}
}
fn write_ppu_data<C: Cart>(&mut self, val: u8, inc_amount: IncrementAmount, cart: &mut C) {
let addr = self.address.get();
if addr < 0x2000 {
cart.write_chr(addr, val);
} else if addr < 0x3f00 {
self.name_tables[addr as usize & 0x0fff] = val;
} else if addr < 0x4000 {
let addr = addr as usize & 0x1f;
// Certain sprite addresses are mirrored back into background addresses
let addr = match addr & 0xf {
0x0 => 0x0,
0x4 => 0x4,
0x8 => 0x8,
0xc => 0xc,
_ => addr,
};
self.palette[addr] = val;
}
match inc_amount {
IncrementAmount::One => self.address.set(self.address.get() + 1),
IncrementAmount::ThirtyTwo => self.address.set(self.address.get() + 32),
}
}
fn read_ppu_data<C: Cart>(&self, inc_amount: IncrementAmount, cart: &C) -> u8 {
let val = self.ppu_data(cart);
match inc_amount {
IncrementAmount::One => self.address.set(self.address.get() + 1),
IncrementAmount::ThirtyTwo => self.address.set(self.address.get() + 32),
}
val
}
fn ppu_data<C: Cart>(&self, cart: &C) -> u8 {
let addr = self.address.get();
let val = self.read(addr, cart);
// When reading while the VRAM address is in the range 0-$3EFF (i.e., before the palettes),
// the read will return the contents of an internal read buffer. This internal buffer is
// updated only when reading PPUDATA, and so is preserved across frames. After the CPU reads
// and gets the contents of the internal buffer, the PPU will immediately update the
// internal buffer with the byte at the current VRAM address
if addr < 0x3f00 {
let buffered_val = self.ppu_data_buffer.get();
self.ppu_data_buffer.set(val);
buffered_val
} else {
val
}
}
fn read<C: Cart>(&self, addr: u16, cart: &C) -> u8 {
if addr < 0x2000 {
cart.read_chr(addr)
} else if addr < 0x3f00 {
self.name_tables[addr as usize & 0x0fff]
} else if addr < 0x4000 {
let addr = addr & 0x1f;
self.read_palette(addr)
} else {
panic!("Invalid vram read")
}
}
fn read_palette(&self, addr: u16) -> u8 {
// Certain sprite addresses are mirrored back into background addresses
let addr = match addr & 0xf {
0x0 => 0x0,
0x4 => 0x4,
0x8 => 0x8,
0xc => 0xc,
_ => addr,
};
self.palette[addr as usize]
}
fn addr(&self) -> u16 {
self.address.get()
}
fn scroll_write(&self, latch_state: LatchState) {
match latch_state {
LatchState::FirstWrite(val) => {
// t:..........HGFED = d: HGFED...
let t = self.t.get() & 0b_1111_1111_1110_0000;
self.t.set(((u16::from(val) & 0b_1111_1000) >> 3) | t);
//x: CBA = d:.....CBA
self.fine_x.set(val & 0b_0000_0111);
}
LatchState::SecondWrite(val) => {
// t: CBA..HG FED..... = d: HGFEDCBA
let t = self.t.get() & 0b_0000_1100_0001_1111;
let cba_mask = (u16::from(val) & 0b_0000_0111) << 12;
let hgfed_mask = (u16::from(val) & 0b_1111_1000) << 2;
self.t.set((cba_mask | hgfed_mask) | t);
}
}
}
fn control_write(&self, val: u8) {
// t:...BA.......... = d:......BA
let t = self.t.get() & 0b0111_0011_1111_1111;
let new_t = ((u16::from(val) & 0b0011) << 10) | t;
self.t.set(new_t);
}
fn coarse_x_increment(&self) |
fn fine_y_increment(&self) {
let v = self.address.get();
let v = if v & 0x7000 != 0x7000 {
// if fine Y < 7, increment fine Y
v + 0x1000
} else {
// fine Y = 7: reset fine Y to 0
let v = v & !0x7000;
// let y = coarse Y
let mut y = (v & 0x03E0) >> 5;
let v = if y == 29 {
// set coarse Y to 0
y = 0;
// switch vertical nametable
v ^ 0x0800
} else if y == 31 {
// set coarse Y = 0, nametable not switched
y = 0;
v
} else {
// increment coarse Y
y += 1;
v
};
// put coarse Y back into v
(v & !0x03E0) | (y << 5)
};
self.address.set(v);
}
fn copy_horizontal_pos_to_addr(&self) {
// At dot 257 of each scanline, if rendering is enabled, the PPU copies all bits related to
// horizontal position from t to v:
// v:....F.....EDCBA = t:....F.....EDCBA
let v = self.address.get() & 0b0111_1011_1110_0000;
self.address.set((self.t.get() & 0b0000_0100_0001_1111) | v)
}
fn copy_vertical_pos_to_addr(&self) {
// During dots 280 to 304 of the pre-render scanline (end of vblank), if rendering is
// enabled, at the end of vblank, shortly after the horizontal bits are copied from t to v
// at dot 257, the PPU will repeatedly copy the vertical bits from t to v from dots 280 to
// 304, completing the full initialization of v from t:
// v: IHGF.ED CBA..... = t: IHGF.ED CBA.....
let v = self.address.get() & 0b0000_0100_0001_1111;
self.address.set((self.t.get() & 0b0111_1011_1110_0000) | v)
}
fn fine_x(&self) -> u8 {
self.fine_x.get()
}
}
| {
let v = self.address.get();
// The coarse X component of v needs to be incremented when the next tile is reached. Bits
// 0-4 are incremented, with overflow toggling bit 10. This means that bits 0-4 count from 0
// to 31 across a single nametable, and bit 10 selects the current nametable horizontally.
let v = if v & 0x001F == 31 {
// set coarse X = 0 and switch horizontal nametable
(v & !0x001F) ^ 0x0400
} else {
// increment coarse X
v + 1
};
self.address.set(v);
} | identifier_body |
external_reference.rs | use std::str::FromStr;
use thiserror::Error;
use crate::publications::reference_type;
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct ExternalReference(reference_type::ReferenceType, String);
#[derive(Error, Debug)]
pub enum ConversionError {
#[error("No prefix for the reference id: `{0}`")]
MissingPrefix(String),
#[error("Unknown type of reference: {0}")]
RefTypeError(#[from] reference_type::ConversionError),
#[error("Format of reference `{0}` is invalid")]
InvalidFormat(String),
}
impl ExternalReference {
pub fn new(ref_type: reference_type::ReferenceType, ref_id: String) -> Self {
Self(ref_type, ref_id)
}
pub fn pmid(ref_id: String) -> Self {
Self(reference_type::ReferenceType::Pmid, ref_id)
}
pub fn pmcid(ref_id: String) -> Self {
Self(reference_type::ReferenceType::Pmcid, ref_id)
}
pub fn doi(ref_id: String) -> Self {
Self(reference_type::ReferenceType::Doi, ref_id)
}
pub fn ref_type(&self) -> reference_type::ReferenceType {
self.0
}
pub fn ref_id(&self) -> &str {
&self.1
}
pub fn | (&self) -> String {
format!("{}:{}", self.0, self.1)
}
}
impl FromStr for ExternalReference {
type Err = ConversionError;
fn from_str(raw: &str) -> Result<ExternalReference, Self::Err> {
let parts: Vec<&str> = raw.split(":").collect();
if parts.len() == 1 {
return Err(Self::Err::MissingPrefix(raw.to_string()));
}
if parts.len() > 2 {
return Err(Self::Err::InvalidFormat(raw.to_string()));
}
let ref_type: reference_type::ReferenceType = parts[0].parse()?;
Ok(ExternalReference(ref_type, parts[1].to_string()))
}
}
impl From<ExternalReference> for String {
fn from(raw: ExternalReference) -> String {
format!("{}:{}", raw.0, raw.1)
}
}
| to_string | identifier_name |
external_reference.rs | use std::str::FromStr;
use thiserror::Error;
use crate::publications::reference_type;
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct ExternalReference(reference_type::ReferenceType, String);
#[derive(Error, Debug)]
pub enum ConversionError {
#[error("No prefix for the reference id: `{0}`")]
MissingPrefix(String),
#[error("Unknown type of reference: {0}")]
RefTypeError(#[from] reference_type::ConversionError),
#[error("Format of reference `{0}` is invalid")]
InvalidFormat(String),
}
impl ExternalReference {
pub fn new(ref_type: reference_type::ReferenceType, ref_id: String) -> Self {
Self(ref_type, ref_id)
}
pub fn pmid(ref_id: String) -> Self {
Self(reference_type::ReferenceType::Pmid, ref_id)
}
pub fn pmcid(ref_id: String) -> Self {
Self(reference_type::ReferenceType::Pmcid, ref_id)
}
pub fn doi(ref_id: String) -> Self {
Self(reference_type::ReferenceType::Doi, ref_id)
}
pub fn ref_type(&self) -> reference_type::ReferenceType {
self.0
}
pub fn ref_id(&self) -> &str |
pub fn to_string(&self) -> String {
format!("{}:{}", self.0, self.1)
}
}
impl FromStr for ExternalReference {
type Err = ConversionError;
fn from_str(raw: &str) -> Result<ExternalReference, Self::Err> {
let parts: Vec<&str> = raw.split(":").collect();
if parts.len() == 1 {
return Err(Self::Err::MissingPrefix(raw.to_string()));
}
if parts.len() > 2 {
return Err(Self::Err::InvalidFormat(raw.to_string()));
}
let ref_type: reference_type::ReferenceType = parts[0].parse()?;
Ok(ExternalReference(ref_type, parts[1].to_string()))
}
}
impl From<ExternalReference> for String {
fn from(raw: ExternalReference) -> String {
format!("{}:{}", raw.0, raw.1)
}
}
| {
&self.1
} | identifier_body |
external_reference.rs | use std::str::FromStr;
use thiserror::Error;
use crate::publications::reference_type;
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct ExternalReference(reference_type::ReferenceType, String);
#[derive(Error, Debug)]
pub enum ConversionError {
#[error("No prefix for the reference id: `{0}`")]
MissingPrefix(String),
#[error("Unknown type of reference: {0}")]
RefTypeError(#[from] reference_type::ConversionError),
#[error("Format of reference `{0}` is invalid")]
InvalidFormat(String),
}
impl ExternalReference {
pub fn new(ref_type: reference_type::ReferenceType, ref_id: String) -> Self {
Self(ref_type, ref_id)
}
pub fn pmid(ref_id: String) -> Self {
Self(reference_type::ReferenceType::Pmid, ref_id)
}
pub fn pmcid(ref_id: String) -> Self {
Self(reference_type::ReferenceType::Pmcid, ref_id)
}
pub fn doi(ref_id: String) -> Self {
Self(reference_type::ReferenceType::Doi, ref_id)
}
pub fn ref_type(&self) -> reference_type::ReferenceType {
self.0
}
pub fn ref_id(&self) -> &str {
&self.1
}
pub fn to_string(&self) -> String {
format!("{}:{}", self.0, self.1)
}
}
impl FromStr for ExternalReference {
type Err = ConversionError;
fn from_str(raw: &str) -> Result<ExternalReference, Self::Err> {
let parts: Vec<&str> = raw.split(":").collect();
if parts.len() == 1 { |
if parts.len() > 2 {
return Err(Self::Err::InvalidFormat(raw.to_string()));
}
let ref_type: reference_type::ReferenceType = parts[0].parse()?;
Ok(ExternalReference(ref_type, parts[1].to_string()))
}
}
impl From<ExternalReference> for String {
fn from(raw: ExternalReference) -> String {
format!("{}:{}", raw.0, raw.1)
}
} | return Err(Self::Err::MissingPrefix(raw.to_string()));
} | random_line_split |
external_reference.rs | use std::str::FromStr;
use thiserror::Error;
use crate::publications::reference_type;
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct ExternalReference(reference_type::ReferenceType, String);
#[derive(Error, Debug)]
pub enum ConversionError {
#[error("No prefix for the reference id: `{0}`")]
MissingPrefix(String),
#[error("Unknown type of reference: {0}")]
RefTypeError(#[from] reference_type::ConversionError),
#[error("Format of reference `{0}` is invalid")]
InvalidFormat(String),
}
impl ExternalReference {
pub fn new(ref_type: reference_type::ReferenceType, ref_id: String) -> Self {
Self(ref_type, ref_id)
}
pub fn pmid(ref_id: String) -> Self {
Self(reference_type::ReferenceType::Pmid, ref_id)
}
pub fn pmcid(ref_id: String) -> Self {
Self(reference_type::ReferenceType::Pmcid, ref_id)
}
pub fn doi(ref_id: String) -> Self {
Self(reference_type::ReferenceType::Doi, ref_id)
}
pub fn ref_type(&self) -> reference_type::ReferenceType {
self.0
}
pub fn ref_id(&self) -> &str {
&self.1
}
pub fn to_string(&self) -> String {
format!("{}:{}", self.0, self.1)
}
}
impl FromStr for ExternalReference {
type Err = ConversionError;
fn from_str(raw: &str) -> Result<ExternalReference, Self::Err> {
let parts: Vec<&str> = raw.split(":").collect();
if parts.len() == 1 {
return Err(Self::Err::MissingPrefix(raw.to_string()));
}
if parts.len() > 2 |
let ref_type: reference_type::ReferenceType = parts[0].parse()?;
Ok(ExternalReference(ref_type, parts[1].to_string()))
}
}
impl From<ExternalReference> for String {
fn from(raw: ExternalReference) -> String {
format!("{}:{}", raw.0, raw.1)
}
}
| {
return Err(Self::Err::InvalidFormat(raw.to_string()));
} | conditional_block |
main.rs | use std::collections::{HashMap};
use std::io::{self, BufRead};
use lazy_regex::regex;
use regex::Regex;
type H = HashMap<Header, Vec<(String, String, String)>>;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum Header {
Versioned { package: String, version: String },
Missing { package: String },
}
fn main() {
let mut lib_exes: H = Default::default();
let mut tests: H = Default::default();
let mut benches: H = Default::default();
let mut last_header: Option<Header> = None;
let header_versioned = regex!(
r#"^(?P<package>[a-zA-Z]([a-zA-Z0-9.-]*?))-(?P<version>(\d+(\.\d+)*)).+?is out of bounds for:$"#
);
let header_missing = regex!(r#"^(?P<package>[a-zA-Z]([a-zA-Z0-9.-]*)).+?depended on by:$"#);
let package = regex!(
r#"^- \[ \] (?P<package>[a-zA-Z]([a-zA-Z0-9.-]*?))-(?P<version>(\d+(\.\d+)*)).+?Used by: (?P<component>.+)$"#
);
// Ignore everything until the bounds issues show up.
let mut process_line = false;
for line in io::stdin().lock().lines().flatten() {
if is_reg_match(&line, regex!(r#"^\s*$"#)) {
// noop
} else if line == "curator: Snapshot dependency graph contains errors:" {
process_line = true;
} else if !process_line {
println!("[INFO] {}", line);
} else if let Some(cap) = package.captures(&line) {
let root = last_header.clone().unwrap();
let package = cap.name("package").unwrap().as_str();
let version = cap.name("version").unwrap().as_str();
let component = cap.name("component").unwrap().as_str();
match component {
"library" | "executable" => {
insert(&mut lib_exes, root, package, version, component)
}
"benchmark" => insert(&mut benches, root, package, version, "benchmarks"),
"test-suite" => insert(&mut tests, root, package, version, component),
_ => panic!("Bad component: {}", component),
}
} else if let Some(cap) = header_versioned.captures(&line) {
let package = cap.name("package").unwrap().as_str().to_owned();
let version = cap.name("version").unwrap().as_str().to_owned();
last_header = Some(Header::Versioned { package, version });
} else if let Some(cap) = header_missing.captures(&line) {
let package = cap.name("package").unwrap().as_str().to_owned();
last_header = Some(Header::Missing { package });
} else {
panic!("Unhandled: {:?}", line);
}
}
if !lib_exes.is_empty() {
println!("\nLIBS + EXES\n");
}
for (header, packages) in lib_exes {
for (package, version, component) in packages {
printer(" ", &package, true, &version, &component, &header);
}
}
if !tests.is_empty() |
for (header, packages) in tests {
for (package, version, component) in packages {
printer(" ", &package, false, &version, &component, &header);
}
}
if !benches.is_empty() {
println!("\nBENCHMARKS\n");
}
for (header, packages) in benches {
for (package, version, component) in packages {
printer(" ", &package, false, &version, &component, &header);
}
}
}
fn printer(
indent: &str,
package: &str,
lt0: bool,
version: &str,
component: &str,
header: &Header,
) {
let lt0 = if lt0 { " < 0" } else { "" };
println!(
"{indent}- {package}{lt0} # tried {package}-{version}, but its *{component}* {cause}",
indent = indent,
package = package,
lt0 = lt0,
version = version,
component = component,
cause = match header {
Header::Versioned { package, version } => format!(
"does not support: {package}-{version}",
package = package,
version = version
),
Header::Missing { package } => format!(
"requires the disabled package: {package}",
package = package
),
},
);
}
fn insert(h: &mut H, header: Header, package: &str, version: &str, component: &str) {
(*h.entry(header).or_insert_with(Vec::new)).push((
package.to_owned(),
version.to_owned(),
component.to_owned(),
));
}
fn is_reg_match(s: &str, r: &Regex) -> bool {
r.captures(s).is_some()
}
| {
println!("\nTESTS\n");
} | conditional_block |
main.rs | use std::collections::{HashMap};
use std::io::{self, BufRead};
use lazy_regex::regex;
use regex::Regex;
type H = HashMap<Header, Vec<(String, String, String)>>;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum Header {
Versioned { package: String, version: String },
Missing { package: String },
}
fn main() {
let mut lib_exes: H = Default::default();
let mut tests: H = Default::default();
let mut benches: H = Default::default();
let mut last_header: Option<Header> = None;
let header_versioned = regex!(
r#"^(?P<package>[a-zA-Z]([a-zA-Z0-9.-]*?))-(?P<version>(\d+(\.\d+)*)).+?is out of bounds for:$"#
);
let header_missing = regex!(r#"^(?P<package>[a-zA-Z]([a-zA-Z0-9.-]*)).+?depended on by:$"#);
let package = regex!(
r#"^- \[ \] (?P<package>[a-zA-Z]([a-zA-Z0-9.-]*?))-(?P<version>(\d+(\.\d+)*)).+?Used by: (?P<component>.+)$"#
);
// Ignore everything until the bounds issues show up.
let mut process_line = false;
for line in io::stdin().lock().lines().flatten() {
if is_reg_match(&line, regex!(r#"^\s*$"#)) {
// noop
} else if line == "curator: Snapshot dependency graph contains errors:" {
process_line = true;
} else if !process_line {
println!("[INFO] {}", line);
} else if let Some(cap) = package.captures(&line) {
let root = last_header.clone().unwrap();
let package = cap.name("package").unwrap().as_str();
let version = cap.name("version").unwrap().as_str();
let component = cap.name("component").unwrap().as_str();
match component {
"library" | "executable" => {
insert(&mut lib_exes, root, package, version, component)
}
"benchmark" => insert(&mut benches, root, package, version, "benchmarks"),
"test-suite" => insert(&mut tests, root, package, version, component),
_ => panic!("Bad component: {}", component),
}
} else if let Some(cap) = header_versioned.captures(&line) {
let package = cap.name("package").unwrap().as_str().to_owned();
let version = cap.name("version").unwrap().as_str().to_owned();
last_header = Some(Header::Versioned { package, version });
} else if let Some(cap) = header_missing.captures(&line) {
let package = cap.name("package").unwrap().as_str().to_owned();
last_header = Some(Header::Missing { package });
} else {
panic!("Unhandled: {:?}", line);
}
}
if !lib_exes.is_empty() {
println!("\nLIBS + EXES\n");
}
for (header, packages) in lib_exes {
for (package, version, component) in packages {
printer(" ", &package, true, &version, &component, &header);
}
}
if !tests.is_empty() {
println!("\nTESTS\n");
}
for (header, packages) in tests {
for (package, version, component) in packages {
printer(" ", &package, false, &version, &component, &header);
}
}
if !benches.is_empty() {
println!("\nBENCHMARKS\n");
}
for (header, packages) in benches {
for (package, version, component) in packages {
printer(" ", &package, false, &version, &component, &header);
}
}
}
fn printer(
indent: &str,
package: &str,
lt0: bool,
version: &str,
component: &str,
header: &Header,
) {
let lt0 = if lt0 { " < 0" } else { "" };
println!(
"{indent}- {package}{lt0} # tried {package}-{version}, but its *{component}* {cause}",
indent = indent,
package = package,
lt0 = lt0,
version = version,
component = component,
cause = match header {
Header::Versioned { package, version } => format!(
"does not support: {package}-{version}",
package = package,
version = version
),
Header::Missing { package } => format!(
"requires the disabled package: {package}",
package = package
),
},
);
}
fn insert(h: &mut H, header: Header, package: &str, version: &str, component: &str) |
fn is_reg_match(s: &str, r: &Regex) -> bool {
r.captures(s).is_some()
}
| {
(*h.entry(header).or_insert_with(Vec::new)).push((
package.to_owned(),
version.to_owned(),
component.to_owned(),
));
} | identifier_body |
main.rs | use std::collections::{HashMap};
use std::io::{self, BufRead};
use lazy_regex::regex;
use regex::Regex;
type H = HashMap<Header, Vec<(String, String, String)>>;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum Header {
Versioned { package: String, version: String },
Missing { package: String },
}
fn main() {
let mut lib_exes: H = Default::default();
let mut tests: H = Default::default();
let mut benches: H = Default::default();
let mut last_header: Option<Header> = None;
let header_versioned = regex!(
r#"^(?P<package>[a-zA-Z]([a-zA-Z0-9.-]*?))-(?P<version>(\d+(\.\d+)*)).+?is out of bounds for:$"#
);
let header_missing = regex!(r#"^(?P<package>[a-zA-Z]([a-zA-Z0-9.-]*)).+?depended on by:$"#);
let package = regex!(
r#"^- \[ \] (?P<package>[a-zA-Z]([a-zA-Z0-9.-]*?))-(?P<version>(\d+(\.\d+)*)).+?Used by: (?P<component>.+)$"#
);
// Ignore everything until the bounds issues show up.
let mut process_line = false;
for line in io::stdin().lock().lines().flatten() {
if is_reg_match(&line, regex!(r#"^\s*$"#)) {
// noop
} else if line == "curator: Snapshot dependency graph contains errors:" {
process_line = true;
} else if !process_line {
println!("[INFO] {}", line);
} else if let Some(cap) = package.captures(&line) {
let root = last_header.clone().unwrap();
let package = cap.name("package").unwrap().as_str();
let version = cap.name("version").unwrap().as_str();
let component = cap.name("component").unwrap().as_str();
match component {
"library" | "executable" => {
insert(&mut lib_exes, root, package, version, component)
}
"benchmark" => insert(&mut benches, root, package, version, "benchmarks"),
"test-suite" => insert(&mut tests, root, package, version, component),
_ => panic!("Bad component: {}", component),
}
} else if let Some(cap) = header_versioned.captures(&line) {
let package = cap.name("package").unwrap().as_str().to_owned();
let version = cap.name("version").unwrap().as_str().to_owned();
last_header = Some(Header::Versioned { package, version });
} else if let Some(cap) = header_missing.captures(&line) {
let package = cap.name("package").unwrap().as_str().to_owned();
last_header = Some(Header::Missing { package });
} else {
panic!("Unhandled: {:?}", line);
}
}
if !lib_exes.is_empty() {
println!("\nLIBS + EXES\n");
}
for (header, packages) in lib_exes {
for (package, version, component) in packages {
printer(" ", &package, true, &version, &component, &header);
}
}
if !tests.is_empty() {
println!("\nTESTS\n");
}
for (header, packages) in tests {
for (package, version, component) in packages {
printer(" ", &package, false, &version, &component, &header);
} | }
for (header, packages) in benches {
for (package, version, component) in packages {
printer(" ", &package, false, &version, &component, &header);
}
}
}
fn printer(
indent: &str,
package: &str,
lt0: bool,
version: &str,
component: &str,
header: &Header,
) {
let lt0 = if lt0 { " < 0" } else { "" };
println!(
"{indent}- {package}{lt0} # tried {package}-{version}, but its *{component}* {cause}",
indent = indent,
package = package,
lt0 = lt0,
version = version,
component = component,
cause = match header {
Header::Versioned { package, version } => format!(
"does not support: {package}-{version}",
package = package,
version = version
),
Header::Missing { package } => format!(
"requires the disabled package: {package}",
package = package
),
},
);
}
fn insert(h: &mut H, header: Header, package: &str, version: &str, component: &str) {
(*h.entry(header).or_insert_with(Vec::new)).push((
package.to_owned(),
version.to_owned(),
component.to_owned(),
));
}
fn is_reg_match(s: &str, r: &Regex) -> bool {
r.captures(s).is_some()
} | }
if !benches.is_empty() {
println!("\nBENCHMARKS\n"); | random_line_split |
main.rs | use std::collections::{HashMap};
use std::io::{self, BufRead};
use lazy_regex::regex;
use regex::Regex;
type H = HashMap<Header, Vec<(String, String, String)>>;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum Header {
Versioned { package: String, version: String },
Missing { package: String },
}
fn main() {
let mut lib_exes: H = Default::default();
let mut tests: H = Default::default();
let mut benches: H = Default::default();
let mut last_header: Option<Header> = None;
let header_versioned = regex!(
r#"^(?P<package>[a-zA-Z]([a-zA-Z0-9.-]*?))-(?P<version>(\d+(\.\d+)*)).+?is out of bounds for:$"#
);
let header_missing = regex!(r#"^(?P<package>[a-zA-Z]([a-zA-Z0-9.-]*)).+?depended on by:$"#);
let package = regex!(
r#"^- \[ \] (?P<package>[a-zA-Z]([a-zA-Z0-9.-]*?))-(?P<version>(\d+(\.\d+)*)).+?Used by: (?P<component>.+)$"#
);
// Ignore everything until the bounds issues show up.
let mut process_line = false;
for line in io::stdin().lock().lines().flatten() {
if is_reg_match(&line, regex!(r#"^\s*$"#)) {
// noop
} else if line == "curator: Snapshot dependency graph contains errors:" {
process_line = true;
} else if !process_line {
println!("[INFO] {}", line);
} else if let Some(cap) = package.captures(&line) {
let root = last_header.clone().unwrap();
let package = cap.name("package").unwrap().as_str();
let version = cap.name("version").unwrap().as_str();
let component = cap.name("component").unwrap().as_str();
match component {
"library" | "executable" => {
insert(&mut lib_exes, root, package, version, component)
}
"benchmark" => insert(&mut benches, root, package, version, "benchmarks"),
"test-suite" => insert(&mut tests, root, package, version, component),
_ => panic!("Bad component: {}", component),
}
} else if let Some(cap) = header_versioned.captures(&line) {
let package = cap.name("package").unwrap().as_str().to_owned();
let version = cap.name("version").unwrap().as_str().to_owned();
last_header = Some(Header::Versioned { package, version });
} else if let Some(cap) = header_missing.captures(&line) {
let package = cap.name("package").unwrap().as_str().to_owned();
last_header = Some(Header::Missing { package });
} else {
panic!("Unhandled: {:?}", line);
}
}
if !lib_exes.is_empty() {
println!("\nLIBS + EXES\n");
}
for (header, packages) in lib_exes {
for (package, version, component) in packages {
printer(" ", &package, true, &version, &component, &header);
}
}
if !tests.is_empty() {
println!("\nTESTS\n");
}
for (header, packages) in tests {
for (package, version, component) in packages {
printer(" ", &package, false, &version, &component, &header);
}
}
if !benches.is_empty() {
println!("\nBENCHMARKS\n");
}
for (header, packages) in benches {
for (package, version, component) in packages {
printer(" ", &package, false, &version, &component, &header);
}
}
}
fn printer(
indent: &str,
package: &str,
lt0: bool,
version: &str,
component: &str,
header: &Header,
) {
let lt0 = if lt0 { " < 0" } else { "" };
println!(
"{indent}- {package}{lt0} # tried {package}-{version}, but its *{component}* {cause}",
indent = indent,
package = package,
lt0 = lt0,
version = version,
component = component,
cause = match header {
Header::Versioned { package, version } => format!(
"does not support: {package}-{version}",
package = package,
version = version
),
Header::Missing { package } => format!(
"requires the disabled package: {package}",
package = package
),
},
);
}
fn | (h: &mut H, header: Header, package: &str, version: &str, component: &str) {
(*h.entry(header).or_insert_with(Vec::new)).push((
package.to_owned(),
version.to_owned(),
component.to_owned(),
));
}
fn is_reg_match(s: &str, r: &Regex) -> bool {
r.captures(s).is_some()
}
| insert | identifier_name |
issue-23485.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// run-pass
#![allow(unused_imports)]
// Test for an ICE that occurred when a default method implementation
// was applied to a type that did not meet the prerequisites. The
// problem occurred specifically because normalizing
// `Self::Item::Target` was impossible in this case.
use std::boxed::Box;
use std::marker::Sized;
use std::clone::Clone;
use std::ops::Deref;
use std::option::Option;
use std::option::Option::{Some,None};
trait Iterator {
type Item;
fn next(&mut self) -> Option<Self::Item>;
fn clone_first(mut self) -> Option<<Self::Item as Deref>::Target> where
Self: Sized,
Self::Item: Deref,
<Self::Item as Deref>::Target: Clone,
{
self.next().map(|x| x.clone())
}
}
struct Counter {
value: i32
}
struct | {
value: i32
}
impl Iterator for Counter {
type Item = Token;
fn next(&mut self) -> Option<Token> {
let x = self.value;
self.value += 1;
Some(Token { value: x })
}
}
fn main() {
let mut x: Box<Iterator<Item=Token>> = Box::new(Counter { value: 22 });
assert_eq!(x.next().unwrap().value, 22);
}
| Token | identifier_name |
issue-23485.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// run-pass
#![allow(unused_imports)]
// Test for an ICE that occurred when a default method implementation
// was applied to a type that did not meet the prerequisites. The
// problem occurred specifically because normalizing
// `Self::Item::Target` was impossible in this case.
use std::boxed::Box;
use std::marker::Sized;
use std::clone::Clone;
use std::ops::Deref;
use std::option::Option;
use std::option::Option::{Some,None};
trait Iterator {
type Item;
fn next(&mut self) -> Option<Self::Item>;
fn clone_first(mut self) -> Option<<Self::Item as Deref>::Target> where
Self: Sized,
Self::Item: Deref,
<Self::Item as Deref>::Target: Clone,
{
self.next().map(|x| x.clone())
}
}
struct Counter {
value: i32
}
struct Token {
value: i32
}
impl Iterator for Counter {
type Item = Token;
fn next(&mut self) -> Option<Token> |
}
fn main() {
let mut x: Box<Iterator<Item=Token>> = Box::new(Counter { value: 22 });
assert_eq!(x.next().unwrap().value, 22);
}
| {
let x = self.value;
self.value += 1;
Some(Token { value: x })
} | identifier_body |
issue-23485.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// run-pass
#![allow(unused_imports)]
// Test for an ICE that occurred when a default method implementation
// was applied to a type that did not meet the prerequisites. The
// problem occurred specifically because normalizing
// `Self::Item::Target` was impossible in this case.
use std::boxed::Box;
use std::marker::Sized;
use std::clone::Clone;
use std::ops::Deref;
use std::option::Option;
use std::option::Option::{Some,None};
trait Iterator {
type Item;
fn next(&mut self) -> Option<Self::Item>;
fn clone_first(mut self) -> Option<<Self::Item as Deref>::Target> where
Self: Sized,
Self::Item: Deref,
<Self::Item as Deref>::Target: Clone,
{
self.next().map(|x| x.clone())
}
}
struct Counter {
value: i32
}
struct Token {
value: i32
}
impl Iterator for Counter { | self.value += 1;
Some(Token { value: x })
}
}
fn main() {
let mut x: Box<Iterator<Item=Token>> = Box::new(Counter { value: 22 });
assert_eq!(x.next().unwrap().value, 22);
} | type Item = Token;
fn next(&mut self) -> Option<Token> {
let x = self.value; | random_line_split |
queue.rs | use collections::vec::Vec;
/// A FIFO Queue
pub struct Queue<T> {
/// The queue as a vector
pub vec: Vec<T>,
}
impl<T> Queue<T> {
/// Create new queue
pub fn new() -> Self {
Queue { vec: Vec::new() }
}
| /// Push element to queue
pub fn push(&mut self, value: T) {
self.vec.push(value);
}
/// Pop the last element
pub fn pop(&mut self) -> Option<T> {
if !self.vec.is_empty() {
Some(self.vec.remove(0))
} else {
None
}
}
/// Get the length of the queue
pub fn len(&self) -> usize {
self.vec.len()
}
}
impl<T> Clone for Queue<T> where T: Clone {
fn clone(&self) -> Self {
Queue { vec: self.vec.clone() }
}
} | random_line_split | |
queue.rs | use collections::vec::Vec;
/// A FIFO Queue
pub struct Queue<T> {
/// The queue as a vector
pub vec: Vec<T>,
}
impl<T> Queue<T> {
/// Create new queue
pub fn new() -> Self {
Queue { vec: Vec::new() }
}
/// Push element to queue
pub fn | (&mut self, value: T) {
self.vec.push(value);
}
/// Pop the last element
pub fn pop(&mut self) -> Option<T> {
if !self.vec.is_empty() {
Some(self.vec.remove(0))
} else {
None
}
}
/// Get the length of the queue
pub fn len(&self) -> usize {
self.vec.len()
}
}
impl<T> Clone for Queue<T> where T: Clone {
fn clone(&self) -> Self {
Queue { vec: self.vec.clone() }
}
}
| push | identifier_name |
queue.rs | use collections::vec::Vec;
/// A FIFO Queue
pub struct Queue<T> {
/// The queue as a vector
pub vec: Vec<T>,
}
impl<T> Queue<T> {
/// Create new queue
pub fn new() -> Self {
Queue { vec: Vec::new() }
}
/// Push element to queue
pub fn push(&mut self, value: T) {
self.vec.push(value);
}
/// Pop the last element
pub fn pop(&mut self) -> Option<T> {
if !self.vec.is_empty() {
Some(self.vec.remove(0))
} else |
}
/// Get the length of the queue
pub fn len(&self) -> usize {
self.vec.len()
}
}
impl<T> Clone for Queue<T> where T: Clone {
fn clone(&self) -> Self {
Queue { vec: self.vec.clone() }
}
}
| {
None
} | conditional_block |
queue.rs | use collections::vec::Vec;
/// A FIFO Queue
pub struct Queue<T> {
/// The queue as a vector
pub vec: Vec<T>,
}
impl<T> Queue<T> {
/// Create new queue
pub fn new() -> Self |
/// Push element to queue
pub fn push(&mut self, value: T) {
self.vec.push(value);
}
/// Pop the last element
pub fn pop(&mut self) -> Option<T> {
if !self.vec.is_empty() {
Some(self.vec.remove(0))
} else {
None
}
}
/// Get the length of the queue
pub fn len(&self) -> usize {
self.vec.len()
}
}
impl<T> Clone for Queue<T> where T: Clone {
fn clone(&self) -> Self {
Queue { vec: self.vec.clone() }
}
}
| {
Queue { vec: Vec::new() }
} | identifier_body |
hashmap.rs | #[macro_use] extern crate log;
extern crate rusty_raft;
extern crate rand;
extern crate rustc_serialize;
extern crate env_logger;
use rand::{thread_rng, Rng};
use rusty_raft::server::{start_server, ServerHandle};
use rusty_raft::client::{RaftConnection};
use rusty_raft::client::state_machine::{
StateMachine, RaftStateMachine};
use rusty_raft::common::{Config, RaftError};
use std::env::args;
use std::collections::HashMap;
use std::fmt::Debug;
use std::net::SocketAddr;
use std::str;
use std::str::FromStr;
use std::time::Duration;
use std::fs::File;
use std::io::{stdin, stdout, SeekFrom, Error, ErrorKind, Read, BufReader, Seek, BufRead, Cursor, Write};
use rustc_serialize::json;
static USAGE: &'static str = "
Commands:
server Starts a server from an initial cluster config.
client Starts a client repl, that communicates with
a particular cluster.
Usage:
hashmap server <id>
hashmap servers
hashmap client
Options:
-h --help Show a help message.
";
fn main() {
env_logger::init().unwrap();
trace!("Starting program");
// TODO (sydli) make prettier
let mut args = args();
if let Some(command) = args.nth(1) {
if &command == "server" {
if let (Some(id_str), Some(filename)) =
(args.next(), args.next()) {
if let Ok(id) = id_str.parse::<u64>() {
Server::new(id, cluster_from_file(&filename).get(&id).unwrap()).repl();
return;
}
}
} else if &command == "client" {
if let Some(filename) = args.next() {
Client::new(&cluster_from_file(&filename)).repl();
return;
}
} else if &command == "servers" {
if let Some(filename) = args.next() {
Cluster::new(&cluster_from_file(&filename)).repl();
return;
}
}
}
println!("Incorrect usage. \n{}", USAGE);
}
///
/// Automatically builds a repl loop for an object, if it implements |exec|.
/// Each line a user types into the repl is fed to |exec|.
///
/// Default commands:
/// exit Exits the loop and shuts down associated process.
/// help Displays the |help| message for more usage info.
trait Repl {
///
/// Processes each input command from the Repl.
/// Returns true if |command| is formatted correctly.
///
fn exec(&mut self, command: String) -> bool;
///
/// Usage string displayed with the default command "help".
///
fn usage(&self) -> String;
fn print_usage(&self) {
println!("\nREPL COMMANDS:\n==============\n{}\n{}\n{}",
"exit\n\t\tExits the loop and shuts down associated process.",
"help\n\t\tSpits out this message.",
self.usage());
}
///
/// Implementation of the repl.
fn repl(&mut self) {
println!("[ Starting repl ]");
loop {
print!("> ");
stdout().flush().unwrap();
let mut buffer = String::new();
if stdin().read_line(&mut buffer).is_ok() {
let words: Vec<String> =
buffer.split_whitespace().map(String::from).collect();
if words.get(0) == Some(&String::from("exit")) { break; }
if words.get(0) == Some(&String::from("help")) {
self.print_usage();
continue;
}
if !self.exec(buffer.clone()) {
println!("Command not recognized.");
self.print_usage();
}
}
}
}
}
#[derive(Clone, Debug)]
struct ServerInfo {
addr: SocketAddr,
state_filename: String,
log_filename: String,
}
const STATE_FILENAME_LEN: usize = 20;
impl ServerInfo {
fn new(addr: SocketAddr) -> ServerInfo {
let mut random_filename: String = thread_rng().gen_ascii_chars().take(STATE_FILENAME_LEN).collect();
ServerInfo {
addr: addr,
state_filename: String::from("/tmp/state_") + &random_filename,
log_filename: String::from("/tmp/log_") + &random_filename,
}
}
}
struct Server { handle: ServerHandle, }
impl Repl for Server { // TODO: impl repl for a single node in the cluster
fn exec(&mut self, command: String) -> bool { true }
fn usage(&self) -> String { String::from("") }
}
const FIRST_ID: u64 = 1;
impl Server {
fn new(id: u64, info: &ServerInfo) -> Server {
trace!("Starting new server with state file {} and log file {}", &info.state_filename, &info.log_filename);
Server {
handle:
start_server(id, Box::new(RaftHashMap { map: HashMap::new() }), info.addr, id == FIRST_ID,
info.state_filename.clone(), info.log_filename.clone()).unwrap()
}
}
}
struct Cluster {
servers: HashMap<u64, Server>,
cluster: HashMap<u64, ServerInfo>,
client: RaftConnection,
}
impl Cluster {
fn new(info: &HashMap<u64, ServerInfo>) -> Cluster {
let first_cluster = info.clone().into_iter().map(|(id, info)| (id, info.addr) )
.filter(|&(id, _)| id == FIRST_ID).collect::<HashMap<u64, SocketAddr>>();
let mut servers = HashMap::new();
// start up all the servers
for (id, info) in info { servers.insert(*id, Server::new(*id, info)); }
// connect to the cluster (which will only contain the bootstrapped server)
let mut raft_db = RaftConnection::new_with_session(&first_cluster).unwrap();
// issue AddServer RPCs for all the other servers
for id in servers.keys() {
if *id == FIRST_ID |
raft_db.add_server(*id, info.get(id).cloned().unwrap().addr).unwrap();
}
Cluster { servers: servers, cluster: info.clone(), client: raft_db }
}
fn add_server(&mut self, id: u64, addr: SocketAddr) {
if self.cluster.contains_key(&id) {
println!("Server {} is already in the cluster. Servers {:?}", id, self.cluster);
return;
}
trace!("Starting a new server at {}", addr);
let info = ServerInfo::new(addr);
self.cluster.insert(id, info);
self.servers.insert(id, Server::new(id, &self.cluster[&id]));
trace!("Attempting to add server {}", id);
self.client.add_server(id, addr).unwrap();
println!("Added server {}, {:?}", id, addr);
}
fn remove_server(&mut self, id: u64) {
if !self.cluster.contains_key(&id) {
println!("Server {} is not in the cluster! Servers: {:?}", id,
self.cluster);
}
println!("Removing server {}", id);
}
fn kill_server(&mut self, id: u64) {
if !self.servers.contains_key(&id) {
println!("Server {} is not up right now!", id);
} else {
{
// drop server
self.servers.remove(&id).unwrap();
println!("Killed server {}", id);
}
}
}
fn start_server(&mut self, id: u64) {
if self.servers.contains_key(&id) {
println!("Server {} is already up!", id);
}
self.servers.insert(id, Server::new(id, self.cluster.get(&id).unwrap()));
println!("Restarted server {}", id);
}
fn print_servers(&self) {
println!("Servers in cluster: {:?}\n Live servers: {:?}",
self.cluster, self.servers.keys().collect::<Vec<&u64>>());
}
}
impl Repl for Cluster {
fn exec(&mut self, command: String) -> bool {
let words: Vec<&str> =
command.split_whitespace().collect();
let first = words.get(0).map(|x|*x);
if first == Some("add") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
let addr = words.get(2).and_then(|x| as_addr(*x).ok());
if num.and(addr).is_none() { return false; }
self.add_server(num.unwrap(), addr.unwrap());
} else if first == Some("remove") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
if num.is_none() { return false; }
self.remove_server(num.unwrap());
} else if first == Some("kill") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
if num.is_none() { return false; }
self.kill_server(num.unwrap());
} else if first == Some("start") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
if num.is_none() { return false; }
self.start_server(num.unwrap());
} else if first == Some("list") {
self.print_servers();
} else { return false; }
return true;
}
fn usage(&self) -> String {
String::from(format!(
"{}\n{}\n{}\n{}\n{}",
"add <node-id> <node-addr>\tAdds a server to the cluster.",
"remove <node-id>\tRemoves a server from the cluster.",
"start <node-id>\tStarts a server from the cluster.",
"kill <node-id>\tKills a server from the cluster.",
"list\tLists all cluster servers, and the ones that are live."))
}
}
struct Client {
raft: RaftConnection,
}
impl Repl for Client {
fn exec(&mut self, command: String) -> bool {
let words: Vec<String> =
command.split_whitespace().map(String::from).collect();
self.process_command(words)
}
fn usage(&self) -> String {
String::from(format!(
"{}\n{}",
"get <key>\n\t\tPrints value of <key> in hashmap, if it exists.",
"put <key> <value>\n\t\tPuts value of <key> as <value>"))
}
}
#[derive(RustcDecodable, RustcEncodable)]
struct Put {
key: String,
value: String,
}
impl Client {
fn new(cluster: &HashMap<u64, ServerInfo>) -> Client {
let cluster = cluster.clone().into_iter().map(|(id, info)| (id, info.addr))
.collect::<HashMap<u64, SocketAddr>>();
let connection = RaftConnection::new_with_session(&cluster);
if connection.is_none() {
println!("Couldn't establish connection to cluster at {:?}", cluster);
panic!();
}
Client { raft: connection.unwrap() }
}
fn get(&mut self, key: String) -> Result<String, RaftError> {
self.raft.query(key.as_bytes())
.and_then(
|result| str::from_utf8(&result)
.map(str::to_string)
.map_err(deserialize_error))
}
fn put(&mut self, key: String, value: String) -> Result<(), RaftError> {
json::encode(&Put {key:key, value:value})
.map_err(serialize_error)
.and_then(|buffer| self.raft.command(&buffer.as_bytes()))
}
// TODO (sydli): make this function less shit
fn process_command(&mut self, words: Vec<String>)
-> bool {
if words.len() == 0 { return true; }
let ref cmd = words[0];
if *cmd == String::from("get") {
if words.len() <= 1 { return false; }
words.get(1).map(|key| {
match self.get(key.clone()) {
Ok(value) => println!("Value for {} is {}", key, value),
Err(err) => println!("Error during get: {:?}", err),
}
}).unwrap();
} else if *cmd == String::from("put") {
if words.len() <= 2 { return false; }
words.get(1).map(|key| { words.get(2).map(|val| {
match self.put(key.clone(), val.clone()) {
Ok(()) => println!("Put {} => {} successfully", key, val),
Err(err) => println!("Error during put: {:?}", err),
}
}).unwrap()}).unwrap();
} else { return false; }
return true;
}
}
fn io_err() -> std::io::Error {
Error::new(ErrorKind::InvalidData, "File incorrectly formatted")
}
fn as_num(x: &str) -> Result<u64, std::io::Error> {
String::from(x).parse::<u64>().map_err(|_| io_err())
}
fn as_addr(x: &str) -> Result<SocketAddr, std::io::Error> {
String::from(x).parse::<SocketAddr>().map_err(|_| io_err())
}
/// Panics on io error (we can't access the cluster info!)
/// TODO (sydli) make io_errs more informative
// TODO(jason): Remove this and bootstrap a dynamic cluster
fn cluster_from_file(filename: &str) -> HashMap<u64, ServerInfo> {
let file = File::open(filename.clone())
.expect(&format!("Unable to open file {}", filename));
let mut lines = BufReader::new(file).lines();
lines.next().ok_or(io_err())
.map(|line_or_io_error| line_or_io_error.unwrap())
.and_then(|x| as_num(&x))
.and_then(|num| (0..num).map(|_| {
lines.next().ok_or(io_err())
.map(|line_or_io_error| line_or_io_error.unwrap())
.and_then(|node_str| {
let mut words = node_str.split_whitespace().map(String::from);
let id = words.next().ok_or(io_err())
.and_then(|x| as_num(&x));
words.next().ok_or(io_err())
.and_then(|x| as_addr(&x))
.and_then(move |addr| id.map(|id| (id, ServerInfo::new(addr))))
})
}).collect::<Result<Vec<_>, _>>())
.map(|nodes: Vec<(u64, ServerInfo)>|
nodes.iter().cloned().collect::<HashMap<u64, ServerInfo>>()
).unwrap()
}
struct RaftHashMap {
map: HashMap<String, String>,
}
fn serialize_error <T: Debug>(error: T) -> RaftError {
RaftError::ClientError(
format!("Couldn't serialize object. Error: {:?}", error))
}
fn deserialize_error <T: Debug>(error: T) -> RaftError {
RaftError::ClientError(
format!("Couldn't deserialize buffer. Error: {:?}", error))
}
fn key_error(key: &String) -> RaftError {
RaftError::ClientError(format!("Couldn't find key {}", key))
}
impl StateMachine for RaftHashMap {
fn command(&mut self, buffer: &[u8]) -> Result<(), RaftError> {
str::from_utf8(buffer)
.map_err(deserialize_error)
.and_then(|string| json::decode(&string)
.map_err(deserialize_error))
.map(|put: Put|
{
self.map.insert(put.key, put.value);
})
}
fn query(&self, buffer: &[u8]) -> Result<Vec<u8>, RaftError> {
str::from_utf8(buffer)
.map_err(deserialize_error)
.and_then(|key| {
let key = key.to_string();
self.map.get(&key)
.map(|x| x.as_bytes().to_vec())
.ok_or(key_error(&key))
})
}
}
| { continue; } | conditional_block |
hashmap.rs | #[macro_use] extern crate log;
extern crate rusty_raft;
extern crate rand;
extern crate rustc_serialize;
extern crate env_logger;
use rand::{thread_rng, Rng};
use rusty_raft::server::{start_server, ServerHandle};
use rusty_raft::client::{RaftConnection};
use rusty_raft::client::state_machine::{
StateMachine, RaftStateMachine};
use rusty_raft::common::{Config, RaftError};
use std::env::args;
use std::collections::HashMap;
use std::fmt::Debug;
use std::net::SocketAddr;
use std::str;
use std::str::FromStr;
use std::time::Duration;
use std::fs::File;
use std::io::{stdin, stdout, SeekFrom, Error, ErrorKind, Read, BufReader, Seek, BufRead, Cursor, Write};
use rustc_serialize::json;
static USAGE: &'static str = "
Commands:
server Starts a server from an initial cluster config.
client Starts a client repl, that communicates with
a particular cluster.
Usage:
hashmap server <id>
hashmap servers
hashmap client
Options:
-h --help Show a help message.
";
fn main() {
env_logger::init().unwrap();
trace!("Starting program");
// TODO (sydli) make prettier
let mut args = args();
if let Some(command) = args.nth(1) {
if &command == "server" {
if let (Some(id_str), Some(filename)) =
(args.next(), args.next()) {
if let Ok(id) = id_str.parse::<u64>() {
Server::new(id, cluster_from_file(&filename).get(&id).unwrap()).repl();
return;
}
}
} else if &command == "client" {
if let Some(filename) = args.next() {
Client::new(&cluster_from_file(&filename)).repl();
return;
}
} else if &command == "servers" {
if let Some(filename) = args.next() {
Cluster::new(&cluster_from_file(&filename)).repl();
return;
}
}
}
println!("Incorrect usage. \n{}", USAGE);
}
///
/// Automatically builds a repl loop for an object, if it implements |exec|.
/// Each line a user types into the repl is fed to |exec|.
///
/// Default commands:
/// exit Exits the loop and shuts down associated process.
/// help Displays the |help| message for more usage info.
trait Repl {
///
/// Processes each input command from the Repl.
/// Returns true if |command| is formatted correctly.
///
fn exec(&mut self, command: String) -> bool;
///
/// Usage string displayed with the default command "help".
///
fn usage(&self) -> String;
fn print_usage(&self) {
println!("\nREPL COMMANDS:\n==============\n{}\n{}\n{}",
"exit\n\t\tExits the loop and shuts down associated process.",
"help\n\t\tSpits out this message.",
self.usage());
}
///
/// Implementation of the repl.
fn repl(&mut self) {
println!("[ Starting repl ]");
loop {
print!("> ");
stdout().flush().unwrap();
let mut buffer = String::new();
if stdin().read_line(&mut buffer).is_ok() {
let words: Vec<String> =
buffer.split_whitespace().map(String::from).collect();
if words.get(0) == Some(&String::from("exit")) { break; }
if words.get(0) == Some(&String::from("help")) {
self.print_usage();
continue;
}
if !self.exec(buffer.clone()) {
println!("Command not recognized.");
self.print_usage();
}
}
}
}
}
#[derive(Clone, Debug)]
struct ServerInfo {
addr: SocketAddr,
state_filename: String,
log_filename: String,
}
const STATE_FILENAME_LEN: usize = 20;
impl ServerInfo {
fn new(addr: SocketAddr) -> ServerInfo {
let mut random_filename: String = thread_rng().gen_ascii_chars().take(STATE_FILENAME_LEN).collect();
ServerInfo {
addr: addr,
state_filename: String::from("/tmp/state_") + &random_filename,
log_filename: String::from("/tmp/log_") + &random_filename,
}
}
}
struct Server { handle: ServerHandle, }
impl Repl for Server { // TODO: impl repl for a single node in the cluster
fn exec(&mut self, command: String) -> bool { true }
fn usage(&self) -> String { String::from("") }
}
const FIRST_ID: u64 = 1;
impl Server {
fn new(id: u64, info: &ServerInfo) -> Server {
trace!("Starting new server with state file {} and log file {}", &info.state_filename, &info.log_filename);
Server {
handle:
start_server(id, Box::new(RaftHashMap { map: HashMap::new() }), info.addr, id == FIRST_ID,
info.state_filename.clone(), info.log_filename.clone()).unwrap()
}
}
}
struct Cluster {
servers: HashMap<u64, Server>,
cluster: HashMap<u64, ServerInfo>,
client: RaftConnection,
}
impl Cluster {
fn new(info: &HashMap<u64, ServerInfo>) -> Cluster {
let first_cluster = info.clone().into_iter().map(|(id, info)| (id, info.addr) )
.filter(|&(id, _)| id == FIRST_ID).collect::<HashMap<u64, SocketAddr>>();
let mut servers = HashMap::new();
// start up all the servers
for (id, info) in info { servers.insert(*id, Server::new(*id, info)); }
// connect to the cluster (which will only contain the bootstrapped server)
let mut raft_db = RaftConnection::new_with_session(&first_cluster).unwrap();
// issue AddServer RPCs for all the other servers
for id in servers.keys() {
if *id == FIRST_ID { continue; }
raft_db.add_server(*id, info.get(id).cloned().unwrap().addr).unwrap();
}
Cluster { servers: servers, cluster: info.clone(), client: raft_db }
}
fn add_server(&mut self, id: u64, addr: SocketAddr) {
if self.cluster.contains_key(&id) {
println!("Server {} is already in the cluster. Servers {:?}", id, self.cluster);
return;
}
trace!("Starting a new server at {}", addr);
let info = ServerInfo::new(addr);
self.cluster.insert(id, info);
self.servers.insert(id, Server::new(id, &self.cluster[&id]));
trace!("Attempting to add server {}", id);
self.client.add_server(id, addr).unwrap();
println!("Added server {}, {:?}", id, addr);
}
fn remove_server(&mut self, id: u64) {
if !self.cluster.contains_key(&id) {
println!("Server {} is not in the cluster! Servers: {:?}", id,
self.cluster);
}
println!("Removing server {}", id);
}
fn kill_server(&mut self, id: u64) {
if !self.servers.contains_key(&id) {
println!("Server {} is not up right now!", id);
} else {
{
// drop server
self.servers.remove(&id).unwrap();
println!("Killed server {}", id);
}
}
}
fn start_server(&mut self, id: u64) {
if self.servers.contains_key(&id) {
println!("Server {} is already up!", id);
}
self.servers.insert(id, Server::new(id, self.cluster.get(&id).unwrap()));
println!("Restarted server {}", id);
}
fn print_servers(&self) {
println!("Servers in cluster: {:?}\n Live servers: {:?}",
self.cluster, self.servers.keys().collect::<Vec<&u64>>());
}
}
impl Repl for Cluster {
fn exec(&mut self, command: String) -> bool {
let words: Vec<&str> =
command.split_whitespace().collect();
let first = words.get(0).map(|x|*x);
if first == Some("add") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
let addr = words.get(2).and_then(|x| as_addr(*x).ok());
if num.and(addr).is_none() { return false; }
self.add_server(num.unwrap(), addr.unwrap());
} else if first == Some("remove") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
if num.is_none() { return false; }
self.remove_server(num.unwrap());
} else if first == Some("kill") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
if num.is_none() { return false; }
self.kill_server(num.unwrap());
} else if first == Some("start") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
if num.is_none() { return false; }
self.start_server(num.unwrap());
} else if first == Some("list") {
self.print_servers();
} else { return false; }
return true;
}
fn usage(&self) -> String {
String::from(format!(
"{}\n{}\n{}\n{}\n{}",
"add <node-id> <node-addr>\tAdds a server to the cluster.",
"remove <node-id>\tRemoves a server from the cluster.",
"start <node-id>\tStarts a server from the cluster.",
"kill <node-id>\tKills a server from the cluster.",
"list\tLists all cluster servers, and the ones that are live."))
}
}
struct Client {
raft: RaftConnection,
}
impl Repl for Client {
fn exec(&mut self, command: String) -> bool {
let words: Vec<String> =
command.split_whitespace().map(String::from).collect();
self.process_command(words)
}
fn usage(&self) -> String {
String::from(format!(
"{}\n{}",
"get <key>\n\t\tPrints value of <key> in hashmap, if it exists.",
"put <key> <value>\n\t\tPuts value of <key> as <value>"))
}
}
#[derive(RustcDecodable, RustcEncodable)]
struct Put {
key: String,
value: String,
}
impl Client {
fn new(cluster: &HashMap<u64, ServerInfo>) -> Client {
let cluster = cluster.clone().into_iter().map(|(id, info)| (id, info.addr))
.collect::<HashMap<u64, SocketAddr>>();
let connection = RaftConnection::new_with_session(&cluster);
if connection.is_none() {
println!("Couldn't establish connection to cluster at {:?}", cluster);
panic!();
}
Client { raft: connection.unwrap() }
}
fn get(&mut self, key: String) -> Result<String, RaftError> {
self.raft.query(key.as_bytes())
.and_then(
|result| str::from_utf8(&result)
.map(str::to_string)
.map_err(deserialize_error))
}
fn put(&mut self, key: String, value: String) -> Result<(), RaftError> |
// TODO (sydli): make this function less shit
fn process_command(&mut self, words: Vec<String>)
-> bool {
if words.len() == 0 { return true; }
let ref cmd = words[0];
if *cmd == String::from("get") {
if words.len() <= 1 { return false; }
words.get(1).map(|key| {
match self.get(key.clone()) {
Ok(value) => println!("Value for {} is {}", key, value),
Err(err) => println!("Error during get: {:?}", err),
}
}).unwrap();
} else if *cmd == String::from("put") {
if words.len() <= 2 { return false; }
words.get(1).map(|key| { words.get(2).map(|val| {
match self.put(key.clone(), val.clone()) {
Ok(()) => println!("Put {} => {} successfully", key, val),
Err(err) => println!("Error during put: {:?}", err),
}
}).unwrap()}).unwrap();
} else { return false; }
return true;
}
}
fn io_err() -> std::io::Error {
Error::new(ErrorKind::InvalidData, "File incorrectly formatted")
}
fn as_num(x: &str) -> Result<u64, std::io::Error> {
String::from(x).parse::<u64>().map_err(|_| io_err())
}
fn as_addr(x: &str) -> Result<SocketAddr, std::io::Error> {
String::from(x).parse::<SocketAddr>().map_err(|_| io_err())
}
/// Panics on io error (we can't access the cluster info!)
/// TODO (sydli) make io_errs more informative
// TODO(jason): Remove this and bootstrap a dynamic cluster
fn cluster_from_file(filename: &str) -> HashMap<u64, ServerInfo> {
let file = File::open(filename)
.expect(&format!("Unable to open file {}", filename));
let mut lines = BufReader::new(file).lines();
lines.next().ok_or(io_err())
.map(|line_or_io_error| line_or_io_error.unwrap())
.and_then(|x| as_num(&x))
.and_then(|num| (0..num).map(|_| {
lines.next().ok_or(io_err())
.map(|line_or_io_error| line_or_io_error.unwrap())
.and_then(|node_str| {
let mut words = node_str.split_whitespace().map(String::from);
let id = words.next().ok_or(io_err())
.and_then(|x| as_num(&x));
words.next().ok_or(io_err())
.and_then(|x| as_addr(&x))
.and_then(move |addr| id.map(|id| (id, ServerInfo::new(addr))))
})
}).collect::<Result<Vec<_>, _>>())
.map(|nodes: Vec<(u64, ServerInfo)>|
nodes.iter().cloned().collect::<HashMap<u64, ServerInfo>>()
).unwrap()
}
struct RaftHashMap {
map: HashMap<String, String>,
}
fn serialize_error <T: Debug>(error: T) -> RaftError {
RaftError::ClientError(
format!("Couldn't serialize object. Error: {:?}", error))
}
fn deserialize_error <T: Debug>(error: T) -> RaftError {
RaftError::ClientError(
format!("Couldn't deserialize buffer. Error: {:?}", error))
}
fn key_error(key: &String) -> RaftError {
RaftError::ClientError(format!("Couldn't find key {}", key))
}
impl StateMachine for RaftHashMap {
fn command(&mut self, buffer: &[u8]) -> Result<(), RaftError> {
str::from_utf8(buffer)
.map_err(deserialize_error)
.and_then(|string| json::decode(&string)
.map_err(deserialize_error))
.map(|put: Put|
{
self.map.insert(put.key, put.value);
})
}
fn query(&self, buffer: &[u8]) -> Result<Vec<u8>, RaftError> {
str::from_utf8(buffer)
.map_err(deserialize_error)
.and_then(|key| {
let key = key.to_string();
self.map.get(&key)
.map(|x| x.as_bytes().to_vec())
.ok_or(key_error(&key))
})
}
}
| {
json::encode(&Put {key:key, value:value})
.map_err(serialize_error)
.and_then(|buffer| self.raft.command(&buffer.as_bytes()))
} | identifier_body |
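As context for the row above: `Cluster::exec` dispatches a REPL line by splitting on whitespace, matching the first word, and validating numeric arguments. A minimal stdlib-only sketch of that dispatch pattern (command names match the source, but the handlers are stubbed and this is an illustrative reconstruction, not the original implementation):

```rust
// Dispatch a REPL line the way Cluster::exec does: split on whitespace,
// match the first word, validate the numeric node-id argument, and
// report whether the command was well-formed.
fn exec(command: &str) -> bool {
    let words: Vec<&str> = command.split_whitespace().collect();
    match words.first().copied() {
        Some("kill") | Some("start") | Some("remove") => {
            // These commands all require a numeric node id.
            words.get(1).map_or(false, |w| w.parse::<u64>().is_ok())
        }
        Some("list") => true,
        _ => false,
    }
}

fn main() {
    assert!(exec("kill 3"));
    assert!(!exec("kill three"));
    assert!(exec("list"));
    assert!(!exec("unknown"));
    println!("dispatch ok");
}
```

Returning `false` for malformed input lets the surrounding repl loop print the usage message, matching the source's convention.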
hashmap.rs | #[macro_use] extern crate log;
extern crate rusty_raft;
extern crate rand;
extern crate rustc_serialize;
extern crate env_logger;
use rand::{thread_rng, Rng};
use rusty_raft::server::{start_server, ServerHandle};
use rusty_raft::client::{RaftConnection};
use rusty_raft::client::state_machine::{
StateMachine, RaftStateMachine};
use rusty_raft::common::{Config, RaftError};
use std::env::args;
use std::collections::HashMap;
use std::fmt::Debug;
use std::net::SocketAddr;
use std::str;
use std::str::FromStr;
use std::time::Duration;
use std::fs::File;
use std::io::{stdin, stdout, SeekFrom, Error, ErrorKind, Read, BufReader, Seek, BufRead, Cursor, Write};
use rustc_serialize::json;
static USAGE: &'static str = "
Commands:
server Starts a server from an initial cluster config.
client Starts a client repl that communicates with
a particular cluster.
Usage:
hashmap server <id> <filename>
hashmap servers <filename>
hashmap client <filename>
Options:
-h --help Show a help message.
";
fn main() {
env_logger::init().unwrap();
trace!("Starting program");
// TODO (sydli) make prettier
let mut args = args();
if let Some(command) = args.nth(1) {
if &command == "server" {
if let (Some(id_str), Some(filename)) =
(args.next(), args.next()) {
if let Ok(id) = id_str.parse::<u64>() {
Server::new(id, cluster_from_file(&filename).get(&id).unwrap()).repl();
return;
}
}
} else if &command == "client" {
if let Some(filename) = args.next() {
Client::new(&cluster_from_file(&filename)).repl();
return;
}
} else if &command == "servers" {
if let Some(filename) = args.next() {
Cluster::new(&cluster_from_file(&filename)).repl();
return;
}
}
}
println!("Incorrect usage. \n{}", USAGE);
}
///
/// Automatically builds a repl loop for an object, if it implements |exec|.
/// Each line a user types into the repl is fed to |exec|.
///
/// Default commands:
/// exit Exits the loop and shuts down associated process.
/// help Displays the |help| message for more usage info.
trait Repl {
///
/// Processes each input command from the Repl.
/// Returns true if |command| is formatted correctly.
///
fn exec(&mut self, command: String) -> bool;
///
/// Usage string displayed with the default command "help".
///
fn usage(&self) -> String;
fn print_usage(&self) {
println!("\nREPL COMMANDS:\n==============\n{}\n{}\n{}",
"exit\n\t\tExits the loop and shuts down associated process.",
"help\n\t\tSpits out this message.",
self.usage());
}
///
/// Implementation of the repl.
fn repl(&mut self) {
println!("[ Starting repl ]");
loop {
print!("> ");
stdout().flush().unwrap();
let mut buffer = String::new();
if stdin().read_line(&mut buffer).is_ok() {
let words: Vec<String> =
buffer.split_whitespace().map(String::from).collect();
if words.get(0) == Some(&String::from("exit")) { break; }
if words.get(0) == Some(&String::from("help")) {
self.print_usage();
continue;
}
if !self.exec(buffer.clone()) {
println!("Command not recognized.");
self.print_usage();
}
}
}
}
}
#[derive(Clone, Debug)]
struct ServerInfo {
addr: SocketAddr,
state_filename: String,
log_filename: String,
}
const STATE_FILENAME_LEN: usize = 20;
impl ServerInfo {
fn new(addr: SocketAddr) -> ServerInfo {
let random_filename: String = thread_rng().gen_ascii_chars().take(STATE_FILENAME_LEN).collect();
ServerInfo {
addr: addr,
state_filename: String::from("/tmp/state_") + &random_filename,
log_filename: String::from("/tmp/log_") + &random_filename,
}
}
}
struct Server { handle: ServerHandle, }
impl Repl for Server { // TODO: impl repl for a single node in the cluster
fn exec(&mut self, command: String) -> bool { true }
fn usage(&self) -> String { String::from("") }
}
const FIRST_ID: u64 = 1;
impl Server {
fn new(id: u64, info: &ServerInfo) -> Server {
trace!("Starting new server with state file {} and log file {}", &info.state_filename, &info.log_filename);
Server {
handle:
start_server(id, Box::new(RaftHashMap { map: HashMap::new() }), info.addr, id == FIRST_ID,
info.state_filename.clone(), info.log_filename.clone()).unwrap()
}
}
}
struct Cluster {
servers: HashMap<u64, Server>,
cluster: HashMap<u64, ServerInfo>,
client: RaftConnection,
}
impl Cluster {
fn new(info: &HashMap<u64, ServerInfo>) -> Cluster {
let first_cluster = info.clone().into_iter().map(|(id, info)| (id, info.addr) )
.filter(|&(id, _)| id == FIRST_ID).collect::<HashMap<u64, SocketAddr>>();
let mut servers = HashMap::new();
// start up all the servers
for (id, info) in info { servers.insert(*id, Server::new(*id, info)); }
// connect to the cluster (which will only contain the bootstrapped server)
let mut raft_db = RaftConnection::new_with_session(&first_cluster).unwrap();
// issue AddServer RPCs for all the other servers
for id in servers.keys() {
if *id == FIRST_ID { continue; }
raft_db.add_server(*id, info.get(id).cloned().unwrap().addr).unwrap();
}
Cluster { servers: servers, cluster: info.clone(), client: raft_db }
}
fn add_server(&mut self, id: u64, addr: SocketAddr) {
if self.cluster.contains_key(&id) {
println!("Server {} is already in the cluster. Servers {:?}", id, self.cluster);
return;
}
trace!("Starting a new server at {}", addr);
let info = ServerInfo::new(addr);
self.cluster.insert(id, info);
self.servers.insert(id, Server::new(id, &self.cluster[&id]));
trace!("Attempting to add server {}", id);
self.client.add_server(id, addr).unwrap();
println!("Added server {}, {:?}", id, addr);
}
fn remove_server(&mut self, id: u64) {
if !self.cluster.contains_key(&id) {
println!("Server {} is not in the cluster! Servers: {:?}", id,
self.cluster);
return;
}
println!("Removing server {}", id);
}
fn kill_server(&mut self, id: u64) {
if !self.servers.contains_key(&id) {
println!("Server {} is not up right now!", id);
} else {
// drop server
self.servers.remove(&id).unwrap();
println!("Killed server {}", id);
}
}
fn start_server(&mut self, id: u64) {
if self.servers.contains_key(&id) {
println!("Server {} is already up!", id);
return;
}
self.servers.insert(id, Server::new(id, self.cluster.get(&id).unwrap()));
println!("Restarted server {}", id);
}
fn print_servers(&self) {
println!("Servers in cluster: {:?}\n Live servers: {:?}",
self.cluster, self.servers.keys().collect::<Vec<&u64>>());
}
}
impl Repl for Cluster {
fn exec(&mut self, command: String) -> bool {
let words: Vec<&str> =
command.split_whitespace().collect();
let first = words.get(0).map(|x|*x);
if first == Some("add") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
let addr = words.get(2).and_then(|x| as_addr(*x).ok());
if num.and(addr).is_none() { return false; }
self.add_server(num.unwrap(), addr.unwrap());
} else if first == Some("remove") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
if num.is_none() { return false; }
self.remove_server(num.unwrap());
} else if first == Some("kill") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
if num.is_none() { return false; }
self.kill_server(num.unwrap());
} else if first == Some("start") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
if num.is_none() { return false; }
self.start_server(num.unwrap());
} else if first == Some("list") {
self.print_servers();
} else { return false; }
return true;
}
fn usage(&self) -> String {
String::from(format!(
"{}\n{}\n{}\n{}\n{}",
"add <node-id> <node-addr>\tAdds a server to the cluster.",
"remove <node-id>\tRemoves a server from the cluster.",
"start <node-id>\tStarts a server from the cluster.",
"kill <node-id>\tKills a server from the cluster.",
"list\tLists all cluster servers, and the ones that are live."))
}
}
struct Client {
raft: RaftConnection,
}
impl Repl for Client {
fn exec(&mut self, command: String) -> bool {
let words: Vec<String> =
command.split_whitespace().map(String::from).collect();
self.process_command(words)
}
fn usage(&self) -> String {
String::from(format!(
"{}\n{}",
"get <key>\n\t\tPrints value of <key> in hashmap, if it exists.",
"put <key> <value>\n\t\tPuts value of <key> as <value>"))
}
}
#[derive(RustcDecodable, RustcEncodable)]
struct Put {
key: String,
value: String,
}
impl Client {
fn new(cluster: &HashMap<u64, ServerInfo>) -> Client {
let cluster = cluster.clone().into_iter().map(|(id, info)| (id, info.addr))
.collect::<HashMap<u64, SocketAddr>>();
let connection = RaftConnection::new_with_session(&cluster);
if connection.is_none() {
println!("Couldn't establish connection to cluster at {:?}", cluster);
panic!();
}
Client { raft: connection.unwrap() }
}
fn get(&mut self, key: String) -> Result<String, RaftError> {
self.raft.query(key.as_bytes())
.and_then(
|result| str::from_utf8(&result)
.map(str::to_string)
.map_err(deserialize_error))
}
fn put(&mut self, key: String, value: String) -> Result<(), RaftError> {
json::encode(&Put {key:key, value:value})
.map_err(serialize_error)
.and_then(|buffer| self.raft.command(&buffer.as_bytes()))
}
// TODO (sydli): make this function less shit
fn process_command(&mut self, words: Vec<String>)
-> bool {
if words.len() == 0 { return true; }
let ref cmd = words[0];
if *cmd == String::from("get") {
if words.len() <= 1 { return false; }
words.get(1).map(|key| {
match self.get(key.clone()) {
Ok(value) => println!("Value for {} is {}", key, value),
Err(err) => println!("Error during get: {:?}", err),
}
}).unwrap();
} else if *cmd == String::from("put") {
if words.len() <= 2 { return false; }
words.get(1).map(|key| { words.get(2).map(|val| {
match self.put(key.clone(), val.clone()) {
Ok(()) => println!("Put {} => {} successfully", key, val),
Err(err) => println!("Error during put: {:?}", err),
}
}).unwrap()}).unwrap();
} else { return false; }
return true;
}
}
fn io_err() -> std::io::Error {
Error::new(ErrorKind::InvalidData, "File incorrectly formatted")
}
fn as_num(x: &str) -> Result<u64, std::io::Error> {
String::from(x).parse::<u64>().map_err(|_| io_err())
}
fn as_addr(x: &str) -> Result<SocketAddr, std::io::Error> {
String::from(x).parse::<SocketAddr>().map_err(|_| io_err())
}
/// Panics on io error (we can't access the cluster info!)
/// TODO (sydli) make io_errs more informative
// TODO(jason): Remove this and bootstrap a dynamic cluster
fn cluster_from_file(filename: &str) -> HashMap<u64, ServerInfo> {
let file = File::open(filename)
.expect(&format!("Unable to open file {}", filename));
let mut lines = BufReader::new(file).lines();
lines.next().ok_or(io_err())
.map(|line_or_io_error| line_or_io_error.unwrap())
.and_then(|x| as_num(&x))
.and_then(|num| (0..num).map(|_| {
lines.next().ok_or(io_err())
.map(|line_or_io_error| line_or_io_error.unwrap())
.and_then(|node_str| {
let mut words = node_str.split_whitespace().map(String::from);
let id = words.next().ok_or(io_err())
.and_then(|x| as_num(&x));
words.next().ok_or(io_err())
.and_then(|x| as_addr(&x))
.and_then(move |addr| id.map(|id| (id, ServerInfo::new(addr))))
})
}).collect::<Result<Vec<_>, _>>())
.map(|nodes: Vec<(u64, ServerInfo)>|
nodes.iter().cloned().collect::<HashMap<u64, ServerInfo>>()
).unwrap()
}
struct RaftHashMap {
map: HashMap<String, String>,
}
fn | <T: Debug>(error: T) -> RaftError {
RaftError::ClientError(
format!("Couldn't serialize object. Error: {:?}", error))
}
fn deserialize_error <T: Debug>(error: T) -> RaftError {
RaftError::ClientError(
format!("Couldn't deserialize buffer. Error: {:?}", error))
}
fn key_error(key: &String) -> RaftError {
RaftError::ClientError(format!("Couldn't find key {}", key))
}
impl StateMachine for RaftHashMap {
fn command(&mut self, buffer: &[u8]) -> Result<(), RaftError> {
str::from_utf8(buffer)
.map_err(deserialize_error)
.and_then(|string| json::decode(&string)
.map_err(deserialize_error))
.map(|put: Put|
{
self.map.insert(put.key, put.value);
})
}
fn query(&self, buffer: &[u8]) -> Result<Vec<u8>, RaftError> {
str::from_utf8(buffer)
.map_err(deserialize_error)
.and_then(|key| {
let key = key.to_string();
self.map.get(&key)
.map(|x| x.as_bytes().to_vec())
.ok_or(key_error(&key))
})
}
}
| serialize_error | identifier_name |
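The rows above repeatedly exercise the small `as_num`/`as_addr` parsing helpers used by the REPL and the cluster-file loader. A self-contained, stdlib-only sketch of those helpers (names and error mapping follow the source; this is an illustrative reconstruction):

```rust
use std::io::{Error, ErrorKind};
use std::net::SocketAddr;

fn io_err() -> Error {
    Error::new(ErrorKind::InvalidData, "File incorrectly formatted")
}

// Parse a node id, collapsing any parse failure into a uniform
// io::Error, exactly as the helpers in the rows above do.
fn as_num(x: &str) -> Result<u64, Error> {
    x.parse::<u64>().map_err(|_| io_err())
}

// Parse a socket address such as "127.0.0.1:8080".
fn as_addr(x: &str) -> Result<SocketAddr, Error> {
    x.parse::<SocketAddr>().map_err(|_| io_err())
}

fn main() {
    assert_eq!(as_num("42").unwrap(), 42);
    assert!(as_num("forty-two").is_err());
    assert!(as_addr("127.0.0.1:8080").is_ok());
    assert!(as_addr("not-an-addr").is_err());
    println!("parsers ok");
}
```

Note that `str::parse` already dispatches on the target type via `FromStr`, so the original's `String::from(x)` allocation before parsing is unnecessary.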
hashmap.rs | #[macro_use] extern crate log;
extern crate rusty_raft;
extern crate rand;
extern crate rustc_serialize;
extern crate env_logger;
use rand::{thread_rng, Rng};
use rusty_raft::server::{start_server, ServerHandle};
use rusty_raft::client::{RaftConnection};
use rusty_raft::client::state_machine::{
StateMachine, RaftStateMachine};
use rusty_raft::common::{Config, RaftError};
use std::env::args;
use std::collections::HashMap;
use std::fmt::Debug;
use std::net::SocketAddr;
use std::str;
use std::str::FromStr;
use std::time::Duration;
use std::fs::File;
use std::io::{stdin, stdout, SeekFrom, Error, ErrorKind, Read, BufReader, Seek, BufRead, Cursor, Write};
use rustc_serialize::json;
static USAGE: &'static str = "
Commands:
server Starts a server from an initial cluster config.
client Starts a client repl that communicates with
a particular cluster.
Usage:
hashmap server <id> <filename>
hashmap servers <filename>
hashmap client <filename>
Options:
-h --help Show a help message.
";
fn main() {
env_logger::init().unwrap();
trace!("Starting program");
// TODO (sydli) make prettier
let mut args = args();
if let Some(command) = args.nth(1) {
if &command == "server" {
if let (Some(id_str), Some(filename)) =
(args.next(), args.next()) {
if let Ok(id) = id_str.parse::<u64>() {
Server::new(id, cluster_from_file(&filename).get(&id).unwrap()).repl();
return;
}
}
} else if &command == "client" {
if let Some(filename) = args.next() {
Client::new(&cluster_from_file(&filename)).repl();
return;
}
} else if &command == "servers" {
if let Some(filename) = args.next() {
Cluster::new(&cluster_from_file(&filename)).repl();
return;
}
}
}
println!("Incorrect usage. \n{}", USAGE);
}
///
/// Automatically builds a repl loop for an object, if it implements |exec|.
/// Each line a user types into the repl is fed to |exec|.
///
/// Default commands:
/// exit Exits the loop and shuts down associated process.
/// help Displays the |help| message for more usage info.
trait Repl {
///
/// Processes each input command from the Repl.
/// Returns true if |command| is formatted correctly.
///
fn exec(&mut self, command: String) -> bool;
///
/// Usage string displayed with the default command "help".
///
fn usage(&self) -> String;
fn print_usage(&self) {
println!("\nREPL COMMANDS:\n==============\n{}\n{}\n{}",
"exit\n\t\tExits the loop and shuts down associated process.",
"help\n\t\tSpits out this message.",
self.usage());
}
///
/// Implementation of the repl.
fn repl(&mut self) {
println!("[ Starting repl ]");
loop {
print!("> ");
stdout().flush().unwrap();
let mut buffer = String::new();
if stdin().read_line(&mut buffer).is_ok() {
let words: Vec<String> =
buffer.split_whitespace().map(String::from).collect();
if words.get(0) == Some(&String::from("exit")) { break; }
if words.get(0) == Some(&String::from("help")) {
self.print_usage();
continue;
}
if !self.exec(buffer.clone()) {
println!("Command not recognized.");
self.print_usage();
}
}
}
}
}
#[derive(Clone, Debug)]
struct ServerInfo {
addr: SocketAddr,
state_filename: String,
log_filename: String,
}
const STATE_FILENAME_LEN: usize = 20;
impl ServerInfo {
fn new(addr: SocketAddr) -> ServerInfo {
let random_filename: String = thread_rng().gen_ascii_chars().take(STATE_FILENAME_LEN).collect();
ServerInfo {
addr: addr,
state_filename: String::from("/tmp/state_") + &random_filename,
log_filename: String::from("/tmp/log_") + &random_filename,
}
}
}
struct Server { handle: ServerHandle, }
impl Repl for Server { // TODO: impl repl for a single node in the cluster
fn exec(&mut self, command: String) -> bool { true }
fn usage(&self) -> String { String::from("") }
}
const FIRST_ID: u64 = 1;
impl Server {
fn new(id: u64, info: &ServerInfo) -> Server {
trace!("Starting new server with state file {} and log file {}", &info.state_filename, &info.log_filename);
Server {
handle:
start_server(id, Box::new(RaftHashMap { map: HashMap::new() }), info.addr, id == FIRST_ID,
info.state_filename.clone(), info.log_filename.clone()).unwrap()
}
}
}
struct Cluster {
servers: HashMap<u64, Server>,
cluster: HashMap<u64, ServerInfo>,
client: RaftConnection,
}
impl Cluster {
fn new(info: &HashMap<u64, ServerInfo>) -> Cluster {
let first_cluster = info.clone().into_iter().map(|(id, info)| (id, info.addr) )
.filter(|&(id, _)| id == FIRST_ID).collect::<HashMap<u64, SocketAddr>>();
let mut servers = HashMap::new();
// start up all the servers
for (id, info) in info { servers.insert(*id, Server::new(*id, info)); }
// connect to the cluster (which will only contain the bootstrapped server)
let mut raft_db = RaftConnection::new_with_session(&first_cluster).unwrap();
// issue AddServer RPCs for all the other servers
for id in servers.keys() {
if *id == FIRST_ID { continue; }
raft_db.add_server(*id, info.get(id).cloned().unwrap().addr).unwrap();
}
Cluster { servers: servers, cluster: info.clone(), client: raft_db }
}
fn add_server(&mut self, id: u64, addr: SocketAddr) {
if self.cluster.contains_key(&id) {
println!("Server {} is already in the cluster. Servers {:?}", id, self.cluster);
return;
}
trace!("Starting a new server at {}", addr);
let info = ServerInfo::new(addr);
self.cluster.insert(id, info);
self.servers.insert(id, Server::new(id, &self.cluster[&id]));
trace!("Attempting to add server {}", id);
self.client.add_server(id, addr).unwrap();
println!("Added server {}, {:?}", id, addr);
}
fn remove_server(&mut self, id: u64) {
if !self.cluster.contains_key(&id) {
println!("Server {} is not in the cluster! Servers: {:?}", id,
self.cluster);
return;
}
println!("Removing server {}", id);
}
fn kill_server(&mut self, id: u64) {
if !self.servers.contains_key(&id) {
println!("Server {} is not up right now!", id);
} else {
// drop server
self.servers.remove(&id).unwrap();
println!("Killed server {}", id);
}
}
fn start_server(&mut self, id: u64) {
if self.servers.contains_key(&id) {
println!("Server {} is already up!", id);
return;
}
self.servers.insert(id, Server::new(id, self.cluster.get(&id).unwrap()));
println!("Restarted server {}", id);
}
fn print_servers(&self) {
println!("Servers in cluster: {:?}\n Live servers: {:?}",
self.cluster, self.servers.keys().collect::<Vec<&u64>>());
}
}
impl Repl for Cluster {
fn exec(&mut self, command: String) -> bool {
let words: Vec<&str> =
command.split_whitespace().collect();
let first = words.get(0).map(|x|*x);
if first == Some("add") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
let addr = words.get(2).and_then(|x| as_addr(*x).ok());
if num.and(addr).is_none() { return false; }
self.add_server(num.unwrap(), addr.unwrap());
} else if first == Some("remove") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
if num.is_none() { return false; }
self.remove_server(num.unwrap());
} else if first == Some("kill") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
if num.is_none() { return false; }
self.kill_server(num.unwrap());
} else if first == Some("start") {
let num = words.get(1).and_then(|x| as_num(*x).ok());
if num.is_none() { return false; }
self.start_server(num.unwrap());
} else if first == Some("list") {
self.print_servers();
} else { return false; }
return true;
}
fn usage(&self) -> String {
String::from(format!(
"{}\n{}\n{}\n{}\n{}",
"add <node-id> <node-addr>\tAdds a server to the cluster.",
"remove <node-id>\tRemoves a server from the cluster.",
"start <node-id>\tStarts a server from the cluster.",
"kill <node-id>\tKills a server from the cluster.",
"list\tLists all cluster servers, and the ones that are live."))
}
}
struct Client {
raft: RaftConnection,
}
impl Repl for Client {
fn exec(&mut self, command: String) -> bool {
let words: Vec<String> =
command.split_whitespace().map(String::from).collect();
self.process_command(words)
}
fn usage(&self) -> String {
String::from(format!(
"{}\n{}",
"get <key>\n\t\tPrints value of <key> in hashmap, if it exists.",
"put <key> <value>\n\t\tPuts value of <key> as <value>"))
}
}
#[derive(RustcDecodable, RustcEncodable)]
struct Put {
key: String,
value: String,
}
impl Client {
fn new(cluster: &HashMap<u64, ServerInfo>) -> Client {
let cluster = cluster.clone().into_iter().map(|(id, info)| (id, info.addr))
.collect::<HashMap<u64, SocketAddr>>();
let connection = RaftConnection::new_with_session(&cluster);
if connection.is_none() {
println!("Couldn't establish connection to cluster at {:?}", cluster);
panic!();
}
Client { raft: connection.unwrap() }
}
fn get(&mut self, key: String) -> Result<String, RaftError> {
self.raft.query(key.as_bytes())
.and_then(
|result| str::from_utf8(&result)
.map(str::to_string)
.map_err(deserialize_error))
}
fn put(&mut self, key: String, value: String) -> Result<(), RaftError> {
json::encode(&Put {key:key, value:value})
.map_err(serialize_error)
.and_then(|buffer| self.raft.command(&buffer.as_bytes()))
}
// TODO (sydli): make this function less shit
fn process_command(&mut self, words: Vec<String>)
-> bool {
if words.len() == 0 { return true; }
let ref cmd = words[0];
if *cmd == String::from("get") {
if words.len() <= 1 { return false; }
words.get(1).map(|key| {
match self.get(key.clone()) {
Ok(value) => println!("Value for {} is {}", key, value),
Err(err) => println!("Error during get: {:?}", err),
}
}).unwrap();
} else if *cmd == String::from("put") {
if words.len() <= 2 { return false; }
words.get(1).map(|key| { words.get(2).map(|val| {
match self.put(key.clone(), val.clone()) {
Ok(()) => println!("Put {} => {} successfully", key, val),
Err(err) => println!("Error during put: {:?}", err),
}
}).unwrap()}).unwrap();
} else { return false; }
return true;
}
}
fn io_err() -> std::io::Error {
Error::new(ErrorKind::InvalidData, "File incorrectly formatted")
}
fn as_num(x: &str) -> Result<u64, std::io::Error> {
String::from(x).parse::<u64>().map_err(|_| io_err())
}
fn as_addr(x: &str) -> Result<SocketAddr, std::io::Error> {
String::from(x).parse::<SocketAddr>().map_err(|_| io_err())
}
/// Panics on io error (we can't access the cluster info!)
/// TODO (sydli) make io_errs more informative
// TODO(jason): Remove this and bootstrap a dynamic cluster
fn cluster_from_file(filename: &str) -> HashMap<u64, ServerInfo> {
let file = File::open(filename)
.expect(&format!("Unable to open file {}", filename));
let mut lines = BufReader::new(file).lines();
lines.next().ok_or(io_err())
.map(|line_or_io_error| line_or_io_error.unwrap())
.and_then(|x| as_num(&x))
.and_then(|num| (0..num).map(|_| {
lines.next().ok_or(io_err())
.map(|line_or_io_error| line_or_io_error.unwrap())
.and_then(|node_str| {
let mut words = node_str.split_whitespace().map(String::from);
let id = words.next().ok_or(io_err())
.and_then(|x| as_num(&x));
words.next().ok_or(io_err())
.and_then(|x| as_addr(&x))
.and_then(move |addr| id.map(|id| (id, ServerInfo::new(addr))))
})
}).collect::<Result<Vec<_>, _>>())
.map(|nodes: Vec<(u64, ServerInfo)>|
nodes.iter().cloned().collect::<HashMap<u64, ServerInfo>>()
).unwrap()
}
struct RaftHashMap {
map: HashMap<String, String>,
}
fn serialize_error <T: Debug>(error: T) -> RaftError {
RaftError::ClientError(
format!("Couldn't serialize object. Error: {:?}", error))
}
fn deserialize_error <T: Debug>(error: T) -> RaftError {
RaftError::ClientError(
format!("Couldn't deserialize buffer. Error: {:?}", error))
}
| }
impl StateMachine for RaftHashMap {
fn command(&mut self, buffer: &[u8]) -> Result<(), RaftError> {
str::from_utf8(buffer)
.map_err(deserialize_error)
.and_then(|string| json::decode(&string)
.map_err(deserialize_error))
.map(|put: Put|
{
self.map.insert(put.key, put.value);
})
}
fn query(&self, buffer: &[u8]) -> Result<Vec<u8>, RaftError> {
str::from_utf8(buffer)
.map_err(deserialize_error)
.and_then(|key| {
let key = key.to_string();
self.map.get(&key)
.map(|x| x.as_bytes().to_vec())
.ok_or(key_error(&key))
})
}
} | fn key_error(key: &String) -> RaftError {
RaftError::ClientError(format!("Couldn't find key {}", key)) | random_line_split |
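The `RaftHashMap` state machine in the rows above splits work into `command` (apply a `Put` mutation) and `query` (look a key up). A simplified, stdlib-only sketch of that split — the real code decodes a JSON `Put` via `rustc_serialize`, which a plain `key=value` payload stands in for here, so this is not the original wire format:

```rust
use std::collections::HashMap;
use std::str;

// Simplified stand-in for the RaftHashMap state machine: `command`
// applies a "key=value" mutation, `query` looks a key up and returns
// the value as bytes, mirroring the StateMachine trait's shape.
struct KvMachine {
    map: HashMap<String, String>,
}

impl KvMachine {
    fn command(&mut self, buffer: &[u8]) -> Result<(), String> {
        let s = str::from_utf8(buffer).map_err(|e| format!("{:?}", e))?;
        let mut parts = s.splitn(2, '=');
        match (parts.next(), parts.next()) {
            (Some(k), Some(v)) => {
                self.map.insert(k.to_string(), v.to_string());
                Ok(())
            }
            _ => Err(format!("malformed command: {}", s)),
        }
    }

    fn query(&self, buffer: &[u8]) -> Result<Vec<u8>, String> {
        let key = str::from_utf8(buffer).map_err(|e| format!("{:?}", e))?;
        self.map
            .get(key)
            .map(|v| v.as_bytes().to_vec())
            .ok_or_else(|| format!("Couldn't find key {}", key))
    }
}

fn main() {
    let mut m = KvMachine { map: HashMap::new() };
    m.command(b"lang=rust").unwrap();
    assert_eq!(m.query(b"lang").unwrap(), b"rust".to_vec());
    assert!(m.query(b"missing").is_err());
    println!("state machine ok");
}
```

The byte-slice interface is what lets the Raft layer replicate commands without knowing the payload format; only the state machine interprets the bytes.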
manifest.rs | // Copyright 2015, 2016 Ethcore (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or | // Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use serde_json;
pub use api::App as Manifest;
pub const MANIFEST_FILENAME: &'static str = "manifest.json";
pub fn deserialize_manifest(manifest: String) -> Result<Manifest, String> {
serde_json::from_str::<Manifest>(&manifest).map_err(|e| format!("{:?}", e))
// TODO [todr] Manifest validation (especially: id (used as path))
}
pub fn serialize_manifest(manifest: &Manifest) -> Result<String, String> {
serde_json::to_string_pretty(manifest).map_err(|e| format!("{:?}", e))
} | // (at your option) any later version.
| random_line_split |
manifest.rs | // Copyright 2015, 2016 Ethcore (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use serde_json;
pub use api::App as Manifest;
pub const MANIFEST_FILENAME: &'static str = "manifest.json";
pub fn deserialize_manifest(manifest: String) -> Result<Manifest, String> {
serde_json::from_str::<Manifest>(&manifest).map_err(|e| format!("{:?}", e))
// TODO [todr] Manifest validation (especially: id (used as path))
}
pub fn | (manifest: &Manifest) -> Result<String, String> {
serde_json::to_string_pretty(manifest).map_err(|e| format!("{:?}", e))
}
| serialize_manifest | identifier_name |
manifest.rs | // Copyright 2015, 2016 Ethcore (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use serde_json;
pub use api::App as Manifest;
pub const MANIFEST_FILENAME: &'static str = "manifest.json";
pub fn deserialize_manifest(manifest: String) -> Result<Manifest, String> |
pub fn serialize_manifest(manifest: &Manifest) -> Result<String, String> {
serde_json::to_string_pretty(manifest).map_err(|e| format!("{:?}", e))
}
| {
serde_json::from_str::<Manifest>(&manifest).map_err(|e| format!("{:?}", e))
// TODO [todr] Manifest validation (especially: id (used as path))
} | identifier_body |
graphviz.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! This module provides linkage between rustc::middle::graph and
//! libgraphviz traits, specialized to attaching borrowck analysis
//! data to rendered labels.
/// For clarity, rename the graphviz crate locally to dot.
use graphviz as dot;
pub use middle::cfg::graphviz::{Node, Edge};
use middle::cfg::graphviz as cfg_dot;
use middle::borrowck;
use middle::borrowck::{BorrowckCtxt, LoanPath};
use middle::cfg::{CFGIndex};
use middle::dataflow::{DataFlowOperator, DataFlowContext, EntryOrExit};
use middle::dataflow;
use std::rc::Rc;
use std::str;
#[deriving(Show)]
pub enum Variant {
Loans,
Moves,
Assigns,
}
impl Variant {
pub fn short_name(&self) -> &'static str {
match *self {
Loans => "loans",
Moves => "moves",
Assigns => "assigns",
}
}
}
pub struct DataflowLabeller<'a, 'tcx: 'a> {
pub inner: cfg_dot::LabelledCFG<'a, 'tcx>,
pub variants: Vec<Variant>,
pub borrowck_ctxt: &'a BorrowckCtxt<'a, 'tcx>,
pub analysis_data: &'a borrowck::AnalysisData<'a, 'tcx>,
}
impl<'a, 'tcx> DataflowLabeller<'a, 'tcx> {
fn dataflow_for(&self, e: EntryOrExit, n: &Node<'a>) -> String {
let id = n.val1().data.id;
debug!("dataflow_for({}, id={}) {}", e, id, self.variants);
let mut sets = "".to_string();
let mut seen_one = false;
for &variant in self.variants.iter() {
if seen_one { sets.push_str(" "); } else { seen_one = true; }
sets.push_str(variant.short_name());
sets.push_str(": ");
sets.push_str(self.dataflow_for_variant(e, n, variant).as_slice());
}
sets
}
fn dataflow_for_variant(&self, e: EntryOrExit, n: &Node, v: Variant) -> String {
let cfgidx = n.val0();
match v {
Loans => self.dataflow_loans_for(e, cfgidx),
Moves => self.dataflow_moves_for(e, cfgidx),
Assigns => self.dataflow_assigns_for(e, cfgidx),
}
}
fn build_set<O:DataFlowOperator>(&self,
e: EntryOrExit,
cfgidx: CFGIndex,
dfcx: &DataFlowContext<'a, 'tcx, O>,
to_lp: |uint| -> Rc<LoanPath>) -> String {
let mut saw_some = false;
let mut set = "{".to_string();
dfcx.each_bit_for_node(e, cfgidx, |index| {
let lp = to_lp(index);
if saw_some {
set.push_str(", ");
}
let loan_str = self.borrowck_ctxt.loan_path_to_string(&*lp);
set.push_str(loan_str.as_slice());
saw_some = true;
true
});
set.append("}")
}
fn dataflow_loans_for(&self, e: EntryOrExit, cfgidx: CFGIndex) -> String {
let dfcx = &self.analysis_data.loans;
let loan_index_to_path = |loan_index| {
let all_loans = &self.analysis_data.all_loans;
all_loans.get(loan_index).loan_path()
};
self.build_set(e, cfgidx, dfcx, loan_index_to_path)
}
fn dataflow_moves_for(&self, e: EntryOrExit, cfgidx: CFGIndex) -> String {
let dfcx = &self.analysis_data.move_data.dfcx_moves;
let move_index_to_path = |move_index| {
let move_data = &self.analysis_data.move_data.move_data;
let moves = move_data.moves.borrow();
let the_move = moves.get(move_index);
move_data.path_loan_path(the_move.path)
};
self.build_set(e, cfgidx, dfcx, move_index_to_path)
}
fn dataflow_assigns_for(&self, e: EntryOrExit, cfgidx: CFGIndex) -> String {
let dfcx = &self.analysis_data.move_data.dfcx_assign;
let assign_index_to_path = |assign_index| {
let move_data = &self.analysis_data.move_data.move_data;
let assignments = move_data.var_assignments.borrow();
let assignment = assignments.get(assign_index);
move_data.path_loan_path(assignment.path)
};
self.build_set(e, cfgidx, dfcx, assign_index_to_path)
}
}
impl<'a, 'tcx> dot::Labeller<'a, Node<'a>, Edge<'a>> for DataflowLabeller<'a, 'tcx> {
fn graph_id(&'a self) -> dot::Id<'a> { self.inner.graph_id() }
fn node_id(&'a self, n: &Node<'a>) -> dot::Id<'a> { self.inner.node_id(n) }
fn node_label(&'a self, n: &Node<'a>) -> dot::LabelText<'a> {
let prefix = self.dataflow_for(dataflow::Entry, n);
let suffix = self.dataflow_for(dataflow::Exit, n);
let inner_label = self.inner.node_label(n);
inner_label
.prefix_line(dot::LabelStr(str::Owned(prefix)))
.suffix_line(dot::LabelStr(str::Owned(suffix)))
}
fn edge_label(&'a self, e: &Edge<'a>) -> dot::LabelText<'a> |
}
impl<'a, 'tcx> dot::GraphWalk<'a, Node<'a>, Edge<'a>> for DataflowLabeller<'a, 'tcx> {
fn nodes(&self) -> dot::Nodes<'a, Node<'a>> { self.inner.nodes() }
fn edges(&self) -> dot::Edges<'a, Edge<'a>> { self.inner.edges() }
fn source(&self, edge: &Edge<'a>) -> Node<'a> { self.inner.source(edge) }
fn target(&self, edge: &Edge<'a>) -> Node<'a> { self.inner.target(edge) }
}
| { self.inner.edge_label(e) } | identifier_body |
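`build_set` above renders a dataflow bit-set as `{lp1, lp2, ...}` using a `saw_some` flag to decide when to emit the `", "` separator. The same pattern extracted into a self-contained sketch (modern Rust; `render_set` is an illustrative name, not part of the borrowck code):

```rust
// Mirrors build_set's separator logic: the flag starts false, so the
// first element is written without a leading ", "; every later element
// gets one. An empty input yields "{}".
fn render_set(items: &[&str]) -> String {
    let mut saw_some = false;
    let mut set = "{".to_string();
    for item in items {
        if saw_some {
            set.push_str(", ");
        }
        set.push_str(item);
        saw_some = true;
    }
    set.push('}');
    set
}

fn main() {
    assert_eq!(render_set(&[]), "{}");
    assert_eq!(render_set(&["*a"]), "{*a}");
    assert_eq!(render_set(&["*a", "b.f"]), "{*a, b.f}");
    println!("ok");
}
```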
graphviz.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! This module provides linkage between rustc::middle::graph and
//! libgraphviz traits, specialized to attaching borrowck analysis
//! data to rendered labels.
/// For clarity, rename the graphviz crate locally to dot.
use graphviz as dot;
pub use middle::cfg::graphviz::{Node, Edge};
use middle::cfg::graphviz as cfg_dot;
use middle::borrowck;
use middle::borrowck::{BorrowckCtxt, LoanPath};
use middle::cfg::{CFGIndex};
use middle::dataflow::{DataFlowOperator, DataFlowContext, EntryOrExit};
use middle::dataflow;
use std::rc::Rc;
use std::str;
#[deriving(Show)]
pub enum Variant {
Loans,
Moves,
Assigns,
}
impl Variant {
pub fn short_name(&self) -> &'static str {
match *self {
Loans => "loans",
Moves => "moves",
Assigns => "assigns",
}
}
}
pub struct DataflowLabeller<'a, 'tcx: 'a> {
pub inner: cfg_dot::LabelledCFG<'a, 'tcx>,
pub variants: Vec<Variant>,
pub borrowck_ctxt: &'a BorrowckCtxt<'a, 'tcx>,
pub analysis_data: &'a borrowck::AnalysisData<'a, 'tcx>,
}
impl<'a, 'tcx> DataflowLabeller<'a, 'tcx> {
fn dataflow_for(&self, e: EntryOrExit, n: &Node<'a>) -> String {
let id = n.val1().data.id;
debug!("dataflow_for({}, id={}) {}", e, id, self.variants);
let mut sets = "".to_string();
let mut seen_one = false;
for &variant in self.variants.iter() {
if seen_one { sets.push_str(" "); } else { seen_one = true; }
sets.push_str(variant.short_name());
sets.push_str(": ");
sets.push_str(self.dataflow_for_variant(e, n, variant).as_slice());
}
sets
}
fn dataflow_for_variant(&self, e: EntryOrExit, n: &Node, v: Variant) -> String {
let cfgidx = n.val0();
match v {
Loans => self.dataflow_loans_for(e, cfgidx),
Moves => self.dataflow_moves_for(e, cfgidx),
Assigns => self.dataflow_assigns_for(e, cfgidx),
}
}
fn build_set<O:DataFlowOperator>(&self,
e: EntryOrExit,
cfgidx: CFGIndex,
dfcx: &DataFlowContext<'a, 'tcx, O>,
to_lp: |uint| -> Rc<LoanPath>) -> String {
let mut saw_some = false;
let mut set = "{".to_string();
dfcx.each_bit_for_node(e, cfgidx, |index| {
let lp = to_lp(index);
if saw_some {
set.push_str(", ");
}
let loan_str = self.borrowck_ctxt.loan_path_to_string(&*lp);
set.push_str(loan_str.as_slice());
saw_some = true;
true
});
set.append("}")
}
fn dataflow_loans_for(&self, e: EntryOrExit, cfgidx: CFGIndex) -> String {
let dfcx = &self.analysis_data.loans;
let loan_index_to_path = |loan_index| {
let all_loans = &self.analysis_data.all_loans;
all_loans.get(loan_index).loan_path()
};
self.build_set(e, cfgidx, dfcx, loan_index_to_path)
}
fn dataflow_moves_for(&self, e: EntryOrExit, cfgidx: CFGIndex) -> String {
let dfcx = &self.analysis_data.move_data.dfcx_moves;
let move_index_to_path = |move_index| {
let move_data = &self.analysis_data.move_data.move_data;
let moves = move_data.moves.borrow();
let the_move = moves.get(move_index);
move_data.path_loan_path(the_move.path)
};
self.build_set(e, cfgidx, dfcx, move_index_to_path)
}
fn | (&self, e: EntryOrExit, cfgidx: CFGIndex) -> String {
let dfcx = &self.analysis_data.move_data.dfcx_assign;
let assign_index_to_path = |assign_index| {
let move_data = &self.analysis_data.move_data.move_data;
let assignments = move_data.var_assignments.borrow();
let assignment = assignments.get(assign_index);
move_data.path_loan_path(assignment.path)
};
self.build_set(e, cfgidx, dfcx, assign_index_to_path)
}
}
impl<'a, 'tcx> dot::Labeller<'a, Node<'a>, Edge<'a>> for DataflowLabeller<'a, 'tcx> {
fn graph_id(&'a self) -> dot::Id<'a> { self.inner.graph_id() }
fn node_id(&'a self, n: &Node<'a>) -> dot::Id<'a> { self.inner.node_id(n) }
fn node_label(&'a self, n: &Node<'a>) -> dot::LabelText<'a> {
let prefix = self.dataflow_for(dataflow::Entry, n);
let suffix = self.dataflow_for(dataflow::Exit, n);
let inner_label = self.inner.node_label(n);
inner_label
.prefix_line(dot::LabelStr(str::Owned(prefix)))
.suffix_line(dot::LabelStr(str::Owned(suffix)))
}
fn edge_label(&'a self, e: &Edge<'a>) -> dot::LabelText<'a> { self.inner.edge_label(e) }
}
impl<'a, 'tcx> dot::GraphWalk<'a, Node<'a>, Edge<'a>> for DataflowLabeller<'a, 'tcx> {
fn nodes(&self) -> dot::Nodes<'a, Node<'a>> { self.inner.nodes() }
fn edges(&self) -> dot::Edges<'a, Edge<'a>> { self.inner.edges() }
fn source(&self, edge: &Edge<'a>) -> Node<'a> { self.inner.source(edge) }
fn target(&self, edge: &Edge<'a>) -> Node<'a> { self.inner.target(edge) }
}
| dataflow_assigns_for | identifier_name |
graphviz.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! This module provides linkage between rustc::middle::graph and
//! libgraphviz traits, specialized to attaching borrowck analysis
//! data to rendered labels.
/// For clarity, rename the graphviz crate locally to dot.
use graphviz as dot;
pub use middle::cfg::graphviz::{Node, Edge};
use middle::cfg::graphviz as cfg_dot;
use middle::borrowck;
use middle::borrowck::{BorrowckCtxt, LoanPath};
use middle::cfg::{CFGIndex};
use middle::dataflow::{DataFlowOperator, DataFlowContext, EntryOrExit};
use middle::dataflow;
use std::rc::Rc;
use std::str;
#[deriving(Show)]
pub enum Variant {
Loans,
Moves,
Assigns,
}
impl Variant {
pub fn short_name(&self) -> &'static str {
match *self {
Loans => "loans",
Moves => "moves",
Assigns => "assigns",
}
}
}
pub struct DataflowLabeller<'a, 'tcx: 'a> {
pub inner: cfg_dot::LabelledCFG<'a, 'tcx>,
pub variants: Vec<Variant>,
pub borrowck_ctxt: &'a BorrowckCtxt<'a, 'tcx>,
pub analysis_data: &'a borrowck::AnalysisData<'a, 'tcx>,
}
impl<'a, 'tcx> DataflowLabeller<'a, 'tcx> {
fn dataflow_for(&self, e: EntryOrExit, n: &Node<'a>) -> String {
let id = n.val1().data.id;
debug!("dataflow_for({}, id={}) {}", e, id, self.variants);
let mut sets = "".to_string();
let mut seen_one = false;
for &variant in self.variants.iter() {
if seen_one { sets.push_str(" "); } else { seen_one = true; }
sets.push_str(variant.short_name());
sets.push_str(": ");
sets.push_str(self.dataflow_for_variant(e, n, variant).as_slice()); | fn dataflow_for_variant(&self, e: EntryOrExit, n: &Node, v: Variant) -> String {
let cfgidx = n.val0();
match v {
Loans => self.dataflow_loans_for(e, cfgidx),
Moves => self.dataflow_moves_for(e, cfgidx),
Assigns => self.dataflow_assigns_for(e, cfgidx),
}
}
fn build_set<O:DataFlowOperator>(&self,
e: EntryOrExit,
cfgidx: CFGIndex,
dfcx: &DataFlowContext<'a, 'tcx, O>,
to_lp: |uint| -> Rc<LoanPath>) -> String {
let mut saw_some = false;
let mut set = "{".to_string();
dfcx.each_bit_for_node(e, cfgidx, |index| {
let lp = to_lp(index);
if saw_some {
set.push_str(", ");
}
let loan_str = self.borrowck_ctxt.loan_path_to_string(&*lp);
set.push_str(loan_str.as_slice());
saw_some = true;
true
});
set.append("}")
}
fn dataflow_loans_for(&self, e: EntryOrExit, cfgidx: CFGIndex) -> String {
let dfcx = &self.analysis_data.loans;
let loan_index_to_path = |loan_index| {
let all_loans = &self.analysis_data.all_loans;
all_loans.get(loan_index).loan_path()
};
self.build_set(e, cfgidx, dfcx, loan_index_to_path)
}
fn dataflow_moves_for(&self, e: EntryOrExit, cfgidx: CFGIndex) -> String {
let dfcx = &self.analysis_data.move_data.dfcx_moves;
let move_index_to_path = |move_index| {
let move_data = &self.analysis_data.move_data.move_data;
let moves = move_data.moves.borrow();
let the_move = moves.get(move_index);
move_data.path_loan_path(the_move.path)
};
self.build_set(e, cfgidx, dfcx, move_index_to_path)
}
fn dataflow_assigns_for(&self, e: EntryOrExit, cfgidx: CFGIndex) -> String {
let dfcx = &self.analysis_data.move_data.dfcx_assign;
let assign_index_to_path = |assign_index| {
let move_data = &self.analysis_data.move_data.move_data;
let assignments = move_data.var_assignments.borrow();
let assignment = assignments.get(assign_index);
move_data.path_loan_path(assignment.path)
};
self.build_set(e, cfgidx, dfcx, assign_index_to_path)
}
}
impl<'a, 'tcx> dot::Labeller<'a, Node<'a>, Edge<'a>> for DataflowLabeller<'a, 'tcx> {
fn graph_id(&'a self) -> dot::Id<'a> { self.inner.graph_id() }
fn node_id(&'a self, n: &Node<'a>) -> dot::Id<'a> { self.inner.node_id(n) }
fn node_label(&'a self, n: &Node<'a>) -> dot::LabelText<'a> {
let prefix = self.dataflow_for(dataflow::Entry, n);
let suffix = self.dataflow_for(dataflow::Exit, n);
let inner_label = self.inner.node_label(n);
inner_label
.prefix_line(dot::LabelStr(str::Owned(prefix)))
.suffix_line(dot::LabelStr(str::Owned(suffix)))
}
fn edge_label(&'a self, e: &Edge<'a>) -> dot::LabelText<'a> { self.inner.edge_label(e) }
}
impl<'a, 'tcx> dot::GraphWalk<'a, Node<'a>, Edge<'a>> for DataflowLabeller<'a, 'tcx> {
fn nodes(&self) -> dot::Nodes<'a, Node<'a>> { self.inner.nodes() }
fn edges(&self) -> dot::Edges<'a, Edge<'a>> { self.inner.edges() }
fn source(&self, edge: &Edge<'a>) -> Node<'a> { self.inner.source(edge) }
fn target(&self, edge: &Edge<'a>) -> Node<'a> { self.inner.target(edge) }
} | }
sets
}
| random_line_split |
dynamic.rs | // SPDX-License-Identifier: Apache-2.0
use std::env;
use std::fs::File;
use std::io::{self, Error, ErrorKind, Read, Seek, SeekFrom};
use std::path::{Path, PathBuf};
use super::common;
//================================================
// Validation
//================================================
/// Extracts the ELF class from the ELF header in a shared library.
fn parse_elf_header(path: &Path) -> io::Result<u8> {
let mut file = File::open(path)?;
let mut buffer = [0; 5];
file.read_exact(&mut buffer)?;
if buffer[..4] == [127, 69, 76, 70] | else {
Err(Error::new(ErrorKind::InvalidData, "invalid ELF header"))
}
}
/// Extracts the magic number from the PE header in a shared library.
fn parse_pe_header(path: &Path) -> io::Result<u16> {
let mut file = File::open(path)?;
// Extract the header offset.
let mut buffer = [0; 4];
let start = SeekFrom::Start(0x3C);
file.seek(start)?;
file.read_exact(&mut buffer)?;
let offset = i32::from_le_bytes(buffer);
// Check the validity of the header.
file.seek(SeekFrom::Start(offset as u64))?;
file.read_exact(&mut buffer)?;
if buffer != [80, 69, 0, 0] {
return Err(Error::new(ErrorKind::InvalidData, "invalid PE header"));
}
// Extract the magic number.
let mut buffer = [0; 2];
file.seek(SeekFrom::Current(20))?;
file.read_exact(&mut buffer)?;
Ok(u16::from_le_bytes(buffer))
}
/// Checks that a `libclang` shared library matches the target platform.
fn validate_library(path: &Path) -> Result<(), String> {
if cfg!(any(target_os = "linux", target_os = "freebsd")) {
let class = parse_elf_header(path).map_err(|e| e.to_string())?;
if cfg!(target_pointer_width = "32") && class != 1 {
return Err("invalid ELF class (64-bit)".into());
}
if cfg!(target_pointer_width = "64") && class != 2 {
return Err("invalid ELF class (32-bit)".into());
}
Ok(())
} else if cfg!(target_os = "windows") {
let magic = parse_pe_header(path).map_err(|e| e.to_string())?;
if cfg!(target_pointer_width = "32") && magic != 267 {
return Err("invalid DLL (64-bit)".into());
}
if cfg!(target_pointer_width = "64") && magic != 523 {
return Err("invalid DLL (32-bit)".into());
}
Ok(())
} else {
Ok(())
}
}
//================================================
// Searching
//================================================
/// Extracts the version components in a `libclang` shared library filename.
fn parse_version(filename: &str) -> Vec<u32> {
let version = if let Some(version) = filename.strip_prefix("libclang.so.") {
version
} else if filename.starts_with("libclang-") {
&filename[9..filename.len() - 3]
} else {
return vec![];
};
version.split('.').map(|s| s.parse().unwrap_or(0)).collect()
}
/// Finds `libclang` shared libraries and returns the paths to, filenames of,
/// and versions of those shared libraries.
fn search_libclang_directories(runtime: bool) -> Result<Vec<(PathBuf, String, Vec<u32>)>, String> {
let mut files = vec![format!(
"{}clang{}",
env::consts::DLL_PREFIX,
env::consts::DLL_SUFFIX
)];
if cfg!(target_os = "linux") {
// Some Linux distributions don't create a `libclang.so` symlink, so we
// need to look for versioned files (e.g., `libclang-3.9.so`).
files.push("libclang-*.so".into());
// Some Linux distributions don't create a `libclang.so` symlink and
// don't have versioned files as described above, so we need to look for
// suffix versioned files (e.g., `libclang.so.1`). However, `ld` cannot
// link to these files, so this will only be included when linking at
// runtime.
if runtime {
files.push("libclang.so.*".into());
files.push("libclang-*.so.*".into());
}
}
if cfg!(any(
target_os = "freebsd",
target_os = "haiku",
target_os = "netbsd",
target_os = "openbsd",
)) {
// Some BSD distributions don't create a `libclang.so` symlink either,
// but use a different naming scheme for versioned files (e.g.,
// `libclang.so.7.0`).
files.push("libclang.so.*".into());
}
if cfg!(target_os = "windows") {
// The official LLVM build uses `libclang.dll` on Windows instead of
// `clang.dll`. However, unofficial builds such as MinGW use `clang.dll`.
files.push("libclang.dll".into());
}
// Find and validate `libclang` shared libraries and collect the versions.
let mut valid = vec![];
let mut invalid = vec![];
for (directory, filename) in common::search_libclang_directories(&files, "LIBCLANG_PATH") {
let path = directory.join(&filename);
match validate_library(&path) {
Ok(()) => {
let version = parse_version(&filename);
valid.push((directory, filename, version))
}
Err(message) => invalid.push(format!("({}: {})", path.display(), message)),
}
}
if !valid.is_empty() {
return Ok(valid);
}
let message = format!(
"couldn't find any valid shared libraries matching: [{}], set the \
`LIBCLANG_PATH` environment variable to a path where one of these files \
can be found (invalid: [{}])",
files
.iter()
.map(|f| format!("'{}'", f))
.collect::<Vec<_>>()
.join(", "),
invalid.join(", "),
);
Err(message)
}
/// Finds the "best" `libclang` shared library and returns the directory and
/// filename of that library.
pub fn find(runtime: bool) -> Result<(PathBuf, String), String> {
search_libclang_directories(runtime)?
.iter()
// We want to find the `libclang` shared library with the highest
// version number, hence `max_by_key` below.
//
// However, in the case where there are multiple such `libclang` shared
// libraries, we want to use the order in which they appeared in the
// list returned by `search_libclang_directories` as a tiebreaker since
// that function returns `libclang` shared libraries in descending order
// of preference by how they were found.
//
// `max_by_key`, perhaps surprisingly, returns the *last* item with the
// maximum key rather than the first which results in the opposite of
// the tiebreaking behavior we want. This is easily fixed by reversing
// the list first.
.rev()
.max_by_key(|f| &f.2)
.cloned()
.map(|(path, filename, _)| (path, filename))
.ok_or_else(|| "unreachable".into())
}
//================================================
// Linking
//================================================
/// Finds and links to a `libclang` shared library.
#[cfg(not(feature = "runtime"))]
pub fn link() {
let cep = common::CommandErrorPrinter::default();
use std::fs;
let (directory, filename) = find(false).unwrap();
println!("cargo:rustc-link-search={}", directory.display());
if cfg!(all(target_os = "windows", target_env = "msvc")) {
// Find the `libclang` stub static library required for the MSVC
// toolchain.
let lib = if !directory.ends_with("bin") {
directory
} else {
directory.parent().unwrap().join("lib")
};
if lib.join("libclang.lib").exists() {
println!("cargo:rustc-link-search={}", lib.display());
} else if lib.join("libclang.dll.a").exists() {
// MSYS and MinGW use `libclang.dll.a` instead of `libclang.lib`.
// It is linkable with the MSVC linker, but Rust doesn't recognize
// the `.a` suffix, so we need to copy it with a different name.
//
// FIXME: Maybe we can just hardlink or symlink it?
let out = env::var("OUT_DIR").unwrap();
fs::copy(
lib.join("libclang.dll.a"),
Path::new(&out).join("libclang.lib"),
)
.unwrap();
println!("cargo:rustc-link-search=native={}", out);
} else {
panic!(
"using '{}', so 'libclang.lib' or 'libclang.dll.a' must be \
available in {}",
filename,
lib.display(),
);
}
println!("cargo:rustc-link-lib=dylib=libclang");
} else {
let name = filename.trim_start_matches("lib");
// Strip extensions and trailing version numbers (e.g., the `.so.7.0` in
// `libclang.so.7.0`).
let name = match name.find(".dylib").or_else(|| name.find(".so")) {
Some(index) => &name[0..index],
None => name,
};
println!("cargo:rustc-link-lib=dylib={}", name);
}
cep.discard();
}
| {
Ok(buffer[4])
} | conditional_block |
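`parse_version` above handles two naming schemes: suffix-versioned files (`libclang.so.7.0`) and infix-versioned files (`libclang-3.9.so`, where bytes `9..len-3` hold the version between `libclang-` and `.so`). A self-contained copy showing the expected outputs:

```rust
// Standalone copy of parse_version from above: extracts numeric version
// components from a libclang shared-library filename.
fn parse_version(filename: &str) -> Vec<u32> {
    let version = if let Some(version) = filename.strip_prefix("libclang.so.") {
        version // suffix-versioned, e.g. "libclang.so.7.0"
    } else if filename.starts_with("libclang-") {
        // infix-versioned, e.g. "libclang-3.9.so": drop "libclang-" and ".so"
        &filename[9..filename.len() - 3]
    } else {
        return vec![]; // unversioned or unrecognized name
    };
    version.split('.').map(|s| s.parse().unwrap_or(0)).collect()
}

fn main() {
    assert_eq!(parse_version("libclang.so.7.0"), vec![7, 0]);
    assert_eq!(parse_version("libclang-3.9.so"), vec![3, 9]);
    assert_eq!(parse_version("libclang.so"), Vec::<u32>::new());
    println!("ok");
}
```

Since the components come back as a `Vec<u32>`, the later `max_by_key(|f| &f.2)` in `find` compares versions lexicographically, which is exactly the ordering wanted here.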
dynamic.rs | // SPDX-License-Identifier: Apache-2.0
use std::env;
use std::fs::File;
use std::io::{self, Error, ErrorKind, Read, Seek, SeekFrom};
use std::path::{Path, PathBuf};
use super::common;
//================================================
// Validation
//================================================
/// Extracts the ELF class from the ELF header in a shared library.
fn parse_elf_header(path: &Path) -> io::Result<u8> {
let mut file = File::open(path)?;
let mut buffer = [0; 5];
file.read_exact(&mut buffer)?;
if buffer[..4] == [127, 69, 76, 70] {
Ok(buffer[4])
} else {
Err(Error::new(ErrorKind::InvalidData, "invalid ELF header"))
}
}
/// Extracts the magic number from the PE header in a shared library.
fn parse_pe_header(path: &Path) -> io::Result<u16> {
let mut file = File::open(path)?;
// Extract the header offset.
let mut buffer = [0; 4];
let start = SeekFrom::Start(0x3C);
file.seek(start)?;
file.read_exact(&mut buffer)?;
let offset = i32::from_le_bytes(buffer);
// Check the validity of the header.
file.seek(SeekFrom::Start(offset as u64))?;
file.read_exact(&mut buffer)?;
if buffer != [80, 69, 0, 0] {
return Err(Error::new(ErrorKind::InvalidData, "invalid PE header"));
}
// Extract the magic number.
let mut buffer = [0; 2];
file.seek(SeekFrom::Current(20))?;
file.read_exact(&mut buffer)?; | /// Checks that a `libclang` shared library matches the target platform.
fn validate_library(path: &Path) -> Result<(), String> {
if cfg!(any(target_os = "linux", target_os = "freebsd")) {
let class = parse_elf_header(path).map_err(|e| e.to_string())?;
if cfg!(target_pointer_width = "32") && class != 1 {
return Err("invalid ELF class (64-bit)".into());
}
if cfg!(target_pointer_width = "64") && class != 2 {
return Err("invalid ELF class (32-bit)".into());
}
Ok(())
} else if cfg!(target_os = "windows") {
let magic = parse_pe_header(path).map_err(|e| e.to_string())?;
if cfg!(target_pointer_width = "32") && magic != 267 {
return Err("invalid DLL (64-bit)".into());
}
if cfg!(target_pointer_width = "64") && magic != 523 {
return Err("invalid DLL (32-bit)".into());
}
Ok(())
} else {
Ok(())
}
}
//================================================
// Searching
//================================================
/// Extracts the version components in a `libclang` shared library filename.
fn parse_version(filename: &str) -> Vec<u32> {
let version = if let Some(version) = filename.strip_prefix("libclang.so.") {
version
} else if filename.starts_with("libclang-") {
&filename[9..filename.len() - 3]
} else {
return vec![];
};
version.split('.').map(|s| s.parse().unwrap_or(0)).collect()
}
/// Finds `libclang` shared libraries and returns the paths to, filenames of,
/// and versions of those shared libraries.
fn search_libclang_directories(runtime: bool) -> Result<Vec<(PathBuf, String, Vec<u32>)>, String> {
let mut files = vec![format!(
"{}clang{}",
env::consts::DLL_PREFIX,
env::consts::DLL_SUFFIX
)];
if cfg!(target_os = "linux") {
// Some Linux distributions don't create a `libclang.so` symlink, so we
// need to look for versioned files (e.g., `libclang-3.9.so`).
files.push("libclang-*.so".into());
// Some Linux distributions don't create a `libclang.so` symlink and
// don't have versioned files as described above, so we need to look for
// suffix versioned files (e.g., `libclang.so.1`). However, `ld` cannot
// link to these files, so this will only be included when linking at
// runtime.
if runtime {
files.push("libclang.so.*".into());
files.push("libclang-*.so.*".into());
}
}
if cfg!(any(
target_os = "freebsd",
target_os = "haiku",
target_os = "netbsd",
target_os = "openbsd",
)) {
// Some BSD distributions don't create a `libclang.so` symlink either,
// but use a different naming scheme for versioned files (e.g.,
// `libclang.so.7.0`).
files.push("libclang.so.*".into());
}
if cfg!(target_os = "windows") {
// The official LLVM build uses `libclang.dll` on Windows instead of
// `clang.dll`. However, unofficial builds such as MinGW use `clang.dll`.
files.push("libclang.dll".into());
}
// Find and validate `libclang` shared libraries and collect the versions.
let mut valid = vec![];
let mut invalid = vec![];
for (directory, filename) in common::search_libclang_directories(&files, "LIBCLANG_PATH") {
let path = directory.join(&filename);
match validate_library(&path) {
Ok(()) => {
let version = parse_version(&filename);
valid.push((directory, filename, version))
}
Err(message) => invalid.push(format!("({}: {})", path.display(), message)),
}
}
if !valid.is_empty() {
return Ok(valid);
}
let message = format!(
"couldn't find any valid shared libraries matching: [{}], set the \
`LIBCLANG_PATH` environment variable to a path where one of these files \
can be found (invalid: [{}])",
files
.iter()
.map(|f| format!("'{}'", f))
.collect::<Vec<_>>()
.join(", "),
invalid.join(", "),
);
Err(message)
}
/// Finds the "best" `libclang` shared library and returns the directory and
/// filename of that library.
pub fn find(runtime: bool) -> Result<(PathBuf, String), String> {
search_libclang_directories(runtime)?
.iter()
// We want to find the `libclang` shared library with the highest
// version number, hence `max_by_key` below.
//
// However, in the case where there are multiple such `libclang` shared
// libraries, we want to use the order in which they appeared in the
// list returned by `search_libclang_directories` as a tiebreaker since
// that function returns `libclang` shared libraries in descending order
// of preference by how they were found.
//
// `max_by_key`, perhaps surprisingly, returns the *last* item with the
// maximum key rather than the first which results in the opposite of
// the tiebreaking behavior we want. This is easily fixed by reversing
// the list first.
.rev()
.max_by_key(|f| &f.2)
.cloned()
.map(|(path, filename, _)| (path, filename))
.ok_or_else(|| "unreachable".into())
}
//================================================
// Linking
//================================================
/// Finds and links to a `libclang` shared library.
#[cfg(not(feature = "runtime"))]
pub fn link() {
let cep = common::CommandErrorPrinter::default();
use std::fs;
let (directory, filename) = find(false).unwrap();
println!("cargo:rustc-link-search={}", directory.display());
if cfg!(all(target_os = "windows", target_env = "msvc")) {
// Find the `libclang` stub static library required for the MSVC
// toolchain.
let lib = if !directory.ends_with("bin") {
directory
} else {
directory.parent().unwrap().join("lib")
};
if lib.join("libclang.lib").exists() {
println!("cargo:rustc-link-search={}", lib.display());
} else if lib.join("libclang.dll.a").exists() {
// MSYS and MinGW use `libclang.dll.a` instead of `libclang.lib`.
// It is linkable with the MSVC linker, but Rust doesn't recognize
// the `.a` suffix, so we need to copy it with a different name.
//
// FIXME: Maybe we can just hardlink or symlink it?
let out = env::var("OUT_DIR").unwrap();
fs::copy(
lib.join("libclang.dll.a"),
Path::new(&out).join("libclang.lib"),
)
.unwrap();
println!("cargo:rustc-link-search=native={}", out);
} else {
panic!(
"using '{}', so 'libclang.lib' or 'libclang.dll.a' must be \
available in {}",
filename,
lib.display(),
);
}
println!("cargo:rustc-link-lib=dylib=libclang");
} else {
let name = filename.trim_start_matches("lib");
// Strip extensions and trailing version numbers (e.g., the `.so.7.0` in
// `libclang.so.7.0`).
let name = match name.find(".dylib").or_else(|| name.find(".so")) {
Some(index) => &name[0..index],
None => name,
};
println!("cargo:rustc-link-lib=dylib={}", name);
}
cep.discard();
} | Ok(u16::from_le_bytes(buffer))
}
| random_line_split |
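The long comment in `find` explains the `.rev()` before `max_by_key`: `max_by_key` returns the *last* element with the maximum key, while the search results arrive in descending order of preference, so first-wins tiebreaking is what's wanted. A small sketch of that behavior (the `(name, version)` pairs are hypothetical data, not real search results):

```rust
// Hypothetical stand-in for the search results: (name, version) pairs in
// descending order of preference. pick_best mirrors find()'s rev + max_by_key.
fn pick_best<'a>(found: &'a [(&'a str, Vec<u32>)]) -> &'a str {
    found.iter().rev().max_by_key(|f| &f.1).unwrap().0
}

fn main() {
    let found = [
        ("preferred", vec![9u32, 0]),
        ("fallback", vec![9, 0]),
        ("old", vec![3, 9]),
    ];
    // Without rev(), max_by_key keeps "fallback" (the *last* maximal element);
    // reversing first makes the original first maximal element win.
    assert_eq!(found.iter().max_by_key(|f| &f.1).unwrap().0, "fallback");
    assert_eq!(pick_best(&found), "preferred");
    println!("ok");
}
```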
dynamic.rs | // SPDX-License-Identifier: Apache-2.0
use std::env;
use std::fs::File;
use std::io::{self, Error, ErrorKind, Read, Seek, SeekFrom};
use std::path::{Path, PathBuf};
use super::common;
//================================================
// Validation
//================================================
/// Extracts the ELF class from the ELF header in a shared library.
fn parse_elf_header(path: &Path) -> io::Result<u8> {
let mut file = File::open(path)?;
let mut buffer = [0; 5];
file.read_exact(&mut buffer)?;
if buffer[..4] == [127, 69, 76, 70] {
Ok(buffer[4])
} else {
Err(Error::new(ErrorKind::InvalidData, "invalid ELF header"))
}
}
/// Extracts the magic number from the PE header in a shared library.
fn parse_pe_header(path: &Path) -> io::Result<u16> {
let mut file = File::open(path)?;
// Extract the header offset.
let mut buffer = [0; 4];
let start = SeekFrom::Start(0x3C);
file.seek(start)?;
file.read_exact(&mut buffer)?;
let offset = i32::from_le_bytes(buffer);
// Check the validity of the header.
file.seek(SeekFrom::Start(offset as u64))?;
file.read_exact(&mut buffer)?;
if buffer != [80, 69, 0, 0] {
return Err(Error::new(ErrorKind::InvalidData, "invalid PE header"));
}
// Extract the magic number.
let mut buffer = [0; 2];
file.seek(SeekFrom::Current(20))?;
file.read_exact(&mut buffer)?;
Ok(u16::from_le_bytes(buffer))
}
/// Checks that a `libclang` shared library matches the target platform.
fn validate_library(path: &Path) -> Result<(), String> {
if cfg!(any(target_os = "linux", target_os = "freebsd")) {
let class = parse_elf_header(path).map_err(|e| e.to_string())?;
if cfg!(target_pointer_width = "32") && class != 1 {
return Err("invalid ELF class (64-bit)".into());
}
if cfg!(target_pointer_width = "64") && class != 2 {
return Err("invalid ELF class (32-bit)".into());
}
Ok(())
} else if cfg!(target_os = "windows") {
let magic = parse_pe_header(path).map_err(|e| e.to_string())?;
if cfg!(target_pointer_width = "32") && magic != 267 {
return Err("invalid DLL (64-bit)".into());
}
if cfg!(target_pointer_width = "64") && magic != 523 {
return Err("invalid DLL (32-bit)".into());
}
Ok(())
} else {
Ok(())
}
}
//================================================
// Searching
//================================================
/// Extracts the version components from a `libclang` shared library filename.
fn parse_version(filename: &str) -> Vec<u32> {
let version = if let Some(version) = filename.strip_prefix("libclang.so.") {
version
} else if filename.starts_with("libclang-") {
&filename[9..filename.len() - 3]
} else {
return vec![];
};
version.split('.').map(|s| s.parse().unwrap_or(0)).collect()
}
/// Finds `libclang` shared libraries and returns the paths to, filenames of,
/// and versions of those shared libraries.
fn search_libclang_directories(runtime: bool) -> Result<Vec<(PathBuf, String, Vec<u32>)>, String> {
let mut files = vec![format!(
"{}clang{}",
env::consts::DLL_PREFIX,
env::consts::DLL_SUFFIX
)];
if cfg!(target_os = "linux") {
// Some Linux distributions don't create a `libclang.so` symlink, so we
// need to look for versioned files (e.g., `libclang-3.9.so`).
files.push("libclang-*.so".into());
// Some Linux distributions don't create a `libclang.so` symlink and
// don't have versioned files as described above, so we need to look for
// suffix versioned files (e.g., `libclang.so.1`). However, `ld` cannot
// link to these files, so this will only be included when linking at
// runtime.
if runtime {
files.push("libclang.so.*".into());
files.push("libclang-*.so.*".into());
}
}
if cfg!(any(
target_os = "freebsd",
target_os = "haiku",
target_os = "netbsd",
target_os = "openbsd",
)) {
// Some BSD distributions don't create a `libclang.so` symlink either,
// but use a different naming scheme for versioned files (e.g.,
// `libclang.so.7.0`).
files.push("libclang.so.*".into());
}
if cfg!(target_os = "windows") {
// The official LLVM build uses `libclang.dll` on Windows instead of
// `clang.dll`. However, unofficial builds such as MinGW use `clang.dll`.
files.push("libclang.dll".into());
}
// Find and validate `libclang` shared libraries and collect the versions.
let mut valid = vec![];
let mut invalid = vec![];
for (directory, filename) in common::search_libclang_directories(&files, "LIBCLANG_PATH") {
let path = directory.join(&filename);
match validate_library(&path) {
Ok(()) => {
let version = parse_version(&filename);
valid.push((directory, filename, version))
}
Err(message) => invalid.push(format!("({}: {})", path.display(), message)),
}
}
if !valid.is_empty() {
return Ok(valid);
}
let message = format!(
"couldn't find any valid shared libraries matching: [{}], set the \
`LIBCLANG_PATH` environment variable to a path where one of these files \
can be found (invalid: [{}])",
files
.iter()
.map(|f| format!("'{}'", f))
.collect::<Vec<_>>()
.join(", "),
invalid.join(", "),
);
Err(message)
}
/// Finds the "best" `libclang` shared library and returns the directory and
/// filename of that library.
pub fn find(runtime: bool) -> Result<(PathBuf, String), String> {
search_libclang_directories(runtime)?
.iter()
// We want to find the `libclang` shared library with the highest
// version number, hence `max_by_key` below.
//
// However, in the case where there are multiple such `libclang` shared
// libraries, we want to use the order in which they appeared in the
// list returned by `search_libclang_directories` as a tiebreaker since
// that function returns `libclang` shared libraries in descending order
// of preference by how they were found.
//
// `max_by_key`, perhaps surprisingly, returns the *last* item with the
// maximum key rather than the first which results in the opposite of
// the tiebreaking behavior we want. This is easily fixed by reversing
// the list first.
.rev()
.max_by_key(|f| &f.2)
.cloned()
.map(|(path, filename, _)| (path, filename))
.ok_or_else(|| "unreachable".into())
}
//================================================
// Linking
//================================================
/// Finds and links to a `libclang` shared library.
#[cfg(not(feature = "runtime"))]
pub fn link() {
let cep = common::CommandErrorPrinter::default();
use std::fs;
let (directory, filename) = find(false).unwrap();
println!("cargo:rustc-link-search={}", directory.display());
if cfg!(all(target_os = "windows", target_env = "msvc")) {
// Find the `libclang` stub static library required for the MSVC
// toolchain.
let lib = if !directory.ends_with("bin") {
directory
} else {
directory.parent().unwrap().join("lib")
};
if lib.join("libclang.lib").exists() {
println!("cargo:rustc-link-search={}", lib.display());
} else if lib.join("libclang.dll.a").exists() {
// MSYS and MinGW use `libclang.dll.a` instead of `libclang.lib`.
// It is linkable with the MSVC linker, but Rust doesn't recognize
// the `.a` suffix, so we need to copy it with a different name.
//
// FIXME: Maybe we can just hardlink or symlink it?
let out = env::var("OUT_DIR").unwrap();
fs::copy(
lib.join("libclang.dll.a"),
Path::new(&out).join("libclang.lib"),
)
.unwrap();
println!("cargo:rustc-link-search=native={}", out);
} else {
panic!(
"using '{}', so 'libclang.lib' or 'libclang.dll.a' must be \
available in {}",
filename,
lib.display(),
);
}
println!("cargo:rustc-link-lib=dylib=libclang");
} else {
let name = filename.trim_start_matches("lib");
// Strip extensions and trailing version numbers (e.g., the `.so.7.0` in
// `libclang.so.7.0`).
let name = match name.find(".dylib").or_else(|| name.find(".so")) {
Some(index) => &name[0..index],
None => name,
};
println!("cargo:rustc-link-lib=dylib={}", name);
}
cep.discard();
}
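The rename-by-copy workaround in `link` above can be sketched on its own: Rust's MSVC-targeting link search only recognizes `.lib`, so an MSYS/MinGW import library named `libclang.dll.a` is copied under the name `libclang.lib`. This standalone sketch substitutes a temporary directory for Cargo's `OUT_DIR`:

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Copy an import library to a name the MSVC linker search will accept.
fn expose_as_lib(dll_a: &Path, out_dir: &Path) -> std::io::Result<PathBuf> {
    let dest = out_dir.join("libclang.lib");
    fs::copy(dll_a, &dest)?;
    Ok(dest)
}

fn main() -> std::io::Result<()> {
    let tmp = std::env::temp_dir();
    let src = tmp.join("libclang.dll.a");
    fs::write(&src, b"fake import library")?;
    let dest = expose_as_lib(&src, &tmp)?;
    assert!(dest.ends_with("libclang.lib"));
    fs::remove_file(&src)?;
    fs::remove_file(&dest)?;
    Ok(())
}
```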