// solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/instruction.rs
//! Instruction types
use {
super::{
check_program_account, check_spl_token_program_account,
error::TokenError,
extension::{transfer_fee::instruction::TransferFeeInstruction, ExtensionType},
native_mint,
pod::{pod_from_bytes, pod_get_packed_len},
},
bytemuck::Pod,
solana_sdk::{
instruction::{AccountMeta, Instruction},
program_error::ProgramError,
program_option::COption,
pubkey::{Pubkey, PUBKEY_BYTES},
system_program, sysvar,
},
std::{
convert::{TryFrom, TryInto},
mem::size_of,
},
};
/// Minimum number of multisignature signers (min N)
pub const MIN_SIGNERS: usize = 1;
/// Maximum number of multisignature signers (max N)
pub const MAX_SIGNERS: usize = 11;
/// Serialized length of a u16, for unpacking
const U16_BYTES: usize = 2;
/// Serialized length of a u64, for unpacking
const U64_BYTES: usize = 8;
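The `U16_BYTES`/`U64_BYTES` constants above drive a fixed-width little-endian decode pattern used throughout this module's `unpack_*` helpers. The following is a std-only, illustrative sketch of that pattern (the helper name `unpack_le_u64` is hypothetical, not part of this module):

```rust
// Sketch of the fixed-width little-endian decode pattern used by the
// `unpack_u16`/`unpack_u64` helpers in this module (std-only, illustrative).
fn unpack_le_u64(input: &[u8]) -> Option<(u64, &[u8])> {
    // Take exactly 8 bytes, convert to [u8; 8], then decode little-endian.
    let value = input
        .get(..8)
        .and_then(|slice| <[u8; 8]>::try_from(slice).ok())
        .map(u64::from_le_bytes)?;
    // Return the decoded value plus the unconsumed remainder of the input.
    Some((value, &input[8..]))
}

fn main() {
    let mut bytes = 42u64.to_le_bytes().to_vec();
    bytes.push(0xFF); // trailing byte that should survive as the remainder
    let (value, rest) = unpack_le_u64(&bytes).unwrap();
    assert_eq!(value, 42);
    assert_eq!(rest, &[0xFF]);
}
```

The real helpers return `Result<_, ProgramError>` instead of `Option`, mapping a short buffer to `TokenError::InvalidInstruction`.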
/// Instructions supported by the token program.
#[repr(C)]
#[derive(Clone, Debug, PartialEq)]
pub enum TokenInstruction<'a> {
/// Initializes a new mint and optionally deposits all the newly minted
/// tokens in an account.
///
/// The `InitializeMint` instruction requires no signers and MUST be
/// included within the same Transaction as the system program's
/// `CreateAccount` instruction that creates the account being initialized.
/// Otherwise another party can acquire ownership of the uninitialized
/// account.
///
/// All extensions must be initialized before calling this instruction.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The mint to initialize.
/// 1. `[]` Rent sysvar
///
InitializeMint {
/// Number of base 10 digits to the right of the decimal place.
decimals: u8,
/// The authority/multisignature to mint tokens.
mint_authority: Pubkey,
/// The freeze authority/multisignature of the mint.
freeze_authority: COption<Pubkey>,
},
/// Initializes a new account to hold tokens. If this account is associated
/// with the native mint then the token balance of the initialized account
/// will be equal to the amount of SOL in the account. If this account is
/// associated with another mint, that mint must be initialized before this
/// command can succeed.
///
/// The `InitializeAccount` instruction requires no signers and MUST be
/// included within the same Transaction as the system program's
/// `CreateAccount` instruction that creates the account being initialized.
/// Otherwise another party can acquire ownership of the uninitialized
/// account.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The account to initialize.
/// 1. `[]` The mint this account will be associated with.
/// 2. `[]` The new account's owner/multisignature.
/// 3. `[]` Rent sysvar
InitializeAccount,
/// Initializes a multisignature account with N provided signers.
///
/// Multisignature accounts can be used in place of any single owner/delegate
/// accounts in any token instruction that requires an owner/delegate to be
/// present. The variant field represents the number of signers (M)
/// required to validate this multisignature account.
///
/// The `InitializeMultisig` instruction requires no signers and MUST be
/// included within the same Transaction as the system program's
/// `CreateAccount` instruction that creates the account being initialized.
/// Otherwise another party can acquire ownership of the uninitialized
/// account.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The multisignature account to initialize.
/// 1. `[]` Rent sysvar
/// 2. ..2+N. `[]` The signer accounts, must equal N where 1 <= N <=
/// 11.
InitializeMultisig {
/// The number of signers (M) required to validate this multisignature
/// account.
m: u8,
},
/// NOTE This instruction is deprecated in favor of `TransferChecked` or
/// `TransferCheckedWithFee`
///
/// Transfers tokens from one account to another either directly or via a
/// delegate. If this account is associated with the native mint then equal
/// amounts of SOL and Tokens will be transferred to the destination
/// account.
///
/// If either account contains a `TransferFeeAmount` extension, this will fail.
/// Mints with the `TransferFeeConfig` extension are required in order to assess the fee.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The source account.
/// 1. `[writable]` The destination account.
/// 2. `[signer]` The source account's owner/delegate.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The source account.
/// 1. `[writable]` The destination account.
/// 2. `[]` The source account's multisignature owner/delegate.
/// 3. ..3+M `[signer]` M signer accounts.
#[deprecated(
since = "4.0.0",
note = "please use `TransferChecked` or `TransferCheckedWithFee` instead"
)]
Transfer {
/// The amount of tokens to transfer.
amount: u64,
},
/// Approves a delegate. A delegate is given the authority over tokens on
/// behalf of the source account's owner.
///
/// Accounts expected by this instruction:
///
/// * Single owner
/// 0. `[writable]` The source account.
/// 1. `[]` The delegate.
/// 2. `[signer]` The source account owner.
///
/// * Multisignature owner
/// 0. `[writable]` The source account.
/// 1. `[]` The delegate.
/// 2. `[]` The source account's multisignature owner.
/// 3. ..3+M `[signer]` M signer accounts
Approve {
/// The amount of tokens the delegate is approved for.
amount: u64,
},
/// Revokes the delegate's authority.
///
/// Accounts expected by this instruction:
///
/// * Single owner
/// 0. `[writable]` The source account.
/// 1. `[signer]` The source account owner or current delegate.
///
/// * Multisignature owner
/// 0. `[writable]` The source account.
/// 1. `[]` The source account's multisignature owner or current delegate.
/// 2. ..2+M `[signer]` M signer accounts
Revoke,
/// Sets a new authority of a mint or account.
///
/// Accounts expected by this instruction:
///
/// * Single authority
/// 0. `[writable]` The mint or account to change the authority of.
/// 1. `[signer]` The current authority of the mint or account.
///
/// * Multisignature authority
/// 0. `[writable]` The mint or account to change the authority of.
/// 1. `[]` The mint's or account's current multisignature authority.
/// 2. ..2+M `[signer]` M signer accounts
SetAuthority {
/// The type of authority to update.
authority_type: AuthorityType,
/// The new authority
new_authority: COption<Pubkey>,
},
/// Mints new tokens to an account. The native mint does not support
/// minting.
///
/// Accounts expected by this instruction:
///
/// * Single authority
/// 0. `[writable]` The mint.
/// 1. `[writable]` The account to mint tokens to.
/// 2. `[signer]` The mint's minting authority.
///
/// * Multisignature authority
/// 0. `[writable]` The mint.
/// 1. `[writable]` The account to mint tokens to.
/// 2. `[]` The mint's multisignature mint-tokens authority.
/// 3. ..3+M `[signer]` M signer accounts.
MintTo {
/// The amount of new tokens to mint.
amount: u64,
},
/// Burns tokens by removing them from an account. `Burn` does not support
/// accounts associated with the native mint, use `CloseAccount` instead.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The account to burn from.
/// 1. `[writable]` The token mint.
/// 2. `[signer]` The account's owner/delegate.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The account to burn from.
/// 1. `[writable]` The token mint.
/// 2. `[]` The account's multisignature owner/delegate.
/// 3. ..3+M `[signer]` M signer accounts.
Burn {
/// The amount of tokens to burn.
amount: u64,
},
/// Close an account by transferring all its SOL to the destination account.
/// Non-native accounts may only be closed if their token amount is zero.
///
/// Accounts with the `TransferFeeAmount` extension may only be closed if the withheld
/// amount is zero.
///
/// Mints may be closed if they have the `MintCloseAuthority` extension and their token
/// supply is zero.
///
/// Note that if the account to close has a `ConfidentialTransferExtension`, the
/// `ConfidentialTransferInstruction::EmptyAccount` instruction must precede this
/// instruction.
///
/// Accounts expected by this instruction:
///
/// * Single owner
/// 0. `[writable]` The account to close.
/// 1. `[writable]` The destination account.
/// 2. `[signer]` The account's owner.
///
/// * Multisignature owner
/// 0. `[writable]` The account to close.
/// 1. `[writable]` The destination account.
/// 2. `[]` The account's multisignature owner.
/// 3. ..3+M `[signer]` M signer accounts.
CloseAccount,
/// Freeze an Initialized account using the Mint's freeze_authority (if
/// set).
///
/// Accounts expected by this instruction:
///
/// * Single owner
/// 0. `[writable]` The account to freeze.
/// 1. `[]` The token mint.
/// 2. `[signer]` The mint freeze authority.
///
/// * Multisignature owner
/// 0. `[writable]` The account to freeze.
/// 1. `[]` The token mint.
/// 2. `[]` The mint's multisignature freeze authority.
/// 3. ..3+M `[signer]` M signer accounts.
FreezeAccount,
/// Thaw a Frozen account using the Mint's freeze_authority (if set).
///
/// Accounts expected by this instruction:
///
/// * Single owner
/// 0. `[writable]` The account to thaw.
/// 1. `[]` The token mint.
/// 2. `[signer]` The mint freeze authority.
///
/// * Multisignature owner
/// 0. `[writable]` The account to thaw.
/// 1. `[]` The token mint.
/// 2. `[]` The mint's multisignature freeze authority.
/// 3. ..3+M `[signer]` M signer accounts.
ThawAccount,
/// Transfers tokens from one account to another either directly or via a
/// delegate. If this account is associated with the native mint then equal
/// amounts of SOL and Tokens will be transferred to the destination
/// account.
///
/// This instruction differs from Transfer in that the token mint and
/// decimals value are checked by the caller. This may be useful when
/// creating transactions offline or within a hardware wallet.
///
/// If either account contains a `TransferFeeAmount` extension, the fee is
/// withheld in the destination account.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The source account.
/// 1. `[]` The token mint.
/// 2. `[writable]` The destination account.
/// 3. `[signer]` The source account's owner/delegate.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The source account.
/// 1. `[]` The token mint.
/// 2. `[writable]` The destination account.
/// 3. `[]` The source account's multisignature owner/delegate.
/// 4. ..4+M `[signer]` M signer accounts.
TransferChecked {
/// The amount of tokens to transfer.
amount: u64,
/// Expected number of base 10 digits to the right of the decimal place.
decimals: u8,
},
/// Approves a delegate. A delegate is given the authority over tokens on
/// behalf of the source account's owner.
///
/// This instruction differs from Approve in that the token mint and
/// decimals value are checked by the caller. This may be useful when
/// creating transactions offline or within a hardware wallet.
///
/// Accounts expected by this instruction:
///
/// * Single owner
/// 0. `[writable]` The source account.
/// 1. `[]` The token mint.
/// 2. `[]` The delegate.
/// 3. `[signer]` The source account owner.
///
/// * Multisignature owner
/// 0. `[writable]` The source account.
/// 1. `[]` The token mint.
/// 2. `[]` The delegate.
/// 3. `[]` The source account's multisignature owner.
/// 4. ..4+M `[signer]` M signer accounts
ApproveChecked {
/// The amount of tokens the delegate is approved for.
amount: u64,
/// Expected number of base 10 digits to the right of the decimal place.
decimals: u8,
},
/// Mints new tokens to an account. The native mint does not support
/// minting.
///
/// This instruction differs from MintTo in that the decimals value is
/// checked by the caller. This may be useful when creating transactions
/// offline or within a hardware wallet.
///
/// Accounts expected by this instruction:
///
/// * Single authority
/// 0. `[writable]` The mint.
/// 1. `[writable]` The account to mint tokens to.
/// 2. `[signer]` The mint's minting authority.
///
/// * Multisignature authority
/// 0. `[writable]` The mint.
/// 1. `[writable]` The account to mint tokens to.
/// 2. `[]` The mint's multisignature mint-tokens authority.
/// 3. ..3+M `[signer]` M signer accounts.
MintToChecked {
/// The amount of new tokens to mint.
amount: u64,
/// Expected number of base 10 digits to the right of the decimal place.
decimals: u8,
},
/// Burns tokens by removing them from an account. `BurnChecked` does not
/// support accounts associated with the native mint, use `CloseAccount`
/// instead.
///
/// This instruction differs from Burn in that the decimals value is checked
/// by the caller. This may be useful when creating transactions offline or
/// within a hardware wallet.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The account to burn from.
/// 1. `[writable]` The token mint.
/// 2. `[signer]` The account's owner/delegate.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The account to burn from.
/// 1. `[writable]` The token mint.
/// 2. `[]` The account's multisignature owner/delegate.
/// 3. ..3+M `[signer]` M signer accounts.
BurnChecked {
/// The amount of tokens to burn.
amount: u64,
/// Expected number of base 10 digits to the right of the decimal place.
decimals: u8,
},
/// Like InitializeAccount, but the owner pubkey is passed via instruction data
/// rather than the accounts list. This variant may be preferable when using
/// Cross Program Invocation from an instruction that does not need the owner's
/// `AccountInfo` otherwise.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The account to initialize.
/// 1. `[]` The mint this account will be associated with.
/// 2. `[]` Rent sysvar
InitializeAccount2 {
/// The new account's owner/multisignature.
owner: Pubkey,
},
/// Given a wrapped / native token account (a token account containing SOL),
/// updates its amount field based on the account's underlying `lamports`.
/// This is useful if a non-wrapped SOL account uses `system_instruction::transfer`
/// to move lamports to a wrapped token account, and needs to have its token
/// `amount` field updated.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The native token account to sync with its underlying lamports.
SyncNative,
/// Like InitializeAccount2, but does not require the Rent sysvar to be provided
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The account to initialize.
/// 1. `[]` The mint this account will be associated with.
InitializeAccount3 {
/// The new account's owner/multisignature.
owner: Pubkey,
},
/// Like InitializeMultisig, but does not require the Rent sysvar to be provided
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The multisignature account to initialize.
/// 1. ..1+N. `[]` The signer accounts, must equal N where 1 <= N <=
/// 11.
InitializeMultisig2 {
/// The number of signers (M) required to validate this multisignature
/// account.
m: u8,
},
/// Like InitializeMint, but does not require the Rent sysvar to be provided
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The mint to initialize.
///
InitializeMint2 {
/// Number of base 10 digits to the right of the decimal place.
decimals: u8,
/// The authority/multisignature to mint tokens.
mint_authority: Pubkey,
/// The freeze authority/multisignature of the mint.
freeze_authority: COption<Pubkey>,
},
/// Gets the required size of an account for the given mint as a little-endian
/// `u64`.
///
/// Return data can be fetched using `sol_get_return_data` and deserializing
/// the return data as a little-endian `u64`.
///
/// Accounts expected by this instruction:
///
/// 0. `[]` The mint to calculate for
GetAccountDataSize {
/// Additional extension types to include in the returned account size
extension_types: Vec<ExtensionType>,
},
/// Initialize the Immutable Owner extension for the given token account
///
/// Fails if the account has already been initialized, so must be called before
/// `InitializeAccount`.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The account to initialize.
///
/// Data expected by this instruction:
/// None
///
InitializeImmutableOwner,
/// Convert an Amount of tokens to a UiAmount `string`, using the given mint.
///
/// Fails on an invalid mint.
///
/// Return data can be fetched using `sol_get_return_data` and deserialized with
/// `String::from_utf8`.
///
/// Accounts expected by this instruction:
///
/// 0. `[]` The mint to calculate for
AmountToUiAmount {
/// The amount of tokens to convert.
amount: u64,
},
/// Convert a UiAmount of tokens to a little-endian `u64` raw Amount, using the given mint.
///
/// Return data can be fetched using `sol_get_return_data` and deserializing
/// the return data as a little-endian `u64`.
///
/// Accounts expected by this instruction:
///
/// 0. `[]` The mint to calculate for
UiAmountToAmount {
/// The ui_amount of tokens to convert.
ui_amount: &'a str,
},
/// Initialize the close account authority on a new mint.
///
/// Fails if the mint has already been initialized, so must be called before
/// `InitializeMint`.
///
/// The mint must have exactly enough space allocated for the base mint (82
/// bytes), plus 83 bytes of padding, 1 byte reserved for the account type,
/// then space required for this extension, plus any others.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The mint to initialize.
InitializeMintCloseAuthority {
/// Authority that must sign the `CloseAccount` instruction on a mint
close_authority: COption<Pubkey>,
},
/// The common instruction prefix for Transfer Fee extension instructions.
///
/// See `extension::transfer_fee::instruction::TransferFeeInstruction` for
/// further details about the extended instructions that share this instruction prefix
TransferFeeExtension(TransferFeeInstruction),
/// The common instruction prefix for Confidential Transfer extension instructions.
///
/// See `extension::confidential_transfer::instruction::ConfidentialTransferInstruction` for
/// further details about the extended instructions that share this instruction prefix
ConfidentialTransferExtension,
/// The common instruction prefix for Default Account State extension instructions.
///
/// See `extension::default_account_state::instruction::DefaultAccountStateInstruction` for
/// further details about the extended instructions that share this instruction prefix
DefaultAccountStateExtension,
/// Check to see if a token account is large enough for a list of ExtensionTypes, and if not,
/// use reallocation to increase the data size.
///
/// Accounts expected by this instruction:
///
/// * Single owner
/// 0. `[writable]` The account to reallocate.
/// 1. `[signer, writable]` The payer account to fund reallocation
/// 2. `[]` System program for reallocation funding
/// 3. `[signer]` The account's owner.
///
/// * Multisignature owner
/// 0. `[writable]` The account to reallocate.
/// 1. `[signer, writable]` The payer account to fund reallocation
/// 2. `[]` System program for reallocation funding
/// 3. `[]` The account's multisignature owner/delegate.
/// 4. ..4+M `[signer]` M signer accounts.
///
Reallocate {
/// New extension types to include in the reallocated account
extension_types: Vec<ExtensionType>,
},
/// The common instruction prefix for Memo Transfer account extension instructions.
///
/// See `extension::memo_transfer::instruction::RequiredMemoTransfersInstruction` for
/// further details about the extended instructions that share this instruction prefix
MemoTransferExtension,
/// Creates the native mint.
///
/// This instruction only needs to be invoked once after deployment and is
/// permissionless. Wrapped SOL (`native_mint::id()`) will not be available
/// until this instruction is successfully executed.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable, signer]` Funding account (must be a system account)
/// 1. `[writable]` The native mint address
/// 2. `[]` System program for mint account funding
///
CreateNativeMint,
/// Initialize the non transferable extension for the given mint account
///
/// Fails if the account has already been initialized, so must be called before
/// `InitializeMint`.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The mint account to initialize.
///
/// Data expected by this instruction:
/// None
///
InitializeNonTransferableMint,
/// The common instruction prefix for Interest Bearing extension instructions.
///
/// See `extension::interest_bearing_mint::instruction::InterestBearingMintInstruction` for
/// further details about the extended instructions that share this instruction prefix
InterestBearingMintExtension,
}
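Before the `pack`/`unpack` implementation below, it may help to see the byte layout it produces. This std-only sketch models the `InitializeMint` instruction data: tag byte `0`, one decimals byte, a 32-byte mint-authority pubkey, then the freeze authority encoded as a `COption`-style prefix byte (`0` = None, `1` followed by 32 key bytes = Some). The raw `[u8; 32]` arrays stand in for `Pubkey` values and are illustrative only:

```rust
// Std-only sketch of the InitializeMint instruction data layout:
// [tag = 0][decimals][32-byte mint authority][COption freeze authority].
fn main() {
    let decimals = 9u8;
    let mint_authority = [0xAA_u8; 32]; // placeholder for a Pubkey
    let freeze_authority: Option<[u8; 32]> = None;

    let mut data = vec![0u8]; // instruction tag for InitializeMint
    data.push(decimals);
    data.extend_from_slice(&mint_authority);
    match freeze_authority {
        Some(key) => {
            data.push(1); // COption::Some marker
            data.extend_from_slice(&key);
        }
        None => data.push(0), // COption::None marker
    }

    // tag + decimals + 32-byte pubkey + 1-byte None marker = 35 bytes
    assert_eq!(data.len(), 35);
    assert_eq!(data[0], 0);
    assert_eq!(data[1], 9);
}
```

With a `Some` freeze authority, the data instead ends with `1` plus 32 key bytes, for 67 bytes total.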
impl<'a> TokenInstruction<'a> {
/// Unpacks a byte buffer into a [TokenInstruction](enum.TokenInstruction.html).
pub fn unpack(input: &'a [u8]) -> Result<Self, ProgramError> {
use TokenError::InvalidInstruction;
let (&tag, rest) = input.split_first().ok_or(InvalidInstruction)?;
Ok(match tag {
0 => {
let (&decimals, rest) = rest.split_first().ok_or(InvalidInstruction)?;
let (mint_authority, rest) = Self::unpack_pubkey(rest)?;
let (freeze_authority, _rest) = Self::unpack_pubkey_option(rest)?;
Self::InitializeMint {
mint_authority,
freeze_authority,
decimals,
}
}
1 => Self::InitializeAccount,
2 => {
let &m = rest.first().ok_or(InvalidInstruction)?;
Self::InitializeMultisig { m }
}
3 | 4 | 7 | 8 => {
let amount = rest
.get(..U64_BYTES)
.and_then(|slice| slice.try_into().ok())
.map(u64::from_le_bytes)
.ok_or(InvalidInstruction)?;
match tag {
#[allow(deprecated)]
3 => Self::Transfer { amount },
4 => Self::Approve { amount },
7 => Self::MintTo { amount },
8 => Self::Burn { amount },
_ => unreachable!(),
}
}
5 => Self::Revoke,
6 => {
let (authority_type, rest) = rest
.split_first()
.ok_or_else(|| ProgramError::from(InvalidInstruction))
.and_then(|(&t, rest)| Ok((AuthorityType::from(t)?, rest)))?;
let (new_authority, _rest) = Self::unpack_pubkey_option(rest)?;
Self::SetAuthority {
authority_type,
new_authority,
}
}
9 => Self::CloseAccount,
10 => Self::FreezeAccount,
11 => Self::ThawAccount,
12 => {
let (amount, decimals, _rest) = Self::unpack_amount_decimals(rest)?;
Self::TransferChecked { amount, decimals }
}
13 => {
let (amount, decimals, _rest) = Self::unpack_amount_decimals(rest)?;
Self::ApproveChecked { amount, decimals }
}
14 => {
let (amount, decimals, _rest) = Self::unpack_amount_decimals(rest)?;
Self::MintToChecked { amount, decimals }
}
15 => {
let (amount, decimals, _rest) = Self::unpack_amount_decimals(rest)?;
Self::BurnChecked { amount, decimals }
}
16 => {
let (owner, _rest) = Self::unpack_pubkey(rest)?;
Self::InitializeAccount2 { owner }
}
17 => Self::SyncNative,
18 => {
let (owner, _rest) = Self::unpack_pubkey(rest)?;
Self::InitializeAccount3 { owner }
}
19 => {
let &m = rest.first().ok_or(InvalidInstruction)?;
Self::InitializeMultisig2 { m }
}
20 => {
let (&decimals, rest) = rest.split_first().ok_or(InvalidInstruction)?;
let (mint_authority, rest) = Self::unpack_pubkey(rest)?;
let (freeze_authority, _rest) = Self::unpack_pubkey_option(rest)?;
Self::InitializeMint2 {
mint_authority,
freeze_authority,
decimals,
}
}
21 => {
let mut extension_types = vec![];
for chunk in rest.chunks(size_of::<ExtensionType>()) {
extension_types.push(chunk.try_into()?);
}
Self::GetAccountDataSize { extension_types }
}
22 => Self::InitializeImmutableOwner,
23 => {
let (amount, _rest) = Self::unpack_u64(rest)?;
Self::AmountToUiAmount { amount }
}
24 => {
let ui_amount = std::str::from_utf8(rest).map_err(|_| InvalidInstruction)?;
Self::UiAmountToAmount { ui_amount }
}
25 => {
let (close_authority, _rest) = Self::unpack_pubkey_option(rest)?;
Self::InitializeMintCloseAuthority { close_authority }
}
26 => {
let (instruction, _rest) = TransferFeeInstruction::unpack(rest)?;
Self::TransferFeeExtension(instruction)
}
27 => Self::ConfidentialTransferExtension,
28 => Self::DefaultAccountStateExtension,
29 => {
let mut extension_types = vec![];
for chunk in rest.chunks(size_of::<ExtensionType>()) {
extension_types.push(chunk.try_into()?);
}
Self::Reallocate { extension_types }
}
30 => Self::MemoTransferExtension,
31 => Self::CreateNativeMint,
32 => Self::InitializeNonTransferableMint,
33 => Self::InterestBearingMintExtension,
_ => return Err(TokenError::InvalidInstruction.into()),
})
}
/// Packs a [TokenInstruction](enum.TokenInstruction.html) into a byte buffer.
pub fn pack(&self) -> Vec<u8> {
let mut buf = Vec::with_capacity(size_of::<Self>());
match self {
&Self::InitializeMint {
ref mint_authority,
ref freeze_authority,
decimals,
} => {
buf.push(0);
buf.push(decimals);
buf.extend_from_slice(mint_authority.as_ref());
Self::pack_pubkey_option(freeze_authority, &mut buf);
}
Self::InitializeAccount => buf.push(1),
&Self::InitializeMultisig { m } => {
buf.push(2);
buf.push(m);
}
#[allow(deprecated)]
&Self::Transfer { amount } => {
buf.push(3);
buf.extend_from_slice(&amount.to_le_bytes());
}
&Self::Approve { amount } => {
buf.push(4);
buf.extend_from_slice(&amount.to_le_bytes());
}
&Self::MintTo { amount } => {
buf.push(7);
buf.extend_from_slice(&amount.to_le_bytes());
}
&Self::Burn { amount } => {
buf.push(8);
buf.extend_from_slice(&amount.to_le_bytes());
}
Self::Revoke => buf.push(5),
Self::SetAuthority {
authority_type,
ref new_authority,
} => {
buf.push(6);
buf.push(authority_type.into());
Self::pack_pubkey_option(new_authority, &mut buf);
}
Self::CloseAccount => buf.push(9),
Self::FreezeAccount => buf.push(10),
Self::ThawAccount => buf.push(11),
&Self::TransferChecked { amount, decimals } => {
buf.push(12);
buf.extend_from_slice(&amount.to_le_bytes());
buf.push(decimals);
}
&Self::ApproveChecked { amount, decimals } => {
buf.push(13);
buf.extend_from_slice(&amount.to_le_bytes());
buf.push(decimals);
}
&Self::MintToChecked { amount, decimals } => {
buf.push(14);
buf.extend_from_slice(&amount.to_le_bytes());
buf.push(decimals);
}
&Self::BurnChecked { amount, decimals } => {
buf.push(15);
buf.extend_from_slice(&amount.to_le_bytes());
buf.push(decimals);
}
&Self::InitializeAccount2 { owner } => {
buf.push(16);
buf.extend_from_slice(owner.as_ref());
}
&Self::SyncNative => {
buf.push(17);
}
&Self::InitializeAccount3 { owner } => {
buf.push(18);
buf.extend_from_slice(owner.as_ref());
}
&Self::InitializeMultisig2 { m } => {
buf.push(19);
buf.push(m);
}
&Self::InitializeMint2 {
ref mint_authority,
ref freeze_authority,
decimals,
} => {
buf.push(20);
buf.push(decimals);
buf.extend_from_slice(mint_authority.as_ref());
Self::pack_pubkey_option(freeze_authority, &mut buf);
}
Self::GetAccountDataSize { extension_types } => {
buf.push(21);
for extension_type in extension_types {
buf.extend_from_slice(&<[u8; 2]>::from(*extension_type));
}
}
&Self::InitializeImmutableOwner => {
buf.push(22);
}
&Self::AmountToUiAmount { amount } => {
buf.push(23);
buf.extend_from_slice(&amount.to_le_bytes());
}
Self::UiAmountToAmount { ui_amount } => {
buf.push(24);
buf.extend_from_slice(ui_amount.as_bytes());
}
Self::InitializeMintCloseAuthority { close_authority } => {
buf.push(25);
Self::pack_pubkey_option(close_authority, &mut buf);
}
Self::TransferFeeExtension(instruction) => {
buf.push(26);
TransferFeeInstruction::pack(instruction, &mut buf);
}
Self::ConfidentialTransferExtension => {
buf.push(27);
}
Self::DefaultAccountStateExtension => {
buf.push(28);
}
Self::Reallocate {
ref extension_types,
} => {
buf.push(29);
for extension_type in extension_types {
buf.extend_from_slice(&<[u8; 2]>::from(*extension_type));
}
}
Self::MemoTransferExtension => {
buf.push(30);
}
Self::CreateNativeMint => {
buf.push(31);
}
Self::InitializeNonTransferableMint => {
buf.push(32);
}
Self::InterestBearingMintExtension => {
buf.push(33);
}
};
buf
}
pub(crate) fn unpack_pubkey(input: &[u8]) -> Result<(Pubkey, &[u8]), ProgramError> {
let pk = input
.get(..PUBKEY_BYTES)
.map(|bytes| Pubkey::try_from(bytes).unwrap())
.ok_or(TokenError::InvalidInstruction)?;
Ok((pk, &input[PUBKEY_BYTES..]))
}
pub(crate) fn unpack_pubkey_option(
input: &[u8],
) -> Result<(COption<Pubkey>, &[u8]), ProgramError> {
match input.split_first() {
Option::Some((&0, rest)) => Ok((COption::None, rest)),
Option::Some((&1, rest)) => {
let (pk, rest) = Self::unpack_pubkey(rest)?;
Ok((COption::Some(pk), rest))
}
_ => Err(TokenError::InvalidInstruction.into()),
}
}
pub(crate) fn pack_pubkey_option(value: &COption<Pubkey>, buf: &mut Vec<u8>) {
match *value {
COption::Some(ref key) => {
buf.push(1);
buf.extend_from_slice(&key.to_bytes());
}
COption::None => buf.push(0),
}
}
pub(crate) fn unpack_u16(input: &[u8]) -> Result<(u16, &[u8]), ProgramError> {
let value = input
.get(..U16_BYTES)
.and_then(|slice| slice.try_into().ok())
.map(u16::from_le_bytes)
.ok_or(TokenError::InvalidInstruction)?;
Ok((value, &input[U16_BYTES..]))
}
pub(crate) fn unpack_u64(input: &[u8]) -> Result<(u64, &[u8]), ProgramError> {
let value = input
.get(..U64_BYTES)
.and_then(|slice| slice.try_into().ok())
.map(u64::from_le_bytes)
.ok_or(TokenError::InvalidInstruction)?;
Ok((value, &input[U64_BYTES..]))
}
pub(crate) fn unpack_amount_decimals(input: &[u8]) -> Result<(u64, u8, &[u8]), ProgramError> {
let (amount, rest) = Self::unpack_u64(input)?;
let (&decimals, rest) = rest.split_first().ok_or(TokenError::InvalidInstruction)?;
Ok((amount, decimals, rest))
}
}
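As a concrete illustration of the `pack`/`unpack` wire format above, this std-only sketch reproduces the `TransferChecked` encoding: tag byte `12`, the amount as 8 little-endian bytes, then one decimals byte. It uses plain `Vec<u8>` rather than the crate types, so it is a layout check only, not a substitute for the real round trip:

```rust
// Std-only illustration of the TransferChecked wire format handled by
// pack/unpack: [tag = 12][u64 amount, little-endian][decimals].
fn main() {
    let (amount, decimals): (u64, u8) = (1_000_000, 6);

    // Encode the same way `pack` does for TransferChecked.
    let mut buf = vec![12u8];
    buf.extend_from_slice(&amount.to_le_bytes());
    buf.push(decimals);
    assert_eq!(buf.len(), 10); // 1 tag + 8 amount + 1 decimals

    // Decode the same way `unpack_amount_decimals` does.
    let parsed_amount = u64::from_le_bytes(buf[1..9].try_into().unwrap());
    let parsed_decimals = buf[9];
    assert_eq!(parsed_amount, amount);
    assert_eq!(parsed_decimals, decimals);
}
```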
/// Specifies the authority type for SetAuthority instructions
#[repr(u8)]
#[derive(Clone, Debug, PartialEq)]
pub enum AuthorityType {
/// Authority to mint new tokens
MintTokens,
/// Authority to freeze any account associated with the Mint
FreezeAccount,
/// Owner of a given token account
AccountOwner,
/// Authority to close a token account
CloseAccount,
/// Authority to set the transfer fee
TransferFeeConfig,
/// Authority to withdraw withheld tokens from a mint
WithheldWithdraw,
/// Authority to close a mint account
CloseMint,
/// Authority to set the interest rate
InterestRate,
}
impl AuthorityType {
fn into(&self) -> u8 {
match self {
AuthorityType::MintTokens => 0,
AuthorityType::FreezeAccount => 1,
AuthorityType::AccountOwner => 2,
AuthorityType::CloseAccount => 3,
AuthorityType::TransferFeeConfig => 4,
AuthorityType::WithheldWithdraw => 5,
AuthorityType::CloseMint => 6,
AuthorityType::InterestRate => 7,
}
}
fn from(index: u8) -> Result<Self, ProgramError> {
match index {
0 => Ok(AuthorityType::MintTokens),
1 => Ok(AuthorityType::FreezeAccount),
2 => Ok(AuthorityType::AccountOwner),
3 => Ok(AuthorityType::CloseAccount),
4 => Ok(AuthorityType::TransferFeeConfig),
5 => Ok(AuthorityType::WithheldWithdraw),
6 => Ok(AuthorityType::CloseMint),
7 => Ok(AuthorityType::InterestRate),
_ => Err(TokenError::InvalidInstruction.into()),
}
}
}
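The `AuthorityType` conversions above pair an infallible encode with a fallible decode, since any `u8` outside 0..=7 is invalid. This std-only sketch shows the same pattern with a hypothetical two-variant enum (the names `Authority`, `encode`, and `decode` are illustrative, not part of this module):

```rust
// Illustrative stand-in for the AuthorityType <-> u8 mapping: encoding is
// total, decoding rejects unknown discriminants.
#[derive(Debug, PartialEq)]
enum Authority {
    MintTokens,
    FreezeAccount,
}

fn encode(a: &Authority) -> u8 {
    match a {
        Authority::MintTokens => 0,
        Authority::FreezeAccount => 1,
    }
}

fn decode(index: u8) -> Result<Authority, &'static str> {
    match index {
        0 => Ok(Authority::MintTokens),
        1 => Ok(Authority::FreezeAccount),
        _ => Err("invalid authority type"), // mirrors TokenError::InvalidInstruction
    }
}

fn main() {
    // Round trip succeeds for known variants; unknown bytes are rejected.
    assert_eq!(decode(encode(&Authority::FreezeAccount)).unwrap(), Authority::FreezeAccount);
    assert!(decode(9).is_err());
}
```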
/// Creates an `InitializeMint` instruction.
pub fn initialize_mint(
token_program_id: &Pubkey,
mint_pubkey: &Pubkey,
mint_authority_pubkey: &Pubkey,
freeze_authority_pubkey: Option<&Pubkey>,
decimals: u8,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let freeze_authority = freeze_authority_pubkey.cloned().into();
let data = TokenInstruction::InitializeMint {
mint_authority: *mint_authority_pubkey,
freeze_authority,
decimals,
}
.pack();
let accounts = vec![
AccountMeta::new(*mint_pubkey, false),
AccountMeta::new_readonly(sysvar::rent::id(), false),
];
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates an `InitializeMint2` instruction.
pub fn initialize_mint2(
token_program_id: &Pubkey,
mint_pubkey: &Pubkey,
mint_authority_pubkey: &Pubkey,
freeze_authority_pubkey: Option<&Pubkey>,
decimals: u8,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let freeze_authority = freeze_authority_pubkey.cloned().into();
let data = TokenInstruction::InitializeMint2 {
mint_authority: *mint_authority_pubkey,
freeze_authority,
decimals,
}
.pack();
let accounts = vec![AccountMeta::new(*mint_pubkey, false)];
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates an `InitializeAccount` instruction.
pub fn initialize_account(
token_program_id: &Pubkey,
account_pubkey: &Pubkey,
mint_pubkey: &Pubkey,
owner_pubkey: &Pubkey,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::InitializeAccount.pack();
let accounts = vec![
AccountMeta::new(*account_pubkey, false),
AccountMeta::new_readonly(*mint_pubkey, false),
AccountMeta::new_readonly(*owner_pubkey, false),
AccountMeta::new_readonly(sysvar::rent::id(), false),
];
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates an `InitializeAccount2` instruction.
pub fn initialize_account2(
token_program_id: &Pubkey,
account_pubkey: &Pubkey,
mint_pubkey: &Pubkey,
owner_pubkey: &Pubkey,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::InitializeAccount2 {
owner: *owner_pubkey,
}
.pack();
let accounts = vec![
AccountMeta::new(*account_pubkey, false),
AccountMeta::new_readonly(*mint_pubkey, false),
AccountMeta::new_readonly(sysvar::rent::id(), false),
];
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates an `InitializeAccount3` instruction.
pub fn initialize_account3(
token_program_id: &Pubkey,
account_pubkey: &Pubkey,
mint_pubkey: &Pubkey,
owner_pubkey: &Pubkey,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::InitializeAccount3 {
owner: *owner_pubkey,
}
.pack();
let accounts = vec![
AccountMeta::new(*account_pubkey, false),
AccountMeta::new_readonly(*mint_pubkey, false),
];
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates an `InitializeMultisig` instruction.
pub fn initialize_multisig(
token_program_id: &Pubkey,
multisig_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
m: u8,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
if !is_valid_signer_index(m as usize)
|| !is_valid_signer_index(signer_pubkeys.len())
|| m as usize > signer_pubkeys.len()
{
return Err(ProgramError::MissingRequiredSignature);
}
let data = TokenInstruction::InitializeMultisig { m }.pack();
let mut accounts = Vec::with_capacity(1 + 1 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*multisig_pubkey, false));
accounts.push(AccountMeta::new_readonly(sysvar::rent::id(), false));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, false));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates an `InitializeMultisig2` instruction.
pub fn initialize_multisig2(
token_program_id: &Pubkey,
multisig_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
m: u8,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
if !is_valid_signer_index(m as usize)
|| !is_valid_signer_index(signer_pubkeys.len())
|| m as usize > signer_pubkeys.len()
{
return Err(ProgramError::MissingRequiredSignature);
}
let data = TokenInstruction::InitializeMultisig2 { m }.pack();
let mut accounts = Vec::with_capacity(1 + 1 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*multisig_pubkey, false));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, false));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `Transfer` instruction.
#[deprecated(
since = "4.0.0",
note = "please use `transfer_checked` or `transfer_checked_with_fee` instead"
)]
pub fn transfer(
token_program_id: &Pubkey,
source_pubkey: &Pubkey,
destination_pubkey: &Pubkey,
authority_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
amount: u64,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
#[allow(deprecated)]
let data = TokenInstruction::Transfer { amount }.pack();
let mut accounts = Vec::with_capacity(3 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*source_pubkey, false));
accounts.push(AccountMeta::new(*destination_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*authority_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates an `Approve` instruction.
pub fn approve(
token_program_id: &Pubkey,
source_pubkey: &Pubkey,
delegate_pubkey: &Pubkey,
owner_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
amount: u64,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::Approve { amount }.pack();
let mut accounts = Vec::with_capacity(3 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*source_pubkey, false));
accounts.push(AccountMeta::new_readonly(*delegate_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*owner_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `Revoke` instruction.
pub fn revoke(
token_program_id: &Pubkey,
source_pubkey: &Pubkey,
owner_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::Revoke.pack();
let mut accounts = Vec::with_capacity(2 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*source_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*owner_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `SetAuthority` instruction.
pub fn set_authority(
token_program_id: &Pubkey,
owned_pubkey: &Pubkey,
new_authority_pubkey: Option<&Pubkey>,
authority_type: AuthorityType,
owner_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let new_authority = new_authority_pubkey.cloned().into();
let data = TokenInstruction::SetAuthority {
authority_type,
new_authority,
}
.pack();
let mut accounts = Vec::with_capacity(3 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*owned_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*owner_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `MintTo` instruction.
pub fn mint_to(
token_program_id: &Pubkey,
mint_pubkey: &Pubkey,
account_pubkey: &Pubkey,
owner_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
amount: u64,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::MintTo { amount }.pack();
let mut accounts = Vec::with_capacity(3 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*mint_pubkey, false));
accounts.push(AccountMeta::new(*account_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*owner_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `Burn` instruction.
pub fn burn(
token_program_id: &Pubkey,
account_pubkey: &Pubkey,
mint_pubkey: &Pubkey,
authority_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
amount: u64,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::Burn { amount }.pack();
let mut accounts = Vec::with_capacity(3 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*account_pubkey, false));
accounts.push(AccountMeta::new(*mint_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*authority_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `CloseAccount` instruction.
pub fn close_account(
token_program_id: &Pubkey,
account_pubkey: &Pubkey,
destination_pubkey: &Pubkey,
owner_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::CloseAccount.pack();
let mut accounts = Vec::with_capacity(3 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*account_pubkey, false));
accounts.push(AccountMeta::new(*destination_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*owner_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `FreezeAccount` instruction.
pub fn freeze_account(
token_program_id: &Pubkey,
account_pubkey: &Pubkey,
mint_pubkey: &Pubkey,
owner_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::FreezeAccount.pack();
let mut accounts = Vec::with_capacity(3 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*account_pubkey, false));
accounts.push(AccountMeta::new_readonly(*mint_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*owner_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `ThawAccount` instruction.
pub fn thaw_account(
token_program_id: &Pubkey,
account_pubkey: &Pubkey,
mint_pubkey: &Pubkey,
owner_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::ThawAccount.pack();
let mut accounts = Vec::with_capacity(3 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*account_pubkey, false));
accounts.push(AccountMeta::new_readonly(*mint_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*owner_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `TransferChecked` instruction.
#[allow(clippy::too_many_arguments)]
pub fn transfer_checked(
token_program_id: &Pubkey,
source_pubkey: &Pubkey,
mint_pubkey: &Pubkey,
destination_pubkey: &Pubkey,
authority_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
amount: u64,
decimals: u8,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::TransferChecked { amount, decimals }.pack();
let mut accounts = Vec::with_capacity(4 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*source_pubkey, false));
accounts.push(AccountMeta::new_readonly(*mint_pubkey, false));
accounts.push(AccountMeta::new(*destination_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*authority_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates an `ApproveChecked` instruction.
#[allow(clippy::too_many_arguments)]
pub fn approve_checked(
token_program_id: &Pubkey,
source_pubkey: &Pubkey,
mint_pubkey: &Pubkey,
delegate_pubkey: &Pubkey,
owner_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
amount: u64,
decimals: u8,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::ApproveChecked { amount, decimals }.pack();
let mut accounts = Vec::with_capacity(4 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*source_pubkey, false));
accounts.push(AccountMeta::new_readonly(*mint_pubkey, false));
accounts.push(AccountMeta::new_readonly(*delegate_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*owner_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `MintToChecked` instruction.
pub fn mint_to_checked(
token_program_id: &Pubkey,
mint_pubkey: &Pubkey,
account_pubkey: &Pubkey,
owner_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
amount: u64,
decimals: u8,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::MintToChecked { amount, decimals }.pack();
let mut accounts = Vec::with_capacity(3 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*mint_pubkey, false));
accounts.push(AccountMeta::new(*account_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*owner_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `BurnChecked` instruction.
pub fn burn_checked(
token_program_id: &Pubkey,
account_pubkey: &Pubkey,
mint_pubkey: &Pubkey,
authority_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
amount: u64,
decimals: u8,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
let data = TokenInstruction::BurnChecked { amount, decimals }.pack();
let mut accounts = Vec::with_capacity(3 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*account_pubkey, false));
accounts.push(AccountMeta::new(*mint_pubkey, false));
accounts.push(AccountMeta::new_readonly(
*authority_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `SyncNative` instruction
pub fn sync_native(
token_program_id: &Pubkey,
account_pubkey: &Pubkey,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
Ok(Instruction {
program_id: *token_program_id,
accounts: vec![AccountMeta::new(*account_pubkey, false)],
data: TokenInstruction::SyncNative.pack(),
})
}
/// Creates a `GetAccountDataSize` instruction
pub fn get_account_data_size(
token_program_id: &Pubkey,
mint_pubkey: &Pubkey,
extension_types: &[ExtensionType],
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
Ok(Instruction {
program_id: *token_program_id,
accounts: vec![AccountMeta::new_readonly(*mint_pubkey, false)],
data: TokenInstruction::GetAccountDataSize {
extension_types: extension_types.to_vec(),
}
.pack(),
})
}
/// Creates an `InitializeMintCloseAuthority` instruction
pub fn initialize_mint_close_authority(
token_program_id: &Pubkey,
mint_pubkey: &Pubkey,
close_authority: Option<&Pubkey>,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let close_authority = close_authority.cloned().into();
Ok(Instruction {
program_id: *token_program_id,
accounts: vec![AccountMeta::new(*mint_pubkey, false)],
data: TokenInstruction::InitializeMintCloseAuthority { close_authority }.pack(),
})
}
/// Creates an `InitializeImmutableOwner` instruction
pub fn initialize_immutable_owner(
token_program_id: &Pubkey,
token_account: &Pubkey,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
Ok(Instruction {
program_id: *token_program_id,
accounts: vec![AccountMeta::new(*token_account, false)],
data: TokenInstruction::InitializeImmutableOwner.pack(),
})
}
/// Creates an `AmountToUiAmount` instruction
pub fn amount_to_ui_amount(
token_program_id: &Pubkey,
mint_pubkey: &Pubkey,
amount: u64,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
Ok(Instruction {
program_id: *token_program_id,
accounts: vec![AccountMeta::new_readonly(*mint_pubkey, false)],
data: TokenInstruction::AmountToUiAmount { amount }.pack(),
})
}
/// Creates a `UiAmountToAmount` instruction
pub fn ui_amount_to_amount(
token_program_id: &Pubkey,
mint_pubkey: &Pubkey,
ui_amount: &str,
) -> Result<Instruction, ProgramError> {
check_spl_token_program_account(token_program_id)?;
Ok(Instruction {
program_id: *token_program_id,
accounts: vec![AccountMeta::new_readonly(*mint_pubkey, false)],
data: TokenInstruction::UiAmountToAmount { ui_amount }.pack(),
})
}
/// Creates a `Reallocate` instruction
pub fn reallocate(
token_program_id: &Pubkey,
account_pubkey: &Pubkey,
payer: &Pubkey,
owner_pubkey: &Pubkey,
signer_pubkeys: &[&Pubkey],
extension_types: &[ExtensionType],
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = Vec::with_capacity(4 + signer_pubkeys.len());
accounts.push(AccountMeta::new(*account_pubkey, false));
accounts.push(AccountMeta::new(*payer, true));
accounts.push(AccountMeta::new_readonly(system_program::id(), false));
accounts.push(AccountMeta::new_readonly(
*owner_pubkey,
signer_pubkeys.is_empty(),
));
for signer_pubkey in signer_pubkeys.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data: TokenInstruction::Reallocate {
extension_types: extension_types.to_vec(),
}
.pack(),
})
}
/// Creates a `CreateNativeMint` instruction
pub fn create_native_mint(
token_program_id: &Pubkey,
payer: &Pubkey,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
Ok(Instruction {
program_id: *token_program_id,
accounts: vec![
AccountMeta::new(*payer, true),
AccountMeta::new(native_mint::id(), false),
AccountMeta::new_readonly(system_program::id(), false),
],
data: TokenInstruction::CreateNativeMint.pack(),
})
}
/// Creates an `InitializeNonTransferableMint` instruction
pub fn initialize_non_transferable_mint(
token_program_id: &Pubkey,
mint_pubkey: &Pubkey,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
Ok(Instruction {
program_id: *token_program_id,
accounts: vec![AccountMeta::new(*mint_pubkey, false)],
data: TokenInstruction::InitializeNonTransferableMint.pack(),
})
}
/// Utility function that checks whether `index` is between `MIN_SIGNERS` and `MAX_SIGNERS`, inclusive
pub fn is_valid_signer_index(index: usize) -> bool {
(MIN_SIGNERS..=MAX_SIGNERS).contains(&index)
}
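// A minimal illustrative check of the bounds above (not part of the upstream
// test suite): valid multisig signer counts are `MIN_SIGNERS..=MAX_SIGNERS`,
// i.e. 1 through 11 inclusive.
#[cfg(test)]
mod signer_index_bounds_tests {
    use super::*;

    #[test]
    fn bounds_are_inclusive() {
        assert!(!is_valid_signer_index(MIN_SIGNERS - 1));
        assert!(is_valid_signer_index(MIN_SIGNERS));
        assert!(is_valid_signer_index(MAX_SIGNERS));
        assert!(!is_valid_signer_index(MAX_SIGNERS + 1));
    }
}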
/// Utility function for decoding just the instruction type
pub fn decode_instruction_type<T: TryFrom<u8>>(input: &[u8]) -> Result<T, ProgramError> {
if input.is_empty() {
Err(ProgramError::InvalidInstructionData)
} else {
T::try_from(input[0]).map_err(|_| TokenError::InvalidInstruction.into())
}
}
/// Utility function for decoding instruction data
pub fn decode_instruction_data<T: Pod>(input: &[u8]) -> Result<&T, ProgramError> {
if input.len() != pod_get_packed_len::<T>().saturating_add(1) {
Err(ProgramError::InvalidInstructionData)
} else {
pod_from_bytes(&input[1..])
}
}
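// Minimal illustrative checks for the two decode helpers above (not part of
// the upstream test suite). `u8` implements `TryFrom<u8>` and `[u8; N]` is
// `Pod`, so both helpers can be exercised without any program-specific types.
#[cfg(test)]
mod decode_helper_tests {
    use super::*;

    #[test]
    fn instruction_type_is_first_byte() {
        // The type discriminant is always the first byte of the data.
        assert_eq!(decode_instruction_type::<u8>(&[7, 1, 2]), Ok(7));
        assert_eq!(
            decode_instruction_type::<u8>(&[]),
            Err(ProgramError::InvalidInstructionData)
        );
    }

    #[test]
    fn instruction_data_must_match_packed_len_plus_tag() {
        // A 4-byte Pod payload requires exactly 1 (tag) + 4 (payload) bytes.
        let input = [0u8, 1, 2, 3, 4];
        assert_eq!(decode_instruction_data::<[u8; 4]>(&input), Ok(&[1, 2, 3, 4]));
        assert!(decode_instruction_data::<[u8; 4]>(&input[..4]).is_err());
    }
}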
/// Utility function for encoding instruction data
pub(crate) fn encode_instruction<T: Into<u8>, D: Pod>(
token_program_id: &Pubkey,
accounts: Vec<AccountMeta>,
token_instruction_type: TokenInstruction,
instruction_type: T,
instruction_data: &D,
) -> Instruction {
let mut data = token_instruction_type.pack();
data.push(T::into(instruction_type));
data.extend_from_slice(bytemuck::bytes_of(instruction_data));
Instruction {
program_id: *token_program_id,
accounts,
data,
}
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/non_transferable.rs
use {
crate::program::spl_token_2022::extension::{Extension, ExtensionType},
bytemuck::{Pod, Zeroable},
};
/// Indicates that the tokens from this mint can't be transferred
#[derive(Clone, Copy, Debug, Default, PartialEq, Pod, Zeroable)]
#[repr(transparent)]
pub struct NonTransferable;
impl Extension for NonTransferable {
const TYPE: ExtensionType = ExtensionType::NonTransferable;
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/mint_close_authority.rs
use {
crate::program::spl_token_2022::{
extension::{Extension, ExtensionType},
pod::*,
},
bytemuck::{Pod, Zeroable},
};
/// Close authority extension data for mints.
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, PartialEq, Pod, Zeroable)]
pub struct MintCloseAuthority {
/// Optional authority to close the mint
pub close_authority: OptionalNonZeroPubkey,
}
impl Extension for MintCloseAuthority {
const TYPE: ExtensionType = ExtensionType::MintCloseAuthority;
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/mod.rs
//! Extensions available to token mints and accounts
use {
super::{
error::TokenError,
extension::{
// TODO:
// confidential_transfer::{ConfidentialTransferAccount, ConfidentialTransferMint},
default_account_state::DefaultAccountState,
immutable_owner::ImmutableOwner,
interest_bearing_mint::InterestBearingConfig,
memo_transfer::MemoTransfer,
mint_close_authority::MintCloseAuthority,
non_transferable::NonTransferable,
transfer_fee::{TransferFeeAmount, TransferFeeConfig},
},
pod::*,
state::{Account, Mint, Multisig},
},
bytemuck::{Pod, Zeroable},
num_enum::{IntoPrimitive, TryFromPrimitive},
solana_sdk::{
program_error::ProgramError,
program_pack::{IsInitialized, Pack},
},
std::{
convert::{TryFrom, TryInto},
mem::size_of,
},
};
// TODO:
// /// Confidential Transfer extension
// pub mod confidential_transfer;
/// Default Account State extension
pub mod default_account_state;
/// Immutable Owner extension
pub mod immutable_owner;
/// Interest-Bearing Mint extension
pub mod interest_bearing_mint;
/// Memo Transfer extension
pub mod memo_transfer;
/// Mint Close Authority extension
pub mod mint_close_authority;
/// Non Transferable extension
pub mod non_transferable;
/// Utility to reallocate token accounts
pub mod reallocate;
/// Transfer Fee extension
pub mod transfer_fee;
/// Length in TLV structure
#[derive(Clone, Copy, Debug, Default, PartialEq, Pod, Zeroable)]
#[repr(transparent)]
pub struct Length(PodU16);
impl From<Length> for usize {
fn from(n: Length) -> Self {
Self::from(u16::from(n.0))
}
}
impl TryFrom<usize> for Length {
type Error = ProgramError;
fn try_from(n: usize) -> Result<Self, Self::Error> {
u16::try_from(n)
.map(|v| Self(PodU16::from(v)))
.map_err(|_| ProgramError::AccountDataTooSmall)
}
}
/// Helper function to get the `TlvIndices` for the TLV entry starting at `type_start`
fn get_tlv_indices(type_start: usize) -> TlvIndices {
let length_start = type_start.saturating_add(size_of::<ExtensionType>());
let value_start = length_start.saturating_add(pod_get_packed_len::<Length>());
TlvIndices {
type_start,
length_start,
value_start,
}
}
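// A minimal illustrative check of the TLV layout computed above (not part of
// the upstream test suite). The literal offsets assume the 2-byte extension
// type tag and the 2-byte `Length` (`PodU16`) used by this module.
#[cfg(test)]
mod tlv_indices_tests {
    use super::*;

    #[test]
    fn entries_are_type_then_length_then_value() {
        let indices = get_tlv_indices(0);
        assert_eq!(indices.type_start, 0);
        // 2-byte type tag, then 2-byte length, then the value bytes.
        assert_eq!(indices.length_start, 2);
        assert_eq!(indices.value_start, 4);
    }
}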
/// Helper struct for returning the indices of the type, length, and value in
/// a TLV entry
#[derive(Debug)]
struct TlvIndices {
pub type_start: usize,
pub length_start: usize,
pub value_start: usize,
}
fn get_extension_indices<V: Extension>(
tlv_data: &[u8],
init: bool,
) -> Result<TlvIndices, ProgramError> {
let mut start_index = 0;
let v_account_type = V::TYPE.get_account_type();
while start_index < tlv_data.len() {
let tlv_indices = get_tlv_indices(start_index);
if tlv_data.len() < tlv_indices.value_start {
return Err(ProgramError::InvalidAccountData);
}
let extension_type =
ExtensionType::try_from(&tlv_data[tlv_indices.type_start..tlv_indices.length_start])?;
let account_type = extension_type.get_account_type();
        // reached an empty slot: we can initialize here, or skip forward if not initializing
if extension_type == ExtensionType::Uninitialized {
if init {
return Ok(tlv_indices);
} else {
start_index = tlv_indices.length_start;
}
} else if extension_type == V::TYPE {
// found an instance of the extension that we're initializing, return!
return Ok(tlv_indices);
} else if v_account_type != account_type {
return Err(TokenError::ExtensionTypeMismatch.into());
} else {
let length = pod_from_bytes::<Length>(
&tlv_data[tlv_indices.length_start..tlv_indices.value_start],
)?;
let value_end_index = tlv_indices.value_start.saturating_add(usize::from(*length));
start_index = value_end_index;
}
}
Err(ProgramError::InvalidAccountData)
}
fn get_extension_types(tlv_data: &[u8]) -> Result<Vec<ExtensionType>, ProgramError> {
let mut extension_types = vec![];
let mut start_index = 0;
while start_index < tlv_data.len() {
let tlv_indices = get_tlv_indices(start_index);
if tlv_data.len() < tlv_indices.value_start {
return Ok(extension_types);
}
let extension_type =
ExtensionType::try_from(&tlv_data[tlv_indices.type_start..tlv_indices.length_start])?;
if extension_type == ExtensionType::Uninitialized {
return Ok(extension_types);
} else {
extension_types.push(extension_type);
let length = pod_from_bytes::<Length>(
&tlv_data[tlv_indices.length_start..tlv_indices.value_start],
)?;
let value_end_index = tlv_indices.value_start.saturating_add(usize::from(*length));
start_index = value_end_index;
}
}
Ok(extension_types)
}
fn get_first_extension_type(tlv_data: &[u8]) -> Result<Option<ExtensionType>, ProgramError> {
if tlv_data.is_empty() {
Ok(None)
} else {
let tlv_indices = get_tlv_indices(0);
if tlv_data.len() <= tlv_indices.length_start {
return Ok(None);
}
let extension_type =
ExtensionType::try_from(&tlv_data[tlv_indices.type_start..tlv_indices.length_start])?;
if extension_type == ExtensionType::Uninitialized {
Ok(None)
} else {
Ok(Some(extension_type))
}
}
}
fn check_min_len_and_not_multisig(input: &[u8], minimum_len: usize) -> Result<(), ProgramError> {
if input.len() == Multisig::LEN || input.len() < minimum_len {
Err(ProgramError::InvalidAccountData)
} else {
Ok(())
}
}
fn check_account_type<S: BaseState>(account_type: AccountType) -> Result<(), ProgramError> {
if account_type != S::ACCOUNT_TYPE {
Err(ProgramError::InvalidAccountData)
} else {
Ok(())
}
}
/// Any account with extensions must be at least `Account::LEN`. Both mints and
/// accounts can have extensions.
/// A mint with extensions that takes it past 165 could be indiscernible from an
/// Account with an extension, even if we add the account type. For example,
/// let's say we have:
///
/// Account: 165 bytes... + [2, 0, 3, 0, 100, ....]
/// ^ ^ ^ ^
/// acct type extension length data...
///
/// Mint: 82 bytes... + 83 bytes of other extension data + [2, 0, 3, 0, 100, ....]
/// ^ data in extension just happens to look like this
///
/// With this approach, we only start writing the TLV data after Account::LEN,
/// which means we always know that the account type is going to be right after
/// that. We do a special case checking for a Multisig length, because those
/// aren't extensible under any circumstances.
const BASE_ACCOUNT_LENGTH: usize = Account::LEN;
fn type_and_tlv_indices<S: BaseState>(
rest_input: &[u8],
) -> Result<Option<(usize, usize)>, ProgramError> {
if rest_input.is_empty() {
Ok(None)
} else {
let account_type_index = BASE_ACCOUNT_LENGTH.saturating_sub(S::LEN);
// check padding is all zeroes
let tlv_start_index = account_type_index.saturating_add(size_of::<AccountType>());
if rest_input.len() <= tlv_start_index {
return Err(ProgramError::InvalidAccountData);
}
if rest_input[..account_type_index] != vec![0; account_type_index] {
Err(ProgramError::InvalidAccountData)
} else {
Ok(Some((account_type_index, tlv_start_index)))
}
}
}
/// Checks a base buffer to verify if it is an Account without having to completely deserialize it
fn is_initialized_account(input: &[u8]) -> Result<bool, ProgramError> {
const ACCOUNT_INITIALIZED_INDEX: usize = 108; // See state.rs#L99
if input.len() != BASE_ACCOUNT_LENGTH {
return Err(ProgramError::InvalidAccountData);
}
Ok(input[ACCOUNT_INITIALIZED_INDEX] != 0)
}
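// A minimal illustrative check for the quick initialization probe above (not
// part of the upstream test suite). Byte 108 of a base `Account` buffer is
// the `state` field that `is_initialized_account` inspects.
#[cfg(test)]
mod initialized_account_probe_tests {
    use super::*;

    #[test]
    fn probes_state_byte() {
        let mut buf = vec![0u8; BASE_ACCOUNT_LENGTH];
        assert_eq!(is_initialized_account(&buf), Ok(false));
        buf[108] = 1;
        assert_eq!(is_initialized_account(&buf), Ok(true));
        // Any other buffer length is rejected outright.
        assert!(is_initialized_account(&buf[..10]).is_err());
    }
}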
fn get_extension<S: BaseState, V: Extension>(tlv_data: &[u8]) -> Result<&V, ProgramError> {
if V::TYPE.get_account_type() != S::ACCOUNT_TYPE {
return Err(ProgramError::InvalidAccountData);
}
let TlvIndices {
type_start: _,
length_start,
value_start,
} = get_extension_indices::<V>(tlv_data, false)?;
// get_extension_indices has checked that tlv_data is long enough to include these indices
let length = pod_from_bytes::<Length>(&tlv_data[length_start..value_start])?;
let value_end = value_start.saturating_add(usize::from(*length));
pod_from_bytes::<V>(&tlv_data[value_start..value_end])
}
/// Encapsulates owned immutable base state data (mint or account) with possible extensions
#[derive(Debug, PartialEq)]
pub struct StateWithExtensionsOwned<S: BaseState> {
/// Unpacked base data
pub base: S,
/// Raw TLV data, deserialized on demand
tlv_data: Vec<u8>,
}
impl<S: BaseState> StateWithExtensionsOwned<S> {
/// Unpack base state, leaving the extension data as a slice
///
/// Fails if the base state is not initialized.
pub fn unpack(mut input: Vec<u8>) -> Result<Self, ProgramError> {
check_min_len_and_not_multisig(&input, S::LEN)?;
let mut rest = input.split_off(S::LEN);
let base = S::unpack(&input)?;
if let Some((account_type_index, tlv_start_index)) = type_and_tlv_indices::<S>(&rest)? {
// type_and_tlv_indices() checks that returned indexes are within range
let account_type = AccountType::try_from(rest[account_type_index])
.map_err(|_| ProgramError::InvalidAccountData)?;
check_account_type::<S>(account_type)?;
let tlv_data = rest.split_off(tlv_start_index);
Ok(Self { base, tlv_data })
} else {
Ok(Self {
base,
tlv_data: vec![],
})
}
}
/// Unpack a portion of the TLV data as the desired type
pub fn get_extension<V: Extension>(&self) -> Result<&V, ProgramError> {
get_extension::<S, V>(&self.tlv_data)
}
/// Iterates through the TLV entries, returning only the types
pub fn get_extension_types(&self) -> Result<Vec<ExtensionType>, ProgramError> {
get_extension_types(&self.tlv_data)
}
}
/// Encapsulates immutable base state data (mint or account) with possible extensions
#[derive(Debug, PartialEq)]
pub struct StateWithExtensions<'data, S: BaseState> {
/// Unpacked base data
pub base: S,
/// Slice of data containing all TLV data, deserialized on demand
tlv_data: &'data [u8],
}
impl<'data, S: BaseState> StateWithExtensions<'data, S> {
/// Unpack base state, leaving the extension data as a slice
///
/// Fails if the base state is not initialized.
pub fn unpack(input: &'data [u8]) -> Result<Self, ProgramError> {
check_min_len_and_not_multisig(input, S::LEN)?;
let (base_data, rest) = input.split_at(S::LEN);
let base = S::unpack(base_data)?;
if let Some((account_type_index, tlv_start_index)) = type_and_tlv_indices::<S>(rest)? {
// type_and_tlv_indices() checks that returned indexes are within range
let account_type = AccountType::try_from(rest[account_type_index])
.map_err(|_| ProgramError::InvalidAccountData)?;
check_account_type::<S>(account_type)?;
Ok(Self {
base,
tlv_data: &rest[tlv_start_index..],
})
} else {
Ok(Self {
base,
tlv_data: &[],
})
}
}
/// Unpack a portion of the TLV data as the desired type
pub fn get_extension<V: Extension>(&self) -> Result<&V, ProgramError> {
get_extension::<S, V>(self.tlv_data)
}
/// Iterates through the TLV entries, returning only the types
pub fn get_extension_types(&self) -> Result<Vec<ExtensionType>, ProgramError> {
get_extension_types(self.tlv_data)
}
}
/// Encapsulates mutable base state data (mint or account) with possible extensions
#[derive(Debug, PartialEq)]
pub struct StateWithExtensionsMut<'data, S: BaseState> {
/// Unpacked base data
pub base: S,
/// Raw base data
base_data: &'data mut [u8],
/// Writable account type
account_type: &'data mut [u8],
/// Slice of data containing all TLV data, deserialized on demand
tlv_data: &'data mut [u8],
}
impl<'data, S: BaseState> StateWithExtensionsMut<'data, S> {
/// Unpack base state, leaving the extension data as a mutable slice
///
/// Fails if the base state is not initialized.
pub fn unpack(input: &'data mut [u8]) -> Result<Self, ProgramError> {
check_min_len_and_not_multisig(input, S::LEN)?;
let (base_data, rest) = input.split_at_mut(S::LEN);
let base = S::unpack(base_data)?;
if let Some((account_type_index, tlv_start_index)) = type_and_tlv_indices::<S>(rest)? {
// type_and_tlv_indices() checks that returned indexes are within range
let account_type = AccountType::try_from(rest[account_type_index])
.map_err(|_| ProgramError::InvalidAccountData)?;
check_account_type::<S>(account_type)?;
let (account_type, tlv_data) = rest.split_at_mut(tlv_start_index);
Ok(Self {
base,
base_data,
account_type: &mut account_type[account_type_index..tlv_start_index],
tlv_data,
})
} else {
Ok(Self {
base,
base_data,
account_type: &mut [],
tlv_data: &mut [],
})
}
}
/// Unpack an uninitialized base state, leaving the extension data as a mutable slice
///
/// Fails if the base state has already been initialized.
pub fn unpack_uninitialized(input: &'data mut [u8]) -> Result<Self, ProgramError> {
check_min_len_and_not_multisig(input, S::LEN)?;
let (base_data, rest) = input.split_at_mut(S::LEN);
let base = S::unpack_unchecked(base_data)?;
if base.is_initialized() {
return Err(TokenError::AlreadyInUse.into());
}
if let Some((account_type_index, tlv_start_index)) = type_and_tlv_indices::<S>(rest)? {
// type_and_tlv_indices() checks that returned indexes are within range
let account_type = AccountType::try_from(rest[account_type_index])
.map_err(|_| ProgramError::InvalidAccountData)?;
if account_type != AccountType::Uninitialized {
return Err(ProgramError::InvalidAccountData);
}
let (account_type, tlv_data) = rest.split_at_mut(tlv_start_index);
let state = Self {
base,
base_data,
account_type: &mut account_type[account_type_index..tlv_start_index],
tlv_data,
};
if let Some(extension_type) = state.get_first_extension_type()? {
let account_type = extension_type.get_account_type();
if account_type != S::ACCOUNT_TYPE {
return Err(TokenError::ExtensionBaseMismatch.into());
}
}
Ok(state)
} else {
Ok(Self {
base,
base_data,
account_type: &mut [],
tlv_data: &mut [],
})
}
}
/// Unpack a portion of the TLV data as the desired type that allows modifying the type
pub fn get_extension_mut<V: Extension>(&mut self) -> Result<&mut V, ProgramError> {
if V::TYPE.get_account_type() != S::ACCOUNT_TYPE {
return Err(ProgramError::InvalidAccountData);
}
let TlvIndices {
type_start,
length_start,
value_start,
} = get_extension_indices::<V>(self.tlv_data, false)?;
if self.tlv_data[type_start..].len() < V::TYPE.get_tlv_len() {
return Err(ProgramError::InvalidAccountData);
}
let length = pod_from_bytes::<Length>(&self.tlv_data[length_start..value_start])?;
let value_end = value_start.saturating_add(usize::from(*length));
pod_from_bytes_mut::<V>(&mut self.tlv_data[value_start..value_end])
}
/// Unpack a portion of the TLV data as the desired type
pub fn get_extension<V: Extension>(&self) -> Result<&V, ProgramError> {
if V::TYPE.get_account_type() != S::ACCOUNT_TYPE {
return Err(ProgramError::InvalidAccountData);
}
let TlvIndices {
type_start,
length_start,
value_start,
} = get_extension_indices::<V>(self.tlv_data, false)?;
if self.tlv_data[type_start..].len() < V::TYPE.get_tlv_len() {
return Err(ProgramError::InvalidAccountData);
}
let length = pod_from_bytes::<Length>(&self.tlv_data[length_start..value_start])?;
let value_end = value_start.saturating_add(usize::from(*length));
pod_from_bytes::<V>(&self.tlv_data[value_start..value_end])
}
/// Packs base state data into the base data portion
pub fn pack_base(&mut self) {
S::pack_into_slice(&self.base, self.base_data);
}
/// Packs the default extension data into an open slot if it is not already
/// found in the data buffer. If the extension is already present and
/// `overwrite` is set, it overwrites the existing extension with the default
/// state. If the extension is present but `overwrite` is not set, it returns
/// an error.
pub fn init_extension<V: Extension>(
&mut self,
overwrite: bool,
) -> Result<&mut V, ProgramError> {
if V::TYPE.get_account_type() != S::ACCOUNT_TYPE {
return Err(ProgramError::InvalidAccountData);
}
let TlvIndices {
type_start,
length_start,
value_start,
} = get_extension_indices::<V>(self.tlv_data, true)?;
if self.tlv_data[type_start..].len() < V::TYPE.get_tlv_len() {
return Err(ProgramError::InvalidAccountData);
}
let extension_type = ExtensionType::try_from(&self.tlv_data[type_start..length_start])?;
if extension_type == ExtensionType::Uninitialized || overwrite {
// write extension type
let extension_type_array: [u8; 2] = V::TYPE.into();
let extension_type_ref = &mut self.tlv_data[type_start..length_start];
extension_type_ref.copy_from_slice(&extension_type_array);
// write length
let length_ref =
pod_from_bytes_mut::<Length>(&mut self.tlv_data[length_start..value_start])?;
// maybe this becomes smarter later for dynamically sized extensions
let length = pod_get_packed_len::<V>();
*length_ref = Length::try_from(length).unwrap();
let value_end = value_start.saturating_add(length);
let extension_ref =
pod_from_bytes_mut::<V>(&mut self.tlv_data[value_start..value_end])?;
*extension_ref = V::default();
Ok(extension_ref)
} else {
// extension is already initialized, but no overwrite permission
Err(TokenError::ExtensionAlreadyInitialized.into())
}
}
/// If `extension_type` is an Account-associated ExtensionType that requires initialization on
/// InitializeAccount, this method packs the default relevant Extension of an ExtensionType
/// into an open slot if not already found in the data buffer, otherwise overwrites the
/// existing extension with the default state. For all other ExtensionTypes, this is a no-op.
pub fn init_account_extension_from_type(
&mut self,
extension_type: ExtensionType,
) -> Result<(), ProgramError> {
if extension_type.get_account_type() != AccountType::Account {
return Ok(());
}
match extension_type {
ExtensionType::TransferFeeAmount => {
self.init_extension::<TransferFeeAmount>(true).map(|_| ())
}
// ConfidentialTransfers are currently opt-in only, so this is a no-op for extra safety
// on InitializeAccount
ExtensionType::ConfidentialTransferAccount => Ok(()),
_ => unreachable!(),
}
}
/// Write the account type into the buffer, done during base state
/// initialization.
/// No-ops if there is no room for an extension in the account, as needed for
/// pure base mints / accounts.
pub fn init_account_type(&mut self) -> Result<(), ProgramError> {
if !self.account_type.is_empty() {
if let Some(extension_type) = self.get_first_extension_type()? {
let account_type = extension_type.get_account_type();
if account_type != S::ACCOUNT_TYPE {
return Err(TokenError::ExtensionBaseMismatch.into());
}
}
self.account_type[0] = S::ACCOUNT_TYPE.into();
}
Ok(())
}
/// Iterates through the TLV entries, returning only the types
pub fn get_extension_types(&self) -> Result<Vec<ExtensionType>, ProgramError> {
get_extension_types(self.tlv_data)
}
fn get_first_extension_type(&self) -> Result<Option<ExtensionType>, ProgramError> {
get_first_extension_type(self.tlv_data)
}
}
/// If AccountType is uninitialized, set it to the BaseState's ACCOUNT_TYPE;
/// if AccountType is already set, check that it is set correctly for the BaseState.
/// This method assumes that `base_data` has already been packed with data of the desired type.
pub fn set_account_type<S: BaseState>(input: &mut [u8]) -> Result<(), ProgramError> {
check_min_len_and_not_multisig(input, S::LEN)?;
let (base_data, rest) = input.split_at_mut(S::LEN);
if S::ACCOUNT_TYPE == AccountType::Account && !is_initialized_account(base_data)? {
return Err(ProgramError::InvalidAccountData);
}
if let Some((account_type_index, _tlv_start_index)) = type_and_tlv_indices::<S>(rest)? {
let mut account_type = AccountType::try_from(rest[account_type_index])
.map_err(|_| ProgramError::InvalidAccountData)?;
if account_type == AccountType::Uninitialized {
rest[account_type_index] = S::ACCOUNT_TYPE.into();
account_type = S::ACCOUNT_TYPE;
}
check_account_type::<S>(account_type)?;
Ok(())
} else {
Err(ProgramError::InvalidAccountData)
}
}
/// Different kinds of accounts. Note that `Mint`, `Account`, and `Multisig` types
/// are determined exclusively by the size of the account, and are not included in
/// the account data. `AccountType` is only included if extensions have been
/// initialized.
#[repr(u8)]
#[derive(Clone, Copy, Debug, Default, PartialEq, TryFromPrimitive, IntoPrimitive)]
pub enum AccountType {
/// Marker for 0 data
#[default]
Uninitialized,
/// Mint account with additional extensions
Mint,
/// Token holding account with additional extensions
Account,
}
/// Extensions that can be applied to mints or accounts. Mint extensions must only be
/// applied to mint accounts, and account extensions must only be applied to token holding
/// accounts.
#[repr(u16)]
#[derive(Clone, Copy, Debug, PartialEq, TryFromPrimitive, IntoPrimitive)]
pub enum ExtensionType {
/// Used as padding if the account size would otherwise be 355, same as a multisig
Uninitialized,
/// Includes transfer fee rate info and accompanying authorities to withdraw and set the fee
TransferFeeConfig,
/// Includes withheld transfer fees
TransferFeeAmount,
/// Includes an optional mint close authority
MintCloseAuthority,
/// Auditor configuration for confidential transfers
ConfidentialTransferMint,
/// State for confidential transfers
ConfidentialTransferAccount,
/// Specifies the default Account::state for new Accounts
DefaultAccountState,
/// Indicates that the Account owner authority cannot be changed
ImmutableOwner,
/// Require inbound transfers to have memo
MemoTransfer,
/// Indicates that the tokens from this mint can't be transferred
NonTransferable,
/// Tokens accrue interest over time
InterestBearingConfig,
/// Padding extension used to make an account exactly Multisig::LEN, used for testing
#[cfg(test)]
AccountPaddingTest = u16::MAX - 1,
/// Padding extension used to make a mint exactly Multisig::LEN, used for testing
#[cfg(test)]
MintPaddingTest = u16::MAX,
}
impl TryFrom<&[u8]> for ExtensionType {
type Error = ProgramError;
fn try_from(a: &[u8]) -> Result<Self, Self::Error> {
Self::try_from(u16::from_le_bytes(
a.try_into().map_err(|_| ProgramError::InvalidAccountData)?,
))
.map_err(|_| ProgramError::InvalidAccountData)
}
}
impl From<ExtensionType> for [u8; 2] {
fn from(a: ExtensionType) -> Self {
u16::from(a).to_le_bytes()
}
}
impl ExtensionType {
/// Get the data length of the type associated with the enum
pub fn get_type_len(&self) -> usize {
match self {
ExtensionType::Uninitialized => 0,
ExtensionType::TransferFeeConfig => pod_get_packed_len::<TransferFeeConfig>(),
ExtensionType::TransferFeeAmount => pod_get_packed_len::<TransferFeeAmount>(),
ExtensionType::MintCloseAuthority => pod_get_packed_len::<MintCloseAuthority>(),
ExtensionType::ImmutableOwner => pod_get_packed_len::<ImmutableOwner>(),
ExtensionType::ConfidentialTransferMint => {
// TODO:
// pod_get_packed_len::<ConfidentialTransferMint>()
0
}
ExtensionType::ConfidentialTransferAccount => {
// TODO:
// pod_get_packed_len::<ConfidentialTransferAccount>()
0
}
ExtensionType::DefaultAccountState => pod_get_packed_len::<DefaultAccountState>(),
ExtensionType::MemoTransfer => pod_get_packed_len::<MemoTransfer>(),
ExtensionType::NonTransferable => pod_get_packed_len::<NonTransferable>(),
ExtensionType::InterestBearingConfig => pod_get_packed_len::<InterestBearingConfig>(),
#[cfg(test)]
ExtensionType::AccountPaddingTest => 0,
#[cfg(test)]
ExtensionType::MintPaddingTest => 0,
}
}
/// Get the TLV length for an ExtensionType
fn get_tlv_len(&self) -> usize {
self.get_type_len()
.saturating_add(size_of::<ExtensionType>())
.saturating_add(pod_get_packed_len::<Length>())
}
/// Get the TLV length for a set of ExtensionTypes
fn get_total_tlv_len(extension_types: &[Self]) -> usize {
// dedupe extensions
let mut extensions = vec![];
for extension_type in extension_types {
if !extensions.contains(&extension_type) {
extensions.push(extension_type);
}
}
let tlv_len: usize = extensions.iter().map(|e| e.get_tlv_len()).sum();
if tlv_len
== Multisig::LEN
.saturating_sub(BASE_ACCOUNT_LENGTH)
.saturating_sub(size_of::<AccountType>())
{
tlv_len.saturating_add(size_of::<ExtensionType>())
} else {
tlv_len
}
}
/// Get the required account data length for the given ExtensionTypes
pub fn get_account_len<S: BaseState>(extension_types: &[Self]) -> usize {
if extension_types.is_empty() {
S::LEN
} else {
let extension_size = Self::get_total_tlv_len(extension_types);
extension_size
.saturating_add(BASE_ACCOUNT_LENGTH)
.saturating_add(size_of::<AccountType>())
}
}
/// Get the associated account type
pub fn get_account_type(&self) -> AccountType {
match self {
ExtensionType::Uninitialized => AccountType::Uninitialized,
ExtensionType::TransferFeeConfig
| ExtensionType::MintCloseAuthority
| ExtensionType::ConfidentialTransferMint
| ExtensionType::DefaultAccountState
| ExtensionType::NonTransferable
| ExtensionType::InterestBearingConfig => AccountType::Mint,
ExtensionType::ImmutableOwner
| ExtensionType::TransferFeeAmount
| ExtensionType::ConfidentialTransferAccount
| ExtensionType::MemoTransfer => AccountType::Account,
#[cfg(test)]
ExtensionType::AccountPaddingTest => AccountType::Account,
#[cfg(test)]
ExtensionType::MintPaddingTest => AccountType::Mint,
}
}
/// Based on a set of AccountType::Mint ExtensionTypes, get the list of AccountType::Account
/// ExtensionTypes required on InitializeAccount
pub fn get_required_init_account_extensions(mint_extension_types: &[Self]) -> Vec<Self> {
let mut account_extension_types = vec![];
for extension_type in mint_extension_types {
#[allow(clippy::single_match)]
match extension_type {
ExtensionType::TransferFeeConfig => {
account_extension_types.push(ExtensionType::TransferFeeAmount);
}
#[cfg(test)]
ExtensionType::MintPaddingTest => {
account_extension_types.push(ExtensionType::AccountPaddingTest);
}
_ => {}
}
}
account_extension_types
}
}
/// Trait for base states, specifying the associated enum
pub trait BaseState: Pack + IsInitialized {
/// Associated extension type enum, checked at the start of TLV entries
const ACCOUNT_TYPE: AccountType;
}
impl BaseState for Account {
const ACCOUNT_TYPE: AccountType = AccountType::Account;
}
impl BaseState for Mint {
const ACCOUNT_TYPE: AccountType = AccountType::Mint;
}
/// Trait to be implemented by all extension states, specifying which extension
/// and account type they are associated with
pub trait Extension: Pod + Default {
/// Associated extension type enum, checked at the start of TLV entries
const TYPE: ExtensionType;
}
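The `get_tlv_len` arithmetic above assumes the standard TLV entry layout: a little-endian `u16` extension type, a little-endian `u16` length, then `length` bytes of value, repeated until an `Uninitialized` (zero) type. The following is an illustrative, self-contained sketch of that layout, not the crate's actual parser; the buffer contents and type numbers are made up for the example.

```rust
// Illustrative sketch of the TLV entry layout assumed by get_tlv_len():
// [type: u16 LE][length: u16 LE][value: `length` bytes], repeated.
fn parse_tlv_types(mut data: &[u8]) -> Option<Vec<u16>> {
    let mut types = Vec::new();
    while data.len() >= 4 {
        let ty = u16::from_le_bytes([data[0], data[1]]);
        let len = u16::from_le_bytes([data[2], data[3]]) as usize;
        if ty == 0 {
            break; // Uninitialized marks the end of meaningful entries
        }
        // Skip past this entry's header and value; None if the buffer is short.
        data = data.get(4 + len..)?;
        types.push(ty);
    }
    Some(types)
}

fn main() {
    // One zero-sized entry of type 7, then an entry of type 2 with 8 value bytes.
    let buf = [7, 0, 0, 0, 2, 0, 8, 0, 1, 2, 3, 4, 5, 6, 7, 8];
    assert_eq!(parse_tlv_types(&buf), Some(vec![7, 2]));
    println!("ok");
}
```

Each entry therefore costs `2 + 2 + value_len` bytes, which is exactly what `get_tlv_len` computes from `size_of::<ExtensionType>()` plus the `Length` pod plus `get_type_len()`.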
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/reallocate.rs
use {
crate::program::spl_token_2022::{
error::TokenError,
extension::{set_account_type, AccountType, ExtensionType, StateWithExtensions},
processor::Processor,
state::Account,
},
solana_sdk::{
account_info::{next_account_info, AccountInfo},
entrypoint::ProgramResult,
msg,
program::invoke,
pubkey::Pubkey,
system_instruction,
sysvar::{rent::Rent, Sysvar},
},
};
/// Processes a [Reallocate](enum.TokenInstruction.html) instruction
pub fn process_reallocate(
program_id: &Pubkey,
accounts: &[AccountInfo],
new_extension_types: Vec<ExtensionType>,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let token_account_info = next_account_info(account_info_iter)?;
let payer_info = next_account_info(account_info_iter)?;
let system_program_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let authority_info_data_len = authority_info.data_len();
// check that account is the right type and validate owner
let mut current_extension_types = {
let token_account = token_account_info.data.borrow();
let account = StateWithExtensions::<Account>::unpack(&token_account)?;
Processor::validate_owner(
program_id,
&account.base.owner,
authority_info,
authority_info_data_len,
account_info_iter.as_slice(),
)?;
account.get_extension_types()?
};
// check that all desired extensions are for the right account type
if new_extension_types
.iter()
.any(|extension_type| extension_type.get_account_type() != AccountType::Account)
{
return Err(TokenError::InvalidState.into());
}
// ExtensionType::get_account_len() dedupes types, so just a dumb concatenation is fine here
current_extension_types.extend_from_slice(&new_extension_types);
let needed_account_len = ExtensionType::get_account_len::<Account>(&current_extension_types);
// if account is already large enough, return early
if token_account_info.data_len() >= needed_account_len {
return Ok(());
}
// reallocate
msg!(
"account needs realloc, +{:?} bytes",
needed_account_len - token_account_info.data_len()
);
token_account_info.realloc(needed_account_len, false)?;
// if additional lamports needed to remain rent-exempt, transfer them
let rent = Rent::get()?;
let new_minimum_balance = rent.minimum_balance(needed_account_len);
let lamports_diff = new_minimum_balance.saturating_sub(token_account_info.lamports());
invoke(
&system_instruction::transfer(payer_info.key, token_account_info.key, lamports_diff),
&[
payer_info.clone(),
token_account_info.clone(),
system_program_info.clone(),
],
)?;
// unpack to set account_type, if needed
let mut token_account = token_account_info.data.borrow_mut();
set_account_type::<Account>(&mut token_account)?;
Ok(())
}
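The rent top-up at the end of `process_reallocate` transfers only the shortfall between the new rent-exempt minimum and the lamports already on the account. A minimal sketch of that arithmetic, with made-up lamport values rather than real `Rent` math:

```rust
// Sketch of the top-up arithmetic used after realloc: transfer only the
// shortfall. saturating_sub yields 0 when the account is already funded,
// so the system transfer is a harmless no-op in that case.
fn lamports_to_transfer(new_minimum_balance: u64, current_lamports: u64) -> u64 {
    new_minimum_balance.saturating_sub(current_lamports)
}

fn main() {
    // Underfunded account: transfer the difference.
    assert_eq!(lamports_to_transfer(2_039_280, 1_000_000), 1_039_280);
    // Already rent-exempt: nothing to transfer.
    assert_eq!(lamports_to_transfer(2_039_280, 3_000_000), 0);
    println!("ok");
}
```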
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/immutable_owner.rs
use {
super::{Extension, ExtensionType},
bytemuck::{Pod, Zeroable},
};
/// Indicates that the Account owner authority cannot be changed
#[derive(Clone, Copy, Debug, Default, PartialEq, Pod, Zeroable)]
#[repr(transparent)]
pub struct ImmutableOwner;
impl Extension for ImmutableOwner {
const TYPE: ExtensionType = ExtensionType::ImmutableOwner;
}
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/interest_bearing_mint/processor.rs
use {
crate::program::spl_token_2022::{
check_program_account,
error::TokenError,
extension::{
interest_bearing_mint::{
instruction::{InitializeInstructionData, InterestBearingMintInstruction},
BasisPoints, InterestBearingConfig,
},
StateWithExtensionsMut,
},
instruction::{decode_instruction_data, decode_instruction_type},
pod::OptionalNonZeroPubkey,
processor::Processor,
state::Mint,
},
solana_sdk::{
account_info::{next_account_info, AccountInfo},
clock::Clock,
entrypoint::ProgramResult,
msg,
pubkey::Pubkey,
sysvar::Sysvar,
},
};
fn process_initialize(
_program_id: &Pubkey,
accounts: &[AccountInfo],
rate_authority: &OptionalNonZeroPubkey,
rate: &BasisPoints,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let mint_account_info = next_account_info(account_info_iter)?;
let mut mint_data = mint_account_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack_uninitialized(&mut mint_data)?;
let clock = Clock::get()?;
let extension = mint.init_extension::<InterestBearingConfig>(true)?;
extension.rate_authority = *rate_authority;
extension.initialization_timestamp = clock.unix_timestamp.into();
extension.last_update_timestamp = clock.unix_timestamp.into();
// There is no validation on the rate, since ridiculous values are *technically*
// possible!
extension.pre_update_average_rate = *rate;
extension.current_rate = *rate;
Ok(())
}
fn process_update_rate(
program_id: &Pubkey,
accounts: &[AccountInfo],
new_rate: &BasisPoints,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let mint_account_info = next_account_info(account_info_iter)?;
let owner_info = next_account_info(account_info_iter)?;
let owner_info_data_len = owner_info.data_len();
let mut mint_data = mint_account_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack(&mut mint_data)?;
let extension = mint.get_extension_mut::<InterestBearingConfig>()?;
let rate_authority =
Option::<Pubkey>::from(extension.rate_authority).ok_or(TokenError::NoAuthorityExists)?;
Processor::validate_owner(
program_id,
&rate_authority,
owner_info,
owner_info_data_len,
account_info_iter.as_slice(),
)?;
let clock = Clock::get()?;
let new_average_rate = extension
.time_weighted_average_rate(clock.unix_timestamp)
.ok_or(TokenError::Overflow)?;
extension.pre_update_average_rate = new_average_rate.into();
extension.last_update_timestamp = clock.unix_timestamp.into();
// There is no validation on the rate, since ridiculous values are *technically*
// possible!
extension.current_rate = *new_rate;
Ok(())
}
pub(crate) fn process_instruction(
program_id: &Pubkey,
accounts: &[AccountInfo],
input: &[u8],
) -> ProgramResult {
check_program_account(program_id)?;
match decode_instruction_type(input)? {
InterestBearingMintInstruction::Initialize => {
msg!("InterestBearingMintInstruction::Initialize");
let InitializeInstructionData {
rate_authority,
rate,
} = decode_instruction_data(input)?;
process_initialize(program_id, accounts, rate_authority, rate)
}
InterestBearingMintInstruction::UpdateRate => {
msg!("InterestBearingMintInstruction::UpdateRate");
let new_rate = decode_instruction_data(input)?;
process_update_rate(program_id, accounts, new_rate)
}
}
}
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/interest_bearing_mint/mod.rs
use {
crate::program::spl_token_2022::{
extension::{Extension, ExtensionType},
pod::{OptionalNonZeroPubkey, PodI16, PodI64},
},
bytemuck::{Pod, Zeroable},
solana_sdk::program_error::ProgramError,
std::convert::TryInto,
};
/// Interest-bearing mint extension instructions
pub mod instruction;
/// Interest-bearing mint extension processor
pub mod processor;
/// Annual interest rate, expressed as basis points
pub type BasisPoints = PodI16;
const ONE_IN_BASIS_POINTS: f64 = 10_000.;
const SECONDS_PER_YEAR: f64 = 60. * 60. * 24. * 365.24;
/// UnixTimestamp expressed with an alignment-independent type
pub type UnixTimestamp = PodI64;
/// Interest-bearing extension data for mints
///
/// Tokens accrue interest at an annual rate expressed by `current_rate`,
/// compounded continuously, so APY will be higher than the published interest
/// rate.
///
/// To support changing the rate, the config also maintains state for the previous
/// rate.
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, PartialEq, Pod, Zeroable)]
pub struct InterestBearingConfig {
/// Authority that can set the interest rate and authority
pub rate_authority: OptionalNonZeroPubkey,
/// Timestamp of initialization, from which to base interest calculations
pub initialization_timestamp: UnixTimestamp,
/// Average rate from initialization until the last time it was updated
pub pre_update_average_rate: BasisPoints,
/// Timestamp of the last update, used to calculate the total amount accrued
pub last_update_timestamp: UnixTimestamp,
/// Current rate, since the last update
pub current_rate: BasisPoints,
}
impl InterestBearingConfig {
fn pre_update_timespan(&self) -> Option<i64> {
i64::from(self.last_update_timestamp).checked_sub(self.initialization_timestamp.into())
}
fn pre_update_exp(&self) -> Option<f64> {
let numerator = (i16::from(self.pre_update_average_rate) as i128)
.checked_mul(self.pre_update_timespan()? as i128)? as f64;
let exponent = numerator / SECONDS_PER_YEAR / ONE_IN_BASIS_POINTS;
Some(exponent.exp())
}
fn post_update_timespan(&self, unix_timestamp: i64) -> Option<i64> {
unix_timestamp.checked_sub(self.last_update_timestamp.into())
}
fn post_update_exp(&self, unix_timestamp: i64) -> Option<f64> {
let numerator = (i16::from(self.current_rate) as i128)
.checked_mul(self.post_update_timespan(unix_timestamp)? as i128)?
as f64;
let exponent = numerator / SECONDS_PER_YEAR / ONE_IN_BASIS_POINTS;
Some(exponent.exp())
}
fn total_scale(&self, decimals: u8, unix_timestamp: i64) -> Option<f64> {
Some(
self.pre_update_exp()? * self.post_update_exp(unix_timestamp)?
/ 10_f64.powi(decimals as i32),
)
}
/// Convert a raw amount to its UI representation using the given decimals field.
/// Excess zeroes and any unneeded decimal point are trimmed.
pub fn amount_to_ui_amount(
&self,
amount: u64,
decimals: u8,
unix_timestamp: i64,
) -> Option<String> {
let scaled_amount_with_interest =
(amount as f64) * self.total_scale(decimals, unix_timestamp)?;
Some(scaled_amount_with_interest.to_string())
}
/// Try to convert a UI representation of a token amount to its raw amount using the given decimals
/// field
pub fn try_ui_amount_into_amount(
&self,
ui_amount: &str,
decimals: u8,
unix_timestamp: i64,
) -> Result<u64, ProgramError> {
let scaled_amount = ui_amount
.parse::<f64>()
.map_err(|_| ProgramError::InvalidArgument)?;
let amount = scaled_amount
/ self
.total_scale(decimals, unix_timestamp)
.ok_or(ProgramError::InvalidArgument)?;
if amount > (u64::MAX as f64) || amount < (u64::MIN as f64) || amount.is_nan() {
Err(ProgramError::InvalidArgument)
} else {
Ok(amount.round() as u64) // round only at the end; rounding earlier yields wrong "inf" answers
}
}
/// The new average rate is the time-weighted average of the current rate and average rate,
/// solving for r such that:
///
/// exp(r_1 * t_1) * exp(r_2 * t_2) = exp(r * (t_1 + t_2))
///
/// r_1 * t_1 + r_2 * t_2 = r * (t_1 + t_2)
///
/// r = (r_1 * t_1 + r_2 * t_2) / (t_1 + t_2)
pub fn time_weighted_average_rate(&self, current_timestamp: i64) -> Option<i16> {
let initialization_timestamp = i64::from(self.initialization_timestamp) as i128;
let last_update_timestamp = i64::from(self.last_update_timestamp) as i128;
let r_1 = i16::from(self.pre_update_average_rate) as i128;
let t_1 = last_update_timestamp.checked_sub(initialization_timestamp)?;
let r_2 = i16::from(self.current_rate) as i128;
let t_2 = (current_timestamp as i128).checked_sub(last_update_timestamp)?;
let total_timespan = t_1.checked_add(t_2)?;
let average_rate = if total_timespan == 0 {
// happens in testing situations, just use the new rate since the earlier
// one was never practically used
r_2
} else {
r_1.checked_mul(t_1)?
.checked_add(r_2.checked_mul(t_2)?)?
.checked_div(total_timespan)?
};
average_rate.try_into().ok()
}
}
impl Extension for InterestBearingConfig {
const TYPE: ExtensionType = ExtensionType::InterestBearingConfig;
}
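Both `total_scale` and `time_weighted_average_rate` reduce to a few lines of arithmetic. The sketch below re-derives them with plain `f64`/`i128` operations; the two constants mirror `ONE_IN_BASIS_POINTS` and `SECONDS_PER_YEAR` above, and everything else (function names, sample rates) is illustrative.

```rust
use std::convert::TryFrom;

const ONE_IN_BASIS_POINTS: f64 = 10_000.0;
const SECONDS_PER_YEAR: f64 = 60.0 * 60.0 * 24.0 * 365.24;

// Continuous-compounding scale over one rate period:
// exp(rate_bps * seconds / SECONDS_PER_YEAR / 10_000).
fn period_scale(rate_bps: i16, seconds: i64) -> f64 {
    (rate_bps as f64 * seconds as f64 / SECONDS_PER_YEAR / ONE_IN_BASIS_POINTS).exp()
}

// Time-weighted average rate: r = (r1*t1 + r2*t2) / (t1 + t2),
// computed in i128 to avoid intermediate overflow, as in the crate.
fn weighted_average(r1: i16, t1: i64, r2: i16, t2: i64) -> Option<i16> {
    let total = (t1 as i128).checked_add(t2 as i128)?;
    if total == 0 {
        return Some(r2); // no elapsed time: the old rate was never in effect
    }
    let sum = (r1 as i128 * t1 as i128).checked_add(r2 as i128 * t2 as i128)?;
    i16::try_from(sum / total).ok()
}

fn main() {
    // A 0% rate never scales the amount: exp(0) == 1.
    assert_eq!(period_scale(0, 1_000_000), 1.0);
    // Equal timespans average the two rates: (100*500 + 300*500) / 1000 = 200.
    assert_eq!(weighted_average(100, 500, 300, 500), Some(200));
    println!("ok");
}
```

The averaging identity is exactly the one in the doc comment: `exp(r_1 t_1) * exp(r_2 t_2) = exp(r (t_1 + t_2))`, so the exponents add and `r` is the time-weighted mean.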
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/interest_bearing_mint/instruction.rs
use {
crate::program::spl_token_2022::{
check_program_account,
extension::interest_bearing_mint::BasisPoints,
instruction::{encode_instruction, TokenInstruction},
pod::OptionalNonZeroPubkey,
},
bytemuck::{Pod, Zeroable},
num_enum::{IntoPrimitive, TryFromPrimitive},
solana_sdk::{
instruction::{AccountMeta, Instruction},
program_error::ProgramError,
pubkey::Pubkey,
},
std::convert::TryInto,
};
/// Interest-bearing mint extension instructions
#[derive(Clone, Copy, Debug, PartialEq, IntoPrimitive, TryFromPrimitive)]
#[repr(u8)]
pub enum InterestBearingMintInstruction {
/// Initialize a new mint with interest accrual.
///
/// Fails if the mint has already been initialized, so must be called before
/// `InitializeMint`.
///
/// The mint must have exactly enough space allocated for the base mint (82
/// bytes), plus 83 bytes of padding, 1 byte reserved for the account type,
/// then space required for this extension, plus any others.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The mint to initialize.
///
/// Data expected by this instruction:
/// `crate::extension::interest_bearing::instruction::InitializeInstructionData`
///
Initialize,
/// Update the interest rate. Only supported for mints that include the
/// `InterestBearingConfig` extension.
///
/// Accounts expected by this instruction:
///
/// * Single authority
/// 0. `[writable]` The mint.
/// 1. `[signer]` The mint rate authority.
///
/// * Multisignature authority
/// 0. `[writable]` The mint.
/// 1. `[]` The mint's multisignature rate authority.
/// 2. ..2+M `[signer]` M signer accounts.
///
/// Data expected by this instruction:
/// `crate::extension::interest_bearing::BasisPoints`
///
UpdateRate,
}
/// Data expected by `InterestBearing::Initialize`
#[derive(Clone, Copy, Pod, Zeroable)]
#[repr(C)]
pub struct InitializeInstructionData {
/// The public key for the account that can update the rate
pub rate_authority: OptionalNonZeroPubkey,
/// The initial interest rate
pub rate: BasisPoints,
}
/// Create an `Initialize` instruction
pub fn initialize(
token_program_id: &Pubkey,
mint: &Pubkey,
rate_authority: Option<Pubkey>,
rate: i16,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let accounts = vec![AccountMeta::new(*mint, false)];
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::InterestBearingMintExtension,
InterestBearingMintInstruction::Initialize,
&InitializeInstructionData {
rate_authority: rate_authority.try_into()?,
rate: rate.into(),
},
))
}
/// Create an `UpdateRate` instruction
pub fn update_rate(
token_program_id: &Pubkey,
mint: &Pubkey,
rate_authority: &Pubkey,
signers: &[&Pubkey],
rate: i16,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*mint, false),
AccountMeta::new_readonly(*rate_authority, signers.is_empty()),
];
for signer_pubkey in signers.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::InterestBearingMintExtension,
InterestBearingMintInstruction::UpdateRate,
&BasisPoints::from(rate),
))
}
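The account layout built by `update_rate` marks the authority as a signer only when no multisig signers follow; with a multisig, each member signs and the authority account itself is a readonly non-signer. This can be sketched without Solana types; `Meta` below is an illustrative stand-in for `AccountMeta`, not the real type.

```rust
// Illustrative stand-in for AccountMeta: a labeled account plus signer flag.
struct Meta {
    name: &'static str,
    is_signer: bool,
}

// Mirrors the meta list assembled by update_rate(): the rate authority signs
// directly when `signers` is empty; otherwise each multisig member signs.
fn rate_update_metas(signers: &[&'static str]) -> Vec<Meta> {
    let mut metas = vec![
        Meta { name: "mint", is_signer: false },
        Meta { name: "rate_authority", is_signer: signers.is_empty() },
    ];
    for s in signers {
        metas.push(Meta { name: s, is_signer: true });
    }
    metas
}

fn main() {
    // Single authority: the authority account is itself the signer.
    assert!(rate_update_metas(&[])[1].is_signer);
    // Multisig authority: the authority is readonly, each member signs.
    let metas = rate_update_metas(&["m1", "m2"]);
    assert!(!metas[1].is_signer);
    assert!(metas[2].is_signer && metas[3].is_signer);
    println!("ok: {} metas", metas.len());
}
```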
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/memo_transfer/processor.rs
use {
crate::program::spl_token_2022::{
check_program_account,
extension::{
memo_transfer::{instruction::RequiredMemoTransfersInstruction, MemoTransfer},
StateWithExtensionsMut,
},
instruction::decode_instruction_type,
processor::Processor,
state::Account,
},
solana_sdk::{
account_info::{next_account_info, AccountInfo},
entrypoint::ProgramResult,
msg,
pubkey::Pubkey,
},
};
fn process_enable_required_memo_transfers(
program_id: &Pubkey,
accounts: &[AccountInfo],
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let token_account_info = next_account_info(account_info_iter)?;
let owner_info = next_account_info(account_info_iter)?;
let owner_info_data_len = owner_info.data_len();
let mut account_data = token_account_info.data.borrow_mut();
let mut account = StateWithExtensionsMut::<Account>::unpack(&mut account_data)?;
Processor::validate_owner(
program_id,
&account.base.owner,
owner_info,
owner_info_data_len,
account_info_iter.as_slice(),
)?;
let extension = if let Ok(extension) = account.get_extension_mut::<MemoTransfer>() {
extension
} else {
account.init_extension::<MemoTransfer>(true)?
};
extension.require_incoming_transfer_memos = true.into();
Ok(())
}
fn process_disable_required_memo_transfers(
program_id: &Pubkey,
accounts: &[AccountInfo],
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let token_account_info = next_account_info(account_info_iter)?;
let owner_info = next_account_info(account_info_iter)?;
let owner_info_data_len = owner_info.data_len();
let mut account_data = token_account_info.data.borrow_mut();
let mut account = StateWithExtensionsMut::<Account>::unpack(&mut account_data)?;
Processor::validate_owner(
program_id,
&account.base.owner,
owner_info,
owner_info_data_len,
account_info_iter.as_slice(),
)?;
let extension = if let Ok(extension) = account.get_extension_mut::<MemoTransfer>() {
extension
} else {
account.init_extension::<MemoTransfer>(true)?
};
extension.require_incoming_transfer_memos = false.into();
Ok(())
}
#[allow(dead_code)]
pub(crate) fn process_instruction(
program_id: &Pubkey,
accounts: &[AccountInfo],
input: &[u8],
) -> ProgramResult {
check_program_account(program_id)?;
match decode_instruction_type(input)? {
RequiredMemoTransfersInstruction::Enable => {
msg!("RequiredMemoTransfersInstruction::Enable");
process_enable_required_memo_transfers(program_id, accounts)
}
RequiredMemoTransfersInstruction::Disable => {
msg!("RequiredMemoTransfersInstruction::Disable");
process_disable_required_memo_transfers(program_id, accounts)
}
}
}
}
// ---------------------------------------------------------------------------
// File: wasm/utils/solana-extra/src/program/spl_token_2022/extension/memo_transfer/mod.rs
// ---------------------------------------------------------------------------
use {
crate::program::{
spl_memo,
spl_token_2022::{
error::TokenError,
extension::{Extension, ExtensionType, StateWithExtensionsMut},
pod::PodBool,
state::Account,
},
},
bytemuck::{Pod, Zeroable},
solana_sdk::{
instruction::get_processed_sibling_instruction, program_error::ProgramError, pubkey::Pubkey,
},
};
/// Memo Transfer extension instructions
pub mod instruction;
/// Memo Transfer extension processor
pub mod processor;
/// Memo Transfer extension for Accounts
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, PartialEq, Pod, Zeroable)]
pub struct MemoTransfer {
/// Require transfers into this account to be accompanied by a memo
pub require_incoming_transfer_memos: PodBool,
}
impl Extension for MemoTransfer {
const TYPE: ExtensionType = ExtensionType::MemoTransfer;
}
/// Determine if a memo is required for transfers into this account
pub fn memo_required(account_state: &StateWithExtensionsMut<Account>) -> bool {
if let Ok(extension) = account_state.get_extension::<MemoTransfer>() {
return extension.require_incoming_transfer_memos.into();
}
false
}
/// Check if the previous sibling instruction is a memo
pub fn check_previous_sibling_instruction_is_memo() -> Result<(), ProgramError> {
let is_memo_program = |program_id: &Pubkey| -> bool {
program_id == &spl_memo::id() || program_id == &spl_memo::v1::id()
};
let previous_instruction = get_processed_sibling_instruction(0);
match previous_instruction {
Some(instruction) if is_memo_program(&instruction.program_id) => {}
_ => {
return Err(TokenError::NoMemo.into());
}
}
Ok(())
}
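The sibling-memo check above reduces to a single decision: the most recently processed sibling instruction must belong to one of the memo programs, otherwise the transfer fails with `NoMemo`. A minimal stdlib sketch of that decision, using stand-in string ids rather than real `Pubkey`s and `get_processed_sibling_instruction`:

```rust
// Stand-in model of check_previous_sibling_instruction_is_memo:
// `processed_siblings` lists program ids of already-processed sibling
// instructions, most recent first. Ids are illustrative strings, not
// real Solana program ids.
fn memo_check(processed_siblings: &[&str], memo_ids: &[&str]) -> Result<(), &'static str> {
    match processed_siblings.first() {
        // most recent sibling is a memo program: transfer may proceed
        Some(id) if memo_ids.contains(id) => Ok(()),
        // no sibling at all, or a non-memo sibling: reject
        _ => Err("NoMemo"),
    }
}

fn main() {
    let memos = ["Memo1111", "MemoV1"];
    assert!(memo_check(&["Memo1111"], &memos).is_ok());
    assert!(memo_check(&["SomeOther"], &memos).is_err());
    assert!(memo_check(&[], &memos).is_err());
    println!("ok");
}
```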
// ---------------------------------------------------------------------------
// File: wasm/utils/solana-extra/src/program/spl_token_2022/extension/memo_transfer/instruction.rs
// ---------------------------------------------------------------------------
use {
crate::program::spl_token_2022::{
check_program_account,
instruction::{encode_instruction, TokenInstruction},
},
num_enum::{IntoPrimitive, TryFromPrimitive},
solana_sdk::{
instruction::{AccountMeta, Instruction},
program_error::ProgramError,
pubkey::Pubkey,
},
};
/// Memo Transfer extension instructions
#[derive(Clone, Copy, Debug, PartialEq, IntoPrimitive, TryFromPrimitive)]
#[repr(u8)]
pub enum RequiredMemoTransfersInstruction {
/// Require memos for transfers into this Account. Adds the MemoTransfer extension to the
/// Account, if it doesn't already exist.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The account to update.
/// 1. `[signer]` The account's owner.
///
/// * Multisignature authority
/// 0. `[writable]` The account to update.
/// 1. `[]` The account's multisignature owner.
/// 2. ..2+M `[signer]` M signer accounts.
///
Enable,
/// Stop requiring memos for transfers into this Account.
///
/// Implicitly initializes the extension if it is not already present.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The account to update.
/// 1. `[signer]` The account's owner.
///
/// * Multisignature authority
/// 0. `[writable]` The account to update.
/// 1. `[]` The account's multisignature owner.
/// 2. ..2+M `[signer]` M signer accounts.
///
Disable,
}
/// Create an `Enable` instruction
pub fn enable_required_transfer_memos(
token_program_id: &Pubkey,
account: &Pubkey,
owner: &Pubkey,
signers: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*account, false),
AccountMeta::new_readonly(*owner, signers.is_empty()),
];
for signer_pubkey in signers.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::MemoTransferExtension,
RequiredMemoTransfersInstruction::Enable,
&(),
))
}
/// Create a `Disable` instruction
pub fn disable_required_transfer_memos(
token_program_id: &Pubkey,
account: &Pubkey,
owner: &Pubkey,
signers: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*account, false),
AccountMeta::new_readonly(*owner, signers.is_empty()),
];
for signer_pubkey in signers.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::MemoTransferExtension,
RequiredMemoTransfersInstruction::Disable,
&(),
))
}
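Both builders above share the same authority layout: with no extra signers the authority account is itself the signer, while with a multisig authority the authority account is read-only and each of the M provided signers is a read-only signer. A stdlib sketch of that layout, with `(key, is_signer)` tuples standing in for `AccountMeta`:

```rust
// Illustrative model of the shared authority/multisig meta pattern used by
// enable_required_transfer_memos and disable_required_transfer_memos.
// Keys are stand-in strings, not real Pubkeys.
fn authority_metas<'a>(authority: &'a str, signers: &[&'a str]) -> Vec<(&'a str, bool)> {
    // authority signs only when there are no separate multisig signers
    let mut metas = vec![(authority, signers.is_empty())];
    for signer in signers {
        metas.push((*signer, true)); // each multisig signer must sign
    }
    metas
}

fn main() {
    // single owner: the owner itself is the signer
    assert_eq!(authority_metas("owner", &[]), vec![("owner", true)]);
    // multisig owner: read-only authority followed by M signers
    assert_eq!(
        authority_metas("multisig", &["a", "b"]),
        vec![("multisig", false), ("a", true), ("b", true)]
    );
    println!("ok");
}
```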
// ---------------------------------------------------------------------------
// File: wasm/utils/solana-extra/src/program/spl_token_2022/extension/transfer_fee/processor.rs
// ---------------------------------------------------------------------------
use {
crate::program::spl_token_2022::{
check_program_account,
error::TokenError,
extension::{
transfer_fee::{
instruction::TransferFeeInstruction, TransferFee, TransferFeeAmount,
TransferFeeConfig, MAX_FEE_BASIS_POINTS,
},
StateWithExtensions, StateWithExtensionsMut,
},
processor::Processor,
state::{Account, Mint},
},
solana_sdk::{
account_info::{next_account_info, AccountInfo},
clock::Clock,
entrypoint::ProgramResult,
msg,
program_option::COption,
pubkey::Pubkey,
sysvar::Sysvar,
},
std::convert::TryInto,
};
fn process_initialize_transfer_fee_config(
accounts: &[AccountInfo],
transfer_fee_config_authority: COption<Pubkey>,
withdraw_withheld_authority: COption<Pubkey>,
transfer_fee_basis_points: u16,
maximum_fee: u64,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let mint_account_info = next_account_info(account_info_iter)?;
let mut mint_data = mint_account_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack_uninitialized(&mut mint_data)?;
let extension = mint.init_extension::<TransferFeeConfig>(true)?;
extension.transfer_fee_config_authority = transfer_fee_config_authority.try_into()?;
extension.withdraw_withheld_authority = withdraw_withheld_authority.try_into()?;
extension.withheld_amount = 0u64.into();
if transfer_fee_basis_points > MAX_FEE_BASIS_POINTS {
return Err(TokenError::TransferFeeExceedsMaximum.into());
}
// To be safe, set newer and older transfer fees to the same thing on init,
// but only newer will actually be used
let epoch = Clock::get()?.epoch;
let transfer_fee = TransferFee {
epoch: epoch.into(),
transfer_fee_basis_points: transfer_fee_basis_points.into(),
maximum_fee: maximum_fee.into(),
};
extension.older_transfer_fee = transfer_fee;
extension.newer_transfer_fee = transfer_fee;
Ok(())
}
fn process_set_transfer_fee(
program_id: &Pubkey,
accounts: &[AccountInfo],
transfer_fee_basis_points: u16,
maximum_fee: u64,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let mint_account_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let authority_info_data_len = authority_info.data_len();
let mut mint_data = mint_account_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack(&mut mint_data)?;
let extension = mint.get_extension_mut::<TransferFeeConfig>()?;
let transfer_fee_config_authority =
Option::<Pubkey>::from(extension.transfer_fee_config_authority)
.ok_or(TokenError::NoAuthorityExists)?;
Processor::validate_owner(
program_id,
&transfer_fee_config_authority,
authority_info,
authority_info_data_len,
account_info_iter.as_slice(),
)?;
if transfer_fee_basis_points > MAX_FEE_BASIS_POINTS {
return Err(TokenError::TransferFeeExceedsMaximum.into());
}
// When setting the transfer fee, we have two situations:
// * newer transfer fee epoch <= current epoch:
// newer transfer fee is the active one, so overwrite older transfer fee with newer, then overwrite newer transfer fee
// * newer transfer fee epoch >= next epoch:
// it was never used, so just overwrite next transfer fee
let epoch = Clock::get()?.epoch;
if u64::from(extension.newer_transfer_fee.epoch) <= epoch {
extension.older_transfer_fee = extension.newer_transfer_fee;
}
// set two epochs ahead to avoid rug pulls at the end of an epoch
let newer_fee_start_epoch = epoch.saturating_add(2);
let transfer_fee = TransferFee {
epoch: newer_fee_start_epoch.into(),
transfer_fee_basis_points: transfer_fee_basis_points.into(),
maximum_fee: maximum_fee.into(),
};
extension.newer_transfer_fee = transfer_fee;
Ok(())
}
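The epoch bookkeeping in `process_set_transfer_fee` can be restated with plain types: if the pending ("newer") fee has already activated, demote it to the older slot, then schedule the new fee two epochs ahead. A minimal stdlib model of the same two-slot schedule (the real code stores `PodU64`/`PodU16` fields inside the `TransferFeeConfig` extension):

```rust
// Two-slot fee schedule: `older` holds the fee in effect before
// `newer.epoch`, `newer` holds the fee in effect from `newer.epoch` on.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Fee {
    epoch: u64,
    basis_points: u16,
}

fn set_fee(older: &mut Fee, newer: &mut Fee, current_epoch: u64, basis_points: u16) {
    // the pending fee has activated, so it becomes the "older" fee
    if newer.epoch <= current_epoch {
        *older = *newer;
    }
    // schedule two epochs ahead to avoid surprise changes at an epoch boundary
    *newer = Fee {
        epoch: current_epoch.saturating_add(2),
        basis_points,
    };
}

fn main() {
    let mut older = Fee { epoch: 0, basis_points: 100 };
    let mut newer = Fee { epoch: 5, basis_points: 200 };
    // at epoch 10, the fee scheduled for epoch 5 is active and gets demoted
    set_fee(&mut older, &mut newer, 10, 300);
    assert_eq!(older, Fee { epoch: 5, basis_points: 200 });
    assert_eq!(newer, Fee { epoch: 12, basis_points: 300 });
    println!("ok");
}
```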
fn process_withdraw_withheld_tokens_from_mint(
program_id: &Pubkey,
accounts: &[AccountInfo],
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let mint_account_info = next_account_info(account_info_iter)?;
let destination_account_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let authority_info_data_len = authority_info.data_len();
let mut mint_data = mint_account_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack(&mut mint_data)?;
let extension = mint.get_extension_mut::<TransferFeeConfig>()?;
let withdraw_withheld_authority = Option::<Pubkey>::from(extension.withdraw_withheld_authority)
.ok_or(TokenError::NoAuthorityExists)?;
Processor::validate_owner(
program_id,
&withdraw_withheld_authority,
authority_info,
authority_info_data_len,
account_info_iter.as_slice(),
)?;
let mut destination_account_data = destination_account_info.data.borrow_mut();
let mut destination_account =
StateWithExtensionsMut::<Account>::unpack(&mut destination_account_data)?;
if destination_account.base.mint != *mint_account_info.key {
return Err(TokenError::MintMismatch.into());
}
if destination_account.base.is_frozen() {
return Err(TokenError::AccountFrozen.into());
}
let withheld_amount = u64::from(extension.withheld_amount);
extension.withheld_amount = 0.into();
destination_account.base.amount = destination_account
.base
.amount
.checked_add(withheld_amount)
.ok_or(TokenError::Overflow)?;
destination_account.pack_base();
Ok(())
}
fn harvest_from_account<'b>(
mint_key: &'b Pubkey,
token_account_info: &'b AccountInfo<'_>,
) -> Result<u64, TokenError> {
let mut token_account_data = token_account_info.data.borrow_mut();
let mut token_account = StateWithExtensionsMut::<Account>::unpack(&mut token_account_data)
.map_err(|_| TokenError::InvalidState)?;
if token_account.base.mint != *mint_key {
return Err(TokenError::MintMismatch);
}
check_program_account(token_account_info.owner).map_err(|_| TokenError::InvalidState)?;
let token_account_extension = token_account
.get_extension_mut::<TransferFeeAmount>()
.map_err(|_| TokenError::InvalidState)?;
let account_withheld_amount = u64::from(token_account_extension.withheld_amount);
token_account_extension.withheld_amount = 0.into();
Ok(account_withheld_amount)
}
fn process_harvest_withheld_tokens_to_mint(accounts: &[AccountInfo]) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let mint_account_info = next_account_info(account_info_iter)?;
let token_account_infos = account_info_iter.as_slice();
let mut mint_data = mint_account_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack(&mut mint_data)?;
let mint_extension = mint.get_extension_mut::<TransferFeeConfig>()?;
for token_account_info in token_account_infos {
match harvest_from_account(mint_account_info.key, token_account_info) {
Ok(amount) => {
let mint_withheld_amount = u64::from(mint_extension.withheld_amount);
mint_extension.withheld_amount = mint_withheld_amount
.checked_add(amount)
.ok_or(TokenError::Overflow)?
.into();
}
Err(e) => {
msg!("Error harvesting from {}: {}", token_account_info.key, e);
}
}
}
Ok(())
}
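The harvest loop above moves each account's withheld amount into the mint total with a checked add, and merely logs (rather than aborts on) accounts that fail to unpack. A stdlib model of that control flow, with `Result<u64, &str>` standing in for token accounts that may fail `harvest_from_account`:

```rust
// Stand-in model of process_harvest_withheld_tokens_to_mint's loop:
// good accounts have their withheld amount zeroed and accumulated into the
// mint; bad accounts are skipped, mirroring the msg! log in the processor.
fn harvest(
    mint_withheld: u64,
    accounts: &mut [Result<u64, &'static str>],
) -> Result<u64, &'static str> {
    let mut total = mint_withheld;
    for acct in accounts.iter_mut() {
        match acct {
            Ok(amount) => {
                // overflow on the mint total still fails the instruction
                total = total.checked_add(*amount).ok_or("Overflow")?;
                *amount = 0; // withheld amount has moved to the mint
            }
            Err(_) => { /* skipped: harvesting continues with the rest */ }
        }
    }
    Ok(total)
}

fn main() {
    let mut accounts = vec![Ok(10), Err("InvalidState"), Ok(5)];
    assert_eq!(harvest(100, &mut accounts), Ok(115));
    assert_eq!(accounts[0], Ok(0)); // harvested account is zeroed
    println!("ok");
}
```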
fn process_withdraw_withheld_tokens_from_accounts(
program_id: &Pubkey,
accounts: &[AccountInfo],
num_token_accounts: u8,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let mint_account_info = next_account_info(account_info_iter)?;
let destination_account_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let authority_info_data_len = authority_info.data_len();
let account_infos = account_info_iter.as_slice();
let num_signers = account_infos
.len()
.saturating_sub(num_token_accounts as usize);
let mint_data = mint_account_info.data.borrow();
let mint = StateWithExtensions::<Mint>::unpack(&mint_data)?;
let extension = mint.get_extension::<TransferFeeConfig>()?;
let withdraw_withheld_authority = Option::<Pubkey>::from(extension.withdraw_withheld_authority)
.ok_or(TokenError::NoAuthorityExists)?;
Processor::validate_owner(
program_id,
&withdraw_withheld_authority,
authority_info,
authority_info_data_len,
&account_infos[..num_signers],
)?;
let mut destination_account_data = destination_account_info.data.borrow_mut();
let mut destination_account =
StateWithExtensionsMut::<Account>::unpack(&mut destination_account_data)?;
if destination_account.base.mint != *mint_account_info.key {
return Err(TokenError::MintMismatch.into());
}
if destination_account.base.is_frozen() {
return Err(TokenError::AccountFrozen.into());
}
for account_info in &account_infos[num_signers..] {
// self-harvest, can't double-borrow the underlying data
if account_info.key == destination_account_info.key {
let token_account_extension = destination_account
.get_extension_mut::<TransferFeeAmount>()
.map_err(|_| TokenError::InvalidState)?;
let account_withheld_amount = u64::from(token_account_extension.withheld_amount);
token_account_extension.withheld_amount = 0.into();
destination_account.base.amount = destination_account
.base
.amount
.checked_add(account_withheld_amount)
.ok_or(TokenError::Overflow)?;
} else {
match harvest_from_account(mint_account_info.key, account_info) {
Ok(amount) => {
destination_account.base.amount = destination_account
.base
.amount
.checked_add(amount)
.ok_or(TokenError::Overflow)?;
}
Err(e) => {
msg!("Error harvesting from {}: {}", account_info.key, e);
}
}
}
}
destination_account.pack_base();
Ok(())
}
pub(crate) fn process_instruction(
program_id: &Pubkey,
accounts: &[AccountInfo],
instruction: TransferFeeInstruction,
) -> ProgramResult {
check_program_account(program_id)?;
match instruction {
TransferFeeInstruction::InitializeTransferFeeConfig {
transfer_fee_config_authority,
withdraw_withheld_authority,
transfer_fee_basis_points,
maximum_fee,
} => process_initialize_transfer_fee_config(
accounts,
transfer_fee_config_authority,
withdraw_withheld_authority,
transfer_fee_basis_points,
maximum_fee,
),
TransferFeeInstruction::TransferCheckedWithFee {
amount,
decimals,
fee,
} => {
msg!("TransferFeeInstruction: TransferCheckedWithFee");
Processor::process_transfer(program_id, accounts, amount, Some(decimals), Some(fee))
}
TransferFeeInstruction::WithdrawWithheldTokensFromMint => {
msg!("TransferFeeInstruction: WithdrawWithheldTokensFromMint");
process_withdraw_withheld_tokens_from_mint(program_id, accounts)
}
TransferFeeInstruction::WithdrawWithheldTokensFromAccounts { num_token_accounts } => {
msg!("TransferFeeInstruction: WithdrawWithheldTokensFromAccounts");
process_withdraw_withheld_tokens_from_accounts(program_id, accounts, num_token_accounts)
}
TransferFeeInstruction::HarvestWithheldTokensToMint => {
msg!("TransferFeeInstruction: HarvestWithheldTokensToMint");
process_harvest_withheld_tokens_to_mint(accounts)
}
TransferFeeInstruction::SetTransferFee {
transfer_fee_basis_points,
maximum_fee,
} => {
msg!("TransferFeeInstruction: SetTransferFee");
process_set_transfer_fee(program_id, accounts, transfer_fee_basis_points, maximum_fee)
}
}
}
// ---------------------------------------------------------------------------
// File: wasm/utils/solana-extra/src/program/spl_token_2022/extension/transfer_fee/mod.rs
// ---------------------------------------------------------------------------
use {
crate::program::spl_token_2022::{
error::TokenError,
extension::{Extension, ExtensionType},
pod::*,
},
bytemuck::{Pod, Zeroable},
solana_sdk::{clock::Epoch, entrypoint::ProgramResult},
std::{cmp, convert::TryFrom},
};
/// Transfer fee extension instructions
pub mod instruction;
/// Transfer fee extension processor
pub mod processor;
/// Maximum possible fee in basis points is 100%, aka 10_000 basis points
pub const MAX_FEE_BASIS_POINTS: u16 = 10_000;
const ONE_IN_BASIS_POINTS: u128 = MAX_FEE_BASIS_POINTS as u128;
/// Transfer fee information
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, PartialEq, Pod, Zeroable)]
pub struct TransferFee {
/// First epoch where the transfer fee takes effect
pub epoch: PodU64, // Epoch,
/// Maximum fee assessed on transfers, expressed as an amount of tokens
pub maximum_fee: PodU64,
/// Amount of transfer collected as fees, expressed as basis points of the
/// transfer amount, i.e. increments of 0.01%
pub transfer_fee_basis_points: PodU16,
}
impl TransferFee {
/// Calculate the transfer fee
pub fn calculate(&self, amount: u64) -> Option<u64> {
let transfer_fee_basis_points = u16::from(self.transfer_fee_basis_points) as u128;
if transfer_fee_basis_points == 0 || amount == 0 {
Some(0)
} else {
let numerator = (amount as u128).checked_mul(transfer_fee_basis_points)?;
let mut raw_fee = numerator.checked_div(ONE_IN_BASIS_POINTS)?;
let remainder = numerator.checked_rem(ONE_IN_BASIS_POINTS)?;
if remainder > 0 {
raw_fee = raw_fee.checked_add(1)?;
}
// guaranteed to be ok
let raw_fee = u64::try_from(raw_fee).ok()?;
Some(cmp::min(raw_fee, u64::from(self.maximum_fee)))
}
}
}
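`TransferFee::calculate` is ceiling division in basis points, capped at `maximum_fee`. A self-contained stdlib restatement of the same arithmetic, with the pod wrappers dropped and the explicit remainder check replaced by the equivalent add-then-divide ceiling trick:

```rust
use std::convert::TryFrom;

// Same math as TransferFee::calculate: fee = ceil(amount * bp / 10_000),
// capped at maximum_fee, with u128 intermediates to avoid overflow.
fn calculate_fee(amount: u64, basis_points: u16, maximum_fee: u64) -> Option<u64> {
    const ONE_IN_BASIS_POINTS: u128 = 10_000;
    if basis_points == 0 || amount == 0 {
        return Some(0);
    }
    let numerator = (amount as u128).checked_mul(basis_points as u128)?;
    // ceiling division: any remainder bumps the fee by one token
    let raw_fee = numerator
        .checked_add(ONE_IN_BASIS_POINTS - 1)?
        .checked_div(ONE_IN_BASIS_POINTS)?;
    Some(u64::try_from(raw_fee).ok()?.min(maximum_fee))
}

fn main() {
    // 1 basis point on 10_000 tokens is exactly 1 token
    assert_eq!(calculate_fee(10_000, 1, 5_000), Some(1));
    // 10_001 tokens rounds the fee up to 2
    assert_eq!(calculate_fee(10_001, 1, 5_000), Some(2));
    // huge transfers are capped at maximum_fee
    assert_eq!(calculate_fee(u64::MAX, 1, 5_000), Some(5_000));
    // zero amount or zero basis points always yields zero
    assert_eq!(calculate_fee(0, 100, 5_000), Some(0));
    println!("ok");
}
```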
/// Transfer fee extension data for mints.
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, PartialEq, Pod, Zeroable)]
pub struct TransferFeeConfig {
/// Optional authority to set the fee
pub transfer_fee_config_authority: OptionalNonZeroPubkey,
/// Withdraw from mint instructions must be signed by this key
pub withdraw_withheld_authority: OptionalNonZeroPubkey,
/// Withheld transfer fee tokens that have been moved to the mint for withdrawal
pub withheld_amount: PodU64,
/// Older transfer fee, used if the current epoch < newer_transfer_fee.epoch
pub older_transfer_fee: TransferFee,
/// Newer transfer fee, used if the current epoch >= newer_transfer_fee.epoch
pub newer_transfer_fee: TransferFee,
}
impl TransferFeeConfig {
/// Get the fee for the given epoch
pub fn get_epoch_fee(&self, epoch: Epoch) -> &TransferFee {
if epoch >= self.newer_transfer_fee.epoch.into() {
&self.newer_transfer_fee
} else {
&self.older_transfer_fee
}
}
/// Calculate the fee for the given epoch
pub fn calculate_epoch_fee(&self, epoch: Epoch, amount: u64) -> Option<u64> {
self.get_epoch_fee(epoch).calculate(amount)
}
}
impl Extension for TransferFeeConfig {
const TYPE: ExtensionType = ExtensionType::TransferFeeConfig;
}
/// Transfer fee extension data for accounts.
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, PartialEq, Pod, Zeroable)]
pub struct TransferFeeAmount {
/// Amount withheld during transfers, to be harvested to the mint
pub withheld_amount: PodU64,
}
impl TransferFeeAmount {
/// Check if the extension is in a closable state
pub fn closable(&self) -> ProgramResult {
if self.withheld_amount == 0.into() {
Ok(())
} else {
Err(TokenError::AccountHasWithheldTransferFees.into())
}
}
}
impl Extension for TransferFeeAmount {
const TYPE: ExtensionType = ExtensionType::TransferFeeAmount;
}
#[cfg(test)]
pub(crate) mod test {
use {super::*, solana_sdk::pubkey::Pubkey, std::convert::TryFrom};
const NEWER_EPOCH: u64 = 100;
const OLDER_EPOCH: u64 = 1;
pub(crate) fn test_transfer_fee_config() -> TransferFeeConfig {
TransferFeeConfig {
transfer_fee_config_authority: OptionalNonZeroPubkey::try_from(Some(Pubkey::from(
[10; 32],
)))
.unwrap(),
withdraw_withheld_authority: OptionalNonZeroPubkey::try_from(Some(Pubkey::from(
[11; 32],
)))
.unwrap(),
withheld_amount: PodU64::from(u64::MAX),
older_transfer_fee: TransferFee {
epoch: PodU64::from(OLDER_EPOCH),
maximum_fee: PodU64::from(10),
transfer_fee_basis_points: PodU16::from(100),
},
newer_transfer_fee: TransferFee {
epoch: PodU64::from(NEWER_EPOCH),
maximum_fee: PodU64::from(5_000),
transfer_fee_basis_points: PodU16::from(1),
},
}
}
#[test]
fn epoch_fee() {
let transfer_fee_config = test_transfer_fee_config();
// during epoch 100 and after, use newer transfer fee
assert_eq!(
transfer_fee_config.get_epoch_fee(NEWER_EPOCH).epoch,
NEWER_EPOCH.into()
);
assert_eq!(
transfer_fee_config.get_epoch_fee(NEWER_EPOCH + 1).epoch,
NEWER_EPOCH.into()
);
assert_eq!(
transfer_fee_config.get_epoch_fee(u64::MAX).epoch,
NEWER_EPOCH.into()
);
// before that, use older transfer fee
assert_eq!(
transfer_fee_config.get_epoch_fee(NEWER_EPOCH - 1).epoch,
OLDER_EPOCH.into()
);
assert_eq!(
transfer_fee_config.get_epoch_fee(OLDER_EPOCH).epoch,
OLDER_EPOCH.into()
);
assert_eq!(
transfer_fee_config.get_epoch_fee(OLDER_EPOCH + 1).epoch,
OLDER_EPOCH.into()
);
}
#[test]
fn calculate_fee_max() {
let one = u64::try_from(ONE_IN_BASIS_POINTS).unwrap();
let transfer_fee = TransferFee {
epoch: PodU64::from(0),
maximum_fee: PodU64::from(5_000),
transfer_fee_basis_points: PodU16::from(1),
};
let maximum_fee = u64::from(transfer_fee.maximum_fee);
// hit maximum fee
assert_eq!(maximum_fee, transfer_fee.calculate(u64::MAX).unwrap());
// at exactly the max
assert_eq!(
maximum_fee,
transfer_fee.calculate(maximum_fee * one).unwrap()
);
// one token above, normally rounds up, but we're at the max
assert_eq!(
maximum_fee,
transfer_fee.calculate(maximum_fee * one + 1).unwrap()
);
// one token below, rounds up to the max
assert_eq!(
maximum_fee,
transfer_fee.calculate(maximum_fee * one - 1).unwrap()
);
}
#[test]
fn calculate_fee_min() {
let one = u64::try_from(ONE_IN_BASIS_POINTS).unwrap();
let transfer_fee = TransferFee {
epoch: PodU64::from(0),
maximum_fee: PodU64::from(5_000),
transfer_fee_basis_points: PodU16::from(1),
};
let minimum_fee = 1;
// hit minimum fee even with 1 token
assert_eq!(minimum_fee, transfer_fee.calculate(1).unwrap());
// still minimum at 2 tokens
assert_eq!(minimum_fee, transfer_fee.calculate(2).unwrap());
// still minimum at 10_000 tokens
assert_eq!(minimum_fee, transfer_fee.calculate(one).unwrap());
// 2 token fee at 10_001
assert_eq!(minimum_fee + 1, transfer_fee.calculate(one + 1).unwrap());
// zero is always zero
assert_eq!(0, transfer_fee.calculate(0).unwrap());
}
#[test]
fn calculate_fee_zero() {
let one = u64::try_from(ONE_IN_BASIS_POINTS).unwrap();
let transfer_fee = TransferFee {
epoch: PodU64::from(0),
maximum_fee: PodU64::from(u64::MAX),
transfer_fee_basis_points: PodU16::from(0),
};
// always zero fee
assert_eq!(0, transfer_fee.calculate(0).unwrap());
assert_eq!(0, transfer_fee.calculate(u64::MAX).unwrap());
assert_eq!(0, transfer_fee.calculate(1).unwrap());
assert_eq!(0, transfer_fee.calculate(one).unwrap());
let transfer_fee = TransferFee {
epoch: PodU64::from(0),
maximum_fee: PodU64::from(0),
transfer_fee_basis_points: PodU16::from(MAX_FEE_BASIS_POINTS),
};
// always zero fee
assert_eq!(0, transfer_fee.calculate(0).unwrap());
assert_eq!(0, transfer_fee.calculate(u64::MAX).unwrap());
assert_eq!(0, transfer_fee.calculate(1).unwrap());
assert_eq!(0, transfer_fee.calculate(one).unwrap());
}
}
// ---------------------------------------------------------------------------
// File: wasm/utils/solana-extra/src/program/spl_token_2022/extension/transfer_fee/instruction.rs
// ---------------------------------------------------------------------------
use {
crate::program::spl_token_2022::{
check_program_account, error::TokenError, instruction::TokenInstruction,
},
solana_sdk::{
instruction::{AccountMeta, Instruction},
program_error::ProgramError,
program_option::COption,
pubkey::Pubkey,
},
std::convert::TryFrom,
};
/// Transfer Fee extension instructions
#[derive(Clone, Copy, Debug, PartialEq)]
#[repr(u8)]
pub enum TransferFeeInstruction {
/// Initialize the transfer fee on a new mint.
///
/// Fails if the mint has already been initialized, so must be called before
/// `InitializeMint`.
///
/// The mint must have exactly enough space allocated for the base mint (82
/// bytes), plus 83 bytes of padding, 1 byte reserved for the account type,
/// then space required for this extension, plus any others.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The mint to initialize.
InitializeTransferFeeConfig {
/// Pubkey that may update the fees
transfer_fee_config_authority: COption<Pubkey>,
/// Withdraw instructions must be signed by this key
withdraw_withheld_authority: COption<Pubkey>,
/// Amount of transfer collected as fees, expressed as basis points of the
/// transfer amount
transfer_fee_basis_points: u16,
/// Maximum fee assessed on transfers
maximum_fee: u64,
},
/// Transfer, providing expected mint information and fees
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The source account. Must include the `TransferFeeAmount` extension.
/// 1. `[]` The token mint. Must include the `TransferFeeConfig` extension.
/// 2. `[writable]` The destination account. Must include the `TransferFeeAmount` extension.
/// 3. `[signer]` The source account's owner/delegate.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The source account.
/// 1. `[]` The token mint.
/// 2. `[writable]` The destination account.
/// 3. `[]` The source account's multisignature owner/delegate.
/// 4. ..4+M `[signer]` M signer accounts.
TransferCheckedWithFee {
/// The amount of tokens to transfer.
amount: u64,
/// Expected number of base 10 digits to the right of the decimal place.
decimals: u8,
/// Expected fee assessed on this transfer, calculated off-chain based on
/// the transfer_fee_basis_points and maximum_fee of the mint.
fee: u64,
},
/// Transfer all withheld tokens in the mint to an account. Signed by the mint's
/// withdraw withheld tokens authority.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The token mint. Must include the `TransferFeeConfig` extension.
/// 1. `[writable]` The fee receiver account. Must include the `TransferFeeAmount` extension
/// associated with the provided mint.
/// 2. `[signer]` The mint's `withdraw_withheld_authority`.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The token mint.
/// 1. `[writable]` The destination account.
/// 2. `[]` The mint's multisig `withdraw_withheld_authority`.
/// 3. ..3+M `[signer]` M signer accounts.
WithdrawWithheldTokensFromMint,
/// Transfer all withheld tokens to an account. Signed by the mint's
/// withdraw withheld tokens authority.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[]` The token mint. Must include the `TransferFeeConfig` extension.
/// 1. `[writable]` The fee receiver account. Must include the `TransferFeeAmount`
/// extension and be associated with the provided mint.
/// 2. `[signer]` The mint's `withdraw_withheld_authority`.
/// 3. ..3+N `[writable]` The source accounts to withdraw from.
///
/// * Multisignature owner/delegate
/// 0. `[]` The token mint.
/// 1. `[writable]` The destination account.
/// 2. `[]` The mint's multisig `withdraw_withheld_authority`.
/// 3. ..3+M `[signer]` M signer accounts.
/// 3+M+1. ..3+M+N `[writable]` The source accounts to withdraw from.
WithdrawWithheldTokensFromAccounts {
/// Number of token accounts harvested
num_token_accounts: u8,
},
/// Permissionless instruction to transfer all withheld tokens to the mint.
///
/// Succeeds for frozen accounts.
///
/// Accounts provided should include the `TransferFeeAmount` extension. If not,
/// the account is skipped.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The mint.
/// 1. ..1+N `[writable]` The source accounts to harvest from.
HarvestWithheldTokensToMint,
/// Set transfer fee. Only supported for mints that include the `TransferFeeConfig` extension.
///
/// Accounts expected by this instruction:
///
/// * Single authority
/// 0. `[writable]` The mint.
/// 1. `[signer]` The mint's fee account owner.
///
/// * Multisignature authority
/// 0. `[writable]` The mint.
/// 1. `[]` The mint's multisignature fee account owner.
/// 2. ..2+M `[signer]` M signer accounts.
SetTransferFee {
/// Amount of transfer collected as fees, expressed as basis points of the
/// transfer amount
transfer_fee_basis_points: u16,
/// Maximum fee assessed on transfers
maximum_fee: u64,
},
}
impl TransferFeeInstruction {
/// Unpacks a byte buffer into a TransferFeeInstruction
pub fn unpack(input: &[u8]) -> Result<(Self, &[u8]), ProgramError> {
use TokenError::InvalidInstruction;
let (&tag, rest) = input.split_first().ok_or(InvalidInstruction)?;
Ok(match tag {
0 => {
let (transfer_fee_config_authority, rest) =
TokenInstruction::unpack_pubkey_option(rest)?;
let (withdraw_withheld_authority, rest) =
TokenInstruction::unpack_pubkey_option(rest)?;
let (transfer_fee_basis_points, rest) = TokenInstruction::unpack_u16(rest)?;
let (maximum_fee, rest) = TokenInstruction::unpack_u64(rest)?;
let instruction = Self::InitializeTransferFeeConfig {
transfer_fee_config_authority,
withdraw_withheld_authority,
transfer_fee_basis_points,
maximum_fee,
};
(instruction, rest)
}
1 => {
let (amount, decimals, rest) = TokenInstruction::unpack_amount_decimals(rest)?;
let (fee, rest) = TokenInstruction::unpack_u64(rest)?;
let instruction = Self::TransferCheckedWithFee {
amount,
decimals,
fee,
};
(instruction, rest)
}
2 => (Self::WithdrawWithheldTokensFromMint, rest),
3 => {
let (&num_token_accounts, rest) = rest.split_first().ok_or(InvalidInstruction)?;
let instruction = Self::WithdrawWithheldTokensFromAccounts { num_token_accounts };
(instruction, rest)
}
4 => (Self::HarvestWithheldTokensToMint, rest),
5 => {
let (transfer_fee_basis_points, rest) = TokenInstruction::unpack_u16(rest)?;
let (maximum_fee, rest) = TokenInstruction::unpack_u64(rest)?;
let instruction = Self::SetTransferFee {
transfer_fee_basis_points,
maximum_fee,
};
(instruction, rest)
}
_ => return Err(TokenError::InvalidInstruction.into()),
})
}
/// Packs a TransferFeeInstruction into a byte buffer.
pub fn pack(&self, buffer: &mut Vec<u8>) {
match *self {
Self::InitializeTransferFeeConfig {
ref transfer_fee_config_authority,
ref withdraw_withheld_authority,
transfer_fee_basis_points,
maximum_fee,
} => {
buffer.push(0);
TokenInstruction::pack_pubkey_option(transfer_fee_config_authority, buffer);
TokenInstruction::pack_pubkey_option(withdraw_withheld_authority, buffer);
buffer.extend_from_slice(&transfer_fee_basis_points.to_le_bytes());
buffer.extend_from_slice(&maximum_fee.to_le_bytes());
}
Self::TransferCheckedWithFee {
amount,
decimals,
fee,
} => {
buffer.push(1);
buffer.extend_from_slice(&amount.to_le_bytes());
buffer.extend_from_slice(&decimals.to_le_bytes());
buffer.extend_from_slice(&fee.to_le_bytes());
}
Self::WithdrawWithheldTokensFromMint => {
buffer.push(2);
}
Self::WithdrawWithheldTokensFromAccounts { num_token_accounts } => {
buffer.push(3);
buffer.push(num_token_accounts);
}
Self::HarvestWithheldTokensToMint => {
buffer.push(4);
}
Self::SetTransferFee {
transfer_fee_basis_points,
maximum_fee,
} => {
buffer.push(5);
buffer.extend_from_slice(&transfer_fee_basis_points.to_le_bytes());
buffer.extend_from_slice(&maximum_fee.to_le_bytes());
}
}
}
}
/// Creates an `InitializeTransferFeeConfig` instruction
pub fn initialize_transfer_fee_config(
token_program_id: &Pubkey,
mint: &Pubkey,
transfer_fee_config_authority: Option<&Pubkey>,
withdraw_withheld_authority: Option<&Pubkey>,
transfer_fee_basis_points: u16,
maximum_fee: u64,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let transfer_fee_config_authority = transfer_fee_config_authority.cloned().into();
let withdraw_withheld_authority = withdraw_withheld_authority.cloned().into();
let data = TokenInstruction::TransferFeeExtension(
TransferFeeInstruction::InitializeTransferFeeConfig {
transfer_fee_config_authority,
withdraw_withheld_authority,
transfer_fee_basis_points,
maximum_fee,
},
)
.pack();
Ok(Instruction {
program_id: *token_program_id,
accounts: vec![AccountMeta::new(*mint, false)],
data,
})
}
/// Creates a `TransferCheckedWithFee` instruction
#[allow(clippy::too_many_arguments)]
pub fn transfer_checked_with_fee(
token_program_id: &Pubkey,
source: &Pubkey,
mint: &Pubkey,
destination: &Pubkey,
authority: &Pubkey,
signers: &[&Pubkey],
amount: u64,
decimals: u8,
fee: u64,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let data =
TokenInstruction::TransferFeeExtension(TransferFeeInstruction::TransferCheckedWithFee {
amount,
decimals,
fee,
})
.pack();
let mut accounts = Vec::with_capacity(4 + signers.len());
accounts.push(AccountMeta::new(*source, false));
accounts.push(AccountMeta::new_readonly(*mint, false));
accounts.push(AccountMeta::new(*destination, false));
accounts.push(AccountMeta::new_readonly(*authority, signers.is_empty()));
for signer in signers.iter() {
accounts.push(AccountMeta::new_readonly(**signer, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data,
})
}
/// Creates a `WithdrawWithheldTokensFromMint` instruction
pub fn withdraw_withheld_tokens_from_mint(
token_program_id: &Pubkey,
mint: &Pubkey,
destination: &Pubkey,
authority: &Pubkey,
signers: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = Vec::with_capacity(3 + signers.len());
accounts.push(AccountMeta::new(*mint, false));
accounts.push(AccountMeta::new(*destination, false));
accounts.push(AccountMeta::new_readonly(*authority, signers.is_empty()));
for signer in signers.iter() {
accounts.push(AccountMeta::new_readonly(**signer, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data: TokenInstruction::TransferFeeExtension(
TransferFeeInstruction::WithdrawWithheldTokensFromMint,
)
.pack(),
})
}
/// Creates a `WithdrawWithheldTokensFromAccounts` instruction
pub fn withdraw_withheld_tokens_from_accounts(
token_program_id: &Pubkey,
mint: &Pubkey,
destination: &Pubkey,
authority: &Pubkey,
signers: &[&Pubkey],
sources: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let num_token_accounts =
u8::try_from(sources.len()).map_err(|_| ProgramError::InvalidInstructionData)?;
let mut accounts = Vec::with_capacity(3 + signers.len() + sources.len());
accounts.push(AccountMeta::new_readonly(*mint, false));
accounts.push(AccountMeta::new(*destination, false));
accounts.push(AccountMeta::new_readonly(*authority, signers.is_empty()));
for signer in signers.iter() {
accounts.push(AccountMeta::new_readonly(**signer, true));
}
for source in sources.iter() {
accounts.push(AccountMeta::new(**source, false));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data: TokenInstruction::TransferFeeExtension(
TransferFeeInstruction::WithdrawWithheldTokensFromAccounts { num_token_accounts },
)
.pack(),
})
}
/// Creates a `HarvestWithheldTokensToMint` instruction
pub fn harvest_withheld_tokens_to_mint(
token_program_id: &Pubkey,
mint: &Pubkey,
sources: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = Vec::with_capacity(1 + sources.len());
accounts.push(AccountMeta::new(*mint, false));
for source in sources.iter() {
accounts.push(AccountMeta::new(**source, false));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data: TokenInstruction::TransferFeeExtension(
TransferFeeInstruction::HarvestWithheldTokensToMint,
)
.pack(),
})
}
/// Creates a `SetTransferFee` instruction
pub fn set_transfer_fee(
token_program_id: &Pubkey,
mint: &Pubkey,
authority: &Pubkey,
signers: &[&Pubkey],
transfer_fee_basis_points: u16,
maximum_fee: u64,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = Vec::with_capacity(2 + signers.len());
accounts.push(AccountMeta::new(*mint, false));
accounts.push(AccountMeta::new_readonly(*authority, signers.is_empty()));
for signer in signers.iter() {
accounts.push(AccountMeta::new_readonly(**signer, true));
}
Ok(Instruction {
program_id: *token_program_id,
accounts,
data: TokenInstruction::TransferFeeExtension(TransferFeeInstruction::SetTransferFee {
transfer_fee_basis_points,
maximum_fee,
})
.pack(),
})
}
#[cfg(test)]
mod test {
use super::*;
const TRANSFER_FEE_PREFIX: u8 = 26;
#[test]
fn test_instruction_packing() {
let check = TokenInstruction::TransferFeeExtension(
TransferFeeInstruction::InitializeTransferFeeConfig {
transfer_fee_config_authority: COption::Some(Pubkey::from([11u8; 32])),
withdraw_withheld_authority: COption::None,
transfer_fee_basis_points: 111,
maximum_fee: u64::MAX,
},
);
let packed = check.pack();
let mut expect = vec![TRANSFER_FEE_PREFIX, 0, 1];
expect.extend_from_slice(&[11u8; 32]);
expect.extend_from_slice(&[0]);
expect.extend_from_slice(&111u16.to_le_bytes());
expect.extend_from_slice(&u64::MAX.to_le_bytes());
assert_eq!(packed, expect);
let unpacked = TokenInstruction::unpack(&expect).unwrap();
assert_eq!(unpacked, check);
let check = TokenInstruction::TransferFeeExtension(
TransferFeeInstruction::TransferCheckedWithFee {
amount: 24,
decimals: 24,
fee: 23,
},
);
let packed = check.pack();
let mut expect = vec![TRANSFER_FEE_PREFIX, 1];
expect.extend_from_slice(&24u64.to_le_bytes());
expect.extend_from_slice(&[24u8]);
expect.extend_from_slice(&23u64.to_le_bytes());
assert_eq!(packed, expect);
let unpacked = TokenInstruction::unpack(&expect).unwrap();
assert_eq!(unpacked, check);
let check = TokenInstruction::TransferFeeExtension(
TransferFeeInstruction::WithdrawWithheldTokensFromMint,
);
let packed = check.pack();
let expect = [TRANSFER_FEE_PREFIX, 2];
assert_eq!(packed, expect);
let unpacked = TokenInstruction::unpack(&expect).unwrap();
assert_eq!(unpacked, check);
let num_token_accounts = 255;
let check = TokenInstruction::TransferFeeExtension(
TransferFeeInstruction::WithdrawWithheldTokensFromAccounts { num_token_accounts },
);
let packed = check.pack();
let expect = [TRANSFER_FEE_PREFIX, 3, num_token_accounts];
assert_eq!(packed, expect);
let unpacked = TokenInstruction::unpack(&expect).unwrap();
assert_eq!(unpacked, check);
let check = TokenInstruction::TransferFeeExtension(
TransferFeeInstruction::HarvestWithheldTokensToMint,
);
let packed = check.pack();
let expect = [TRANSFER_FEE_PREFIX, 4];
assert_eq!(packed, expect);
let unpacked = TokenInstruction::unpack(&expect).unwrap();
assert_eq!(unpacked, check);
let check =
TokenInstruction::TransferFeeExtension(TransferFeeInstruction::SetTransferFee {
transfer_fee_basis_points: u16::MAX,
maximum_fee: u64::MAX,
});
let packed = check.pack();
let mut expect = vec![TRANSFER_FEE_PREFIX, 5];
expect.extend_from_slice(&u16::MAX.to_le_bytes());
expect.extend_from_slice(&u64::MAX.to_le_bytes());
assert_eq!(packed, expect);
let unpacked = TokenInstruction::unpack(&expect).unwrap();
assert_eq!(unpacked, check);
}
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/confidential_transfer/processor.rs
use {
crate::program::spl_token_2022::{
check_program_account,
error::TokenError,
extension::{
confidential_transfer::{instruction::*, *},
StateWithExtensions, StateWithExtensionsMut,
},
instruction::{decode_instruction_data, decode_instruction_type},
processor::Processor,
state::{Account, Mint},
},
solana_sdk::{
account_info::{next_account_info, AccountInfo},
entrypoint::ProgramResult,
instruction::Instruction,
msg,
program_error::ProgramError,
pubkey::Pubkey,
sysvar::instructions::get_instruction_relative,
},
solana_zk_token_sdk::zk_token_proof_program,
};
// Remove feature once zk ops syscalls are enabled on all networks
#[cfg(feature = "zk-ops")]
use {
    crate::program::spl_token_2022::extension::transfer_fee::TransferFeeConfig,
solana_sdk::{clock::Clock, sysvar::Sysvar},
solana_zk_token_sdk::zk_token_elgamal::ops,
};
fn decode_proof_instruction<T: Pod>(
expected: ProofInstruction,
instruction: &Instruction,
) -> Result<&T, ProgramError> {
if instruction.program_id != zk_token_proof_program::id()
|| ProofInstruction::decode_type(&instruction.data) != Some(expected)
{
msg!("Unexpected proof instruction");
return Err(ProgramError::InvalidInstructionData);
}
ProofInstruction::decode_data(&instruction.data).ok_or(ProgramError::InvalidInstructionData)
}
/// Processes an [InitializeMint] instruction.
fn process_initialize_mint(
accounts: &[AccountInfo],
confidential_transfer_mint: &ConfidentialTransferMint,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let mint_info = next_account_info(account_info_iter)?;
check_program_account(mint_info.owner)?;
let mint_data = &mut mint_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack_uninitialized(mint_data)?;
*mint.init_extension::<ConfidentialTransferMint>(true)? = *confidential_transfer_mint;
Ok(())
}
/// Processes an [UpdateMint] instruction.
fn process_update_mint(
accounts: &[AccountInfo],
new_confidential_transfer_mint: &ConfidentialTransferMint,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let mint_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let new_authority_info = next_account_info(account_info_iter)?;
check_program_account(mint_info.owner)?;
let mint_data = &mut mint_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack(mint_data)?;
let confidential_transfer_mint = mint.get_extension_mut::<ConfidentialTransferMint>()?;
if authority_info.is_signer
&& confidential_transfer_mint.authority == *authority_info.key
&& (new_authority_info.is_signer || *new_authority_info.key == Pubkey::default())
&& new_confidential_transfer_mint.authority == *new_authority_info.key
{
*confidential_transfer_mint = *new_confidential_transfer_mint;
Ok(())
} else {
Err(ProgramError::MissingRequiredSignature)
}
}
/// Processes a [ConfigureAccount] instruction.
fn process_configure_account(
program_id: &Pubkey,
accounts: &[AccountInfo],
ConfigureAccountInstructionData {
encryption_pubkey,
decryptable_zero_balance,
maximum_pending_balance_credit_counter,
}: &ConfigureAccountInstructionData,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let token_account_info = next_account_info(account_info_iter)?;
let mint_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let authority_info_data_len = authority_info.data_len();
check_program_account(token_account_info.owner)?;
let token_account_data = &mut token_account_info.data.borrow_mut();
let mut token_account = StateWithExtensionsMut::<Account>::unpack(token_account_data)?;
if token_account.base.mint != *mint_info.key {
return Err(TokenError::MintMismatch.into());
}
Processor::validate_owner(
program_id,
&token_account.base.owner,
authority_info,
authority_info_data_len,
account_info_iter.as_slice(),
)?;
check_program_account(mint_info.owner)?;
let mint_data = &mut mint_info.data.borrow();
let mint = StateWithExtensions::<Mint>::unpack(mint_data)?;
let confidential_transfer_mint = mint.get_extension::<ConfidentialTransferMint>()?;
// Note: The caller is expected to use the `Reallocate` instruction to ensure there is
// sufficient room in their token account for the new `ConfidentialTransferAccount` extension
let mut confidential_transfer_account =
token_account.init_extension::<ConfidentialTransferAccount>(false)?;
confidential_transfer_account.approved = confidential_transfer_mint.auto_approve_new_accounts;
confidential_transfer_account.encryption_pubkey = *encryption_pubkey;
confidential_transfer_account.maximum_pending_balance_credit_counter =
*maximum_pending_balance_credit_counter;
/*
An ElGamal ciphertext is of the form
ElGamalCiphertext {
msg_comm: r * H + x * G
decrypt_handle: r * P
}
where
- G, H: constants for the system (RistrettoPoint)
- P: ElGamal public key component (RistrettoPoint)
- r: encryption randomness (Scalar)
- x: message (Scalar)
Upon receiving a `ConfigureAccount` instruction, the ZK Token program should encrypt x=0
(i.e. Scalar::zero()) and store it as `pending_balance_lo`, `pending_balance_hi`, and
`available_balance`.
For regular encryption, it is important that r is generated from a proper randomness source. But
for the `ConfigureAccount` instruction, it is already known that x is always 0. So r can just be
set to Scalar::zero().
This means that the ElGamalCiphertext should simply be
ElGamalCiphertext {
msg_comm: 0 * H + 0 * G = 0
decrypt_handle: 0 * P = 0
}
This should just be encoded as [0; 64]
*/
confidential_transfer_account.pending_balance_lo = EncryptedBalance::zeroed();
confidential_transfer_account.pending_balance_hi = EncryptedBalance::zeroed();
confidential_transfer_account.available_balance = EncryptedBalance::zeroed();
confidential_transfer_account.decryptable_available_balance = *decryptable_zero_balance;
confidential_transfer_account.allow_balance_credits = true.into();
confidential_transfer_account.pending_balance_credit_counter = 0.into();
confidential_transfer_account.expected_pending_balance_credit_counter = 0.into();
confidential_transfer_account.actual_pending_balance_credit_counter = 0.into();
confidential_transfer_account.withheld_amount = EncryptedWithheldAmount::zeroed();
Ok(())
}
/// Processes an [ApproveAccount] instruction.
fn process_approve_account(accounts: &[AccountInfo]) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let token_account_info = next_account_info(account_info_iter)?;
let mint_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
check_program_account(token_account_info.owner)?;
let token_account_data = &mut token_account_info.data.borrow_mut();
let mut token_account = StateWithExtensionsMut::<Account>::unpack(token_account_data)?;
check_program_account(mint_info.owner)?;
let mint_data = &mint_info.data.borrow_mut();
let mint = StateWithExtensions::<Mint>::unpack(mint_data)?;
let confidential_transfer_mint = mint.get_extension::<ConfidentialTransferMint>()?;
if authority_info.is_signer && *authority_info.key == confidential_transfer_mint.authority {
let mut confidential_transfer_state =
token_account.get_extension_mut::<ConfidentialTransferAccount>()?;
confidential_transfer_state.approved = true.into();
Ok(())
} else {
Err(ProgramError::MissingRequiredSignature)
}
}
/// Processes an [EmptyAccount] instruction.
fn process_empty_account(
program_id: &Pubkey,
accounts: &[AccountInfo],
proof_instruction_offset: i64,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let token_account_info = next_account_info(account_info_iter)?;
let instructions_sysvar_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let authority_info_data_len = authority_info.data_len();
check_program_account(token_account_info.owner)?;
let token_account_data = &mut token_account_info.data.borrow_mut();
let mut token_account = StateWithExtensionsMut::<Account>::unpack(token_account_data)?;
Processor::validate_owner(
program_id,
&token_account.base.owner,
authority_info,
authority_info_data_len,
account_info_iter.as_slice(),
)?;
let mut confidential_transfer_account =
token_account.get_extension_mut::<ConfidentialTransferAccount>()?;
let previous_instruction =
get_instruction_relative(proof_instruction_offset, instructions_sysvar_info)?;
let proof_data = decode_proof_instruction::<CloseAccountData>(
ProofInstruction::VerifyCloseAccount,
&previous_instruction,
)?;
if confidential_transfer_account.pending_balance_lo != EncryptedBalance::zeroed() {
msg!("Pending balance is not zero");
return Err(ProgramError::InvalidAccountData);
}
if confidential_transfer_account.pending_balance_hi != EncryptedBalance::zeroed() {
msg!("Pending balance is not zero");
return Err(ProgramError::InvalidAccountData);
}
if confidential_transfer_account.available_balance != proof_data.ciphertext {
msg!("Available balance mismatch");
return Err(ProgramError::InvalidInstructionData);
}
confidential_transfer_account.available_balance = EncryptedBalance::zeroed();
confidential_transfer_account.closable()?;
Ok(())
}
/// Processes a [Deposit] instruction.
#[cfg(feature = "zk-ops")]
fn process_deposit(
program_id: &Pubkey,
accounts: &[AccountInfo],
amount: u64,
expected_decimals: u8,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let token_account_info = next_account_info(account_info_iter)?;
let destination_token_account_info = next_account_info(account_info_iter)?;
let mint_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let authority_info_data_len = authority_info.data_len();
check_program_account(mint_info.owner)?;
let mint_data = &mint_info.data.borrow_mut();
let mint = StateWithExtensions::<Mint>::unpack(mint_data)?;
if expected_decimals != mint.base.decimals {
return Err(TokenError::MintDecimalsMismatch.into());
}
// Process source account
{
check_program_account(token_account_info.owner)?;
let token_account_data = &mut token_account_info.data.borrow_mut();
let mut token_account = StateWithExtensionsMut::<Account>::unpack(token_account_data)?;
Processor::validate_owner(
program_id,
&token_account.base.owner,
authority_info,
authority_info_data_len,
account_info_iter.as_slice(),
)?;
if token_account.base.is_frozen() {
return Err(TokenError::AccountFrozen.into());
}
if token_account.base.mint != *mint_info.key {
return Err(TokenError::MintMismatch.into());
}
// Wrapped SOL deposits are not supported because lamports cannot be vanished.
assert!(!token_account.base.is_native());
token_account.base.amount = token_account
.base
.amount
.checked_sub(amount)
.ok_or(TokenError::Overflow)?;
token_account.pack_base();
}
//
// Finished with the source token account at this point. Drop all references to it to avoid a
// double borrow if the source and destination accounts are the same
//
// Process destination account
{
check_program_account(destination_token_account_info.owner)?;
let destination_token_account_data = &mut destination_token_account_info.data.borrow_mut();
let mut destination_token_account =
StateWithExtensionsMut::<Account>::unpack(destination_token_account_data)?;
if destination_token_account.base.is_frozen() {
return Err(TokenError::AccountFrozen.into());
}
if destination_token_account.base.mint != *mint_info.key {
return Err(TokenError::MintMismatch.into());
}
let mut destination_confidential_transfer_account =
destination_token_account.get_extension_mut::<ConfidentialTransferAccount>()?;
destination_confidential_transfer_account.approved()?;
if !bool::from(&destination_confidential_transfer_account.allow_balance_credits) {
return Err(TokenError::ConfidentialTransferDepositsAndTransfersDisabled.into());
}
// Divide deposit into the low 16 and high 48 bits and then add to the appropriate pending
// ciphertexts
destination_confidential_transfer_account.pending_balance_lo = ops::add_to(
&destination_confidential_transfer_account.pending_balance_lo,
amount << PENDING_BALANCE_HI_BIT_LENGTH >> PENDING_BALANCE_HI_BIT_LENGTH,
)
.ok_or(ProgramError::InvalidInstructionData)?;
destination_confidential_transfer_account.pending_balance_hi = ops::add_to(
&destination_confidential_transfer_account.pending_balance_hi,
amount >> PENDING_BALANCE_LO_BIT_LENGTH,
)
.ok_or(ProgramError::InvalidInstructionData)?;
destination_confidential_transfer_account.pending_balance_credit_counter =
(u64::from(destination_confidential_transfer_account.pending_balance_credit_counter)
.checked_add(1)
.ok_or(ProgramError::InvalidInstructionData)?)
.into();
if u64::from(destination_confidential_transfer_account.pending_balance_credit_counter)
> u64::from(
destination_confidential_transfer_account.maximum_pending_balance_credit_counter,
)
{
return Err(TokenError::MaximumPendingBalanceCreditCounterExceeded.into());
}
}
Ok(())
}
/// Processes a [Withdraw] instruction.
#[cfg(feature = "zk-ops")]
fn process_withdraw(
program_id: &Pubkey,
accounts: &[AccountInfo],
amount: u64,
expected_decimals: u8,
new_decryptable_available_balance: DecryptableBalance,
proof_instruction_offset: i64,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let token_account_info = next_account_info(account_info_iter)?;
let destination_token_account_info = next_account_info(account_info_iter)?;
let mint_info = next_account_info(account_info_iter)?;
let instructions_sysvar_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let authority_info_data_len = authority_info.data_len();
check_program_account(mint_info.owner)?;
let mint_data = &mint_info.data.borrow_mut();
let mint = StateWithExtensions::<Mint>::unpack(mint_data)?;
if expected_decimals != mint.base.decimals {
return Err(TokenError::MintDecimalsMismatch.into());
}
let previous_instruction =
get_instruction_relative(proof_instruction_offset, instructions_sysvar_info)?;
let proof_data = decode_proof_instruction::<WithdrawData>(
ProofInstruction::VerifyWithdraw,
&previous_instruction,
)?;
// Process source account
{
check_program_account(token_account_info.owner)?;
let token_account_data = &mut token_account_info.data.borrow_mut();
let mut token_account = StateWithExtensionsMut::<Account>::unpack(token_account_data)?;
Processor::validate_owner(
program_id,
&token_account.base.owner,
authority_info,
authority_info_data_len,
account_info_iter.as_slice(),
)?;
if token_account.base.is_frozen() {
return Err(TokenError::AccountFrozen.into());
}
if token_account.base.mint != *mint_info.key {
return Err(TokenError::MintMismatch.into());
}
let mut confidential_transfer_account =
token_account.get_extension_mut::<ConfidentialTransferAccount>()?;
confidential_transfer_account.available_balance =
ops::subtract_from(&confidential_transfer_account.available_balance, amount)
.ok_or(ProgramError::InvalidInstructionData)?;
if confidential_transfer_account.available_balance != proof_data.final_ciphertext {
return Err(TokenError::ConfidentialTransferBalanceMismatch.into());
}
confidential_transfer_account.decryptable_available_balance =
new_decryptable_available_balance;
}
//
// Finished with the source token account at this point. Drop all references to it to avoid a
// double borrow if the source and destination accounts are the same
//
// Process destination account
{
check_program_account(destination_token_account_info.owner)?;
let destination_token_account_data = &mut destination_token_account_info.data.borrow_mut();
let mut destination_token_account =
StateWithExtensionsMut::<Account>::unpack(destination_token_account_data)?;
if destination_token_account.base.is_frozen() {
return Err(TokenError::AccountFrozen.into());
}
if destination_token_account.base.mint != *mint_info.key {
return Err(TokenError::MintMismatch.into());
}
// Wrapped SOL withdrawals are not supported because lamports cannot be apparated.
assert!(!destination_token_account.base.is_native());
destination_token_account.base.amount = destination_token_account
.base
.amount
.checked_add(amount)
.ok_or(TokenError::Overflow)?;
destination_token_account.pack_base();
}
Ok(())
}
/// Processes a [Transfer] instruction.
#[cfg(feature = "zk-ops")]
fn process_transfer(
program_id: &Pubkey,
accounts: &[AccountInfo],
new_source_decryptable_available_balance: DecryptableBalance,
proof_instruction_offset: i64,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let token_account_info = next_account_info(account_info_iter)?;
let destination_token_account_info = next_account_info(account_info_iter)?;
let mint_info = next_account_info(account_info_iter)?;
let instructions_sysvar_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
check_program_account(mint_info.owner)?;
let mint_data = &mint_info.data.borrow_mut();
let mint = StateWithExtensions::<Mint>::unpack(mint_data)?;
let confidential_transfer_mint = mint.get_extension::<ConfidentialTransferMint>()?;
let previous_instruction =
get_instruction_relative(proof_instruction_offset, instructions_sysvar_info)?;
if let Ok(transfer_fee_config) = mint.get_extension::<TransferFeeConfig>() {
// mint is extended for fees
let proof_data = decode_proof_instruction::<TransferWithFeeData>(
ProofInstruction::VerifyTransferWithFee,
&previous_instruction,
)?;
if proof_data.transfer_with_fee_pubkeys.auditor_pubkey
!= confidential_transfer_mint.auditor_encryption_pubkey
{
return Err(TokenError::ConfidentialTransferElGamalPubkeyMismatch.into());
}
// `withdraw_withheld_authority` ElGamal pubkey in proof data and mint must match
if proof_data
.transfer_with_fee_pubkeys
.withdraw_withheld_authority_pubkey
!= confidential_transfer_mint.withdraw_withheld_authority_encryption_pubkey
{
return Err(TokenError::ConfidentialTransferElGamalPubkeyMismatch.into());
}
// fee parameters in proof data and mint must match
let epoch = Clock::get()?.epoch;
let (maximum_fee, transfer_fee_basis_points) =
if u64::from(transfer_fee_config.newer_transfer_fee.epoch) < epoch {
(
u64::from(transfer_fee_config.older_transfer_fee.maximum_fee),
u16::from(
transfer_fee_config
.older_transfer_fee
.transfer_fee_basis_points,
),
)
} else {
(
u64::from(transfer_fee_config.newer_transfer_fee.maximum_fee),
u16::from(
transfer_fee_config
.newer_transfer_fee
.transfer_fee_basis_points,
),
)
};
if u64::from(proof_data.fee_parameters.maximum_fee) != maximum_fee
|| u16::from(proof_data.fee_parameters.fee_rate_basis_points)
!= transfer_fee_basis_points
{
return Err(TokenError::FeeParametersMismatch.into());
}
let source_ciphertext_lo = EncryptedBalance::from((
proof_data.ciphertext_lo.commitment,
proof_data.ciphertext_lo.source_handle,
));
let source_ciphertext_hi = EncryptedBalance::from((
proof_data.ciphertext_hi.commitment,
proof_data.ciphertext_hi.source_handle,
));
process_source_for_transfer(
program_id,
token_account_info,
mint_info,
authority_info,
account_info_iter.as_slice(),
&proof_data.transfer_with_fee_pubkeys.source_pubkey,
&source_ciphertext_lo,
&source_ciphertext_hi,
new_source_decryptable_available_balance,
)?;
let destination_ciphertext_lo = EncryptedBalance::from((
proof_data.ciphertext_lo.commitment,
proof_data.ciphertext_lo.destination_handle,
));
let destination_ciphertext_hi = EncryptedBalance::from((
proof_data.ciphertext_hi.commitment,
proof_data.ciphertext_hi.destination_handle,
));
let fee_ciphertext = if token_account_info.key == destination_token_account_info.key {
None
} else {
Some(proof_data.fee_ciphertext)
};
process_destination_for_transfer(
destination_token_account_info,
mint_info,
&proof_data.transfer_with_fee_pubkeys.destination_pubkey,
&destination_ciphertext_lo,
&destination_ciphertext_hi,
fee_ciphertext,
)?;
} else {
// mint is not extended for fees
let proof_data = decode_proof_instruction::<TransferData>(
ProofInstruction::VerifyTransfer,
&previous_instruction,
)?;
if proof_data.transfer_pubkeys.auditor_pubkey
!= confidential_transfer_mint.auditor_encryption_pubkey
{
return Err(TokenError::ConfidentialTransferElGamalPubkeyMismatch.into());
}
let source_ciphertext_lo = EncryptedBalance::from((
proof_data.ciphertext_lo.commitment,
proof_data.ciphertext_lo.source_handle,
));
let source_ciphertext_hi = EncryptedBalance::from((
proof_data.ciphertext_hi.commitment,
proof_data.ciphertext_hi.source_handle,
));
process_source_for_transfer(
program_id,
token_account_info,
mint_info,
authority_info,
account_info_iter.as_slice(),
&proof_data.transfer_pubkeys.source_pubkey,
&source_ciphertext_lo,
&source_ciphertext_hi,
new_source_decryptable_available_balance,
)?;
let destination_ciphertext_lo = EncryptedBalance::from((
proof_data.ciphertext_lo.commitment,
proof_data.ciphertext_lo.destination_handle,
));
let destination_ciphertext_hi = EncryptedBalance::from((
proof_data.ciphertext_hi.commitment,
proof_data.ciphertext_hi.destination_handle,
));
process_destination_for_transfer(
destination_token_account_info,
mint_info,
&proof_data.transfer_pubkeys.destination_pubkey,
&destination_ciphertext_lo,
&destination_ciphertext_hi,
None,
)?;
}
Ok(())
}
#[allow(clippy::too_many_arguments)]
#[cfg(feature = "zk-ops")]
fn process_source_for_transfer(
program_id: &Pubkey,
token_account_info: &AccountInfo,
mint_info: &AccountInfo,
authority_info: &AccountInfo,
signers: &[AccountInfo],
source_encryption_pubkey: &EncryptionPubkey,
source_ciphertext_lo: &EncryptedBalance,
source_ciphertext_hi: &EncryptedBalance,
new_source_decryptable_available_balance: DecryptableBalance,
) -> ProgramResult {
check_program_account(token_account_info.owner)?;
let authority_info_data_len = authority_info.data_len();
let token_account_data = &mut token_account_info.data.borrow_mut();
let mut token_account = StateWithExtensionsMut::<Account>::unpack(token_account_data)?;
Processor::validate_owner(
program_id,
&token_account.base.owner,
authority_info,
authority_info_data_len,
signers,
)?;
if token_account.base.is_frozen() {
return Err(TokenError::AccountFrozen.into());
}
if token_account.base.mint != *mint_info.key {
return Err(TokenError::MintMismatch.into());
}
let mut confidential_transfer_account =
token_account.get_extension_mut::<ConfidentialTransferAccount>()?;
confidential_transfer_account.approved()?;
if *source_encryption_pubkey != confidential_transfer_account.encryption_pubkey {
return Err(TokenError::ConfidentialTransferElGamalPubkeyMismatch.into());
}
let new_source_available_balance = {
ops::subtract_with_lo_hi(
&confidential_transfer_account.available_balance,
source_ciphertext_lo,
source_ciphertext_hi,
)
.ok_or(ProgramError::InvalidInstructionData)?
};
confidential_transfer_account.available_balance = new_source_available_balance;
confidential_transfer_account.decryptable_available_balance =
new_source_decryptable_available_balance;
Ok(())
}
#[cfg(feature = "zk-ops")]
fn process_destination_for_transfer(
destination_token_account_info: &AccountInfo,
mint_info: &AccountInfo,
destination_encryption_pubkey: &EncryptionPubkey,
destination_ciphertext_lo: &EncryptedBalance,
destination_ciphertext_hi: &EncryptedBalance,
encrypted_fee: Option<EncryptedFee>,
) -> ProgramResult {
check_program_account(destination_token_account_info.owner)?;
let destination_token_account_data = &mut destination_token_account_info.data.borrow_mut();
let mut destination_token_account =
StateWithExtensionsMut::<Account>::unpack(destination_token_account_data)?;
if destination_token_account.base.is_frozen() {
return Err(TokenError::AccountFrozen.into());
}
if destination_token_account.base.mint != *mint_info.key {
return Err(TokenError::MintMismatch.into());
}
let mut destination_confidential_transfer_account =
destination_token_account.get_extension_mut::<ConfidentialTransferAccount>()?;
destination_confidential_transfer_account.approved()?;
if !bool::from(&destination_confidential_transfer_account.allow_balance_credits) {
return Err(TokenError::ConfidentialTransferDepositsAndTransfersDisabled.into());
}
if *destination_encryption_pubkey != destination_confidential_transfer_account.encryption_pubkey
{
return Err(TokenError::ConfidentialTransferElGamalPubkeyMismatch.into());
}
let new_destination_pending_balance_lo = ops::add(
&destination_confidential_transfer_account.pending_balance_lo,
destination_ciphertext_lo,
)
.ok_or(ProgramError::InvalidInstructionData)?;
let new_destination_pending_balance_hi = ops::add(
&destination_confidential_transfer_account.pending_balance_hi,
destination_ciphertext_hi,
)
.ok_or(ProgramError::InvalidInstructionData)?;
let new_destination_pending_balance_credit_counter =
u64::from(destination_confidential_transfer_account.pending_balance_credit_counter)
.checked_add(1)
.ok_or(ProgramError::InvalidInstructionData)?;
if new_destination_pending_balance_credit_counter
> u64::from(
destination_confidential_transfer_account.maximum_pending_balance_credit_counter,
)
{
return Err(TokenError::MaximumPendingBalanceCreditCounterExceeded.into());
}
destination_confidential_transfer_account.pending_balance_lo =
new_destination_pending_balance_lo;
destination_confidential_transfer_account.pending_balance_hi =
new_destination_pending_balance_hi;
destination_confidential_transfer_account.pending_balance_credit_counter =
new_destination_pending_balance_credit_counter.into();
// update destination account withheld fees
if let Some(ciphertext_fee) = encrypted_fee {
let ciphertext_fee_destination: EncryptedWithheldAmount =
(ciphertext_fee.commitment, ciphertext_fee.destination_handle).into();
let ciphertext_fee_withheld_authority: EncryptedWithheldAmount = (
ciphertext_fee.commitment,
ciphertext_fee.withdraw_withheld_authority_handle,
)
.into();
// subtract fee from destination pending balance
let new_destination_pending_balance = ops::subtract(
&destination_confidential_transfer_account.pending_balance_lo,
&ciphertext_fee_destination,
)
.ok_or(ProgramError::InvalidInstructionData)?;
// add encrypted fee to current withheld fee
let new_withheld_amount = ops::add(
&destination_confidential_transfer_account.withheld_amount,
&ciphertext_fee_withheld_authority,
)
.ok_or(ProgramError::InvalidInstructionData)?;
destination_confidential_transfer_account.pending_balance_lo =
new_destination_pending_balance;
destination_confidential_transfer_account.withheld_amount = new_withheld_amount;
}
Ok(())
}
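The pending-balance credit counter enforced above can be pictured with a small plaintext sketch (hypothetical names): every credit bumps the counter, and crossing the account's configured maximum rejects the transfer, forcing the owner to run `ApplyPendingBalance` before receiving more credits.

```rust
/// Plaintext sketch of the credit-counter bookkeeping: checked increment,
/// then compare against the per-account maximum.
fn credit_pending_counter(counter: u64, maximum: u64) -> Result<u64, &'static str> {
    let next = counter.checked_add(1).ok_or("counter overflow")?;
    if next > maximum {
        Err("MaximumPendingBalanceCreditCounterExceeded")
    } else {
        Ok(next)
    }
}

fn main() {
    assert_eq!(credit_pending_counter(0, 2), Ok(1));
    // Exceeding the maximum rejects the credit.
    assert_eq!(
        credit_pending_counter(2, 2),
        Err("MaximumPendingBalanceCreditCounterExceeded")
    );
}
```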
/// Processes an [ApplyPendingBalance] instruction.
#[cfg(feature = "zk-ops")]
fn process_apply_pending_balance(
program_id: &Pubkey,
accounts: &[AccountInfo],
ApplyPendingBalanceData {
expected_pending_balance_credit_counter,
new_decryptable_available_balance,
}: &ApplyPendingBalanceData,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let token_account_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let authority_info_data_len = authority_info.data_len();
check_program_account(token_account_info.owner)?;
let token_account_data = &mut token_account_info.data.borrow_mut();
let mut token_account = StateWithExtensionsMut::<Account>::unpack(token_account_data)?;
Processor::validate_owner(
program_id,
&token_account.base.owner,
authority_info,
authority_info_data_len,
account_info_iter.as_slice(),
)?;
let mut confidential_transfer_account =
token_account.get_extension_mut::<ConfidentialTransferAccount>()?;
confidential_transfer_account.available_balance = ops::add_with_lo_hi(
&confidential_transfer_account.available_balance,
&confidential_transfer_account.pending_balance_lo,
&confidential_transfer_account.pending_balance_hi,
)
.ok_or(ProgramError::InvalidInstructionData)?;
confidential_transfer_account.actual_pending_balance_credit_counter =
confidential_transfer_account.pending_balance_credit_counter;
confidential_transfer_account.expected_pending_balance_credit_counter =
*expected_pending_balance_credit_counter;
confidential_transfer_account.decryptable_available_balance =
*new_decryptable_available_balance;
confidential_transfer_account.pending_balance_credit_counter = 0.into();
confidential_transfer_account.pending_balance_lo = EncryptedBalance::zeroed();
confidential_transfer_account.pending_balance_hi = EncryptedBalance::zeroed();
Ok(())
}
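In plaintext terms, `ApplyPendingBalance` folds both pending components into the available balance and then resets the pending state. A sketch with hypothetical types (the real version operates on ciphertexts and additionally records the expected and actual credit counters):

```rust
#[derive(Debug, PartialEq)]
struct PlainAccount {
    available: u64,
    pending_lo: u64,
    pending_hi: u64,
    credit_counter: u64,
}

/// Plaintext sketch of `ApplyPendingBalance`, assuming the 16-bit lo split.
fn apply_pending(acct: &mut PlainAccount) -> Option<()> {
    // Fold both pending components into the available balance...
    let pending = acct.pending_lo.checked_add(acct.pending_hi.checked_mul(1u64 << 16)?)?;
    acct.available = acct.available.checked_add(pending)?;
    // ...then reset the pending state, mirroring the zeroed ciphertexts above.
    acct.pending_lo = 0;
    acct.pending_hi = 0;
    acct.credit_counter = 0;
    Some(())
}

fn main() {
    let mut acct = PlainAccount { available: 5, pending_lo: 0x2345, pending_hi: 1, credit_counter: 3 };
    apply_pending(&mut acct).unwrap();
    assert_eq!(acct.available, 5 + 0x1_2345);
    assert_eq!(acct.credit_counter, 0);
}
```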
/// Processes a [DisableBalanceCredits] or [EnableBalanceCredits] instruction.
fn process_allow_balance_credits(
program_id: &Pubkey,
accounts: &[AccountInfo],
allow_balance_credits: bool,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let token_account_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let authority_info_data_len = authority_info.data_len();
check_program_account(token_account_info.owner)?;
let token_account_data = &mut token_account_info.data.borrow_mut();
let mut token_account = StateWithExtensionsMut::<Account>::unpack(token_account_data)?;
Processor::validate_owner(
program_id,
&token_account.base.owner,
authority_info,
authority_info_data_len,
account_info_iter.as_slice(),
)?;
let mut confidential_transfer_account =
token_account.get_extension_mut::<ConfidentialTransferAccount>()?;
confidential_transfer_account.allow_balance_credits = allow_balance_credits.into();
Ok(())
}
/// Processes a [WithdrawWithheldTokensFromMint] instruction.
#[cfg(feature = "zk-ops")]
fn process_withdraw_withheld_tokens_from_mint(
program_id: &Pubkey,
accounts: &[AccountInfo],
proof_instruction_offset: i64,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let mint_account_info = next_account_info(account_info_iter)?;
let destination_account_info = next_account_info(account_info_iter)?;
let instructions_sysvar_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let authority_info_data_len = authority_info.data_len();
let mut mint_data = mint_account_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack(&mut mint_data)?;
// mint must be extended for fees
{
let transfer_fee_config = mint.get_extension::<TransferFeeConfig>()?;
let withdraw_withheld_authority =
Option::<Pubkey>::from(transfer_fee_config.withdraw_withheld_authority)
.ok_or(TokenError::NoAuthorityExists)?;
Processor::validate_owner(
program_id,
&withdraw_withheld_authority,
authority_info,
authority_info_data_len,
account_info_iter.as_slice(),
)?;
} // free `transfer_fee_config` to borrow `confidential_transfer_mint` as mutable
let confidential_transfer_mint = mint.get_extension_mut::<ConfidentialTransferMint>()?;
// basic checks for the destination account - must be extended for confidential transfers
let mut destination_account_data = destination_account_info.data.borrow_mut();
let mut destination_account =
StateWithExtensionsMut::<Account>::unpack(&mut destination_account_data)?;
if destination_account.base.mint != *mint_account_info.key {
return Err(TokenError::MintMismatch.into());
}
if destination_account.base.is_frozen() {
return Err(TokenError::AccountFrozen.into());
}
let mut destination_confidential_transfer_account =
destination_account.get_extension_mut::<ConfidentialTransferAccount>()?;
destination_confidential_transfer_account.approved()?;
// verify consistency of proof data
let previous_instruction =
get_instruction_relative(proof_instruction_offset, instructions_sysvar_info)?;
let proof_data = decode_proof_instruction::<WithdrawWithheldTokensData>(
ProofInstruction::VerifyWithdrawWithheldTokens,
&previous_instruction,
)?;
// withdraw withheld authority ElGamal pubkey should match in the proof data and mint
if proof_data.withdraw_withheld_authority_pubkey
!= confidential_transfer_mint.withdraw_withheld_authority_encryption_pubkey
{
return Err(TokenError::ConfidentialTransferElGamalPubkeyMismatch.into());
}
// destination ElGamal pubkey should match in the proof data and destination account
if proof_data.destination_pubkey != destination_confidential_transfer_account.encryption_pubkey
{
return Err(TokenError::ConfidentialTransferElGamalPubkeyMismatch.into());
}
// withheld amount ciphertext must match in the proof data and mint
if proof_data.withdraw_withheld_authority_ciphertext
!= confidential_transfer_mint.withheld_amount
{
return Err(TokenError::ConfidentialTransferBalanceMismatch.into());
}
// The proof data contains the mint withheld amount encrypted under the destination ElGamal pubkey.
// This amount should be added to the destination pending balance.
let new_destination_pending_balance = ops::add(
&destination_confidential_transfer_account.pending_balance_lo,
&proof_data.destination_ciphertext,
)
.ok_or(ProgramError::InvalidInstructionData)?;
destination_confidential_transfer_account.pending_balance_lo = new_destination_pending_balance;
// fee is now withdrawn, so zero out mint withheld amount
confidential_transfer_mint.withheld_amount = EncryptedWithheldAmount::zeroed();
Ok(())
}
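Once the zero-knowledge proof ties the mint's withheld ciphertext to the destination ciphertext, the actual state transition is simple: credit the destination's pending balance and zero the mint's withheld amount so fees can only be withdrawn once. A plaintext sketch (hypothetical helper; the real credit is a homomorphic `ops::add` on ciphertexts):

```rust
/// Plaintext sketch of the withheld-fee withdrawal state transition.
fn withdraw_withheld(mint_withheld: &mut u64, destination_pending_lo: &mut u64) -> Option<()> {
    // Credit the destination's pending balance with the accumulated fees...
    *destination_pending_lo = destination_pending_lo.checked_add(*mint_withheld)?;
    // ...and zero the mint so a second withdrawal yields nothing.
    *mint_withheld = 0;
    Some(())
}

fn main() {
    let mut mint_withheld = 42;
    let mut pending = 8;
    withdraw_withheld(&mut mint_withheld, &mut pending).unwrap();
    assert_eq!((mint_withheld, pending), (0, 50));
}
```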
#[cfg(feature = "zk-ops")]
fn process_withdraw_withheld_tokens_from_accounts(
program_id: &Pubkey,
accounts: &[AccountInfo],
num_token_accounts: u8,
proof_instruction_offset: i64,
) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let mint_account_info = next_account_info(account_info_iter)?;
let destination_account_info = next_account_info(account_info_iter)?;
let instructions_sysvar_info = next_account_info(account_info_iter)?;
let authority_info = next_account_info(account_info_iter)?;
let authority_info_data_len = authority_info.data_len();
let account_infos = account_info_iter.as_slice();
let num_signers = account_infos
.len()
.saturating_sub(num_token_accounts as usize);
let mut mint_data = mint_account_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack(&mut mint_data)?;
// mint must be extended for fees
let transfer_fee_config = mint.get_extension::<TransferFeeConfig>()?;
let withdraw_withheld_authority =
Option::<Pubkey>::from(transfer_fee_config.withdraw_withheld_authority)
.ok_or(TokenError::NoAuthorityExists)?;
Processor::validate_owner(
program_id,
&withdraw_withheld_authority,
authority_info,
authority_info_data_len,
&account_infos[..num_signers],
)?;
let mut destination_account_data = destination_account_info.data.borrow_mut();
let mut destination_account =
StateWithExtensionsMut::<Account>::unpack(&mut destination_account_data)?;
if destination_account.base.mint != *mint_account_info.key {
return Err(TokenError::MintMismatch.into());
}
if destination_account.base.is_frozen() {
return Err(TokenError::AccountFrozen.into());
}
// sum up the withheld amounts in all the accounts
let mut aggregate_withheld_amount = EncryptedWithheldAmount::zeroed();
for account_info in &account_infos[num_signers..] {
// self-harvest, can't double-borrow the underlying data
if account_info.key == destination_account_info.key {
let confidential_transfer_destination_account = destination_account
.get_extension_mut::<ConfidentialTransferAccount>()
.map_err(|_| TokenError::InvalidState)?;
aggregate_withheld_amount = ops::add(
&aggregate_withheld_amount,
&confidential_transfer_destination_account.withheld_amount,
)
.ok_or(ProgramError::InvalidInstructionData)?;
confidential_transfer_destination_account.withheld_amount =
EncryptedWithheldAmount::zeroed();
} else {
match harvest_from_account(mint_account_info.key, account_info) {
Ok(encrypted_withheld_amount) => {
aggregate_withheld_amount =
ops::add(&aggregate_withheld_amount, &encrypted_withheld_amount)
.ok_or(ProgramError::InvalidInstructionData)?;
}
Err(e) => {
msg!("Error harvesting from {}: {}", account_info.key, e);
}
}
}
}
let mut destination_confidential_transfer_account =
destination_account.get_extension_mut::<ConfidentialTransferAccount>()?;
destination_confidential_transfer_account.approved()?;
// verify consistency of proof data
let previous_instruction =
get_instruction_relative(proof_instruction_offset, instructions_sysvar_info)?;
let proof_data = decode_proof_instruction::<WithdrawWithheldTokensData>(
ProofInstruction::VerifyWithdrawWithheldTokens,
&previous_instruction,
)?;
// withdraw withheld authority ElGamal pubkey should match in the proof data and mint
let confidential_transfer_mint = mint.get_extension_mut::<ConfidentialTransferMint>()?;
if proof_data.withdraw_withheld_authority_pubkey
!= confidential_transfer_mint.withdraw_withheld_authority_encryption_pubkey
{
return Err(TokenError::ConfidentialTransferElGamalPubkeyMismatch.into());
}
// destination ElGamal pubkey should match in the proof data and destination account
if proof_data.destination_pubkey != destination_confidential_transfer_account.encryption_pubkey
{
return Err(TokenError::ConfidentialTransferElGamalPubkeyMismatch.into());
}
// withheld amount ciphertext must match in the proof data and mint
if proof_data.withdraw_withheld_authority_ciphertext != aggregate_withheld_amount {
return Err(TokenError::ConfidentialTransferBalanceMismatch.into());
}
// add the sum of the withheld fees to destination pending balance
let new_destination_pending_balance = ops::add(
&destination_confidential_transfer_account.pending_balance_lo,
&proof_data.destination_ciphertext,
)
.ok_or(ProgramError::InvalidInstructionData)?;
destination_confidential_transfer_account.pending_balance_lo = new_destination_pending_balance;
Ok(())
}
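The account layout above relies on a single convention: after the four fixed accounts, the first `num_signers` remaining entries authorize the withdraw authority and everything after them is a token account to harvest from. A generic sketch of that partition (hypothetical helper name):

```rust
/// Split the trailing accounts into (signers, token accounts).
/// `saturating_sub` guards against a `num_token_accounts` value larger than
/// the slice itself: everything is then treated as a token account.
fn split_remaining_accounts<T>(rest: &[T], num_token_accounts: usize) -> (&[T], &[T]) {
    let num_signers = rest.len().saturating_sub(num_token_accounts);
    rest.split_at(num_signers)
}

fn main() {
    let rest = ["signer_a", "signer_b", "acct_1", "acct_2", "acct_3"];
    let (signers, token_accounts) = split_remaining_accounts(&rest, 3);
    assert_eq!(signers, &["signer_a", "signer_b"]);
    assert_eq!(token_accounts, &["acct_1", "acct_2", "acct_3"]);
}
```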
#[cfg(feature = "zk-ops")]
fn harvest_from_account<'a, 'b>(
mint_key: &'b Pubkey,
token_account_info: &'b AccountInfo<'a>,
) -> Result<EncryptedWithheldAmount, TokenError> {
let mut token_account_data = token_account_info.data.borrow_mut();
let mut token_account = StateWithExtensionsMut::<Account>::unpack(&mut token_account_data)
.map_err(|_| TokenError::InvalidState)?;
if token_account.base.mint != *mint_key {
return Err(TokenError::MintMismatch);
}
check_program_account(token_account_info.owner).map_err(|_| TokenError::InvalidState)?;
let confidential_transfer_token_account = token_account
.get_extension_mut::<ConfidentialTransferAccount>()
.map_err(|_| TokenError::InvalidState)?;
let withheld_amount = confidential_transfer_token_account.withheld_amount;
confidential_transfer_token_account.withheld_amount = EncryptedWithheldAmount::zeroed();
Ok(withheld_amount)
}
/// Processes a [HarvestWithheldTokensToMint] instruction.
#[cfg(feature = "zk-ops")]
fn process_harvest_withheld_tokens_to_mint(accounts: &[AccountInfo]) -> ProgramResult {
let account_info_iter = &mut accounts.iter();
let mint_account_info = next_account_info(account_info_iter)?;
let token_account_infos = account_info_iter.as_slice();
let mut mint_data = mint_account_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack(&mut mint_data)?;
mint.get_extension::<TransferFeeConfig>()?;
let confidential_transfer_mint = mint.get_extension_mut::<ConfidentialTransferMint>()?;
for token_account_info in token_account_infos {
match harvest_from_account(mint_account_info.key, token_account_info) {
Ok(withheld_amount) => {
let new_mint_withheld_amount = ops::add(
&confidential_transfer_mint.withheld_amount,
&withheld_amount,
)
.ok_or(ProgramError::InvalidInstructionData)?;
confidential_transfer_mint.withheld_amount = new_mint_withheld_amount;
}
Err(e) => {
msg!("Error harvesting from {}: {}", token_account_info.key, e);
}
}
}
Ok(())
}
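Note the error handling in the harvest loop above: accounts that fail to unpack are logged and skipped rather than aborting the whole instruction, so one malformed account cannot block fee collection, while arithmetic overflow still fails the instruction. A plaintext sketch of that best-effort aggregation (hypothetical helper; real harvesting unpacks accounts and zeroes their withheld ciphertexts):

```rust
/// Sum per-account withheld amounts, skipping accounts that errored but
/// propagating `None` on arithmetic overflow.
fn harvest_all(accounts: &[Result<u64, &'static str>]) -> Option<u64> {
    let mut aggregate: u64 = 0;
    for account in accounts {
        match account {
            Ok(withheld) => aggregate = aggregate.checked_add(*withheld)?,
            // Mirrors the `msg!("Error harvesting from {}: {}", ...)` arm.
            Err(e) => eprintln!("Error harvesting: {e}"),
        }
    }
    Some(aggregate)
}

fn main() {
    let accounts = [Ok(10), Err("InvalidState"), Ok(32)];
    assert_eq!(harvest_all(&accounts), Some(42));
}
```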
#[allow(dead_code)]
pub(crate) fn process_instruction(
program_id: &Pubkey,
accounts: &[AccountInfo],
input: &[u8],
) -> ProgramResult {
check_program_account(program_id)?;
match decode_instruction_type(input)? {
ConfidentialTransferInstruction::InitializeMint => {
msg!("ConfidentialTransferInstruction::InitializeMint");
process_initialize_mint(
accounts,
decode_instruction_data::<ConfidentialTransferMint>(input)?,
)
}
ConfidentialTransferInstruction::UpdateMint => {
msg!("ConfidentialTransferInstruction::UpdateMint");
process_update_mint(
accounts,
decode_instruction_data::<ConfidentialTransferMint>(input)?,
)
}
ConfidentialTransferInstruction::ConfigureAccount => {
msg!("ConfidentialTransferInstruction::ConfigureAccount");
process_configure_account(
program_id,
accounts,
decode_instruction_data::<ConfigureAccountInstructionData>(input)?,
)
}
ConfidentialTransferInstruction::ApproveAccount => {
msg!("ConfidentialTransferInstruction::ApproveAccount");
process_approve_account(accounts)
}
ConfidentialTransferInstruction::EmptyAccount => {
msg!("ConfidentialTransferInstruction::EmptyAccount");
let data = decode_instruction_data::<EmptyAccountInstructionData>(input)?;
process_empty_account(program_id, accounts, data.proof_instruction_offset as i64)
}
ConfidentialTransferInstruction::Deposit => {
msg!("ConfidentialTransferInstruction::Deposit");
#[cfg(feature = "zk-ops")]
{
let data = decode_instruction_data::<DepositInstructionData>(input)?;
process_deposit(program_id, accounts, data.amount.into(), data.decimals)
}
#[cfg(not(feature = "zk-ops"))]
Err(ProgramError::InvalidInstructionData)
}
ConfidentialTransferInstruction::Withdraw => {
msg!("ConfidentialTransferInstruction::Withdraw");
#[cfg(feature = "zk-ops")]
{
let data = decode_instruction_data::<WithdrawInstructionData>(input)?;
process_withdraw(
program_id,
accounts,
data.amount.into(),
data.decimals,
data.new_decryptable_available_balance,
data.proof_instruction_offset as i64,
)
}
#[cfg(not(feature = "zk-ops"))]
Err(ProgramError::InvalidInstructionData)
}
ConfidentialTransferInstruction::Transfer => {
msg!("ConfidentialTransferInstruction::Transfer");
#[cfg(feature = "zk-ops")]
{
let data = decode_instruction_data::<TransferInstructionData>(input)?;
process_transfer(
program_id,
accounts,
data.new_source_decryptable_available_balance,
data.proof_instruction_offset as i64,
)
}
#[cfg(not(feature = "zk-ops"))]
Err(ProgramError::InvalidInstructionData)
}
ConfidentialTransferInstruction::TransferWithFee => {
msg!("ConfidentialTransferInstruction::TransferWithFee");
#[cfg(feature = "zk-ops")]
{
let data = decode_instruction_data::<TransferWithFeeInstructionData>(input)?;
process_transfer(
program_id,
accounts,
data.new_source_decryptable_available_balance,
data.proof_instruction_offset as i64,
)
}
#[cfg(not(feature = "zk-ops"))]
{
Err(ProgramError::InvalidInstructionData)
}
}
ConfidentialTransferInstruction::ApplyPendingBalance => {
msg!("ConfidentialTransferInstruction::ApplyPendingBalance");
#[cfg(feature = "zk-ops")]
{
process_apply_pending_balance(
program_id,
accounts,
decode_instruction_data::<ApplyPendingBalanceData>(input)?,
)
}
#[cfg(not(feature = "zk-ops"))]
{
Err(ProgramError::InvalidInstructionData)
}
}
ConfidentialTransferInstruction::DisableBalanceCredits => {
msg!("ConfidentialTransferInstruction::DisableBalanceCredits");
process_allow_balance_credits(program_id, accounts, false)
}
ConfidentialTransferInstruction::EnableBalanceCredits => {
msg!("ConfidentialTransferInstruction::EnableBalanceCredits");
process_allow_balance_credits(program_id, accounts, true)
}
ConfidentialTransferInstruction::WithdrawWithheldTokensFromMint => {
msg!("ConfidentialTransferInstruction::WithdrawWithheldTokensFromMint");
#[cfg(feature = "zk-ops")]
{
let data = decode_instruction_data::<WithdrawWithheldTokensFromMintData>(input)?;
process_withdraw_withheld_tokens_from_mint(
program_id,
accounts,
data.proof_instruction_offset as i64,
)
}
#[cfg(not(feature = "zk-ops"))]
Err(ProgramError::InvalidInstructionData)
}
ConfidentialTransferInstruction::WithdrawWithheldTokensFromAccounts => {
msg!("ConfidentialTransferInstruction::WithdrawWithheldTokensFromAccounts");
#[cfg(feature = "zk-ops")]
{
let data =
decode_instruction_data::<WithdrawWithheldTokensFromAccountsData>(input)?;
process_withdraw_withheld_tokens_from_accounts(
program_id,
accounts,
data.num_token_accounts,
data.proof_instruction_offset as i64,
)
}
#[cfg(not(feature = "zk-ops"))]
Err(ProgramError::InvalidInstructionData)
}
ConfidentialTransferInstruction::HarvestWithheldTokensToMint => {
msg!("ConfidentialTransferInstruction::HarvestWithheldTokensToMint");
#[cfg(feature = "zk-ops")]
{
process_harvest_withheld_tokens_to_mint(accounts)
}
#[cfg(not(feature = "zk-ops"))]
{
Err(ProgramError::InvalidInstructionData)
}
}
}
}
// File: wasm/utils/solana-extra/src/program/spl_token_2022/extension/confidential_transfer/mod.rs
use {
crate::program::spl_token_2022::{
error::TokenError,
extension::{Extension, ExtensionType},
pod::*,
},
bytemuck::{Pod, Zeroable},
solana_sdk::{entrypoint::ProgramResult, pubkey::Pubkey},
solana_zk_token_sdk::zk_token_elgamal::pod,
};
/// Maximum bit length of any deposit or transfer amount
///
/// Any deposit or transfer amount must be less than 2^48
pub const MAXIMUM_DEPOSIT_TRANSFER_AMOUNT_BIT_LENGTH: usize = 48;
const PENDING_BALANCE_LO_BIT_LENGTH: usize = 16;
const PENDING_BALANCE_HI_BIT_LENGTH: usize = 48;
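These bit lengths interact as follows: a deposit or transfer amount must fit in 48 bits; its low 16 bits feed `pending_balance_lo` and the remaining 32 bits feed `pending_balance_hi`, which can itself accumulate up to 48 bits across many credits. A sketch with a hypothetical helper:

```rust
const MAXIMUM_AMOUNT_BITS: u32 = 48; // MAXIMUM_DEPOSIT_TRANSFER_AMOUNT_BIT_LENGTH
const LO_BITS: u32 = 16; // PENDING_BALANCE_LO_BIT_LENGTH

/// Validate an amount against the 48-bit ceiling and split it into the
/// (lo, hi) parts that are encrypted separately.
fn split_amount(amount: u64) -> Option<(u64, u64)> {
    if amount >> MAXIMUM_AMOUNT_BITS != 0 {
        return None; // exceeds 2^48 - 1
    }
    let lo = amount & ((1u64 << LO_BITS) - 1);
    let hi = amount >> LO_BITS;
    Some((lo, hi))
}

fn main() {
    assert_eq!(split_amount(0x1_2345), Some((0x2345, 0x1)));
    assert_eq!(split_amount(1u64 << 48), None);
}
```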
/// Confidential Transfer Extension instructions
pub mod instruction;
/// Confidential Transfer Extension processor
pub mod processor;
/// ElGamal public key used for encryption
pub type EncryptionPubkey = pod::ElGamalPubkey;
/// ElGamal ciphertext containing an account balance
pub type EncryptedBalance = pod::ElGamalCiphertext;
/// Authenticated encryption containing an account balance
pub type DecryptableBalance = pod::AeCiphertext;
/// (aggregated) ElGamal ciphertext containing a transfer fee
pub type EncryptedFee = pod::FeeEncryption;
/// ElGamal ciphertext containing a withheld amount
pub type EncryptedWithheldAmount = pod::ElGamalCiphertext;
/// Confidential transfer mint configuration
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, PartialEq, Pod, Zeroable)]
pub struct ConfidentialTransferMint {
/// Authority to modify the `ConfidentialTransferMint` configuration and to approve new
/// accounts (if `auto_approve_new_accounts` is true)
///
/// Note that setting an authority of `Pubkey::default()` is the idiomatic way to disable
/// future changes to the configuration.
///
/// The legacy Token Multisig account is not supported as the authority
pub authority: Pubkey,
/// Indicate if newly configured accounts must be approved by the `authority` before they may be
/// used by the user.
///
/// * If `true`, no approval is required and new accounts may be used immediately
    /// * If `false`, the authority must approve newly configured accounts (see
    ///   `ConfidentialTransferInstruction::ApproveAccount`)
pub auto_approve_new_accounts: PodBool,
    /// * If non-zero, transfers must include ElGamal ciphertext with this public key permitting the
/// auditor to decode the transfer amount.
/// * If all zero, auditing is currently disabled.
pub auditor_encryption_pubkey: EncryptionPubkey,
    /// * If non-zero, transfers must include ElGamal ciphertext of the transfer fee with this
/// public key. If this is the case, but the base mint is not extended for fees, then any
/// transfer will fail.
/// * If all zero, transfer fee is disabled. If this is the case, but the base mint is extended
/// for fees, then any transfer will fail.
pub withdraw_withheld_authority_encryption_pubkey: EncryptionPubkey,
/// Withheld transfer fee confidential tokens that have been moved to the mint for withdrawal.
/// This will always be zero if fees are never enabled.
pub withheld_amount: EncryptedWithheldAmount,
}
impl Extension for ConfidentialTransferMint {
const TYPE: ExtensionType = ExtensionType::ConfidentialTransferMint;
}
/// Confidential account state
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, PartialEq, Pod, Zeroable)]
pub struct ConfidentialTransferAccount {
/// `true` if this account has been approved for use. All confidential transfer operations for
/// the account will fail until approval is granted.
pub approved: PodBool,
/// The public key associated with ElGamal encryption
pub encryption_pubkey: EncryptionPubkey,
/// The low 16 bits of the pending balance (encrypted by `encryption_pubkey`)
pub pending_balance_lo: EncryptedBalance,
/// The high 48 bits of the pending balance (encrypted by `encryption_pubkey`)
pub pending_balance_hi: EncryptedBalance,
    /// The available balance (encrypted by `encryption_pubkey`)
pub available_balance: EncryptedBalance,
/// The decryptable available balance
pub decryptable_available_balance: DecryptableBalance,
/// `pending_balance` may only be credited by `Deposit` or `Transfer` instructions if `true`
pub allow_balance_credits: PodBool,
/// The total number of `Deposit` and `Transfer` instructions that have credited
/// `pending_balance`
pub pending_balance_credit_counter: PodU64,
/// The maximum number of `Deposit` and `Transfer` instructions that can credit
/// `pending_balance` before the `ApplyPendingBalance` instruction is executed
pub maximum_pending_balance_credit_counter: PodU64,
/// The `expected_pending_balance_credit_counter` value that was included in the last
/// `ApplyPendingBalance` instruction
pub expected_pending_balance_credit_counter: PodU64,
/// The actual `pending_balance_credit_counter` when the last `ApplyPendingBalance` instruction
/// was executed
pub actual_pending_balance_credit_counter: PodU64,
/// The withheld amount of fees. This will always be zero if fees are never enabled.
pub withheld_amount: EncryptedWithheldAmount,
}
impl Extension for ConfidentialTransferAccount {
const TYPE: ExtensionType = ExtensionType::ConfidentialTransferAccount;
}
impl ConfidentialTransferAccount {
/// Check if a `ConfidentialTransferAccount` has been approved for use
pub fn approved(&self) -> ProgramResult {
if bool::from(&self.approved) {
Ok(())
} else {
Err(TokenError::ConfidentialTransferAccountNotApproved.into())
}
}
/// Check if a `ConfidentialTransferAccount` is in a closable state
pub fn closable(&self) -> ProgramResult {
if self.pending_balance_lo == EncryptedBalance::zeroed()
&& self.pending_balance_hi == EncryptedBalance::zeroed()
&& self.available_balance == EncryptedBalance::zeroed()
&& self.withheld_amount == EncryptedWithheldAmount::zeroed()
{
Ok(())
} else {
Err(TokenError::ConfidentialTransferAccountHasBalance.into())
}
}
}
// File: wasm/utils/solana-extra/src/program/spl_token_2022/extension/confidential_transfer/instruction.rs
#[cfg(not(target_os = "solana"))]
use solana_zk_token_sdk::encryption::auth_encryption::AeCiphertext;
pub use solana_zk_token_sdk::zk_token_proof_instruction::*;
use {
crate::program::spl_token_2022::{
check_program_account,
extension::confidential_transfer::*,
instruction::{encode_instruction, TokenInstruction},
},
bytemuck::{Pod, Zeroable},
num_enum::{IntoPrimitive, TryFromPrimitive},
solana_sdk::{
instruction::{AccountMeta, Instruction},
program_error::ProgramError,
pubkey::Pubkey,
sysvar,
},
solana_zk_token_sdk::zk_token_elgamal::pod,
std::convert::TryFrom,
};
/// Confidential Transfer extension instructions
#[derive(Clone, Copy, Debug, TryFromPrimitive, IntoPrimitive)]
#[repr(u8)]
pub enum ConfidentialTransferInstruction {
/// Initializes confidential transfers for a mint.
///
/// The `ConfidentialTransferInstruction::InitializeMint` instruction requires no signers
/// and MUST be included within the same Transaction as `TokenInstruction::InitializeMint`.
/// Otherwise another party can initialize the configuration.
///
/// The instruction fails if the `TokenInstruction::InitializeMint` instruction has already
/// executed for the mint.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The SPL Token mint.
///
/// Data expected by this instruction:
/// `ConfidentialTransferMint`
///
InitializeMint,
/// Updates the confidential transfer mint configuration for a mint.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The SPL Token mint.
/// 1. `[signer]` Confidential transfer mint authority.
/// 2. `[signer]` New confidential transfer mint authority.
///
/// Data expected by this instruction:
/// `ConfidentialTransferMint`
///
UpdateMint,
/// Configures confidential transfers for a token account.
///
    /// The instruction fails if confidential transfers are already configured, or if the mint
/// was not initialized with confidential transfer support.
///
/// The instruction fails if the `TokenInstruction::InitializeAccount` instruction has not yet
/// successfully executed for the token account.
///
    /// Upon success, confidential deposits and transfers are enabled; use the
    /// `DisableBalanceCredits` instruction to disable them.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
    ///   0. `[writable]` The SPL Token account.
/// 1. `[]` The corresponding SPL Token mint.
/// 2. `[signer]` The single source account owner.
///
/// * Multisignature owner/delegate
    ///   0. `[writable]` The SPL Token account.
/// 1. `[]` The corresponding SPL Token mint.
/// 2. `[]` The multisig source account owner.
/// 3.. `[signer]` Required M signer accounts for the SPL Token Multisig account.
///
/// Data expected by this instruction:
/// `ConfigureAccountInstructionData`
///
ConfigureAccount,
/// Approves a token account for confidential transfers.
///
    /// Approval is only required when the `ConfidentialTransferMint::auto_approve_new_accounts`
    /// field is `false` in the SPL Token mint. This instruction must be executed after the account
/// owner configures their account for confidential transfers with
/// `ConfidentialTransferInstruction::ConfigureAccount`.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The SPL Token account to approve.
/// 1. `[]` The SPL Token mint.
    ///   2. `[signer]` Confidential transfer mint authority.
///
/// Data expected by this instruction:
/// None
///
ApproveAccount,
/// Prepare a token account for closing. The account must not hold any confidential tokens in
/// its pending or available balances. Use
/// `ConfidentialTransferInstruction::DisableBalanceCredits` to block balance credit changes
/// first if necessary.
///
/// Note that a newly configured account is always empty, so this instruction is not required
/// prior to account closing if no instructions beyond
/// `ConfidentialTransferInstruction::ConfigureAccount` have affected the token account.
///
/// * Single owner/delegate
/// 0. `[writable]` The SPL Token account.
/// 1. `[]` Instructions sysvar.
/// 2. `[signer]` The single account owner.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The SPL Token account.
/// 1. `[]` Instructions sysvar.
/// 2. `[]` The multisig account owner.
/// 3.. `[signer]` Required M signer accounts for the SPL Token Multisig account.
///
/// Data expected by this instruction:
/// `EmptyAccountInstructionData`
///
EmptyAccount,
/// Deposit SPL Tokens into the pending balance of a confidential token account.
///
/// The account owner can then invoke the `ApplyPendingBalance` instruction to roll the deposit
/// into their available balance at a time of their choosing.
///
/// Fails if the source or destination accounts are frozen.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The source SPL Token account.
/// 1. `[writable]` The destination SPL Token account with confidential transfers configured.
/// 2. `[]` The token mint.
/// 3. `[signer]` The single source account owner or delegate.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The source SPL Token account.
/// 1. `[writable]` The destination SPL Token account with confidential transfers configured.
/// 2. `[]` The token mint.
/// 3. `[]` The multisig source account owner or delegate.
/// 4.. `[signer]` Required M signer accounts for the SPL Token Multisig account.
///
/// Data expected by this instruction:
/// `DepositInstructionData`
///
Deposit,
/// Withdraw SPL Tokens from the available balance of a confidential token account.
///
/// Fails if the source or destination accounts are frozen.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The source SPL Token account with confidential transfers configured.
/// 1. `[writable]` The destination SPL Token account.
/// 2. `[]` The token mint.
/// 3. `[]` Instructions sysvar.
/// 4. `[signer]` The single source account owner.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The source SPL Token account with confidential transfers configured.
/// 1. `[writable]` The destination SPL Token account.
/// 2. `[]` The token mint.
/// 3. `[]` Instructions sysvar.
/// 4. `[]` The multisig source account owner.
/// 5.. `[signer]` Required M signer accounts for the SPL Token Multisig account.
///
/// Data expected by this instruction:
/// `WithdrawInstructionData`
///
Withdraw,
/// Transfer tokens confidentially.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The source SPL Token account.
/// 1. `[writable]` The destination SPL Token account.
/// 2. `[]` The token mint.
/// 3. `[]` Instructions sysvar.
/// 4. `[signer]` The single source account owner.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The source SPL Token account.
/// 1. `[writable]` The destination SPL Token account.
/// 2. `[]` The token mint.
/// 3. `[]` Instructions sysvar.
/// 4. `[]` The multisig source account owner.
/// 5.. `[signer]` Required M signer accounts for the SPL Token Multisig account.
///
/// Data expected by this instruction:
/// `TransferInstructionData`
///
Transfer,
/// Transfer tokens confidentially with fee.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The source SPL Token account.
/// 1. `[writable]` The destination SPL Token account.
/// 2. `[]` The token mint.
/// 3. `[]` Instructions sysvar.
/// 4. `[signer]` The single source account owner.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The source SPL Token account.
/// 1. `[writable]` The destination SPL Token account.
/// 2. `[]` The token mint.
/// 3. `[]` Instructions sysvar.
/// 4. `[]` The multisig source account owner.
/// 5.. `[signer]` Required M signer accounts for the SPL Token Multisig account.
///
/// Data expected by this instruction:
/// `TransferWithFeeInstructionData`
///
TransferWithFee,
/// Applies the pending balance to the available balance, based on the history of `Deposit`
/// and/or `Transfer` instructions.
///
/// After submitting `ApplyPendingBalance`, the client should compare
/// `ConfidentialTransferAccount::expected_pending_balance_credit_counter` with
/// `ConfidentialTransferAccount::actual_applied_pending_balance_instructions`. If they are
/// equal then the `ConfidentialTransferAccount::decryptable_available_balance` is consistent
/// with `ConfidentialTransferAccount::available_balance`. If they differ then there is more
/// pending balance to be applied.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The SPL Token account.
/// 1. `[signer]` The single account owner.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The SPL Token account.
/// 1. `[]` The multisig account owner.
/// 2.. `[signer]` Required M signer accounts for the SPL Token Multisig account.
///
/// Data expected by this instruction:
/// `ApplyPendingBalanceData`
///
ApplyPendingBalance,
/// Enable confidential transfer `Deposit` and `Transfer` instructions for a token account.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The SPL Token account.
/// 1. `[signer]` Single authority.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The SPL Token account.
/// 1. `[]` Multisig authority.
/// 2.. `[signer]` Required M signer accounts for the SPL Token Multisig account.
///
/// Data expected by this instruction:
/// None
///
EnableBalanceCredits,
/// Disable confidential transfer `Deposit` and `Transfer` instructions for a token account.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The SPL Token account.
/// 1. `[signer]` The single account owner.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The SPL Token account.
/// 1. `[]` The multisig account owner.
/// 2.. `[signer]` Required M signer accounts for the SPL Token Multisig account.
///
/// Data expected by this instruction:
/// None
///
DisableBalanceCredits,
/// Transfer all withheld confidential tokens in the mint to an account. Signed by the mint's
/// withdraw withheld tokens authority.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[writable]` The token mint. Must include the `TransferFeeConfig` extension.
/// 1. `[writable]` The fee receiver account. Must include the `TransferFeeAmount` and
/// `ConfidentialTransferAccount` extensions.
/// 2. `[]` Instructions sysvar.
/// 3. `[signer]` The mint's `withdraw_withheld_authority`.
///
/// * Multisignature owner/delegate
/// 0. `[writable]` The token mint. Must include the `TransferFeeConfig` extension.
/// 1. `[writable]` The fee receiver account. Must include the `TransferFeeAmount` and
/// `ConfidentialTransferAccount` extensions.
/// 2. `[]` Instructions sysvar.
/// 3. `[]` The mint's multisig `withdraw_withheld_authority`.
/// 4. ..3+M `[signer]` M signer accounts.
///
/// Data expected by this instruction:
/// `WithdrawWithheldTokensFromMintData`
///
WithdrawWithheldTokensFromMint,
/// Transfer all withheld tokens to an account. Signed by the mint's withdraw withheld tokens
/// authority. This instruction is susceptible to front-running. Use
/// `HarvestWithheldTokensToMint` and `WithdrawWithheldTokensFromMint` as an alternative.
///
/// Note on front-running: This instruction requires a zero-knowledge proof verification
/// instruction that is checked with respect to the account state (the currently withheld
/// fees). Suppose that a withdraw withheld authority generates the
/// `WithdrawWithheldTokensFromAccounts` instruction along with a corresponding zero-knowledge
/// proof for a specified set of accounts, and submits it on chain. If the withheld fees at any
/// of the specified accounts change before the `WithdrawWithheldTokensFromAccounts` is
/// executed on chain, the zero-knowledge proof will not verify with respect to the new state,
/// forcing the transaction to fail.
///
/// If front-running occurs, then users can look up the updated states of the accounts,
/// generate a new zero-knowledge proof and try again. Alternatively, withdraw withheld
/// authority can first move the withheld amount to the mint using
/// `HarvestWithheldTokensToMint` and then move the withheld fees from mint to a specified
/// destination account using `WithdrawWithheldTokensFromMint`.
///
/// Accounts expected by this instruction:
///
/// * Single owner/delegate
/// 0. `[]` The token mint. Must include the `TransferFeeConfig` extension.
/// 1. `[writable]` The fee receiver account. Must include the `TransferFeeAmount` and
/// `ConfidentialTransferAccount` extensions.
/// 2. `[]` Instructions sysvar.
/// 3. `[signer]` The mint's `withdraw_withheld_authority`.
/// 4. ..3+N `[writable]` The source accounts to withdraw from.
///
/// * Multisignature owner/delegate
/// 0. `[]` The token mint. Must include the `TransferFeeConfig` extension.
/// 1. `[writable]` The fee receiver account. Must include the `TransferFeeAmount` and
/// `ConfidentialTransferAccount` extensions.
/// 2. `[]` Instructions sysvar.
/// 3. `[]` The mint's multisig `withdraw_withheld_authority`.
/// 4. ..4+M `[signer]` M signer accounts.
/// 4+M+1. ..3+M+N `[writable]` The source accounts to withdraw from.
///
/// Data expected by this instruction:
/// `WithdrawWithheldTokensFromAccountsData`
///
WithdrawWithheldTokensFromAccounts,
/// Permissionless instruction to transfer all withheld confidential tokens to the mint.
///
/// Succeeds for frozen accounts.
///
/// Accounts provided should include both the `TransferFeeAmount` and
/// `ConfidentialTransferAccount` extensions. If not, the account is skipped.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The mint.
/// 1. ..1+N `[writable]` The source accounts to harvest from.
///
/// Data expected by this instruction:
/// None
///
HarvestWithheldTokensToMint,
}
/// Data expected by `ConfidentialTransferInstruction::ConfigureAccount`
#[derive(Clone, Copy, Pod, Zeroable)]
#[repr(C)]
pub struct ConfigureAccountInstructionData {
/// The public key associated with the account
pub encryption_pubkey: EncryptionPubkey,
/// The decryptable balance (always 0) once the configure account succeeds
pub decryptable_zero_balance: DecryptableBalance,
/// The maximum number of deposits and transfers that an account can receive before the
/// `ApplyPendingBalance` instruction is executed
pub maximum_pending_balance_credit_counter: PodU64,
}
/// Data expected by `ConfidentialTransferInstruction::EmptyAccount`
#[derive(Clone, Copy, Pod, Zeroable)]
#[repr(C)]
pub struct EmptyAccountInstructionData {
/// Relative location of the `ProofInstruction::VerifyCloseAccount` instruction to the
/// `EmptyAccount` instruction in the transaction
pub proof_instruction_offset: i8,
}
/// Data expected by `ConfidentialTransferInstruction::Deposit`
#[derive(Clone, Copy, Pod, Zeroable)]
#[repr(C)]
pub struct DepositInstructionData {
/// The amount of tokens to deposit
pub amount: PodU64,
/// Expected number of base 10 digits to the right of the decimal place
pub decimals: u8,
}
/// Data expected by `ConfidentialTransferInstruction::Withdraw`
#[derive(Clone, Copy, Pod, Zeroable)]
#[repr(C)]
pub struct WithdrawInstructionData {
/// The amount of tokens to withdraw
pub amount: PodU64,
/// Expected number of base 10 digits to the right of the decimal place
pub decimals: u8,
/// The new decryptable balance if the withdrawal succeeds
pub new_decryptable_available_balance: DecryptableBalance,
/// Relative location of the `ProofInstruction::VerifyWithdraw` instruction to the `Withdraw`
/// instruction in the transaction
pub proof_instruction_offset: i8,
}
/// Data expected by `ConfidentialTransferInstruction::Transfer`
#[derive(Clone, Copy, Pod, Zeroable)]
#[repr(C)]
pub struct TransferInstructionData {
/// The new source decryptable balance if the transfer succeeds
pub new_source_decryptable_available_balance: DecryptableBalance,
/// Relative location of the `ProofInstruction::VerifyTransfer` instruction to the
/// `Transfer` instruction in the transaction
pub proof_instruction_offset: i8,
}
/// Data expected by `ConfidentialTransferInstruction::TransferWithFee`
#[derive(Clone, Copy, Pod, Zeroable)]
#[repr(C)]
pub struct TransferWithFeeInstructionData {
/// The new source decryptable balance if the transfer succeeds
pub new_source_decryptable_available_balance: DecryptableBalance,
/// Relative location of the `ProofInstruction::VerifyTransfer` instruction to the
/// `Transfer` instruction in the transaction
pub proof_instruction_offset: i8,
}
/// Data expected by `ConfidentialTransferInstruction::ApplyPendingBalance`
#[derive(Clone, Copy, Pod, Zeroable)]
#[repr(C)]
pub struct ApplyPendingBalanceData {
/// The expected number of pending balance credits since the last successful
/// `ApplyPendingBalance` instruction
pub expected_pending_balance_credit_counter: PodU64,
/// The new decryptable balance if the pending balance is applied successfully
pub new_decryptable_available_balance: pod::AeCiphertext,
}
/// Data expected by `ConfidentialTransferInstruction::WithdrawWithheldTokensFromMint`
#[derive(Clone, Copy, Pod, Zeroable)]
#[repr(C)]
pub struct WithdrawWithheldTokensFromMintData {
/// Relative location of the `ProofInstruction::VerifyWithdrawWithheld` instruction to the
/// `WithdrawWithheldTokensFromMint` instruction in the transaction
pub proof_instruction_offset: i8,
}
/// Data expected by `ConfidentialTransferInstruction::WithdrawWithheldTokensFromAccounts`
#[derive(Clone, Copy, Pod, Zeroable)]
#[repr(C)]
pub struct WithdrawWithheldTokensFromAccountsData {
/// Number of token accounts harvested
pub num_token_accounts: u8,
/// Relative location of the `ProofInstruction::VerifyWithdrawWithheld` instruction to the
/// `WithdrawWithheldTokensFromAccounts` instruction in the transaction
pub proof_instruction_offset: i8,
}
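Every `proof_instruction_offset` above is a signed index relative to the consuming instruction: the on-chain processor uses it (via the instructions sysvar) to locate the zero-knowledge proof verification instruction elsewhere in the same transaction, with `-1` meaning "the immediately preceding instruction". A minimal, self-contained sketch of that resolution (hypothetical helper, not the sysvar lookup itself):

```rust
/// Resolve a relative proof-instruction offset to an absolute instruction
/// index within a transaction, returning `None` if it would fall out of range.
fn resolve_proof_index(current_index: usize, offset: i8) -> Option<usize> {
    if offset >= 0 {
        current_index.checked_add(offset as usize)
    } else {
        current_index.checked_sub(offset.unsigned_abs() as usize)
    }
}
```

This is why the non-`inner_` helpers below emit the proof verification first and pass `-1`: the proof always sits one slot before its consumer.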
/// Create an `InitializeMint` instruction
pub fn initialize_mint(
token_program_id: &Pubkey,
mint: &Pubkey,
ct_mint: &ConfidentialTransferMint,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let accounts = vec![AccountMeta::new(*mint, false)];
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::InitializeMint,
ct_mint,
))
}
/// Create an `UpdateMint` instruction
pub fn update_mint(
token_program_id: &Pubkey,
mint: &Pubkey,
new_ct_mint: &ConfidentialTransferMint,
authority: &Pubkey,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let accounts = vec![
AccountMeta::new(*mint, false),
AccountMeta::new_readonly(*authority, true),
AccountMeta::new_readonly(
new_ct_mint.authority,
new_ct_mint.authority != Pubkey::default(),
),
];
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::UpdateMint,
new_ct_mint,
))
}
/// Create a `ConfigureAccount` instruction
#[allow(clippy::too_many_arguments)]
#[cfg(not(target_os = "solana"))]
pub fn configure_account(
token_program_id: &Pubkey,
token_account: &Pubkey,
mint: &Pubkey,
encryption_pubkey: EncryptionPubkey,
decryptable_zero_balance: AeCiphertext,
maximum_pending_balance_credit_counter: u64,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*token_account, false),
AccountMeta::new_readonly(*mint, false),
AccountMeta::new_readonly(*authority, multisig_signers.is_empty()),
];
for multisig_signer in multisig_signers.iter() {
accounts.push(AccountMeta::new_readonly(**multisig_signer, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::ConfigureAccount,
&ConfigureAccountInstructionData {
encryption_pubkey,
decryptable_zero_balance: decryptable_zero_balance.into(),
maximum_pending_balance_credit_counter: maximum_pending_balance_credit_counter.into(),
},
))
}
/// Create an `ApproveAccount` instruction
pub fn approve_account(
token_program_id: &Pubkey,
account_to_approve: &Pubkey,
mint: &Pubkey,
authority: &Pubkey,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let accounts = vec![
AccountMeta::new(*account_to_approve, false),
AccountMeta::new_readonly(*mint, false),
AccountMeta::new_readonly(*authority, true),
];
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::ApproveAccount,
&(),
))
}
/// Create an inner `EmptyAccount` instruction
///
/// This instruction is suitable for use with a cross-program `invoke`
pub fn inner_empty_account(
token_program_id: &Pubkey,
token_account: &Pubkey,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
proof_instruction_offset: i8,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*token_account, false),
AccountMeta::new_readonly(sysvar::instructions::id(), false),
AccountMeta::new_readonly(*authority, multisig_signers.is_empty()),
];
for multisig_signer in multisig_signers.iter() {
accounts.push(AccountMeta::new_readonly(**multisig_signer, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::EmptyAccount,
&EmptyAccountInstructionData {
proof_instruction_offset,
},
))
}
/// Create an `EmptyAccount` instruction
pub fn empty_account(
token_program_id: &Pubkey,
token_account: &Pubkey,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
proof_data: &CloseAccountData,
) -> Result<Vec<Instruction>, ProgramError> {
Ok(vec![
verify_close_account(proof_data),
inner_empty_account(
token_program_id,
token_account,
authority,
multisig_signers,
-1,
)?, // calls check_program_account
])
}
/// Create a `Deposit` instruction
#[allow(clippy::too_many_arguments)]
pub fn deposit(
token_program_id: &Pubkey,
source_token_account: &Pubkey,
mint: &Pubkey,
destination_token_account: &Pubkey,
amount: u64,
decimals: u8,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*source_token_account, false),
AccountMeta::new(*destination_token_account, false),
AccountMeta::new_readonly(*mint, false),
AccountMeta::new_readonly(*authority, multisig_signers.is_empty()),
];
for multisig_signer in multisig_signers.iter() {
accounts.push(AccountMeta::new_readonly(**multisig_signer, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::Deposit,
&DepositInstructionData {
amount: amount.into(),
decimals,
},
))
}
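The `amount.into()` conversion above targets `PodU64`, which stores the value as plain little-endian bytes so the instruction data struct stays `Pod` (byte-copyable, with no alignment or padding requirements). A stand-in for that round trip, using only the standard library:

```rust
/// Encode a u64 the way a Pod wrapper would: as 8 little-endian bytes.
fn to_pod_u64(amount: u64) -> [u8; 8] {
    amount.to_le_bytes()
}

/// Decode the little-endian byte representation back into a u64.
fn from_pod_u64(bytes: [u8; 8]) -> u64 {
    u64::from_le_bytes(bytes)
}
```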
/// Create an inner `Withdraw` instruction
///
/// This instruction is suitable for use with a cross-program `invoke`
#[allow(clippy::too_many_arguments)]
pub fn inner_withdraw(
token_program_id: &Pubkey,
source_token_account: &Pubkey,
destination_token_account: &Pubkey,
mint: &Pubkey,
amount: u64,
decimals: u8,
new_decryptable_available_balance: DecryptableBalance,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
proof_instruction_offset: i8,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*source_token_account, false),
AccountMeta::new(*destination_token_account, false),
AccountMeta::new_readonly(*mint, false),
AccountMeta::new_readonly(sysvar::instructions::id(), false),
AccountMeta::new_readonly(*authority, multisig_signers.is_empty()),
];
for multisig_signer in multisig_signers.iter() {
accounts.push(AccountMeta::new_readonly(**multisig_signer, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::Withdraw,
&WithdrawInstructionData {
amount: amount.into(),
decimals,
new_decryptable_available_balance,
proof_instruction_offset,
},
))
}
/// Create a `Withdraw` instruction
#[allow(clippy::too_many_arguments)]
#[cfg(not(target_os = "solana"))]
pub fn withdraw(
token_program_id: &Pubkey,
source_token_account: &Pubkey,
destination_token_account: &Pubkey,
mint: &Pubkey,
amount: u64,
decimals: u8,
new_decryptable_available_balance: AeCiphertext,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
proof_data: &WithdrawData,
) -> Result<Vec<Instruction>, ProgramError> {
Ok(vec![
verify_withdraw(proof_data),
inner_withdraw(
token_program_id,
source_token_account,
destination_token_account,
mint,
amount,
decimals,
new_decryptable_available_balance.into(),
authority,
multisig_signers,
-1,
)?, // calls check_program_account
])
}
/// Create an inner `Transfer` instruction
///
/// This instruction is suitable for use with a cross-program `invoke`
#[allow(clippy::too_many_arguments)]
pub fn inner_transfer(
token_program_id: &Pubkey,
source_token_account: &Pubkey,
destination_token_account: &Pubkey,
mint: &Pubkey,
new_source_decryptable_available_balance: DecryptableBalance,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
proof_instruction_offset: i8,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*source_token_account, false),
AccountMeta::new(*destination_token_account, false),
AccountMeta::new_readonly(*mint, false),
AccountMeta::new_readonly(sysvar::instructions::id(), false),
AccountMeta::new_readonly(*authority, multisig_signers.is_empty()),
];
for multisig_signer in multisig_signers.iter() {
accounts.push(AccountMeta::new_readonly(**multisig_signer, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::Transfer,
&TransferInstructionData {
new_source_decryptable_available_balance,
proof_instruction_offset,
},
))
}
/// Create a `Transfer` instruction
#[allow(clippy::too_many_arguments)]
#[cfg(not(target_os = "solana"))]
pub fn transfer(
token_program_id: &Pubkey,
source_token_account: &Pubkey,
destination_token_account: &Pubkey,
mint: &Pubkey,
new_source_decryptable_available_balance: AeCiphertext,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
proof_data: &TransferData,
) -> Result<Vec<Instruction>, ProgramError> {
Ok(vec![
verify_transfer(proof_data),
inner_transfer(
token_program_id,
source_token_account,
destination_token_account,
mint,
new_source_decryptable_available_balance.into(),
authority,
multisig_signers,
-1,
)?, // calls check_program_account
])
}
/// Create an inner `TransferWithFee` instruction
///
/// This instruction is suitable for use with a cross-program `invoke`
#[allow(clippy::too_many_arguments)]
pub fn inner_transfer_with_fee(
token_program_id: &Pubkey,
source_token_account: &Pubkey,
destination_token_account: &Pubkey,
mint: &Pubkey,
new_source_decryptable_available_balance: DecryptableBalance,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
proof_instruction_offset: i8,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*source_token_account, false),
AccountMeta::new(*destination_token_account, false),
AccountMeta::new_readonly(*mint, false),
AccountMeta::new_readonly(sysvar::instructions::id(), false),
AccountMeta::new_readonly(*authority, multisig_signers.is_empty()),
];
for multisig_signer in multisig_signers.iter() {
accounts.push(AccountMeta::new_readonly(**multisig_signer, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::TransferWithFee,
&TransferWithFeeInstructionData {
new_source_decryptable_available_balance,
proof_instruction_offset,
},
))
}
/// Create a `TransferWithFee` instruction
#[allow(clippy::too_many_arguments)]
#[cfg(not(target_os = "solana"))]
pub fn transfer_with_fee(
token_program_id: &Pubkey,
source_token_account: &Pubkey,
destination_token_account: &Pubkey,
mint: &Pubkey,
new_source_decryptable_available_balance: AeCiphertext,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
proof_data: &TransferWithFeeData,
) -> Result<Vec<Instruction>, ProgramError> {
Ok(vec![
verify_transfer_with_fee(proof_data),
inner_transfer_with_fee(
token_program_id,
source_token_account,
destination_token_account,
mint,
new_source_decryptable_available_balance.into(),
authority,
multisig_signers,
-1,
)?, // calls check_program_account
])
}
/// Create an inner `ApplyPendingBalance` instruction
///
/// This instruction is suitable for use with a cross-program `invoke`
pub fn inner_apply_pending_balance(
token_program_id: &Pubkey,
token_account: &Pubkey,
expected_pending_balance_credit_counter: u64,
new_decryptable_available_balance: DecryptableBalance,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*token_account, false),
AccountMeta::new_readonly(*authority, multisig_signers.is_empty()),
];
for multisig_signer in multisig_signers.iter() {
accounts.push(AccountMeta::new_readonly(**multisig_signer, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::ApplyPendingBalance,
&ApplyPendingBalanceData {
expected_pending_balance_credit_counter: expected_pending_balance_credit_counter.into(),
new_decryptable_available_balance,
},
))
}
/// Create an `ApplyPendingBalance` instruction
#[cfg(not(target_os = "solana"))]
pub fn apply_pending_balance(
token_program_id: &Pubkey,
token_account: &Pubkey,
pending_balance_instructions: u64,
new_decryptable_available_balance: AeCiphertext,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
inner_apply_pending_balance(
token_program_id,
token_account,
pending_balance_instructions,
new_decryptable_available_balance.into(),
authority,
multisig_signers,
) // calls check_program_account
}
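The enum documentation for `ApplyPendingBalance` tells clients to compare the expected and actually-applied credit counters after submitting the instruction. A client-side sketch of that check (hypothetical helper, not part of the on-chain API): a difference of zero means the decryptable available balance is consistent with the available balance; a nonzero difference means more pending balance remains to be applied.

```rust
/// Number of pending balance credits not yet reflected in the decryptable
/// available balance. Saturates rather than underflowing if the applied
/// counter has already caught up.
fn remaining_pending_credits(expected_counter: u64, applied_counter: u64) -> u64 {
    expected_counter.saturating_sub(applied_counter)
}
```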
fn enable_or_disable_balance_credits(
instruction: ConfidentialTransferInstruction,
token_program_id: &Pubkey,
token_account: &Pubkey,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*token_account, false),
AccountMeta::new_readonly(*authority, multisig_signers.is_empty()),
];
for multisig_signer in multisig_signers.iter() {
accounts.push(AccountMeta::new_readonly(**multisig_signer, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
instruction,
&(),
))
}
/// Create an `EnableBalanceCredits` instruction
pub fn enable_balance_credits(
token_program_id: &Pubkey,
token_account: &Pubkey,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
enable_or_disable_balance_credits(
ConfidentialTransferInstruction::EnableBalanceCredits,
token_program_id,
token_account,
authority,
multisig_signers,
)
}
/// Create a `DisableBalanceCredits` instruction
pub fn disable_balance_credits(
token_program_id: &Pubkey,
token_account: &Pubkey,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
enable_or_disable_balance_credits(
ConfidentialTransferInstruction::DisableBalanceCredits,
token_program_id,
token_account,
authority,
multisig_signers,
)
}
/// Create an inner `WithdrawWithheldTokensFromMint` instruction
///
/// This instruction is suitable for use with a cross-program `invoke`
pub fn inner_withdraw_withheld_tokens_from_mint(
token_program_id: &Pubkey,
mint: &Pubkey,
destination: &Pubkey,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
proof_instruction_offset: i8,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*mint, false),
AccountMeta::new(*destination, false),
AccountMeta::new_readonly(sysvar::instructions::id(), false),
AccountMeta::new_readonly(*authority, multisig_signers.is_empty()),
];
for multisig_signer in multisig_signers.iter() {
accounts.push(AccountMeta::new_readonly(**multisig_signer, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::WithdrawWithheldTokensFromMint,
&WithdrawWithheldTokensFromMintData {
proof_instruction_offset,
},
))
}
/// Create a `WithdrawWithheldTokensFromMint` instruction
pub fn withdraw_withheld_tokens_from_mint(
token_program_id: &Pubkey,
mint: &Pubkey,
destination: &Pubkey,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
proof_data: &WithdrawWithheldTokensData,
) -> Result<Vec<Instruction>, ProgramError> {
Ok(vec![
verify_withdraw_withheld_tokens(proof_data),
inner_withdraw_withheld_tokens_from_mint(
token_program_id,
mint,
destination,
authority,
multisig_signers,
-1,
)?,
])
}
/// Create an inner `WithdrawWithheldTokensFromAccounts` instruction
///
/// This instruction is suitable for use with a cross-program `invoke`
pub fn inner_withdraw_withheld_tokens_from_accounts(
token_program_id: &Pubkey,
mint: &Pubkey,
destination: &Pubkey,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
sources: &[&Pubkey],
proof_instruction_offset: i8,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let num_token_accounts =
u8::try_from(sources.len()).map_err(|_| ProgramError::InvalidInstructionData)?;
let mut accounts = vec![
AccountMeta::new(*mint, false),
AccountMeta::new(*destination, false),
AccountMeta::new_readonly(sysvar::instructions::id(), false),
AccountMeta::new_readonly(*authority, multisig_signers.is_empty()),
];
for multisig_signer in multisig_signers.iter() {
accounts.push(AccountMeta::new_readonly(**multisig_signer, true));
}
for source in sources.iter() {
accounts.push(AccountMeta::new(**source, false));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::WithdrawWithheldTokensFromAccounts,
&WithdrawWithheldTokensFromAccountsData {
proof_instruction_offset,
num_token_accounts,
},
))
}
/// Create a `WithdrawWithheldTokensFromAccounts` instruction
pub fn withdraw_withheld_tokens_from_accounts(
token_program_id: &Pubkey,
mint: &Pubkey,
destination: &Pubkey,
authority: &Pubkey,
multisig_signers: &[&Pubkey],
sources: &[&Pubkey],
proof_data: &WithdrawWithheldTokensData,
) -> Result<Vec<Instruction>, ProgramError> {
Ok(vec![
verify_withdraw_withheld_tokens(proof_data),
inner_withdraw_withheld_tokens_from_accounts(
token_program_id,
mint,
destination,
authority,
multisig_signers,
sources,
-1,
)?,
])
}
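`inner_withdraw_withheld_tokens_from_accounts` above encodes the number of source accounts as a single `u8`, so a withdrawal batch is capped at 255 accounts and larger slices are rejected before any instruction is built. A stand-in for that bound check (hypothetical helper; the real code maps the failure to `ProgramError::InvalidInstructionData`):

```rust
/// Validate that a batch of source accounts fits in the one-byte
/// `num_token_accounts` field of the instruction data.
fn validate_batch_len(num_sources: usize) -> Result<u8, &'static str> {
    u8::try_from(num_sources).map_err(|_| "too many source accounts (max 255)")
}
```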
/// Creates a `HarvestWithheldTokensToMint` instruction
pub fn harvest_withheld_tokens_to_mint(
token_program_id: &Pubkey,
mint: &Pubkey,
sources: &[&Pubkey],
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![AccountMeta::new(*mint, false)];
for source in sources.iter() {
accounts.push(AccountMeta::new(**source, false));
}
Ok(encode_instruction(
token_program_id,
accounts,
TokenInstruction::ConfidentialTransferExtension,
ConfidentialTransferInstruction::HarvestWithheldTokensToMint,
&(),
))
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/default_account_state/processor.rs
use {
crate::program::spl_token_2022::{
check_program_account,
error::TokenError,
extension::{
default_account_state::{
instruction::{decode_instruction, DefaultAccountStateInstruction},
DefaultAccountState,
},
StateWithExtensionsMut,
},
processor::Processor,
state::{AccountState, Mint},
},
solana_sdk::{
account_info::{next_account_info, AccountInfo},
entrypoint::ProgramResult,
msg,
pubkey::Pubkey,
},
};
fn check_valid_default_state(state: AccountState) -> ProgramResult {
match state {
AccountState::Uninitialized => Err(TokenError::InvalidState.into()),
_ => Ok(()),
}
}
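`check_valid_default_state` above rejects only `Uninitialized`: any initialized state (including `Frozen`) may serve as a mint's default account state. A self-contained sketch of the same rule, using a local stand-in enum (the real `AccountState` lives in the token program's state module):

```rust
/// Local stand-in for the token program's `AccountState`.
#[derive(PartialEq)]
enum LocalAccountState {
    Uninitialized,
    Initialized,
    Frozen,
}

/// Only initialized states are acceptable as a default account state.
fn is_valid_default_state(state: &LocalAccountState) -> bool {
    *state != LocalAccountState::Uninitialized
}
```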
fn process_initialize_default_account_state(
accounts: &[AccountInfo],
state: AccountState,
) -> ProgramResult {
check_valid_default_state(state)?;
let account_info_iter = &mut accounts.iter();
let mint_account_info = next_account_info(account_info_iter)?;
let mut mint_data = mint_account_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack_uninitialized(&mut mint_data)?;
let extension = mint.init_extension::<DefaultAccountState>(true)?;
extension.state = state.into();
Ok(())
}
fn process_update_default_account_state(
program_id: &Pubkey,
accounts: &[AccountInfo],
state: AccountState,
) -> ProgramResult {
check_valid_default_state(state)?;
let account_info_iter = &mut accounts.iter();
let mint_account_info = next_account_info(account_info_iter)?;
let freeze_authority_info = next_account_info(account_info_iter)?;
let freeze_authority_info_data_len = freeze_authority_info.data_len();
let mut mint_data = mint_account_info.data.borrow_mut();
let mut mint = StateWithExtensionsMut::<Mint>::unpack(&mut mint_data)?;
let freeze_authority =
Option::<Pubkey>::from(mint.base.freeze_authority).ok_or(TokenError::NoAuthorityExists)?;
Processor::validate_owner(
program_id,
&freeze_authority,
freeze_authority_info,
freeze_authority_info_data_len,
account_info_iter.as_slice(),
)?;
let extension = mint.get_extension_mut::<DefaultAccountState>()?;
extension.state = state.into();
Ok(())
}
pub(crate) fn process_instruction(
program_id: &Pubkey,
accounts: &[AccountInfo],
input: &[u8],
) -> ProgramResult {
check_program_account(program_id)?;
let (instruction, state) = decode_instruction(input)?;
match instruction {
DefaultAccountStateInstruction::Initialize => {
msg!("DefaultAccountStateInstruction::Initialize");
process_initialize_default_account_state(accounts, state)
}
DefaultAccountStateInstruction::Update => {
msg!("DefaultAccountStateInstruction::Update");
process_update_default_account_state(program_id, accounts, state)
}
}
}
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/default_account_state/mod.rs
use {
crate::program::spl_token_2022::extension::{Extension, ExtensionType},
bytemuck::{Pod, Zeroable},
};
/// Default Account state extension instructions
pub mod instruction;
/// Default Account state extension processor
pub mod processor;
/// Default Account::state extension data for mints.
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, PartialEq, Pod, Zeroable)]
pub struct DefaultAccountState {
/// Default Account::state in which new Accounts should be initialized
pub state: PodAccountState,
}
impl Extension for DefaultAccountState {
const TYPE: ExtensionType = ExtensionType::DefaultAccountState;
}
type PodAccountState = u8;
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_token_2022/extension/default_account_state/instruction.rs
use {
crate::program::spl_token_2022::{
check_program_account, error::TokenError, instruction::TokenInstruction,
state::AccountState,
},
num_enum::{IntoPrimitive, TryFromPrimitive},
solana_sdk::{
instruction::{AccountMeta, Instruction},
program_error::ProgramError,
pubkey::Pubkey,
},
std::convert::TryFrom,
};
/// Default Account State extension instructions
#[derive(Clone, Copy, Debug, PartialEq, IntoPrimitive, TryFromPrimitive)]
#[repr(u8)]
pub enum DefaultAccountStateInstruction {
/// Initialize a new mint with the default state for new Accounts.
///
/// Fails if the mint has already been initialized, so must be called before
/// `InitializeMint`.
///
/// The mint must have exactly enough space allocated for the base mint (82
/// bytes), plus 83 bytes of padding, 1 byte reserved for the account type,
/// then space required for this extension, plus any others.
///
/// Accounts expected by this instruction:
///
/// 0. `[writable]` The mint to initialize.
///
/// Data expected by this instruction:
/// `crate::state::AccountState`
///
Initialize,
/// Update the default state for new Accounts. Only supported for mints that include the
/// `DefaultAccountState` extension.
///
/// Accounts expected by this instruction:
///
/// * Single authority
/// 0. `[writable]` The mint.
/// 1. `[signer]` The mint freeze authority.
///
/// * Multisignature authority
/// 0. `[writable]` The mint.
/// 1. `[]` The mint's multisignature freeze authority.
/// 2. ..2+M `[signer]` M signer accounts.
///
/// Data expected by this instruction:
/// `crate::state::AccountState`
///
Update,
}
/// Utility function for decoding a DefaultAccountState instruction and its data
pub fn decode_instruction(
input: &[u8],
) -> Result<(DefaultAccountStateInstruction, AccountState), ProgramError> {
if input.len() != 2 {
return Err(TokenError::InvalidInstruction.into());
}
Ok((
DefaultAccountStateInstruction::try_from(input[0])
.or(Err(TokenError::InvalidInstruction))?,
AccountState::try_from(input[1]).or(Err(TokenError::InvalidInstruction))?,
))
}
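The two-byte wire layout that `decode_instruction` enforces can be sketched without the crate's `num_enum` machinery. The local `Ix`/`State` enums and the `decode` helper below are illustrative stand-ins for `DefaultAccountStateInstruction` and `AccountState`, with discriminants 0/1 and 0/1/2 assumed from declaration order:

```rust
// Illustrative stand-ins for the crate types: byte 0 selects the
// sub-instruction, byte 1 carries the account state.
#[derive(Debug, PartialEq)]
enum Ix { Initialize, Update }
#[derive(Debug, PartialEq)]
enum State { Uninitialized, Initialized, Frozen }

fn decode(input: &[u8]) -> Result<(Ix, State), &'static str> {
    if input.len() != 2 {
        // The real decoder rejects anything but exactly [instruction, state].
        return Err("InvalidInstruction");
    }
    let ix = match input[0] {
        0 => Ix::Initialize,
        1 => Ix::Update,
        _ => return Err("InvalidInstruction"),
    };
    let state = match input[1] {
        0 => State::Uninitialized,
        1 => State::Initialized,
        2 => State::Frozen,
        _ => return Err("InvalidInstruction"),
    };
    Ok((ix, state))
}

fn main() {
    assert_eq!(decode(&[1, 2]), Ok((Ix::Update, State::Frozen)));
    assert!(decode(&[1]).is_err());    // too short
    assert!(decode(&[0, 9]).is_err()); // unknown state byte
    println!("ok");
}
```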
fn encode_instruction(
token_program_id: &Pubkey,
accounts: Vec<AccountMeta>,
instruction_type: DefaultAccountStateInstruction,
state: &AccountState,
) -> Instruction {
let mut data = TokenInstruction::DefaultAccountStateExtension.pack();
data.push(instruction_type.into());
data.push((*state).into());
Instruction {
program_id: *token_program_id,
accounts,
data,
}
}
/// Create an `Initialize` instruction
pub fn initialize_default_account_state(
token_program_id: &Pubkey,
mint: &Pubkey,
state: &AccountState,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let accounts = vec![AccountMeta::new(*mint, false)];
Ok(encode_instruction(
token_program_id,
accounts,
DefaultAccountStateInstruction::Initialize,
state,
))
}
/// Create an `Update` instruction
pub fn update_default_account_state(
token_program_id: &Pubkey,
mint: &Pubkey,
freeze_authority: &Pubkey,
signers: &[&Pubkey],
state: &AccountState,
) -> Result<Instruction, ProgramError> {
check_program_account(token_program_id)?;
let mut accounts = vec![
AccountMeta::new(*mint, false),
AccountMeta::new_readonly(*freeze_authority, signers.is_empty()),
];
for signer_pubkey in signers.iter() {
accounts.push(AccountMeta::new_readonly(**signer_pubkey, true));
}
Ok(encode_instruction(
token_program_id,
accounts,
DefaultAccountStateInstruction::Update,
state,
))
}
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/program/spl_memo/mod.rs
use solana_sdk::{
instruction::{AccountMeta, Instruction},
pubkey::Pubkey,
};
/// Legacy symbols from Memo v1
pub mod v1 {
solana_sdk::declare_id!("Memo1UhkJRfHyvLMcVucJwxXeuD728EqVDDwQDxFMNo");
}
solana_sdk::declare_id!("MemoSq4gqABAXKb96qnH8TysNcWxMyWCqXgDLGmfcHr");
/// Build a memo instruction, possibly signed
///
/// Accounts expected by this instruction:
///
/// 0. ..0+N. `[signer]` Expected signers; if zero provided, instruction will be processed as a
/// normal, unsigned spl-memo
///
pub fn build_memo(memo: &[u8], signer_pubkeys: &[&Pubkey]) -> Instruction {
Instruction {
program_id: id(),
accounts: signer_pubkeys
.iter()
.map(|&pubkey| AccountMeta::new_readonly(*pubkey, true))
.collect(),
data: memo.to_vec(),
}
}
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/account_decoder/parse_token_extension.rs
use serde::{Deserialize, Serialize};
use solana_sdk::pubkey::Pubkey;
use crate::program::spl_token_2022::extension;
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "camelCase")]
pub struct UiTransferFee {
pub epoch: u64,
pub maximum_fee: u64,
pub transfer_fee_basis_points: u16,
}
impl From<extension::transfer_fee::TransferFee> for UiTransferFee {
fn from(transfer_fee: extension::transfer_fee::TransferFee) -> Self {
Self {
epoch: u64::from(transfer_fee.epoch),
maximum_fee: u64::from(transfer_fee.maximum_fee),
transfer_fee_basis_points: u16::from(transfer_fee.transfer_fee_basis_points),
}
}
}
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "camelCase")]
pub struct UiTransferFeeConfig {
pub transfer_fee_config_authority: Option<String>,
pub withdraw_withheld_authority: Option<String>,
pub withheld_amount: u64,
pub older_transfer_fee: UiTransferFee,
pub newer_transfer_fee: UiTransferFee,
}
impl From<extension::transfer_fee::TransferFeeConfig> for UiTransferFeeConfig {
fn from(transfer_fee_config: extension::transfer_fee::TransferFeeConfig) -> Self {
let transfer_fee_config_authority: Option<Pubkey> =
transfer_fee_config.transfer_fee_config_authority.into();
let withdraw_withheld_authority: Option<Pubkey> =
transfer_fee_config.withdraw_withheld_authority.into();
Self {
transfer_fee_config_authority: transfer_fee_config_authority
.map(|pubkey| pubkey.to_string()),
withdraw_withheld_authority: withdraw_withheld_authority
.map(|pubkey| pubkey.to_string()),
withheld_amount: u64::from(transfer_fee_config.withheld_amount),
older_transfer_fee: transfer_fee_config.older_transfer_fee.into(),
newer_transfer_fee: transfer_fee_config.newer_transfer_fee.into(),
}
}
}
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "camelCase", tag = "extension", content = "state")]
pub enum UiExtension {
Uninitialized,
TransferFeeConfig(UiTransferFeeConfig),
// TransferFeeAmount(UiTransferFeeAmount),
// MintCloseAuthority(UiMintCloseAuthority),
// ConfidentialTransferMint, // Implementation of extension state to come
// ConfidentialTransferAccount, // Implementation of extension state to come
// DefaultAccountState(UiDefaultAccountState),
// ImmutableOwner,
// MemoTransfer(UiMemoTransfer),
// UnparseableExtension,
}
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/account_decoder/parse_token.rs
use std::str::FromStr;
use serde::{Deserialize, Serialize};
use super::{parse_token_extension::UiExtension, StringAmount, StringDecimals};
use crate::program::spl_token_2022::state::AccountState;
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "camelCase")]
pub enum UiAccountState {
Uninitialized,
Initialized,
Frozen,
}
impl From<AccountState> for UiAccountState {
fn from(state: AccountState) -> Self {
match state {
AccountState::Uninitialized => UiAccountState::Uninitialized,
AccountState::Initialized => UiAccountState::Initialized,
AccountState::Frozen => UiAccountState::Frozen,
}
}
}
pub fn real_number_string(amount: u64, decimals: u8) -> StringDecimals {
let decimals = decimals as usize;
if decimals > 0 {
// Left-pad zeros to decimals + 1, so we at least have an integer zero
let mut s = format!("{:01$}", amount, decimals + 1);
// Add the decimal point (Sorry, "," locales!)
s.insert(s.len() - decimals, '.');
s
} else {
amount.to_string()
}
}
pub fn real_number_string_trimmed(amount: u64, decimals: u8) -> StringDecimals {
let mut s = real_number_string(amount, decimals);
if decimals > 0 {
let zeros_trimmed = s.trim_end_matches('0');
s = zeros_trimmed.trim_end_matches('.').to_string();
}
s
}
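As a standalone check of the formatting rules, the two helpers above can be exercised in isolation; the sketch below duplicates them verbatim so the examples run without the crate:

```rust
// Verbatim copies of the amount-formatting helpers, for isolated testing.
fn real_number_string(amount: u64, decimals: u8) -> String {
    let decimals = decimals as usize;
    if decimals > 0 {
        // Left-pad with zeros to decimals + 1 digits so there is always an
        // integer part, then insert the decimal point before the last
        // `decimals` digits.
        let mut s = format!("{:01$}", amount, decimals + 1);
        s.insert(s.len() - decimals, '.');
        s
    } else {
        amount.to_string()
    }
}

fn real_number_string_trimmed(amount: u64, decimals: u8) -> String {
    let mut s = real_number_string(amount, decimals);
    if decimals > 0 {
        // Drop trailing zeros, then a bare trailing decimal point.
        let zeros_trimmed = s.trim_end_matches('0');
        s = zeros_trimmed.trim_end_matches('.').to_string();
    }
    s
}

fn main() {
    // 1 raw unit of a 9-decimal mint (e.g. one lamport of wrapped SOL).
    assert_eq!(real_number_string(1, 9), "0.000000001");
    // Trailing zeros and a bare decimal point are trimmed.
    assert_eq!(real_number_string_trimmed(1_000_000_000, 9), "1");
    assert_eq!(real_number_string_trimmed(1_234_500_000, 9), "1.2345");
    println!("ok");
}
```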
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
#[serde(rename_all = "camelCase")]
pub struct UiTokenAmount {
pub ui_amount: Option<f64>,
pub decimals: u8,
pub amount: StringAmount,
pub ui_amount_string: StringDecimals,
}
impl UiTokenAmount {
pub fn real_number_string(&self) -> String {
real_number_string(
u64::from_str(&self.amount).unwrap_or_default(),
self.decimals,
)
}
pub fn real_number_string_trimmed(&self) -> String {
if !self.ui_amount_string.is_empty() {
self.ui_amount_string.clone()
} else {
real_number_string_trimmed(
u64::from_str(&self.amount).unwrap_or_default(),
self.decimals,
)
}
}
}
#[derive(Debug, Serialize, Deserialize, PartialEq)]
#[serde(rename_all = "camelCase")]
pub struct UiTokenAccount {
pub mint: String,
pub owner: String,
pub token_amount: UiTokenAmount,
#[serde(skip_serializing_if = "Option::is_none")]
pub delegate: Option<String>,
pub state: UiAccountState,
pub is_native: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub rent_exempt_reserve: Option<UiTokenAmount>,
#[serde(skip_serializing_if = "Option::is_none")]
pub delegated_amount: Option<UiTokenAmount>,
#[serde(skip_serializing_if = "Option::is_none")]
pub close_authority: Option<String>,
#[serde(skip_serializing_if = "Vec::is_empty", default)]
pub extensions: Vec<UiExtension>,
}
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "camelCase")]
pub struct UiMint {
pub mint_authority: Option<String>,
pub supply: StringAmount,
pub decimals: u8,
pub is_initialized: bool,
pub freeze_authority: Option<String>,
#[serde(skip_serializing_if = "Vec::is_empty", default)]
pub extensions: Vec<UiExtension>,
}
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "camelCase")]
pub struct UiMultisig {
pub num_required_signers: u8,
pub num_valid_signers: u8,
pub is_initialized: bool,
pub signers: Vec<String>,
}
#[derive(Debug, Serialize, Deserialize, PartialEq)]
#[serde(rename_all = "camelCase", tag = "type", content = "info")]
#[allow(clippy::large_enum_variant)]
pub enum TokenAccountType {
Account(UiTokenAccount),
Mint(UiMint),
Multisig(UiMultisig),
}
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-extra/src/account_decoder/mod.rs
pub mod parse_token;
pub mod parse_token_extension;
use std::str::FromStr;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use solana_sdk::{
account::{ReadableAccount, WritableAccount},
clock::Epoch,
instruction::InstructionError,
pubkey::Pubkey,
};
// TODO:
// use spl_token_2022::extension::{self, BaseState, ExtensionType, StateWithExtensions};
use thiserror::Error;
pub type StringAmount = String;
pub type StringDecimals = String;
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct UiDataSliceConfig {
pub offset: usize,
pub length: usize,
}
fn slice_data(data: &[u8], data_slice_config: Option<UiDataSliceConfig>) -> &[u8] {
if let Some(UiDataSliceConfig { offset, length }) = data_slice_config {
if offset >= data.len() {
&[]
} else if length > data.len() - offset {
&data[offset..]
} else {
&data[offset..offset + length]
}
} else {
data
}
}
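The clamping behavior of `slice_data` at the boundaries is easy to miss; this self-contained copy (with a local `UiDataSliceConfig`) demonstrates the three cases:

```rust
// Standalone copy of `slice_data` and its config, for boundary-case checks.
#[derive(Clone, Copy)]
struct UiDataSliceConfig {
    offset: usize,
    length: usize,
}

fn slice_data(data: &[u8], cfg: Option<UiDataSliceConfig>) -> &[u8] {
    if let Some(UiDataSliceConfig { offset, length }) = cfg {
        if offset >= data.len() {
            &[] // offset past the end: empty slice, never a panic
        } else if length > data.len() - offset {
            &data[offset..] // length overruns: clamp to the tail
        } else {
            &data[offset..offset + length]
        }
    } else {
        data // no config: whole buffer
    }
}

fn main() {
    let data = [1u8, 2, 3, 4, 5];
    assert_eq!(slice_data(&data, None), &[1, 2, 3, 4, 5]);
    assert_eq!(
        slice_data(&data, Some(UiDataSliceConfig { offset: 1, length: 2 })),
        &[2, 3]
    );
    assert_eq!(
        slice_data(&data, Some(UiDataSliceConfig { offset: 3, length: 99 })),
        &[4, 5]
    );
    assert_eq!(
        slice_data(&data, Some(UiDataSliceConfig { offset: 9, length: 1 })),
        &[] as &[u8]
    );
    println!("ok");
}
```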
/// A duplicate representation of an Account for pretty JSON serialization
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]
#[serde(rename_all = "camelCase")]
pub struct UiAccount {
pub lamports: u64,
pub data: UiAccountData,
pub owner: String,
pub executable: bool,
pub rent_epoch: Epoch,
pub space: Option<u64>,
}
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase", untagged)]
pub enum UiAccountData {
LegacyBinary(String), // Legacy. Retained for RPC backwards compatibility
Json(ParsedAccount),
Binary(String, UiAccountEncoding),
}
#[derive(Serialize, Deserialize, Clone, Copy, Debug, PartialEq, Eq, Hash)]
#[serde(rename_all = "camelCase")]
pub enum UiAccountEncoding {
Binary, // Legacy. Retained for RPC backwards compatibility
Base58,
Base64,
JsonParsed,
// NOTE: Not supported in WASM
// #[serde(rename = "base64+zstd")]
// Base64Zstd,
}
pub const MAX_BASE58_BYTES: usize = 128;
impl UiAccount {
fn encode_bs58<T: ReadableAccount>(
account: &T,
data_slice_config: Option<UiDataSliceConfig>,
) -> String {
if account.data().len() <= MAX_BASE58_BYTES {
bs58::encode(slice_data(account.data(), data_slice_config)).into_string()
} else {
"error: data too large for bs58 encoding".to_string()
}
}
pub fn encode<T: ReadableAccount>(
pubkey: &Pubkey,
account: &T,
encoding: UiAccountEncoding,
additional_data: Option<AccountAdditionalData>,
data_slice_config: Option<UiDataSliceConfig>,
) -> Self {
let data = match encoding {
UiAccountEncoding::Binary => {
let data = Self::encode_bs58(account, data_slice_config);
UiAccountData::LegacyBinary(data)
}
UiAccountEncoding::Base58 => {
let data = Self::encode_bs58(account, data_slice_config);
UiAccountData::Binary(data, encoding)
}
UiAccountEncoding::Base64 => UiAccountData::Binary(
base64::encode(slice_data(account.data(), data_slice_config)),
encoding,
),
// NOTE: Not supported in WASM
// UiAccountEncoding::Base64Zstd => {
// let mut encoder = zstd::stream::write::Encoder::new(Vec::new(), 0).unwrap();
// match encoder
// .write_all(slice_data(account.data(), data_slice_config))
// .and_then(|()| encoder.finish())
// {
// Ok(zstd_data) => UiAccountData::Binary(base64::encode(zstd_data), encoding),
// Err(_) => UiAccountData::Binary(
// base64::encode(slice_data(account.data(), data_slice_config)),
// UiAccountEncoding::Base64,
// ),
// }
// }
UiAccountEncoding::JsonParsed => {
if let Ok(parsed_data) =
parse_account_data(pubkey, account.owner(), account.data(), additional_data)
{
UiAccountData::Json(parsed_data)
} else {
UiAccountData::Binary(base64::encode(account.data()), UiAccountEncoding::Base64)
}
}
};
UiAccount {
lamports: account.lamports(),
space: Some(account.data().len() as u64),
data,
owner: account.owner().to_string(),
executable: account.executable(),
rent_epoch: account.rent_epoch(),
}
}
pub fn decode<T: WritableAccount>(&self) -> Option<T> {
let data = match &self.data {
UiAccountData::Json(_) => None,
UiAccountData::LegacyBinary(blob) => bs58::decode(blob).into_vec().ok(),
UiAccountData::Binary(blob, encoding) => match encoding {
UiAccountEncoding::Base58 => bs58::decode(blob).into_vec().ok(),
UiAccountEncoding::Base64 => base64::decode(blob).ok(),
// NOTE: Not supported in WASM
// UiAccountEncoding::Base64Zstd => base64::decode(blob).ok().and_then(|zstd_data| {
// let mut data = vec![];
// zstd::stream::read::Decoder::new(zstd_data.as_slice())
// .and_then(|mut reader| reader.read_to_end(&mut data))
// .map(|_| data)
// .ok()
// }),
UiAccountEncoding::Binary | UiAccountEncoding::JsonParsed => None,
},
}?;
Some(T::create(
self.lamports,
data,
Pubkey::from_str(&self.owner).ok()?,
self.executable,
self.rent_epoch,
))
}
}
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "camelCase")]
pub struct ParsedAccount {
pub program: String,
pub parsed: Value,
pub space: u64,
}
#[derive(Default)]
pub struct AccountAdditionalData {
pub spl_token_decimals: Option<u8>,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum ParsableAccount {
BpfUpgradeableLoader,
Config,
Nonce,
SplToken,
SplToken2022,
Stake,
Sysvar,
Vote,
}
#[derive(Error, Debug)]
pub enum ParseAccountError {
#[error("{0:?} account not parsable")]
AccountNotParsable(ParsableAccount),
#[error("Program not parsable")]
ProgramNotParsable,
#[error("Additional data required to parse: {0}")]
AdditionalDataMissing(String),
#[error("Instruction error")]
InstructionError(#[from] InstructionError),
#[error("Serde json error")]
SerdeJsonError(#[from] serde_json::error::Error),
}
pub fn parse_account_data(
_pubkey: &Pubkey,
_program_id: &Pubkey,
_data: &[u8],
_additional_data: Option<AccountAdditionalData>,
) -> Result<ParsedAccount, ParseAccountError> {
// TODO:
Err(ParseAccountError::ProgramNotParsable)
// let program_name = PARSABLE_PROGRAM_IDS
// .get(program_id)
// .ok_or(ParseAccountError::ProgramNotParsable)?;
// let additional_data = additional_data.unwrap_or_default();
// let parsed_json = match program_name {
// ParsableAccount::BpfUpgradeableLoader => {
// serde_json::to_value(parse_bpf_upgradeable_loader(data)?)?
// }
// ParsableAccount::Config => serde_json::to_value(parse_config(data, pubkey)?)?,
// ParsableAccount::Nonce => serde_json::to_value(parse_nonce(data)?)?,
// ParsableAccount::SplToken | ParsableAccount::SplToken2022 => {
// serde_json::to_value(parse_token(data, additional_data.spl_token_decimals)?)?
// }
// ParsableAccount::Stake => serde_json::to_value(parse_stake(data)?)?,
// ParsableAccount::Sysvar => serde_json::to_value(parse_sysvar(data, pubkey)?)?,
// ParsableAccount::Vote => serde_json::to_value(parse_vote(data)?)?,
// };
// Ok(ParsedAccount {
// program: format!("{:?}", program_name).to_kebab_case(),
// parsed: parsed_json,
// space: data.len() as u64,
// })
}
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-clap-v3-utils/Cargo.toml
[package]
name = "solana-clap-v3-utils-wasm"
version = "1.11.0"
description = "Solana WASM utilities for clap v3"
authors = ["Acheron <acheroncrypto@gmail.com>"]
repository = "https://github.com/solana-playground/solana-playground"
license = "Apache-2.0"
homepage = "https://beta.solpg.io"
edition = "2021"
keywords = ["solana", "playground", "wasm", "clap"]
[dependencies]
chrono = "0.4"
clap = "*"
rpassword = "6.0"
solana-remote-wallet = { version = "*", default-features = false }
solana-sdk = "*"
thiserror = "1.0.31"
tiny-bip39 = "0.8.2"
uriparse = "0.6.4"
url = "2.2.2"
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-clap-v3-utils/src/offline.rs
use {
crate::{input_validators::*, ArgConstant},
clap::{Arg, Command},
};
pub const BLOCKHASH_ARG: ArgConstant<'static> = ArgConstant {
name: "blockhash",
long: "blockhash",
help: "Use the supplied blockhash",
};
pub const SIGN_ONLY_ARG: ArgConstant<'static> = ArgConstant {
name: "sign_only",
long: "sign-only",
help: "Sign the transaction offline",
};
pub const SIGNER_ARG: ArgConstant<'static> = ArgConstant {
name: "signer",
long: "signer",
help: "Provide a public-key/signature pair for the transaction",
};
pub const DUMP_TRANSACTION_MESSAGE: ArgConstant<'static> = ArgConstant {
name: "dump_transaction_message",
long: "dump-transaction-message",
help: "Display the base64 encoded binary transaction message in sign-only mode",
};
pub fn blockhash_arg<'a>() -> Arg<'a> {
Arg::new(BLOCKHASH_ARG.name)
.long(BLOCKHASH_ARG.long)
.takes_value(true)
.value_name("BLOCKHASH")
.validator(is_hash)
.help(BLOCKHASH_ARG.help)
}
pub fn sign_only_arg<'a>() -> Arg<'a> {
Arg::new(SIGN_ONLY_ARG.name)
.long(SIGN_ONLY_ARG.long)
.takes_value(false)
.requires(BLOCKHASH_ARG.name)
.help(SIGN_ONLY_ARG.help)
}
fn signer_arg<'a>() -> Arg<'a> {
Arg::new(SIGNER_ARG.name)
.long(SIGNER_ARG.long)
.takes_value(true)
.value_name("PUBKEY=SIGNATURE")
.validator(is_pubkey_sig)
.requires(BLOCKHASH_ARG.name)
.multiple_occurrences(true)
.multiple_values(true)
.help(SIGNER_ARG.help)
}
pub fn dump_transaction_message<'a>() -> Arg<'a> {
Arg::new(DUMP_TRANSACTION_MESSAGE.name)
.long(DUMP_TRANSACTION_MESSAGE.long)
.takes_value(false)
.requires(SIGN_ONLY_ARG.name)
.help(DUMP_TRANSACTION_MESSAGE.help)
}
pub trait ArgsConfig {
fn blockhash_arg<'a>(&self, arg: Arg<'a>) -> Arg<'a> {
arg
}
fn sign_only_arg<'a>(&self, arg: Arg<'a>) -> Arg<'a> {
arg
}
fn signer_arg<'a>(&self, arg: Arg<'a>) -> Arg<'a> {
arg
}
fn dump_transaction_message_arg<'a>(&self, arg: Arg<'a>) -> Arg<'a> {
arg
}
}
pub trait OfflineArgs {
fn offline_args(self) -> Self;
fn offline_args_config(self, config: &dyn ArgsConfig) -> Self;
}
impl OfflineArgs for Command<'_> {
fn offline_args_config(self, config: &dyn ArgsConfig) -> Self {
self.arg(config.blockhash_arg(blockhash_arg()))
.arg(config.sign_only_arg(sign_only_arg()))
.arg(config.signer_arg(signer_arg()))
.arg(config.dump_transaction_message_arg(dump_transaction_message()))
}
fn offline_args(self) -> Self {
struct NullArgsConfig {}
impl ArgsConfig for NullArgsConfig {}
self.offline_args_config(&NullArgsConfig {})
}
}
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-clap-v3-utils/src/memo.rs
use {crate::ArgConstant, clap::Arg};
pub const MEMO_ARG: ArgConstant<'static> = ArgConstant {
name: "memo",
long: "--with-memo",
help: "Specify a memo string to include in the transaction.",
};
pub fn memo_arg<'a>() -> Arg<'a> {
Arg::new(MEMO_ARG.name)
.long(MEMO_ARG.long)
.takes_value(true)
.value_name("MEMO")
.help(MEMO_ARG.help)
}
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-clap-v3-utils/src/lib.rs
use thiserror::Error;
pub struct ArgConstant<'a> {
pub long: &'a str,
pub name: &'a str,
pub help: &'a str,
}
/// Error type for forwarding Errors out of `main()` of a `clap` app
/// and still using the `Display` formatter
#[derive(Error)]
#[error("{0}")]
pub struct DisplayError(Box<dyn std::error::Error>);
impl DisplayError {
pub fn new_as_boxed(inner: Box<dyn std::error::Error>) -> Box<Self> {
DisplayError(inner).into()
}
}
impl std::fmt::Debug for DisplayError {
fn fmt(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(fmt, "{}", self.0)
}
}
pub mod fee_payer;
pub mod input_parsers;
pub mod input_validators;
pub mod keypair;
pub mod memo;
pub mod nonce;
pub mod offline;
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-clap-v3-utils/src/nonce.rs
use {
crate::{input_validators::*, offline::BLOCKHASH_ARG, ArgConstant},
clap::{Arg, Command},
};
pub const NONCE_ARG: ArgConstant<'static> = ArgConstant {
name: "nonce",
long: "nonce",
help: "Provide the nonce account to use when creating a nonced \n\
transaction. Nonced transactions are useful when a transaction \n\
requires a lengthy signing process. Learn more about nonced \n\
transactions at https://docs.solana.com/offline-signing/durable-nonce",
};
pub const NONCE_AUTHORITY_ARG: ArgConstant<'static> = ArgConstant {
name: "nonce_authority",
long: "nonce-authority",
help: "Provide the nonce authority keypair to use when signing a nonced transaction",
};
fn nonce_arg<'a>() -> Arg<'a> {
Arg::new(NONCE_ARG.name)
.long(NONCE_ARG.long)
.takes_value(true)
.value_name("PUBKEY")
.requires(BLOCKHASH_ARG.name)
.validator(is_valid_pubkey)
.help(NONCE_ARG.help)
}
pub fn nonce_authority_arg<'a>() -> Arg<'a> {
Arg::new(NONCE_AUTHORITY_ARG.name)
.long(NONCE_AUTHORITY_ARG.long)
.takes_value(true)
.value_name("KEYPAIR")
.validator(is_valid_signer)
.help(NONCE_AUTHORITY_ARG.help)
}
pub trait NonceArgs {
fn nonce_args(self, global: bool) -> Self;
}
impl NonceArgs for Command<'_> {
fn nonce_args(self, global: bool) -> Self {
self.arg(nonce_arg().global(global)).arg(
nonce_authority_arg()
.requires(NONCE_ARG.name)
.global(global),
)
}
}
solana_public_repos/solana-playground/solana-playground/wasm/utils/solana-clap-v3-utils/src/input_validators.rs
use {
crate::keypair::{parse_signer_source, SignerSourceKind, ASK_KEYWORD},
chrono::DateTime,
solana_sdk::{
clock::{Epoch, Slot},
hash::Hash,
pubkey::{Pubkey, MAX_SEED_LEN},
signature::{read_keypair_file, Signature},
},
std::{fmt::Display, str::FromStr},
};
fn is_parsable_generic<U>(string: &str) -> Result<(), String>
where
U: FromStr,
U::Err: Display,
{
string
.parse::<U>()
.map(|_| ())
.map_err(|err| format!("error parsing '{}': {}", string, err))
}
// Return an error if string cannot be parsed as type T.
// Takes a String to avoid second type parameter when used as a clap validator
pub fn is_parsable<T>(string: &str) -> Result<(), String>
where
T: FromStr,
T::Err: Display,
{
is_parsable_generic::<T>(string)
}
// Return an error if string cannot be parsed as numeric type T, and value not within specified
// range
pub fn is_within_range<T>(string: &str, range_min: T, range_max: T) -> Result<(), String>
where
T: FromStr + Copy + std::fmt::Debug + PartialOrd + std::ops::Add<Output = T> + From<usize>,
T::Err: Display,
{
match string.parse::<T>() {
Ok(input) => {
let range = range_min..range_max + 1.into();
if !range.contains(&input) {
Err(format!(
                    "input '{:?}' out of range [{:?}..{:?}]",
input, range_min, range_max
))
} else {
Ok(())
}
}
Err(err) => Err(format!("error parsing '{}': {}", string, err)),
}
}
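Since the generic version builds the range `range_min..range_max + 1`, both bounds are accepted. A monomorphic `u64` sketch (the helper name `is_within_range_u64` is hypothetical, not part of the crate) makes that inclusive behavior explicit:

```rust
// Monomorphic sketch of `is_within_range`: accepted values are
// range_min..=range_max, i.e. both bounds inclusive.
fn is_within_range_u64(s: &str, min: u64, max: u64) -> Result<(), String> {
    match s.parse::<u64>() {
        Ok(v) if v >= min && v <= max => Ok(()),
        Ok(v) => Err(format!("input '{}' out of range [{}..{}]", v, min, max)),
        Err(e) => Err(format!("error parsing '{}': {}", s, e)),
    }
}

fn main() {
    assert!(is_within_range_u64("5", 1, 10).is_ok());
    assert!(is_within_range_u64("10", 1, 10).is_ok()); // max is inclusive
    assert!(is_within_range_u64("1", 1, 10).is_ok());  // min is inclusive
    assert!(is_within_range_u64("11", 1, 10).is_err());
    assert!(is_within_range_u64("x", 1, 10).is_err());
    println!("ok");
}
```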
// Return an error if a pubkey cannot be parsed.
pub fn is_pubkey(string: &str) -> Result<(), String> {
is_parsable_generic::<Pubkey>(string)
}
// Return an error if a hash cannot be parsed.
pub fn is_hash(string: &str) -> Result<(), String> {
is_parsable_generic::<Hash>(string)
}
// Return an error if a keypair file cannot be parsed.
pub fn is_keypair(string: &str) -> Result<(), String> {
read_keypair_file(string)
.map(|_| ())
.map_err(|err| format!("{}", err))
}
// Return an error if a keypair file cannot be parsed
pub fn is_keypair_or_ask_keyword(string: &str) -> Result<(), String> {
if string == ASK_KEYWORD {
return Ok(());
}
read_keypair_file(string)
.map(|_| ())
.map_err(|err| format!("{}", err))
}
// Return an error if a `SignerSourceKind::Prompt` cannot be parsed
pub fn is_prompt_signer_source(string: &str) -> Result<(), String> {
if string == ASK_KEYWORD {
return Ok(());
}
match parse_signer_source(string)
.map_err(|err| format!("{}", err))?
.kind
{
SignerSourceKind::Prompt => Ok(()),
_ => Err(format!(
"Unable to parse input as `prompt:` URI scheme or `ASK` keyword: {}",
string
)),
}
}
// Return an error if string cannot be parsed as pubkey string or keypair file location
pub fn is_pubkey_or_keypair(string: &str) -> Result<(), String> {
is_pubkey(string).or_else(|_| is_keypair(string))
}
// Return an error if string cannot be parsed as a pubkey string, or a valid Signer that can
// produce a pubkey()
pub fn is_valid_pubkey(string: &str) -> Result<(), String> {
match parse_signer_source(string)
.map_err(|err| format!("{}", err))?
.kind
{
SignerSourceKind::Filepath(path) => is_keypair(&path),
_ => Ok(()),
}
}
// Return an error if string cannot be parsed as a valid Signer. This is an alias of
// `is_valid_pubkey`, and does accept pubkey strings, even though a Pubkey is not by itself
// sufficient to sign a transaction.
//
// In the current offline-signing implementation, a pubkey is the valid input for a signer field
// when paired with an offline `--signer` argument to provide a Presigner (pubkey + signature).
// Clap validators can't check multiple fields at once, so the verification that a `--signer` is
// also provided and correct happens in parsing, not in validation.
pub fn is_valid_signer(string: &str) -> Result<(), String> {
is_valid_pubkey(string)
}
// Return an error if string cannot be parsed as pubkey=signature string
pub fn is_pubkey_sig(string: &str) -> Result<(), String> {
let mut signer = string.split('=');
match Pubkey::from_str(
signer
.next()
.ok_or_else(|| "Malformed signer string".to_string())?,
) {
Ok(_) => {
match Signature::from_str(
signer
.next()
.ok_or_else(|| "Malformed signer string".to_string())?,
) {
Ok(_) => Ok(()),
Err(err) => Err(format!("{}", err)),
}
}
Err(err) => Err(format!("{}", err)),
}
}
// Return an error if a url cannot be parsed.
pub fn is_url(string: &str) -> Result<(), String> {
match url::Url::parse(string) {
Ok(url) => {
if url.has_host() {
Ok(())
} else {
Err("no host provided".to_string())
}
}
Err(err) => Err(format!("{}", err)),
}
}
pub fn is_url_or_moniker(string: &str) -> Result<(), String> {
match url::Url::parse(&normalize_to_url_if_moniker(string)) {
Ok(url) => {
if url.has_host() {
Ok(())
} else {
Err("no host provided".to_string())
}
}
Err(err) => Err(format!("{}", err)),
}
}
pub fn normalize_to_url_if_moniker<T: AsRef<str>>(url_or_moniker: T) -> String {
match url_or_moniker.as_ref() {
"m" | "mainnet-beta" => "https://api.mainnet-beta.solana.com",
"t" | "testnet" => "https://api.testnet.solana.com",
"d" | "devnet" => "https://api.devnet.solana.com",
"l" | "localhost" => "http://localhost:8899",
url => url,
}
.to_string()
}
pub fn is_epoch(epoch: &str) -> Result<(), String> {
is_parsable_generic::<Epoch>(epoch)
}
pub fn is_slot(slot: &str) -> Result<(), String> {
is_parsable_generic::<Slot>(slot)
}
pub fn is_pow2<T>(bins: T) -> Result<(), String>
where
T: AsRef<str> + Display,
{
bins.as_ref()
.parse::<usize>()
.map_err(|e| format!("Unable to parse, provided: {}, err: {}", bins, e))
.and_then(|v| {
if !v.is_power_of_two() {
Err(format!("Must be a power of 2: {}", v))
} else {
Ok(())
}
})
}
pub fn is_port(port: &str) -> Result<(), String> {
is_parsable_generic::<u16>(port)
}
pub fn is_valid_percentage<T>(percentage: T) -> Result<(), String>
where
T: AsRef<str> + Display,
{
percentage
.as_ref()
.parse::<u8>()
.map_err(|e| {
format!(
"Unable to parse input percentage, provided: {}, err: {}",
percentage, e
)
})
.and_then(|v| {
if v > 100 {
Err(format!(
"Percentage must be in range of 0 to 100, provided: {}",
v
))
} else {
Ok(())
}
})
}
pub fn is_amount(amount: &str) -> Result<(), String> {
if amount.parse::<u64>().is_ok() || amount.parse::<f64>().is_ok() {
Ok(())
} else {
Err(format!(
"Unable to parse input amount as integer or float, provided: {}",
amount
))
}
}
pub fn is_amount_or_all(amount: &str) -> Result<(), String> {
if amount.parse::<u64>().is_ok() || amount.parse::<f64>().is_ok() || amount == "ALL" {
Ok(())
} else {
Err(format!(
"Unable to parse input amount as integer or float, provided: {}",
amount
))
}
}
pub fn is_rfc3339_datetime<T>(value: T) -> Result<(), String>
where
T: AsRef<str> + Display,
{
DateTime::parse_from_rfc3339(value.as_ref())
.map(|_| ())
.map_err(|e| format!("{}", e))
}
pub fn is_derivation<T>(value: T) -> Result<(), String>
where
T: AsRef<str> + Display,
{
let value = value.as_ref().replace('\'', "");
let mut parts = value.split('/');
let account = parts.next().unwrap();
account
.parse::<u32>()
.map_err(|e| {
format!(
"Unable to parse derivation, provided: {}, err: {}",
account, e
)
})
.and_then(|_| {
if let Some(change) = parts.next() {
change.parse::<u32>().map_err(|e| {
format!(
"Unable to parse derivation, provided: {}, err: {}",
change, e
)
})
} else {
Ok(0)
}
})
.map(|_| ())
}
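For reference, the derivation check above boils down to: strip hardened markers (`'`), then require one or two slash-separated components that each parse as `u32`. A self-contained sketch of that logic (`check_derivation` is a hypothetical name, not part of this crate):

```rust
// Sketch of the derivation-string validation performed by `is_derivation`.
fn check_derivation(value: &str) -> Result<(), String> {
    // Hardened-index markers are ignored for validation purposes.
    let value = value.replace('\'', "");
    let mut parts = value.split('/');
    // `split` always yields at least one item, so this `unwrap` is safe.
    let account = parts.next().unwrap();
    account.parse::<u32>().map_err(|e| e.to_string())?;
    // An optional second component is the "change" index.
    if let Some(change) = parts.next() {
        change.parse::<u32>().map_err(|e| e.to_string())?;
    }
    Ok(())
}
```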
pub fn is_derived_address_seed(value: &str) -> Result<(), String> {
if value.len() > MAX_SEED_LEN {
Err(format!(
"Address seed must not be longer than {} bytes",
MAX_SEED_LEN
))
} else {
Ok(())
}
}
// pub fn is_niceness_adjustment_valid<T>(value: T) -> Result<(), String>
// where
// T: AsRef<str> + Display,
// {
// let adjustment = value.as_ref().parse::<i8>().map_err(|err| {
// format!(
// "error parsing niceness adjustment value '{}': {}",
// value, err
// )
// })?;
// if solana_perf::thread::is_renice_allowed(adjustment) {
// Ok(())
// } else {
// Err(String::from(
// "niceness adjustment supported only on Linux; negative adjustment \
// (priority increase) requires root or CAP_SYS_NICE (see `man 7 capabilities` \
// for details)",
// ))
// }
// }
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_is_derivation() {
assert_eq!(is_derivation("2"), Ok(()));
assert_eq!(is_derivation("0"), Ok(()));
assert_eq!(is_derivation("65537"), Ok(()));
assert_eq!(is_derivation("0/2"), Ok(()));
assert_eq!(is_derivation("0'/2'"), Ok(()));
assert!(is_derivation("a").is_err());
assert!(is_derivation("4294967296").is_err());
assert!(is_derivation("a/b").is_err());
assert!(is_derivation("0/4294967296").is_err());
}
// #[test]
// fn test_is_niceness_adjustment_valid() {
// assert_eq!(is_niceness_adjustment_valid("0"), Ok(()));
// assert!(is_niceness_adjustment_valid("128").is_err());
// assert!(is_niceness_adjustment_valid("-129").is_err());
// }
}
// ---- solana-playground/wasm/utils/solana-clap-v3-utils/src/fee_payer.rs ----
use {
crate::{input_validators, ArgConstant},
clap::Arg,
};
pub const FEE_PAYER_ARG: ArgConstant<'static> = ArgConstant {
name: "fee_payer",
long: "fee-payer",
help: "Specify the fee-payer account. This may be a keypair file, the ASK keyword \n\
or the pubkey of an offline signer, provided an appropriate --signer argument \n\
is also passed. Defaults to the client keypair.",
};
pub fn fee_payer_arg<'a>() -> Arg<'a> {
Arg::new(FEE_PAYER_ARG.name)
.long(FEE_PAYER_ARG.long)
.takes_value(true)
.value_name("KEYPAIR")
.validator(input_validators::is_valid_signer)
.help(FEE_PAYER_ARG.help)
}
// ---- solana-playground/wasm/utils/solana-clap-v3-utils/src/input_parsers.rs ----
use {
crate::keypair::{
keypair_from_seed_phrase, pubkey_from_path, resolve_signer_from_path, signer_from_path,
ASK_KEYWORD, SKIP_SEED_PHRASE_VALIDATION_ARG,
},
chrono::DateTime,
clap::ArgMatches,
solana_remote_wallet::remote_wallet::RemoteWalletManager,
solana_sdk::{
clock::UnixTimestamp,
commitment_config::CommitmentConfig,
genesis_config::ClusterType,
native_token::sol_to_lamports,
pubkey::Pubkey,
signature::{read_keypair_file, Keypair, Signature, Signer},
},
std::{rc::Rc, str::FromStr},
};
// Sentinel value used to indicate to write to screen instead of file
pub const STDOUT_OUTFILE_TOKEN: &str = "-";
// Return parsed values from matches at `name`
pub fn values_of<T>(matches: &ArgMatches, name: &str) -> Option<Vec<T>>
where
T: std::str::FromStr,
<T as std::str::FromStr>::Err: std::fmt::Debug,
{
matches
.values_of(name)
.map(|xs| xs.map(|x| x.parse::<T>().unwrap()).collect())
}
// Return a parsed value from matches at `name`
pub fn value_of<T>(matches: &ArgMatches, name: &str) -> Option<T>
where
T: std::str::FromStr,
<T as std::str::FromStr>::Err: std::fmt::Debug,
{
if let Some(value) = matches.value_of(name) {
value.parse::<T>().ok()
} else {
None
}
}
pub fn unix_timestamp_from_rfc3339_datetime(
matches: &ArgMatches,
name: &str,
) -> Option<UnixTimestamp> {
matches.value_of(name).and_then(|value| {
DateTime::parse_from_rfc3339(value)
.ok()
.map(|date_time| date_time.timestamp())
})
}
// Return the keypair for an argument with filename `name` or None if not present.
pub fn keypair_of(matches: &ArgMatches, name: &str) -> Option<Keypair> {
if let Some(value) = matches.value_of(name) {
if value == ASK_KEYWORD {
let skip_validation = matches.is_present(SKIP_SEED_PHRASE_VALIDATION_ARG.name);
keypair_from_seed_phrase(name, skip_validation, true, None, true).ok()
} else {
read_keypair_file(value).ok()
}
} else {
None
}
}
pub fn keypairs_of(matches: &ArgMatches, name: &str) -> Option<Vec<Keypair>> {
matches.values_of(name).map(|values| {
values
.filter_map(|value| {
if value == ASK_KEYWORD {
let skip_validation = matches.is_present(SKIP_SEED_PHRASE_VALIDATION_ARG.name);
keypair_from_seed_phrase(name, skip_validation, true, None, true).ok()
} else {
read_keypair_file(value).ok()
}
})
.collect()
})
}
// Return a pubkey for an argument that can itself be parsed into a pubkey,
// or is a filename that can be read as a keypair
pub fn pubkey_of(matches: &ArgMatches, name: &str) -> Option<Pubkey> {
value_of(matches, name).or_else(|| keypair_of(matches, name).map(|keypair| keypair.pubkey()))
}
pub fn pubkeys_of(matches: &ArgMatches, name: &str) -> Option<Vec<Pubkey>> {
matches.values_of(name).map(|values| {
values
.map(|value| {
value.parse::<Pubkey>().unwrap_or_else(|_| {
read_keypair_file(value)
.expect("read_keypair_file failed")
.pubkey()
})
})
.collect()
})
}
// Return pubkey/signature pairs for a string of the form pubkey=signature
pub fn pubkeys_sigs_of(matches: &ArgMatches, name: &str) -> Option<Vec<(Pubkey, Signature)>> {
matches.values_of(name).map(|values| {
values
.map(|pubkey_signer_string| {
let mut signer = pubkey_signer_string.split('=');
let key = Pubkey::from_str(signer.next().unwrap()).unwrap();
let sig = Signature::from_str(signer.next().unwrap()).unwrap();
(key, sig)
})
.collect()
})
}
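`pubkeys_sigs_of` relies on splitting each `pubkey=signature` string on `=`. A minimal sketch of that split, using plain `&str` pairs to stay self-contained (the real code then parses each half into `Pubkey` and `Signature` and panics on malformed input):

```rust
// Sketch of the "key=value" pair split used by `pubkeys_sigs_of`,
// returning None instead of panicking when the '=' is missing.
fn split_pair(s: &str) -> Option<(&str, &str)> {
    let mut it = s.split('=');
    Some((it.next()?, it.next()?))
}
```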
// Return a signer from matches at `name`
#[allow(clippy::type_complexity)]
pub fn signer_of(
matches: &ArgMatches,
name: &str,
wallet_manager: &mut Option<Rc<RemoteWalletManager>>,
) -> Result<(Option<Box<dyn Signer>>, Option<Pubkey>), Box<dyn std::error::Error>> {
if let Some(location) = matches.value_of(name) {
let signer = signer_from_path(matches, location, name, wallet_manager)?;
let signer_pubkey = signer.pubkey();
Ok((Some(signer), Some(signer_pubkey)))
} else {
Ok((None, None))
}
}
pub fn pubkey_of_signer(
matches: &ArgMatches,
name: &str,
wallet_manager: &mut Option<Rc<RemoteWalletManager>>,
) -> Result<Option<Pubkey>, Box<dyn std::error::Error>> {
if let Some(location) = matches.value_of(name) {
Ok(Some(pubkey_from_path(
matches,
location,
name,
wallet_manager,
)?))
} else {
Ok(None)
}
}
pub fn pubkeys_of_multiple_signers(
matches: &ArgMatches,
name: &str,
wallet_manager: &mut Option<Rc<RemoteWalletManager>>,
) -> Result<Option<Vec<Pubkey>>, Box<dyn std::error::Error>> {
if let Some(pubkey_matches) = matches.values_of(name) {
let mut pubkeys: Vec<Pubkey> = vec![];
for signer in pubkey_matches {
pubkeys.push(pubkey_from_path(matches, signer, name, wallet_manager)?);
}
Ok(Some(pubkeys))
} else {
Ok(None)
}
}
pub fn resolve_signer(
matches: &ArgMatches,
name: &str,
wallet_manager: &mut Option<Rc<RemoteWalletManager>>,
) -> Result<Option<String>, Box<dyn std::error::Error>> {
resolve_signer_from_path(
matches,
matches.value_of(name).unwrap(),
name,
wallet_manager,
)
}
pub fn lamports_of_sol(matches: &ArgMatches, name: &str) -> Option<u64> {
value_of(matches, name).map(sol_to_lamports)
}
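`lamports_of_sol` delegates the conversion to `solana_sdk::native_token::sol_to_lamports`. A sketch of the underlying arithmetic (1 SOL = 10^9 lamports; f64 multiply, then truncate to `u64`):

```rust
// Sketch of the SOL-to-lamports conversion assumed by `lamports_of_sol`.
const LAMPORTS_PER_SOL: u64 = 1_000_000_000;

fn to_lamports(sol: f64) -> u64 {
    (sol * LAMPORTS_PER_SOL as f64) as u64
}
```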
pub fn cluster_type_of(matches: &ArgMatches, name: &str) -> Option<ClusterType> {
value_of(matches, name)
}
pub fn commitment_of(matches: &ArgMatches, name: &str) -> Option<CommitmentConfig> {
matches
.value_of(name)
.map(|value| CommitmentConfig::from_str(value).unwrap_or_default())
}
#[cfg(test)]
mod tests {
use {
super::*,
clap::{Arg, Command},
solana_sdk::signature::write_keypair_file,
std::fs,
};
fn app<'ab>() -> Command<'ab> {
Command::new("test")
.arg(
Arg::new("multiple")
.long("multiple")
.takes_value(true)
.multiple_occurrences(true)
.multiple_values(true),
)
.arg(Arg::new("single").takes_value(true).long("single"))
.arg(Arg::new("unit").takes_value(true).long("unit"))
}
fn tmp_file_path(name: &str, pubkey: &Pubkey) -> String {
use std::env;
let out_dir = env::var("FARF_DIR").unwrap_or_else(|_| "farf".to_string());
format!("{}/tmp/{}-{}", out_dir, name, pubkey)
}
#[test]
fn test_values_of() {
let matches =
app()
.clone()
.get_matches_from(vec!["test", "--multiple", "50", "--multiple", "39"]);
assert_eq!(values_of(&matches, "multiple"), Some(vec![50, 39]));
assert_eq!(values_of::<u64>(&matches, "single"), None);
let pubkey0 = solana_sdk::pubkey::new_rand();
let pubkey1 = solana_sdk::pubkey::new_rand();
let matches = app().clone().get_matches_from(vec![
"test",
"--multiple",
&pubkey0.to_string(),
"--multiple",
&pubkey1.to_string(),
]);
assert_eq!(
values_of(&matches, "multiple"),
Some(vec![pubkey0, pubkey1])
);
}
#[test]
fn test_value_of() {
let matches = app()
.clone()
.get_matches_from(vec!["test", "--single", "50"]);
assert_eq!(value_of(&matches, "single"), Some(50));
assert_eq!(value_of::<u64>(&matches, "multiple"), None);
let pubkey = solana_sdk::pubkey::new_rand();
let matches = app()
.clone()
.get_matches_from(vec!["test", "--single", &pubkey.to_string()]);
assert_eq!(value_of(&matches, "single"), Some(pubkey));
}
#[test]
fn test_keypair_of() {
let keypair = Keypair::new();
let outfile = tmp_file_path("test_keypair_of.json", &keypair.pubkey());
let _ = write_keypair_file(&keypair, &outfile).unwrap();
let matches = app()
.clone()
.get_matches_from(vec!["test", "--single", &outfile]);
assert_eq!(
keypair_of(&matches, "single").unwrap().pubkey(),
keypair.pubkey()
);
assert!(keypair_of(&matches, "multiple").is_none());
let matches =
app()
.clone()
.get_matches_from(vec!["test", "--single", "random_keypair_file.json"]);
assert!(keypair_of(&matches, "single").is_none());
fs::remove_file(&outfile).unwrap();
}
#[test]
fn test_pubkey_of() {
let keypair = Keypair::new();
let outfile = tmp_file_path("test_pubkey_of.json", &keypair.pubkey());
let _ = write_keypair_file(&keypair, &outfile).unwrap();
let matches = app()
.clone()
.get_matches_from(vec!["test", "--single", &outfile]);
assert_eq!(pubkey_of(&matches, "single"), Some(keypair.pubkey()));
assert_eq!(pubkey_of(&matches, "multiple"), None);
let matches =
app()
.clone()
.get_matches_from(vec!["test", "--single", &keypair.pubkey().to_string()]);
assert_eq!(pubkey_of(&matches, "single"), Some(keypair.pubkey()));
let matches =
app()
.clone()
.get_matches_from(vec!["test", "--single", "random_keypair_file.json"]);
assert_eq!(pubkey_of(&matches, "single"), None);
fs::remove_file(&outfile).unwrap();
}
#[test]
fn test_pubkeys_of() {
let keypair = Keypair::new();
let outfile = tmp_file_path("test_pubkeys_of.json", &keypair.pubkey());
let _ = write_keypair_file(&keypair, &outfile).unwrap();
let matches = app().clone().get_matches_from(vec![
"test",
"--multiple",
&keypair.pubkey().to_string(),
"--multiple",
&outfile,
]);
assert_eq!(
pubkeys_of(&matches, "multiple"),
Some(vec![keypair.pubkey(), keypair.pubkey()])
);
fs::remove_file(&outfile).unwrap();
}
#[test]
fn test_pubkeys_sigs_of() {
let key1 = solana_sdk::pubkey::new_rand();
let key2 = solana_sdk::pubkey::new_rand();
let sig1 = Keypair::new().sign_message(&[0u8]);
let sig2 = Keypair::new().sign_message(&[1u8]);
let signer1 = format!("{}={}", key1, sig1);
let signer2 = format!("{}={}", key2, sig2);
let matches = app().clone().get_matches_from(vec![
"test",
"--multiple",
&signer1,
"--multiple",
&signer2,
]);
assert_eq!(
pubkeys_sigs_of(&matches, "multiple"),
Some(vec![(key1, sig1), (key2, sig2)])
);
}
#[test]
fn test_lamports_of_sol() {
let matches = app()
.clone()
.get_matches_from(vec!["test", "--single", "50"]);
assert_eq!(lamports_of_sol(&matches, "single"), Some(50_000_000_000));
assert_eq!(lamports_of_sol(&matches, "multiple"), None);
let matches = app()
.clone()
.get_matches_from(vec!["test", "--single", "1.5"]);
assert_eq!(lamports_of_sol(&matches, "single"), Some(1_500_000_000));
assert_eq!(lamports_of_sol(&matches, "multiple"), None);
let matches = app()
.clone()
.get_matches_from(vec!["test", "--single", "0.03"]);
assert_eq!(lamports_of_sol(&matches, "single"), Some(30_000_000));
}
}
// ---- solana-playground/wasm/utils/solana-clap-v3-utils/src/keypair.rs ----
//! Loading signers and keypairs from the command line.
//!
//! This module contains utilities for loading [Signer]s and [Keypair]s from
//! standard signing sources, from the command line, as in the Solana CLI.
//!
//! The key function here is [`signer_from_path`], which loads a `Signer` from
//! one of several possible sources by interpreting a "path" command line
//! argument. Its documentation includes a description of all possible signing
//! sources supported by the Solana CLI. Many other functions here are
//! variations on, or delegate to, `signer_from_path`.
use std::rc::Rc;
use {
crate::{
input_parsers::{pubkeys_sigs_of, STDOUT_OUTFILE_TOKEN},
offline::{SIGNER_ARG, SIGN_ONLY_ARG},
ArgConstant,
},
bip39::{Language, Mnemonic, Seed},
clap::ArgMatches,
rpassword::prompt_password,
solana_remote_wallet::{
locator::{Locator as RemoteWalletLocator, LocatorError as RemoteWalletLocatorError},
remote_keypair::generate_remote_keypair,
remote_wallet::{maybe_wallet_manager, RemoteWalletError, RemoteWalletManager},
},
solana_sdk::{
derivation_path::{DerivationPath, DerivationPathError},
hash::Hash,
message::Message,
pubkey::Pubkey,
signature::{
generate_seed_from_seed_phrase_and_passphrase, keypair_from_seed,
keypair_from_seed_and_derivation_path, keypair_from_seed_phrase_and_passphrase,
read_keypair, read_keypair_file, Keypair, NullSigner, Presigner, Signature, Signer,
},
},
std::{
cell::RefCell,
convert::TryFrom,
error,
io::{stdin, stdout, Write},
ops::Deref,
str::FromStr,
},
thiserror::Error,
};
pub struct SignOnly {
pub blockhash: Hash,
pub message: Option<String>,
pub present_signers: Vec<(Pubkey, Signature)>,
pub absent_signers: Vec<Pubkey>,
pub bad_signers: Vec<Pubkey>,
}
impl SignOnly {
pub fn has_all_signers(&self) -> bool {
self.absent_signers.is_empty() && self.bad_signers.is_empty()
}
pub fn presigner_of(&self, pubkey: &Pubkey) -> Option<Presigner> {
presigner_from_pubkey_sigs(pubkey, &self.present_signers)
}
}
pub type CliSigners = Vec<Box<dyn Signer>>;
pub type SignerIndex = usize;
pub struct CliSignerInfo {
pub signers: CliSigners,
}
impl CliSignerInfo {
pub fn index_of(&self, pubkey: Option<Pubkey>) -> Option<usize> {
if let Some(pubkey) = pubkey {
self.signers
.iter()
.position(|signer| signer.pubkey() == pubkey)
} else {
Some(0)
}
}
pub fn index_of_or_none(&self, pubkey: Option<Pubkey>) -> Option<usize> {
if let Some(pubkey) = pubkey {
self.signers
.iter()
.position(|signer| signer.pubkey() == pubkey)
} else {
None
}
}
pub fn signers_for_message(&self, message: &Message) -> Vec<&dyn Signer> {
self.signers
.iter()
.filter_map(|k| {
if message.signer_keys().contains(&&k.pubkey()) {
Some(k.as_ref())
} else {
None
}
})
.collect()
}
}
/// A command line argument that loads a default signer in absence of other signers.
///
/// This type manages a default signing source which may be overridden by other
/// signing sources via its [`generate_unique_signers`] method.
///
/// [`generate_unique_signers`]: DefaultSigner::generate_unique_signers
///
/// `path` is a signing source as documented by [`signer_from_path`], and
/// `arg_name` is the name of its [clap] command line argument, which is passed
/// to `signer_from_path` as its `keypair_name` argument.
#[derive(Debug, Default)]
pub struct DefaultSigner {
/// The name of the signers command line argument.
pub arg_name: String,
/// The signing source.
pub path: String,
is_path_checked: RefCell<bool>,
}
impl DefaultSigner {
/// Create a new `DefaultSigner`.
///
/// `path` is a signing source as documented by [`signer_from_path`], and
/// `arg_name` is the name of its [clap] command line argument, which is
/// passed to `signer_from_path` as its `keypair_name` argument.
///
/// [clap]: https://docs.rs/clap
///
/// # Examples
///
/// ```no_run
/// use clap::{Arg, Command};
/// use solana_clap_v3_utils_wasm::keypair::DefaultSigner;
/// use solana_clap_v3_utils_wasm::offline::OfflineArgs;
///
/// let clap_app = Command::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::new("keypair")
/// .required(true)
/// .help("The default signer"))
/// .offline_args();
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str: String = clap_matches.value_of_t_or_exit("keypair");
///
/// let default_signer = DefaultSigner::new("keypair", &keypair_str);
/// # assert!(default_signer.arg_name.len() > 0);
/// assert_eq!(default_signer.path, keypair_str);
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn new<AN: AsRef<str>, P: AsRef<str>>(arg_name: AN, path: P) -> Self {
let arg_name = arg_name.as_ref().to_string();
let path = path.as_ref().to_string();
Self {
arg_name,
path,
..Self::default()
}
}
fn path(&self) -> Result<&str, Box<dyn std::error::Error>> {
if !self.is_path_checked.borrow().deref() {
parse_signer_source(&self.path)
.and_then(|s| {
if let SignerSourceKind::Filepath(path) = &s.kind {
std::fs::metadata(path).map(|_| ()).map_err(|e| e.into())
} else {
Ok(())
}
})
.map_err(|_| {
std::io::Error::new(
std::io::ErrorKind::Other,
format!(
"No default signer found, run \"solana-keygen new -o {}\" to create a new one",
self.path
),
)
})?;
*self.is_path_checked.borrow_mut() = true;
}
Ok(&self.path)
}
/// Generate a unique set of signers, possibly excluding this default signer.
///
/// This function allows a command line application to have a default
/// signer, perhaps representing a default wallet, but to override that
/// signer and instead sign with one or more other signers.
///
/// `bulk_signers` is a vector of signers, all of which are optional. If any
/// of those signers is `None`, then the default signer will be loaded; if
/// all of those signers are `Some`, then the default signer will not be
/// loaded.
///
/// The returned value includes all of the `bulk_signers` that were not
/// `None`, and maybe the default signer, if it was loaded.
///
/// # Examples
///
/// ```no_run
/// use clap::{Arg, Command};
/// use solana_clap_v3_utils_wasm::keypair::{DefaultSigner, signer_from_path};
/// use solana_clap_v3_utils_wasm::offline::OfflineArgs;
/// use solana_sdk::signer::Signer;
///
/// let clap_app = Command::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::new("keypair")
/// .required(true)
/// .help("The default signer"))
/// .arg(Arg::new("payer")
/// .long("payer")
/// .help("The account paying for the transaction"))
/// .offline_args();
///
/// let mut wallet_manager = None;
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str: String = clap_matches.value_of_t_or_exit("keypair");
/// let maybe_payer = clap_matches.value_of("payer");
///
/// let default_signer = DefaultSigner::new("keypair", &keypair_str);
/// let maybe_payer_signer = maybe_payer.map(|payer| {
/// signer_from_path(&clap_matches, payer, "payer", &mut wallet_manager)
/// }).transpose()?;
/// let bulk_signers = vec![maybe_payer_signer];
///
/// let unique_signers = default_signer.generate_unique_signers(
/// bulk_signers,
/// &clap_matches,
/// &mut wallet_manager,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn generate_unique_signers(
&self,
bulk_signers: Vec<Option<Box<dyn Signer>>>,
matches: &ArgMatches,
wallet_manager: &mut Option<Rc<RemoteWalletManager>>,
) -> Result<CliSignerInfo, Box<dyn error::Error>> {
let mut unique_signers = vec![];
// Determine if the default signer is needed
if bulk_signers.iter().any(|signer| signer.is_none()) {
let default_signer = self.signer_from_path(matches, wallet_manager)?;
unique_signers.push(default_signer);
}
for signer in bulk_signers.into_iter().flatten() {
if !unique_signers.iter().any(|s| s == &signer) {
unique_signers.push(signer);
}
}
Ok(CliSignerInfo {
signers: unique_signers,
})
}
/// Loads the default [Signer] from one of several possible sources.
///
/// The `path` is not strictly a file system path, but is interpreted as
/// various types of _signing source_, depending on its format, one of which
/// is a path to a keypair file. Some sources may require user interaction
/// in the course of calling this function.
///
    /// This simply delegates to the [`signer_from_path`] free function, passing
    /// it the `DefaultSigner`'s `path` and `arg_name` fields as the `path` and
    /// `keypair_name` arguments.
///
/// See the [`signer_from_path`] free function for full documentation of how
/// this function interprets its arguments.
///
/// # Examples
///
/// ```no_run
/// use clap::{Arg, Command};
/// use solana_clap_v3_utils_wasm::keypair::DefaultSigner;
/// use solana_clap_v3_utils_wasm::offline::OfflineArgs;
///
/// let clap_app = Command::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::new("keypair")
/// .required(true)
/// .help("The default signer"))
/// .offline_args();
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str: String = clap_matches.value_of_t_or_exit("keypair");
/// let default_signer = DefaultSigner::new("keypair", &keypair_str);
/// let mut wallet_manager = None;
///
/// let signer = default_signer.signer_from_path(
/// &clap_matches,
/// &mut wallet_manager,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn signer_from_path(
&self,
matches: &ArgMatches,
wallet_manager: &mut Option<Rc<RemoteWalletManager>>,
) -> Result<Box<dyn Signer>, Box<dyn std::error::Error>> {
signer_from_path(matches, self.path()?, &self.arg_name, wallet_manager)
}
/// Loads the default [Signer] from one of several possible sources.
///
/// The `path` is not strictly a file system path, but is interpreted as
/// various types of _signing source_, depending on its format, one of which
/// is a path to a keypair file. Some sources may require user interaction
/// in the course of calling this function.
///
    /// This simply delegates to the [`signer_from_path_with_config`] free
    /// function, passing it the `DefaultSigner`'s `path` and `arg_name` fields
    /// as the `path` and `keypair_name` arguments.
///
/// See the [`signer_from_path`] free function for full documentation of how
/// this function interprets its arguments.
///
/// # Examples
///
/// ```no_run
/// use clap::{Arg, Command};
/// use solana_clap_v3_utils_wasm::keypair::{SignerFromPathConfig, DefaultSigner};
/// use solana_clap_v3_utils_wasm::offline::OfflineArgs;
///
/// let clap_app = Command::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::new("keypair")
/// .required(true)
/// .help("The default signer"))
/// .offline_args();
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str: String = clap_matches.value_of_t_or_exit("keypair");
/// let default_signer = DefaultSigner::new("keypair", &keypair_str);
/// let mut wallet_manager = None;
///
/// // Allow pubkey signers without accompanying signatures
/// let config = SignerFromPathConfig {
/// allow_null_signer: true,
/// };
///
/// let signer = default_signer.signer_from_path_with_config(
/// &clap_matches,
/// &mut wallet_manager,
/// &config,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn signer_from_path_with_config(
&self,
matches: &ArgMatches,
wallet_manager: &mut Option<Rc<RemoteWalletManager>>,
config: &SignerFromPathConfig,
) -> Result<Box<dyn Signer>, Box<dyn std::error::Error>> {
signer_from_path_with_config(
matches,
self.path()?,
&self.arg_name,
wallet_manager,
config,
)
}
}
pub(crate) struct SignerSource {
pub kind: SignerSourceKind,
pub derivation_path: Option<DerivationPath>,
pub legacy: bool,
}
impl SignerSource {
fn new(kind: SignerSourceKind) -> Self {
Self {
kind,
derivation_path: None,
legacy: false,
}
}
fn new_legacy(kind: SignerSourceKind) -> Self {
Self {
kind,
derivation_path: None,
legacy: true,
}
}
}
const SIGNER_SOURCE_PROMPT: &str = "prompt";
const SIGNER_SOURCE_FILEPATH: &str = "file";
const SIGNER_SOURCE_USB: &str = "usb";
const SIGNER_SOURCE_STDIN: &str = "stdin";
const SIGNER_SOURCE_PUBKEY: &str = "pubkey";
pub(crate) enum SignerSourceKind {
Prompt,
Filepath(String),
Usb(RemoteWalletLocator),
Stdin,
Pubkey(Pubkey),
}
impl AsRef<str> for SignerSourceKind {
fn as_ref(&self) -> &str {
match self {
Self::Prompt => SIGNER_SOURCE_PROMPT,
Self::Filepath(_) => SIGNER_SOURCE_FILEPATH,
Self::Usb(_) => SIGNER_SOURCE_USB,
Self::Stdin => SIGNER_SOURCE_STDIN,
Self::Pubkey(_) => SIGNER_SOURCE_PUBKEY,
}
}
}
impl std::fmt::Debug for SignerSourceKind {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
let s: &str = self.as_ref();
write!(f, "{}", s)
}
}
#[derive(Debug, Error)]
pub(crate) enum SignerSourceError {
#[error("unrecognized signer source")]
UnrecognizedSource,
#[error(transparent)]
RemoteWalletLocatorError(#[from] RemoteWalletLocatorError),
#[error(transparent)]
DerivationPathError(#[from] DerivationPathError),
#[error(transparent)]
IoError(#[from] std::io::Error),
}
pub(crate) fn parse_signer_source<S: AsRef<str>>(
source: S,
) -> Result<SignerSource, SignerSourceError> {
let source = source.as_ref();
let source = {
#[cfg(target_family = "windows")]
{
// trim matched single-quotes since cmd.exe won't
let mut source = source;
while let Some(trimmed) = source.strip_prefix('\'') {
source = if let Some(trimmed) = trimmed.strip_suffix('\'') {
trimmed
} else {
break;
}
}
source.replace('\\', "/")
}
#[cfg(not(target_family = "windows"))]
{
source.to_string()
}
};
match uriparse::URIReference::try_from(source.as_str()) {
Err(_) => Err(SignerSourceError::UnrecognizedSource),
Ok(uri) => {
if let Some(scheme) = uri.scheme() {
let scheme = scheme.as_str().to_ascii_lowercase();
match scheme.as_str() {
SIGNER_SOURCE_PROMPT => Ok(SignerSource {
kind: SignerSourceKind::Prompt,
derivation_path: DerivationPath::from_uri_any_query(&uri)?,
legacy: false,
}),
SIGNER_SOURCE_FILEPATH => Ok(SignerSource::new(SignerSourceKind::Filepath(
uri.path().to_string(),
))),
SIGNER_SOURCE_USB => Ok(SignerSource {
kind: SignerSourceKind::Usb(RemoteWalletLocator::new_from_uri(&uri)?),
derivation_path: DerivationPath::from_uri_key_query(&uri)?,
legacy: false,
}),
SIGNER_SOURCE_STDIN => Ok(SignerSource::new(SignerSourceKind::Stdin)),
_ => {
#[cfg(target_family = "windows")]
// On Windows, an absolute path's drive letter will be parsed as the URI
                        // scheme. Assume a filepath source in case of a single character scheme.
if scheme.len() == 1 {
return Ok(SignerSource::new(SignerSourceKind::Filepath(source)));
}
Err(SignerSourceError::UnrecognizedSource)
}
}
} else {
match source.as_str() {
STDOUT_OUTFILE_TOKEN => Ok(SignerSource::new(SignerSourceKind::Stdin)),
ASK_KEYWORD => Ok(SignerSource::new_legacy(SignerSourceKind::Prompt)),
_ => match Pubkey::from_str(source.as_str()) {
Ok(pubkey) => Ok(SignerSource::new(SignerSourceKind::Pubkey(pubkey))),
Err(_) => std::fs::metadata(source.as_str())
.map(|_| SignerSource::new(SignerSourceKind::Filepath(source)))
.map_err(|err| err.into()),
},
}
}
}
}
}
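The dispatch above can be summarized as a small scheme classifier. This sketch covers only the `prompt:`, `file:`, and `stdin:` schemes plus the `-` stdin token; pubkey parsing, `usb:` locators, derivation paths, and the Windows quirks are omitted to keep it self-contained:

```rust
// Simplified classification of a signer-source string, mirroring the
// scheme dispatch in `parse_signer_source` (usb/pubkey handling omitted).
#[derive(Debug, PartialEq)]
enum Kind {
    Prompt,
    File(String),
    Stdin,
    Path(String),
}

fn classify(source: &str) -> Kind {
    match source.split_once(':') {
        Some(("prompt", _)) => Kind::Prompt,
        // Strip an optional "//" authority prefix from file URIs.
        Some(("file", rest)) => Kind::File(rest.trim_start_matches("//").to_string()),
        Some(("stdin", _)) => Kind::Stdin,
        _ if source == "-" => Kind::Stdin,
        // Everything else is treated as a plain filesystem path.
        _ => Kind::Path(source.to_string()),
    }
}
```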
pub fn presigner_from_pubkey_sigs(
pubkey: &Pubkey,
signers: &[(Pubkey, Signature)],
) -> Option<Presigner> {
signers.iter().find_map(|(signer, sig)| {
if *signer == *pubkey {
Some(Presigner::new(signer, sig))
} else {
None
}
})
}
#[derive(Debug, Default)]
pub struct SignerFromPathConfig {
pub allow_null_signer: bool,
}
/// Loads a [Signer] from one of several possible sources.
///
/// The `path` is not strictly a file system path, but is interpreted as various
/// types of _signing source_, depending on its format, one of which is a path
/// to a keypair file. Some sources may require user interaction in the course
/// of calling this function.
///
/// The result of this function is a boxed object of the [Signer] trait. To load
/// a concrete [Keypair], use the [keypair_from_path] function, though note that
/// it does not support all signer sources.
///
/// The `matches` argument is the same set of parsed [clap] matches from which
/// `path` was parsed. It is used to parse various additional command line
/// arguments, depending on which signing source is requested, as described
/// below in "Signing sources".
///
/// [clap]: https://docs.rs/clap
///
/// The `keypair_name` argument is the "name" of the signer, and is typically
/// the name of the clap argument from which the `path` argument was parsed,
/// like "keypair", "from", or "fee-payer". It is used solely for interactively
/// prompting the user, either when entering seed phrases or selecting from
/// multiple hardware wallets.
///
/// The `wallet_manager` is used for establishing connections to a hardware
/// device such as Ledger. If `wallet_manager` is a reference to `None`, and a
/// hardware signer is requested, then this function will attempt to create a
/// wallet manager, assigning it to the mutable `wallet_manager` reference. This
/// argument is typically a reference to `None`.
///
/// # Signing sources
///
/// The `path` argument can simply be a path to a keypair file, but it may also
/// be interpreted in several other ways, in the following order.
///
/// Firstly, the `path` argument may be interpreted as a [URI], with the URI
/// scheme indicating where to load the signer from. If it parses as a URI, then
/// the following schemes are supported:
///
/// - `file:` — Read the keypair from a JSON keypair file. The path portion
/// of the URI is the file path.
///
/// - `stdin:` — Read the keypair from stdin, in the JSON format used by
/// the keypair file.
///
/// Non-scheme parts of the URI are ignored.
///
/// - `prompt:` — The user will be prompted at the command line
/// for their seed phrase and passphrase.
///
/// In this URI the [query string][qs] may contain zero or one of the
/// following key/value pairs that determine the [BIP44 derivation path][dp]
/// of the private key from the seed:
///
/// - `key` — In this case the value is either one or two numerical
/// indexes separated by a slash, which represent the "account", and
/// "change" components of the BIP44 derivation path. Example: `key=0/0`.
///
/// - `full-path` — In this case the value is a full derivation path,
/// and the user is responsible for ensuring it is correct. Example:
/// `full-path=m/44/501/0/0/0`.
///
/// If neither is provided, then the default derivation path is used.
///
/// Note that when specifying derivation paths, this routine will convert all
/// indexes into ["hardened"] indexes, even if written as "normal" indexes.
///
/// Other components of the URI besides the scheme and query string are ignored.
///
/// If the "skip_seed_phrase_validation" argument, as defined in
/// [SKIP_SEED_PHRASE_VALIDATION_ARG] is found in `matches`, then the keypair
/// seed will be generated directly from the seed phrase, without parsing or
/// validating it as a BIP39 seed phrase. This allows the use of non-BIP39 seed
/// phrases.
///
/// - `usb:` — Use a USB hardware device as the signer. In this case, the
/// URI host indicates the device type, and is required. The only currently valid host
/// value is "ledger".
///
/// Optionally, the first segment of the URI path indicates the base-58
/// encoded pubkey of the wallet, and the "account" and "change" indices of
/// the derivation path can be specified with the `key=` query parameter, as
/// with the `prompt:` URI.
///
/// Examples:
///
/// - `usb://ledger`
/// - `usb://ledger?key=0/0`
/// - `usb://ledger/9rPVSygg3brqghvdZ6wsL2i5YNQTGhXGdJzF65YxaCQd`
/// - `usb://ledger/9rPVSygg3brqghvdZ6wsL2i5YNQTGhXGdJzF65YxaCQd?key=0/0`
///
/// Next the `path` argument may be one of the following strings:
///
/// - `-` — Read the keypair from stdin. This is the same as the `stdin:`
/// URI scheme.
///
/// - `ASK` — The user will be prompted at the command line for their seed
/// phrase and passphrase. _This uses a legacy key derivation method and should
/// usually be avoided in favor of `prompt:`._
///
/// Next, if the `path` argument parses as a base-58 public key, then the signer
/// is created without a private key, but with presigned signatures, each parsed
/// from the additional command line arguments, provided by the `matches`
/// argument.
///
/// In this case, the remaining command line arguments are searched for clap
/// arguments named "signer", as defined by [SIGNER_ARG], and each is parsed as
/// a key-value pair of the form "pubkey=signature", where `pubkey` is the same
/// base-58 public key, and `signature` is a serialized signature produced by
/// the corresponding keypair. One of the "signer" signatures must be for the
/// pubkey specified in `path` or this function will return an error; unless the
/// "sign_only" clap argument, as defined by [SIGN_ONLY_ARG], is present in
/// `matches`, in which case the signer will be created with no associated
/// signatures.
///
/// Finally, if `path`, interpreted as a file path, represents a file on disk,
/// then the signer is created by reading that file as a JSON-serialized
/// keypair. This is the same as the `file:` URI scheme.
///
/// [qs]: https://en.wikipedia.org/wiki/Query_string
/// [dp]: https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki
/// [URI]: https://en.wikipedia.org/wiki/Uniform_Resource_Identifier
/// ["hardened"]: https://wiki.trezor.io/Hardened_and_non-hardened_derivation
///
/// # Examples
///
/// This shows a reasonable way to set up clap to parse all possible signer
/// sources. Note the use of the [`OfflineArgs::offline_args`] method to add
/// correct clap definitions of the `--signer` and `--sign-only` arguments, as
/// required by the base-58 pubkey offline signing method.
///
/// [`OfflineArgs::offline_args`]: crate::offline::OfflineArgs::offline_args
///
/// ```no_run
/// use clap::{Arg, Command};
/// use solana_clap_v3_utils_wasm::keypair::signer_from_path;
/// use solana_clap_v3_utils_wasm::offline::OfflineArgs;
///
/// let clap_app = Command::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::new("keypair")
/// .required(true)
/// .help("The default signer"))
/// .offline_args();
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str: String = clap_matches.value_of_t_or_exit("keypair");
/// let mut wallet_manager = None;
/// let signer = signer_from_path(
/// &clap_matches,
/// &keypair_str,
/// "keypair",
/// &mut wallet_manager,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn signer_from_path(
matches: &ArgMatches,
path: &str,
keypair_name: &str,
wallet_manager: &mut Option<Rc<RemoteWalletManager>>,
) -> Result<Box<dyn Signer>, Box<dyn error::Error>> {
let config = SignerFromPathConfig::default();
signer_from_path_with_config(matches, path, keypair_name, wallet_manager, &config)
}
/// Loads a [Signer] from one of several possible sources.
///
/// The `path` is not strictly a file system path, but is interpreted as various
/// types of _signing source_, depending on its format, one of which is a path
/// to a keypair file. Some sources may require user interaction in the course
/// of calling this function.
///
/// This is the same as [`signer_from_path`] except that it additionally
/// accepts a [`SignerFromPathConfig`] argument.
///
/// If the `allow_null_signer` field of `config` is `true`, then pubkey signers
/// are allowed to have zero associated signatures via additional "signer"
/// command line arguments. It has the same effect as if the "sign_only" clap
/// argument is present.
///
/// See [`signer_from_path`] for full documentation of how this function
/// interprets its arguments.
///
/// # Examples
///
/// This shows a reasonable way to set up clap to parse all possible signer
/// sources. Note the use of the [`OfflineArgs::offline_args`] method to add
/// correct clap definitions of the `--signer` and `--sign-only` arguments, as
/// required by the base-58 pubkey offline signing method.
///
/// [`OfflineArgs::offline_args`]: crate::offline::OfflineArgs::offline_args
///
/// ```no_run
/// use clap::{Arg, Command};
/// use solana_clap_v3_utils_wasm::keypair::{signer_from_path_with_config, SignerFromPathConfig};
/// use solana_clap_v3_utils_wasm::offline::OfflineArgs;
///
/// let clap_app = Command::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::new("keypair")
/// .required(true)
/// .help("The default signer"))
/// .offline_args();
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str: String = clap_matches.value_of_t_or_exit("keypair");
/// let mut wallet_manager = None;
///
/// // Allow pubkey signers without accompanying signatures
/// let config = SignerFromPathConfig {
/// allow_null_signer: true,
/// };
///
/// let signer = signer_from_path_with_config(
/// &clap_matches,
/// &keypair_str,
/// "keypair",
/// &mut wallet_manager,
/// &config,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn signer_from_path_with_config(
matches: &ArgMatches,
path: &str,
keypair_name: &str,
wallet_manager: &mut Option<Rc<RemoteWalletManager>>,
config: &SignerFromPathConfig,
) -> Result<Box<dyn Signer>, Box<dyn error::Error>> {
let SignerSource {
kind,
derivation_path,
legacy,
} = parse_signer_source(path)?;
match kind {
SignerSourceKind::Prompt => {
let skip_validation = matches.is_present(SKIP_SEED_PHRASE_VALIDATION_ARG.name);
Ok(Box::new(keypair_from_seed_phrase(
keypair_name,
skip_validation,
false,
derivation_path,
legacy,
)?))
}
SignerSourceKind::Filepath(path) => match read_keypair_file(&path) {
Err(e) => Err(std::io::Error::new(
std::io::ErrorKind::Other,
format!("could not read keypair file \"{}\". Run \"solana-keygen new\" to create a keypair file: {}", path, e),
)
.into()),
Ok(file) => Ok(Box::new(file)),
},
SignerSourceKind::Stdin => {
let mut stdin = std::io::stdin();
Ok(Box::new(read_keypair(&mut stdin)?))
}
SignerSourceKind::Usb(locator) => {
if wallet_manager.is_none() {
*wallet_manager = maybe_wallet_manager()?;
}
if let Some(wallet_manager) = wallet_manager {
Ok(Box::new(generate_remote_keypair(
locator,
derivation_path.unwrap_or_default(),
wallet_manager,
matches.is_present("confirm_key"),
keypair_name,
)?))
} else {
Err(RemoteWalletError::NoDeviceFound.into())
}
}
SignerSourceKind::Pubkey(pubkey) => {
let presigner = pubkeys_sigs_of(matches, SIGNER_ARG.name)
.as_ref()
.and_then(|presigners| presigner_from_pubkey_sigs(&pubkey, presigners));
if let Some(presigner) = presigner {
Ok(Box::new(presigner))
} else if config.allow_null_signer || matches.is_present(SIGN_ONLY_ARG.name) {
Ok(Box::new(NullSigner::new(&pubkey)))
} else {
Err(std::io::Error::new(
std::io::ErrorKind::Other,
format!("missing signature for supplied pubkey: {}", pubkey),
)
.into())
}
}
}
}
/// Loads the pubkey of a [Signer] from one of several possible sources.
///
/// The `path` is not strictly a file system path, but is interpreted as various
/// types of _signing source_, depending on its format, one of which is a path
/// to a keypair file. Some sources may require user interaction in the course
/// of calling this function.
///
/// The only difference between this function and [`signer_from_path`] is in the
/// case of a "pubkey" path: this function does not require that accompanying
/// command line arguments contain an offline signature.
///
/// See [`signer_from_path`] for full documentation of how this function
/// interprets its arguments.
///
/// # Examples
///
/// ```no_run
/// use clap::{Arg, Command};
/// use solana_clap_v3_utils_wasm::keypair::pubkey_from_path;
///
/// let clap_app = Command::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::new("keypair")
/// .required(true)
/// .help("The default signer"));
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str: String = clap_matches.value_of_t_or_exit("keypair");
/// let mut wallet_manager = None;
/// let pubkey = pubkey_from_path(
/// &clap_matches,
/// &keypair_str,
/// "keypair",
/// &mut wallet_manager,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn pubkey_from_path(
matches: &ArgMatches,
path: &str,
keypair_name: &str,
wallet_manager: &mut Option<Rc<RemoteWalletManager>>,
) -> Result<Pubkey, Box<dyn error::Error>> {
let SignerSource { kind, .. } = parse_signer_source(path)?;
match kind {
SignerSourceKind::Pubkey(pubkey) => Ok(pubkey),
_ => Ok(signer_from_path(matches, path, keypair_name, wallet_manager)?.pubkey()),
}
}
pub fn resolve_signer_from_path(
matches: &ArgMatches,
path: &str,
keypair_name: &str,
wallet_manager: &mut Option<Rc<RemoteWalletManager>>,
) -> Result<Option<String>, Box<dyn error::Error>> {
let SignerSource {
kind,
derivation_path,
legacy,
} = parse_signer_source(path)?;
match kind {
SignerSourceKind::Prompt => {
let skip_validation = matches.is_present(SKIP_SEED_PHRASE_VALIDATION_ARG.name);
// This method validates the seed phrase, but returns `None` because there is no path
// on disk or to a device
keypair_from_seed_phrase(
keypair_name,
skip_validation,
false,
derivation_path,
legacy,
)
.map(|_| None)
}
SignerSourceKind::Filepath(path) => match read_keypair_file(&path) {
Err(e) => Err(std::io::Error::new(
std::io::ErrorKind::Other,
format!(
"could not read keypair file \"{}\". \
Run \"solana-keygen new\" to create a keypair file: {}",
path, e
),
)
.into()),
Ok(_) => Ok(Some(path.to_string())),
},
SignerSourceKind::Stdin => {
let mut stdin = std::io::stdin();
// This method validates the keypair from stdin, but returns `None` because there is no
// path on disk or to a device
read_keypair(&mut stdin).map(|_| None)
}
SignerSourceKind::Usb(locator) => {
if wallet_manager.is_none() {
*wallet_manager = maybe_wallet_manager()?;
}
if let Some(wallet_manager) = wallet_manager {
let path = generate_remote_keypair(
locator,
derivation_path.unwrap_or_default(),
wallet_manager,
matches.is_present("confirm_key"),
keypair_name,
)
.map(|keypair| keypair.path)?;
Ok(Some(path))
} else {
Err(RemoteWalletError::NoDeviceFound.into())
}
}
_ => Ok(Some(path.to_string())),
}
}
// Keyword used to indicate that the user should be prompted for a keypair seed phrase
pub const ASK_KEYWORD: &str = "ASK";
pub const SKIP_SEED_PHRASE_VALIDATION_ARG: ArgConstant<'static> = ArgConstant {
long: "skip-seed-phrase-validation",
name: "skip_seed_phrase_validation",
help: "Skip validation of seed phrases. Use this if your phrase does not use the BIP39 official English word list",
};
/// Prompts the user for a passphrase and then asks for confirmation to check for mistakes
pub fn prompt_passphrase(prompt: &str) -> Result<String, Box<dyn error::Error>> {
let passphrase = prompt_password(prompt)?;
if !passphrase.is_empty() {
let confirmed = rpassword::prompt_password("Enter same passphrase again: ")?;
if confirmed != passphrase {
return Err("Passphrases did not match".into());
}
}
Ok(passphrase)
}
/// Loads a [Keypair] from one of several possible sources.
///
/// The `path` is not strictly a file system path, but is interpreted as various
/// types of _signing source_, depending on its format, one of which is a path
/// to a keypair file. Some sources may require user interaction in the course
/// of calling this function.
///
/// This is the same as [`signer_from_path`] except that it only supports
/// signing sources that can result in a [Keypair]: prompt for seed phrase,
/// keypair file, and stdin.
///
/// If `confirm_pubkey` is `true` then after deriving the pubkey, the user will
/// be prompted to confirm that the pubkey is as expected.
///
/// See [`signer_from_path`] for full documentation of how this function
/// interprets its arguments.
///
/// # Examples
///
/// ```no_run
/// use clap::{Arg, Command};
/// use solana_clap_v3_utils_wasm::keypair::keypair_from_path;
///
/// let clap_app = Command::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::new("keypair")
/// .required(true)
/// .help("The default signer"));
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str: String = clap_matches.value_of_t_or_exit("keypair");
///
/// let signer = keypair_from_path(
/// &clap_matches,
/// &keypair_str,
/// "keypair",
/// false,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn keypair_from_path(
matches: &ArgMatches,
path: &str,
keypair_name: &str,
confirm_pubkey: bool,
) -> Result<Keypair, Box<dyn error::Error>> {
let SignerSource {
kind,
derivation_path,
legacy,
} = parse_signer_source(path)?;
match kind {
SignerSourceKind::Prompt => {
let skip_validation = matches.is_present(SKIP_SEED_PHRASE_VALIDATION_ARG.name);
Ok(keypair_from_seed_phrase(
keypair_name,
skip_validation,
confirm_pubkey,
derivation_path,
legacy,
)?)
}
SignerSourceKind::Filepath(path) => match read_keypair_file(&path) {
Err(e) => Err(std::io::Error::new(
std::io::ErrorKind::Other,
format!(
"could not read keypair file \"{}\". \
Run \"solana-keygen new\" to create a keypair file: {}",
path, e
),
)
.into()),
Ok(file) => Ok(file),
},
SignerSourceKind::Stdin => {
let mut stdin = std::io::stdin();
Ok(read_keypair(&mut stdin)?)
}
_ => Err(std::io::Error::new(
std::io::ErrorKind::Other,
format!(
"signer of type `{:?}` does not support Keypair output",
kind
),
)
.into()),
}
}
/// Reads user input from stdin to retrieve a seed phrase and passphrase for keypair derivation.
///
/// Optionally skips validation of seed phrase. Optionally confirms recovered
/// public key.
pub fn keypair_from_seed_phrase(
keypair_name: &str,
skip_validation: bool,
confirm_pubkey: bool,
derivation_path: Option<DerivationPath>,
legacy: bool,
) -> Result<Keypair, Box<dyn error::Error>> {
let seed_phrase = prompt_password(format!("[{keypair_name}] seed phrase: "))?;
let seed_phrase = seed_phrase.trim();
let passphrase_prompt = format!(
"[{keypair_name}] If this seed phrase has an associated passphrase, enter it now. Otherwise, press ENTER to continue: "
);
let keypair = if skip_validation {
let passphrase = prompt_passphrase(&passphrase_prompt)?;
if legacy {
keypair_from_seed_phrase_and_passphrase(seed_phrase, &passphrase)?
} else {
let seed = generate_seed_from_seed_phrase_and_passphrase(seed_phrase, &passphrase);
keypair_from_seed_and_derivation_path(&seed, derivation_path)?
}
} else {
let sanitized = sanitize_seed_phrase(seed_phrase);
let parse_language_fn = || {
for language in &[
Language::English,
Language::ChineseSimplified,
Language::ChineseTraditional,
Language::Japanese,
Language::Spanish,
Language::Korean,
Language::French,
Language::Italian,
] {
if let Ok(mnemonic) = Mnemonic::from_phrase(&sanitized, *language) {
return Ok(mnemonic);
}
}
Err("Can't get mnemonic from seed phrases")
};
let mnemonic = parse_language_fn()?;
let passphrase = prompt_passphrase(&passphrase_prompt)?;
let seed = Seed::new(&mnemonic, &passphrase);
if legacy {
keypair_from_seed(seed.as_bytes())?
} else {
keypair_from_seed_and_derivation_path(seed.as_bytes(), derivation_path)?
}
};
if confirm_pubkey {
let pubkey = keypair.pubkey();
print!("Recovered pubkey `{:?}`. Continue? (y/n): ", pubkey);
let _ignored = stdout().flush();
let mut input = String::new();
stdin().read_line(&mut input).expect("Unexpected input");
if input.to_lowercase().trim() != "y" {
println!("Exiting");
panic!();
}
}
Ok(keypair)
}
fn sanitize_seed_phrase(seed_phrase: &str) -> String {
seed_phrase
.split_whitespace()
.collect::<Vec<&str>>()
.join(" ")
}
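Since `sanitize_seed_phrase` is a small pure function, its behavior is easy to demonstrate in isolation. A standalone sketch, re-implementing the function outside the crate:

```rust
// Re-implementation of `sanitize_seed_phrase` for illustration: collapse
// any runs of whitespace (spaces, tabs, newlines) into single spaces and
// trim the ends, so user-pasted seed phrases compare consistently.
fn sanitize_seed_phrase(seed_phrase: &str) -> String {
    seed_phrase
        .split_whitespace()
        .collect::<Vec<&str>>()
        .join(" ")
}

fn main() {
    assert_eq!(
        sanitize_seed_phrase("  tiny \t escape\n drive  "),
        "tiny escape drive"
    );
    assert_eq!(sanitize_seed_phrase("one"), "one");
    println!("ok");
}
```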
// For use with WASM, since `DefaultSigner` is not WASM-compatible
pub fn generate_unique_signers(
default_signer: Box<dyn Signer>,
bulk_signers: Vec<Option<Box<dyn Signer>>>,
) -> Result<CliSignerInfo, Box<dyn error::Error>> {
let mut unique_signers = vec![];
// Determine if the default signer is needed
if bulk_signers.iter().any(|signer| signer.is_none()) {
unique_signers.push(default_signer);
}
for signer in bulk_signers.into_iter().flatten() {
if !unique_signers.iter().any(|s| s == &signer) {
unique_signers.push(signer);
}
}
Ok(CliSignerInfo {
signers: unique_signers,
})
}
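The dedup rule in `generate_unique_signers` — include the default signer only when some bulk slot is `None`, then keep the first occurrence of each remaining signer — can be sketched with plain strings standing in for `Box<dyn Signer>`. The `unique` helper below is a hypothetical simplification, not part of the crate:

```rust
// Sketch of the `generate_unique_signers` dedup rule, with strings
// standing in for boxed signers (hypothetical simplification).
fn unique(default_signer: &str, bulk_signers: Vec<Option<&str>>) -> Vec<String> {
    let mut unique_signers: Vec<String> = vec![];
    // The default signer is only needed when some slot was left unfilled.
    if bulk_signers.iter().any(|s| s.is_none()) {
        unique_signers.push(default_signer.to_string());
    }
    for signer in bulk_signers.into_iter().flatten() {
        if !unique_signers.iter().any(|s| s.as_str() == signer) {
            unique_signers.push(signer.to_string());
        }
    }
    unique_signers
}

fn main() {
    // `None` forces the default signer in; duplicates collapse to one entry.
    assert_eq!(
        unique("default", vec![None, Some("a"), Some("a"), Some("b")]),
        vec!["default", "a", "b"]
    );
    // With no `None` slots, the default signer is omitted entirely.
    assert_eq!(unique("default", vec![Some("a")]), vec!["a"]);
    println!("ok");
}
```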
solana_public_repos/solana-playground/solana-playground/wasm/pkgs/playnet/playnet_bg.js
import * as wasm from './playnet_bg.wasm';
const heap = new Array(32).fill(undefined);
heap.push(undefined, null, true, false);
function getObject(idx) { return heap[idx]; }
let heap_next = heap.length;
function dropObject(idx) {
if (idx < 36) return;
heap[idx] = heap_next;
heap_next = idx;
}
function takeObject(idx) {
const ret = getObject(idx);
dropObject(idx);
return ret;
}
const lTextDecoder = typeof TextDecoder === 'undefined' ? (0, module.require)('util').TextDecoder : TextDecoder;
let cachedTextDecoder = new lTextDecoder('utf-8', { ignoreBOM: true, fatal: true });
cachedTextDecoder.decode();
let cachedUint8Memory0 = new Uint8Array();
function getUint8Memory0() {
if (cachedUint8Memory0.byteLength === 0) {
cachedUint8Memory0 = new Uint8Array(wasm.memory.buffer);
}
return cachedUint8Memory0;
}
function getStringFromWasm0(ptr, len) {
return cachedTextDecoder.decode(getUint8Memory0().subarray(ptr, ptr + len));
}
function addHeapObject(obj) {
if (heap_next === heap.length) heap.push(heap.length + 1);
const idx = heap_next;
heap_next = heap[idx];
heap[idx] = obj;
return idx;
}
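The `heap`/`heap_next` bookkeeping above (`addHeapObject`, `dropObject`, `takeObject`) is a classic array-backed free list: a freed slot stores the index of the next free slot. A minimal Rust sketch of the same idea — the type names and the `String` payload are illustrative, not part of the generated module:

```rust
// Sketch of the wasm-bindgen heap slot allocator shown above: live slots
// hold a value, and freed slots form a singly linked free list by storing
// the index of the next free slot (the role played by `heap_next`).
enum Slot {
    Value(String),
    Free(usize),
}

struct Heap {
    slots: Vec<Slot>,
    free_head: usize,
}

impl Heap {
    fn new() -> Self {
        Heap { slots: Vec::new(), free_head: 0 }
    }

    // Mirrors `addHeapObject`: grow by one slot when the free list is
    // exhausted, otherwise reuse the head of the free list.
    fn add(&mut self, value: String) -> usize {
        if self.free_head == self.slots.len() {
            self.slots.push(Slot::Free(self.slots.len() + 1));
        }
        let idx = self.free_head;
        if let Slot::Free(next) = &self.slots[idx] {
            self.free_head = *next;
        }
        self.slots[idx] = Slot::Value(value);
        idx
    }

    // Mirrors `takeObject`: return the value and push the slot back onto
    // the free list.
    fn take(&mut self, idx: usize) -> String {
        let old = std::mem::replace(&mut self.slots[idx], Slot::Free(self.free_head));
        self.free_head = idx;
        match old {
            Slot::Value(v) => v,
            Slot::Free(_) => panic!("slot {idx} was already free"),
        }
    }
}

fn main() {
    let mut heap = Heap::new();
    let a = heap.add("a".to_string());
    let b = heap.add("b".to_string());
    assert_eq!(heap.take(a), "a");
    // The freed slot is reused for the next allocation.
    let c = heap.add("c".to_string());
    assert_eq!(c, a);
    assert_eq!(heap.take(b), "b");
    assert_eq!(heap.take(c), "c");
    println!("ok");
}
```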
let WASM_VECTOR_LEN = 0;
const lTextEncoder = typeof TextEncoder === 'undefined' ? (0, module.require)('util').TextEncoder : TextEncoder;
let cachedTextEncoder = new lTextEncoder('utf-8');
const encodeString = (typeof cachedTextEncoder.encodeInto === 'function'
? function (arg, view) {
return cachedTextEncoder.encodeInto(arg, view);
}
: function (arg, view) {
const buf = cachedTextEncoder.encode(arg);
view.set(buf);
return {
read: arg.length,
written: buf.length
};
});
function passStringToWasm0(arg, malloc, realloc) {
if (realloc === undefined) {
const buf = cachedTextEncoder.encode(arg);
const ptr = malloc(buf.length);
getUint8Memory0().subarray(ptr, ptr + buf.length).set(buf);
WASM_VECTOR_LEN = buf.length;
return ptr;
}
let len = arg.length;
let ptr = malloc(len);
const mem = getUint8Memory0();
let offset = 0;
for (; offset < len; offset++) {
const code = arg.charCodeAt(offset);
if (code > 0x7F) break;
mem[ptr + offset] = code;
}
if (offset !== len) {
if (offset !== 0) {
arg = arg.slice(offset);
}
ptr = realloc(ptr, len, len = offset + arg.length * 3);
const view = getUint8Memory0().subarray(ptr + offset, ptr + len);
const ret = encodeString(arg, view);
offset += ret.written;
}
WASM_VECTOR_LEN = offset;
return ptr;
}
function isLikeNone(x) {
return x === undefined || x === null;
}
let cachedInt32Memory0 = new Int32Array();
function getInt32Memory0() {
if (cachedInt32Memory0.byteLength === 0) {
cachedInt32Memory0 = new Int32Array(wasm.memory.buffer);
}
return cachedInt32Memory0;
}
let cachedFloat64Memory0 = new Float64Array();
function getFloat64Memory0() {
if (cachedFloat64Memory0.byteLength === 0) {
cachedFloat64Memory0 = new Float64Array(wasm.memory.buffer);
}
return cachedFloat64Memory0;
}
function debugString(val) {
// primitive types
const type = typeof val;
if (type == 'number' || type == 'boolean' || val == null) {
return `${val}`;
}
if (type == 'string') {
return `"${val}"`;
}
if (type == 'symbol') {
const description = val.description;
if (description == null) {
return 'Symbol';
} else {
return `Symbol(${description})`;
}
}
if (type == 'function') {
const name = val.name;
if (typeof name == 'string' && name.length > 0) {
return `Function(${name})`;
} else {
return 'Function';
}
}
// objects
if (Array.isArray(val)) {
const length = val.length;
let debug = '[';
if (length > 0) {
debug += debugString(val[0]);
}
for(let i = 1; i < length; i++) {
debug += ', ' + debugString(val[i]);
}
debug += ']';
return debug;
}
// Test for built-in
const builtInMatches = /\[object ([^\]]+)\]/.exec(toString.call(val));
let className;
if (builtInMatches.length > 1) {
className = builtInMatches[1];
} else {
// Failed to match the standard '[object ClassName]'
return toString.call(val);
}
if (className == 'Object') {
// we're a user defined class or Object
// JSON.stringify avoids problems with cycles, and is generally much
// easier than looping through ownProperties of `val`.
try {
return 'Object(' + JSON.stringify(val) + ')';
} catch (_) {
return 'Object';
}
}
// errors
if (val instanceof Error) {
return `${val.name}: ${val.message}\n${val.stack}`;
}
// TODO we could test for more things here, like `Set`s and `Map`s.
return className;
}
function passArray8ToWasm0(arg, malloc) {
const ptr = malloc(arg.length * 1);
getUint8Memory0().set(arg, ptr / 1);
WASM_VECTOR_LEN = arg.length;
return ptr;
}
let cachedBigInt64Memory0 = new BigInt64Array();
function getBigInt64Memory0() {
if (cachedBigInt64Memory0.byteLength === 0) {
cachedBigInt64Memory0 = new BigInt64Array(wasm.memory.buffer);
}
return cachedBigInt64Memory0;
}
let cachedUint32Memory0 = new Uint32Array();
function getUint32Memory0() {
if (cachedUint32Memory0.byteLength === 0) {
cachedUint32Memory0 = new Uint32Array(wasm.memory.buffer);
}
return cachedUint32Memory0;
}
function passArrayJsValueToWasm0(array, malloc) {
const ptr = malloc(array.length * 4);
const mem = getUint32Memory0();
for (let i = 0; i < array.length; i++) {
mem[ptr / 4 + i] = addHeapObject(array[i]);
}
WASM_VECTOR_LEN = array.length;
return ptr;
}
let cachedBigUint64Memory0 = new BigUint64Array();
function getBigUint64Memory0() {
if (cachedBigUint64Memory0.byteLength === 0) {
cachedBigUint64Memory0 = new BigUint64Array(wasm.memory.buffer);
}
return cachedBigUint64Memory0;
}
function getArrayU64FromWasm0(ptr, len) {
return getBigUint64Memory0().subarray(ptr / 8, ptr / 8 + len);
}
function getArrayJsValueFromWasm0(ptr, len) {
const mem = getUint32Memory0();
const slice = mem.subarray(ptr / 4, ptr / 4 + len);
const result = [];
for (let i = 0; i < slice.length; i++) {
result.push(takeObject(slice[i]));
}
return result;
}
function getArrayU8FromWasm0(ptr, len) {
return getUint8Memory0().subarray(ptr / 1, ptr / 1 + len);
}
function _assertClass(instance, klass) {
if (!(instance instanceof klass)) {
throw new Error(`expected instance of ${klass.name}`);
}
return instance.ptr;
}
/**
* Initialize Javascript logging and panic handler
*/
export function solana_program_init() {
wasm.solana_program_init();
}
function handleError(f, args) {
try {
return f.apply(this, args);
} catch (e) {
wasm.__wbindgen_exn_store(addHeapObject(e));
}
}
/**
*/
export const WasmCommitmentLevel = Object.freeze({ Processed:0,"0":"Processed",Confirmed:1,"1":"Confirmed",Finalized:2,"2":"Finalized", });
/**
* Metadata for a confirmed transaction on the ledger
*/
export class ConfirmedTransactionMeta {
static __wrap(ptr) {
const obj = Object.create(ConfirmedTransactionMeta.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_confirmedtransactionmeta_free(ptr);
}
/**
* @returns {bigint}
*/
fee() {
const ret = wasm.confirmedtransactionmeta_fee(this.ptr);
return BigInt.asUintN(64, ret);
}
/**
* TODO:
* @returns {number | undefined}
*/
innerInstructions() {
const ret = wasm.confirmedtransactionmeta_innerInstructions(this.ptr);
return ret === 0xFFFFFF ? undefined : ret;
}
/**
* @returns {BigUint64Array}
*/
preBalances() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.confirmedtransactionmeta_preBalances(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var v0 = getArrayU64FromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 8);
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* @returns {BigUint64Array}
*/
postBalances() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.confirmedtransactionmeta_postBalances(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var v0 = getArrayU64FromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 8);
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* @returns {any[] | undefined}
*/
logs() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.confirmedtransactionmeta_logs(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
let v0;
if (r0 !== 0) {
v0 = getArrayJsValueFromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 4);
}
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* TODO:
* @returns {number | undefined}
*/
preTokenBalances() {
const ret = wasm.confirmedtransactionmeta_innerInstructions(this.ptr);
return ret === 0xFFFFFF ? undefined : ret;
}
/**
* TODO:
* @returns {number | undefined}
*/
postTokenBalances() {
const ret = wasm.confirmedtransactionmeta_innerInstructions(this.ptr);
return ret === 0xFFFFFF ? undefined : ret;
}
/**
* @returns {string | undefined}
*/
err() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.confirmedtransactionmeta_err(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
let v0;
if (r0 !== 0) {
v0 = getStringFromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 1);
}
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* TODO:
* @returns {number | undefined}
*/
loadedAddresses() {
const ret = wasm.confirmedtransactionmeta_innerInstructions(this.ptr);
return ret === 0xFFFFFF ? undefined : ret;
}
/**
* @returns {bigint | undefined}
*/
computeUnitsConsumed() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.confirmedtransactionmeta_computeUnitsConsumed(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r2 = getBigInt64Memory0()[retptr / 8 + 1];
return r0 === 0 ? undefined : BigInt.asUintN(64, r2);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
}
/**
*/
export class GetLatestBlockhashResult {
static __wrap(ptr) {
const obj = Object.create(GetLatestBlockhashResult.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_getlatestblockhashresult_free(ptr);
}
/**
* @returns {string}
*/
blockhash() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.getlatestblockhashresult_blockhash(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
return getStringFromWasm0(r0, r1);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
wasm.__wbindgen_free(r0, r1);
}
}
/**
* @returns {bigint}
*/
lastValidBlockHeight() {
const ret = wasm.__wbg_get_transactionstatus_slot(this.ptr);
return BigInt.asUintN(64, ret);
}
}
/**
*/
export class GetSignatureStatusesResult {
static __wrap(ptr) {
const obj = Object.create(GetSignatureStatusesResult.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_getsignaturestatusesresult_free(ptr);
}
/**
* @returns {any[]}
*/
statuses() {
try {
const ptr = this.__destroy_into_raw();
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.getsignaturestatusesresult_statuses(retptr, ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var v0 = getArrayJsValueFromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 4);
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
}
/**
*/
export class GetTransactionResult {
static __wrap(ptr) {
const obj = Object.create(GetTransactionResult.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_gettransactionresult_free(ptr);
}
/**
* NOTE: This method should be called before accessing any other data
* @returns {boolean}
*/
exists() {
const ret = wasm.gettransactionresult_exists(this.ptr);
return ret !== 0;
}
/**
* @returns {bigint | undefined}
*/
blockTime() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.gettransactionresult_blockTime(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r2 = getBigInt64Memory0()[retptr / 8 + 1];
return r0 === 0 ? undefined : r2;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* Returns the transaction version or `None` for legacy transactions
* @returns {number | undefined}
*/
version() {
const ret = wasm.gettransactionresult_version(this.ptr);
return ret === 0xFFFFFF ? undefined : ret;
}
/**
* @returns {ConfirmedTransactionMeta}
*/
meta() {
const ret = wasm.gettransactionresult_meta(this.ptr);
return ConfirmedTransactionMeta.__wrap(ret);
}
/**
* Returns the base64 encoded tx string
* @returns {string}
*/
transaction() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.gettransactionresult_transaction(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
return getStringFromWasm0(r0, r1);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
wasm.__wbindgen_free(r0, r1);
}
}
}
/**
* A hash; the 32-byte output of a hashing algorithm.
*
* This struct is used most often in `solana-sdk` and related crates to contain
* a [SHA-256] hash, but may instead contain a [blake3] hash, as created by the
* [`blake3`] module (and used in [`Message::hash`]).
*
* [SHA-256]: https://en.wikipedia.org/wiki/SHA-2
* [blake3]: https://github.com/BLAKE3-team/BLAKE3
* [`blake3`]: crate::blake3
* [`Message::hash`]: crate::message::Message::hash
*/
export class Hash {
static __wrap(ptr) {
const obj = Object.create(Hash.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_hash_free(ptr);
}
/**
* Create a new Hash object
*
* * `value` - optional hash as a base58 encoded string, `Uint8Array`, `[number]`
* @param {any} value
*/
constructor(value) {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.hash_constructor(retptr, addHeapObject(value));
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var r2 = getInt32Memory0()[retptr / 4 + 2];
if (r2) {
throw takeObject(r1);
}
return Hash.__wrap(r0);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* Return the base58 string representation of the hash
* @returns {string}
*/
toString() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.hash_toString(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
return getStringFromWasm0(r0, r1);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
wasm.__wbindgen_free(r0, r1);
}
}
/**
* Checks if two `Hash`s are equal
* @param {Hash} other
* @returns {boolean}
*/
equals(other) {
_assertClass(other, Hash);
const ret = wasm.hash_equals(this.ptr, other.ptr);
return ret !== 0;
}
/**
* Return the `Uint8Array` representation of the hash
* @returns {Uint8Array}
*/
toBytes() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.hash_toBytes(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var v0 = getArrayU8FromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 1);
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
}
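
// Usage sketch for `Hash` (assumes the wasm module has been initialized; the
// input below is the base58 encoding of 32 zero bytes, purely illustrative):
//
//     const hash = new Hash("11111111111111111111111111111111");
//     hash.toString();   // the same base58 string
//     hash.toBytes();    // 32-byte Uint8Array
//     hash.free();       // release the wasm-side allocation when done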
/**
* A directive for a single invocation of a Solana program.
*
* An instruction specifies which program it is calling, which accounts it may
* read or modify, and additional data that serves as input to the program. One
* or more instructions are included in transactions submitted by Solana
* clients. Instructions are also used to describe [cross-program
* invocations][cpi].
*
* [cpi]: https://docs.solana.com/developing/programming-model/calling-between-programs
*
* During execution, a program will receive a list of account data as one of
* its arguments, in the same order as specified during `Instruction`
* construction.
*
* While Solana is agnostic to the format of the instruction data, it has
* built-in support for serialization via [`borsh`] and [`bincode`].
*
* [`borsh`]: https://docs.rs/borsh/latest/borsh/
* [`bincode`]: https://docs.rs/bincode/latest/bincode/
*
* # Specifying account metadata
*
* When constructing an [`Instruction`], a list of all accounts that may be
* read or written during the execution of that instruction must be supplied as
* [`AccountMeta`] values.
*
* Any account whose data may be mutated by the program during execution must
* be specified as writable. During execution, writing to an account that was
* not specified as writable will cause the transaction to fail. Writing to an
* account that is not owned by the program will cause the transaction to fail.
*
* Any account whose lamport balance may be mutated by the program during
* execution must be specified as writable. During execution, mutating the
* lamports of an account that was not specified as writable will cause the
* transaction to fail. While _subtracting_ lamports from an account not owned
* by the program will cause the transaction to fail, _adding_ lamports to any
 * account is allowed, as long as it is mutable.
*
* Accounts that are not read or written by the program may still be specified
* in an `Instruction`'s account list. These will affect scheduling of program
* execution by the runtime, but will otherwise be ignored.
*
* When building a transaction, the Solana runtime coalesces all accounts used
* by all instructions in that transaction, along with accounts and permissions
* required by the runtime, into a single account list. Some accounts and
* account permissions required by the runtime to process a transaction are
 * _not_ required to be included in an `Instruction`'s account list. These
* include:
*
* - The program ID — it is a separate field of `Instruction`
* - The transaction's fee-paying account — it is added during [`Message`]
* construction. A program may still require the fee payer as part of the
* account list if it directly references it.
*
* [`Message`]: crate::message::Message
*
* Programs may require signatures from some accounts, in which case they
* should be specified as signers during `Instruction` construction. The
* program must still validate during execution that the account is a signer.
*/
export class Instruction {
static __wrap(ptr) {
const obj = Object.create(Instruction.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_instruction_free(ptr);
}
}
/**
*/
export class Instructions {
static __wrap(ptr) {
const obj = Object.create(Instructions.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_instructions_free(ptr);
}
/**
*/
constructor() {
const ret = wasm.instructions_constructor();
return Instructions.__wrap(ret);
}
/**
* @param {Instruction} instruction
*/
push(instruction) {
_assertClass(instruction, Instruction);
var ptr0 = instruction.ptr;
instruction.ptr = 0;
wasm.instructions_push(this.ptr, ptr0);
}
}
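
// Ownership note: `push` moves the `Instruction` into the list (its `ptr` is
// zeroed), so a pushed instruction must not be reused. A minimal sketch,
// assuming initialized bindings and existing `from`/`to` pubkeys:
//
//     const ixs = new Instructions();
//     const ix = SystemInstruction.transfer(from, to, 1000n);
//     ixs.push(ix);   // `ix` is consumed here (ix.ptr === 0)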
/**
* A vanilla Ed25519 key pair
*/
export class Keypair {
static __wrap(ptr) {
const obj = Object.create(Keypair.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_keypair_free(ptr);
}
/**
 * Create a new `Keypair`
*/
constructor() {
const ret = wasm.keypair_constructor();
return Keypair.__wrap(ret);
}
/**
* Convert a `Keypair` to a `Uint8Array`
* @returns {Uint8Array}
*/
toBytes() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.keypair_toBytes(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var v0 = getArrayU8FromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 1);
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* Recover a `Keypair` from a `Uint8Array`
* @param {Uint8Array} bytes
* @returns {Keypair}
*/
static fromBytes(bytes) {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
const ptr0 = passArray8ToWasm0(bytes, wasm.__wbindgen_malloc);
const len0 = WASM_VECTOR_LEN;
wasm.keypair_fromBytes(retptr, ptr0, len0);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var r2 = getInt32Memory0()[retptr / 4 + 2];
if (r2) {
throw takeObject(r1);
}
return Keypair.__wrap(r0);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* Return the `Pubkey` for this `Keypair`
* @returns {Pubkey}
*/
pubkey() {
const ret = wasm.keypair_pubkey(this.ptr);
return Pubkey.__wrap(ret);
}
}
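
// Round-trip sketch for `Keypair` (assumes the wasm module is initialized):
//
//     const kp = new Keypair();              // new random Ed25519 pair
//     const bytes = kp.toBytes();            // 64 bytes: secret key || public key
//     const restored = Keypair.fromBytes(bytes);
//     restored.pubkey().equals(kp.pubkey()); // true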
/**
* A Solana transaction message (legacy).
*
* See the [`message`] module documentation for further description.
*
* [`message`]: crate::message
*
* Some constructors accept an optional `payer`, the account responsible for
* paying the cost of executing a transaction. In most cases, callers should
* specify the payer explicitly in these constructors. In some cases though,
* the caller is not _required_ to specify the payer, but is still allowed to:
* in the `Message` structure, the first account is always the fee-payer, so if
* the caller has knowledge that the first account of the constructed
* transaction's `Message` is both a signer and the expected fee-payer, then
* redundantly specifying the fee-payer is not strictly required.
*/
export class Message {
static __wrap(ptr) {
const obj = Object.create(Message.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_message_free(ptr);
}
/**
* The id of a recent ledger entry.
* @returns {Hash}
*/
get recent_blockhash() {
const ret = wasm.__wbg_get_message_recent_blockhash(this.ptr);
return Hash.__wrap(ret);
}
/**
* The id of a recent ledger entry.
* @param {Hash} arg0
*/
set recent_blockhash(arg0) {
_assertClass(arg0, Hash);
var ptr0 = arg0.ptr;
arg0.ptr = 0;
wasm.__wbg_set_message_recent_blockhash(this.ptr, ptr0);
}
}
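
// Ownership note for `Message` accessors: the getter wraps a fresh `Hash`,
// while the setter consumes the `Hash` passed to it (its `ptr` is zeroed).
// A minimal sketch, assuming an existing `tx: Transaction`:
//
//     const msg = tx.message();
//     const bh = msg.recent_blockhash;   // new Hash wrapper
//     msg.recent_blockhash = bh;         // `bh` is consumed here (bh.ptr === 0)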
/**
*/
export class PgRpc {
static __wrap(ptr) {
const obj = Object.create(PgRpc.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_pgrpc_free(ptr);
}
/**
* @param {string} pubkey_str
* @returns {WasmAccount}
*/
getAccountInfo(pubkey_str) {
const ptr0 = passStringToWasm0(pubkey_str, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
const ret = wasm.pgrpc_getAccountInfo(this.ptr, ptr0, len0);
return WasmAccount.__wrap(ret);
}
/**
* @returns {bigint}
*/
getSlot() {
const ret = wasm.pgrpc_getSlot(this.ptr);
return BigInt.asUintN(64, ret);
}
/**
* @returns {bigint}
*/
getBlockHeight() {
const ret = wasm.pgrpc_getBlockHeight(this.ptr);
return BigInt.asUintN(64, ret);
}
/**
* @returns {string}
*/
getGenesisHash() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.pgrpc_getGenesisHash(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
return getStringFromWasm0(r0, r1);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
wasm.__wbindgen_free(r0, r1);
}
}
/**
* @returns {GetLatestBlockhashResult}
*/
getLatestBlockhash() {
const ret = wasm.pgrpc_getLatestBlockhash(this.ptr);
return GetLatestBlockhashResult.__wrap(ret);
}
/**
* @param {number} data_len
* @returns {bigint}
*/
getMinimumBalanceForRentExemption(data_len) {
const ret = wasm.pgrpc_getMinimumBalanceForRentExemption(this.ptr, data_len);
return BigInt.asUintN(64, ret);
}
/**
* @param {Uint8Array} serialized_msg
* @returns {bigint | undefined}
*/
getFeeForMessage(serialized_msg) {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
const ptr0 = passArray8ToWasm0(serialized_msg, wasm.__wbindgen_malloc);
const len0 = WASM_VECTOR_LEN;
wasm.pgrpc_getFeeForMessage(retptr, this.ptr, ptr0, len0);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r2 = getBigInt64Memory0()[retptr / 8 + 1];
return r0 === 0 ? undefined : BigInt.asUintN(64, r2);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* @param {Uint8Array} serialized_tx
* @returns {SimulateTransactionResult}
*/
simulateTransaction(serialized_tx) {
const ptr0 = passArray8ToWasm0(serialized_tx, wasm.__wbindgen_malloc);
const len0 = WASM_VECTOR_LEN;
const ret = wasm.pgrpc_simulateTransaction(this.ptr, ptr0, len0);
return SimulateTransactionResult.__wrap(ret);
}
/**
* @param {Uint8Array} serialized_tx
* @returns {SendTransactionResult}
*/
sendTransaction(serialized_tx) {
const ptr0 = passArray8ToWasm0(serialized_tx, wasm.__wbindgen_malloc);
const len0 = WASM_VECTOR_LEN;
const ret = wasm.pgrpc_sendTransaction(this.ptr, ptr0, len0);
return SendTransactionResult.__wrap(ret);
}
/**
* @param {any[]} signatures
* @returns {GetSignatureStatusesResult}
*/
getSignatureStatuses(signatures) {
const ptr0 = passArrayJsValueToWasm0(signatures, wasm.__wbindgen_malloc);
const len0 = WASM_VECTOR_LEN;
const ret = wasm.pgrpc_getSignatureStatuses(this.ptr, ptr0, len0);
return GetSignatureStatusesResult.__wrap(ret);
}
/**
* @param {string} signature_str
* @returns {GetTransactionResult}
*/
getTransaction(signature_str) {
const ptr0 = passStringToWasm0(signature_str, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
const ret = wasm.pgrpc_getTransaction(this.ptr, ptr0, len0);
return GetTransactionResult.__wrap(ret);
}
/**
* @param {string} pubkey_str
* @param {bigint} lamports
* @returns {SendTransactionResult}
*/
requestAirdrop(pubkey_str, lamports) {
const ptr0 = passStringToWasm0(pubkey_str, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
const ret = wasm.pgrpc_requestAirdrop(this.ptr, ptr0, len0, lamports);
return SendTransactionResult.__wrap(ret);
}
}
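
// Typical request flow against the Playnet RPC (illustrative; `tx` is a
// signed `Transaction` built elsewhere in this module):
//
//     const rpc = playnet.rpc;
//     rpc.getLatestBlockhash();              // GetLatestBlockhashResult
//     const sim = rpc.simulateTransaction(tx.toBytes());
//     if (sim.error() === undefined) {
//         const sent = rpc.sendTransaction(tx.toBytes());
//         sent.txHash();                     // transaction signature string
//     }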
/**
*/
export class Playnet {
static __wrap(ptr) {
const obj = Object.create(Playnet.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_playnet_free(ptr);
}
/**
* RPC methods to interact with the Playnet
* @returns {PgRpc}
*/
get rpc() {
const ret = wasm.__wbg_get_playnet_rpc(this.ptr);
return PgRpc.__wrap(ret);
}
/**
* RPC methods to interact with the Playnet
* @param {PgRpc} arg0
*/
set rpc(arg0) {
_assertClass(arg0, PgRpc);
var ptr0 = arg0.ptr;
arg0.ptr = 0;
wasm.__wbg_set_playnet_rpc(this.ptr, ptr0);
}
/**
 * The Playnet lifecycle starts once a Playnet instance is constructed
* @param {string | undefined} maybe_bank_string
*/
constructor(maybe_bank_string) {
var ptr0 = isLikeNone(maybe_bank_string) ? 0 : passStringToWasm0(maybe_bank_string, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
var len0 = WASM_VECTOR_LEN;
const ret = wasm.playnet_new(ptr0, len0);
return Playnet.__wrap(ret);
}
/**
 * Get the save data needed to restore state the next time a Playnet instance is created
* @returns {string}
*/
getSaveData() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.playnet_getSaveData(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
return getStringFromWasm0(r0, r1);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
wasm.__wbindgen_free(r0, r1);
}
}
}
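
// Persistence sketch: the string from `getSaveData` can be stored (e.g. in
// localStorage, key name illustrative) and fed back to the constructor to
// restore bank state across sessions:
//
//     const saved = localStorage.getItem("playnet-save") ?? undefined;
//     const playnet = new Playnet(saved);
//     // ... interact via playnet.rpc ...
//     localStorage.setItem("playnet-save", playnet.getSaveData());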
/**
* The address of a [Solana account][acc].
*
* Some account addresses are [ed25519] public keys, with corresponding secret
* keys that are managed off-chain. Often, though, account addresses do not
* have corresponding secret keys — as with [_program derived
* addresses_][pdas] — or the secret key is not relevant to the operation
* of a program, and may have even been disposed of. As running Solana programs
* can not safely create or manage secret keys, the full [`Keypair`] is not
* defined in `solana-program` but in `solana-sdk`.
*
* [acc]: https://docs.solana.com/developing/programming-model/accounts
* [ed25519]: https://ed25519.cr.yp.to/
* [pdas]: https://docs.solana.com/developing/programming-model/calling-between-programs#program-derived-addresses
* [`Keypair`]: https://docs.rs/solana-sdk/latest/solana_sdk/signer/keypair/struct.Keypair.html
*/
export class Pubkey {
static __wrap(ptr) {
const obj = Object.create(Pubkey.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_pubkey_free(ptr);
}
/**
* Create a new Pubkey object
*
 * * `value` - optional public key as a base58-encoded string, `Uint8Array`, or `[number]`
* @param {any} value
*/
constructor(value) {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.pubkey_constructor(retptr, addHeapObject(value));
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var r2 = getInt32Memory0()[retptr / 4 + 2];
if (r2) {
throw takeObject(r1);
}
return Pubkey.__wrap(r0);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* Return the base58 string representation of the public key
* @returns {string}
*/
toString() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.pubkey_toString(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
return getStringFromWasm0(r0, r1);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
wasm.__wbindgen_free(r0, r1);
}
}
/**
* Check if a `Pubkey` is on the ed25519 curve.
* @returns {boolean}
*/
isOnCurve() {
const ret = wasm.pubkey_isOnCurve(this.ptr);
return ret !== 0;
}
/**
* Checks if two `Pubkey`s are equal
* @param {Pubkey} other
* @returns {boolean}
*/
equals(other) {
_assertClass(other, Pubkey);
const ret = wasm.pubkey_equals(this.ptr, other.ptr);
return ret !== 0;
}
/**
* Return the `Uint8Array` representation of the public key
* @returns {Uint8Array}
*/
toBytes() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.pubkey_toBytes(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var v0 = getArrayU8FromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 1);
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* Derive a Pubkey from another Pubkey, string seed, and a program id
* @param {Pubkey} base
* @param {string} seed
* @param {Pubkey} owner
* @returns {Pubkey}
*/
static createWithSeed(base, seed, owner) {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
_assertClass(base, Pubkey);
const ptr0 = passStringToWasm0(seed, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
_assertClass(owner, Pubkey);
wasm.pubkey_createWithSeed(retptr, base.ptr, ptr0, len0, owner.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var r2 = getInt32Memory0()[retptr / 4 + 2];
if (r2) {
throw takeObject(r1);
}
return Pubkey.__wrap(r0);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* Derive a program address from seeds and a program id
* @param {any[]} seeds
* @param {Pubkey} program_id
* @returns {Pubkey}
*/
static createProgramAddress(seeds, program_id) {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
const ptr0 = passArrayJsValueToWasm0(seeds, wasm.__wbindgen_malloc);
const len0 = WASM_VECTOR_LEN;
_assertClass(program_id, Pubkey);
wasm.pubkey_createProgramAddress(retptr, ptr0, len0, program_id.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var r2 = getInt32Memory0()[retptr / 4 + 2];
if (r2) {
throw takeObject(r1);
}
return Pubkey.__wrap(r0);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* Find a valid program address
*
* Returns:
 * * `[Pubkey, number]` - the program address and bump seed
* @param {any[]} seeds
* @param {Pubkey} program_id
* @returns {any}
*/
static findProgramAddress(seeds, program_id) {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
const ptr0 = passArrayJsValueToWasm0(seeds, wasm.__wbindgen_malloc);
const len0 = WASM_VECTOR_LEN;
_assertClass(program_id, Pubkey);
wasm.pubkey_findProgramAddress(retptr, ptr0, len0, program_id.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var r2 = getInt32Memory0()[retptr / 4 + 2];
if (r2) {
throw takeObject(r1);
}
return takeObject(r0);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
}
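
// PDA derivation sketch (assumes initialized bindings; the program id is the
// SPL Token program, used here purely as an example):
//
//     const programId = new Pubkey("TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA");
//     const [pda, bump] = Pubkey.findProgramAddress(
//         [new TextEncoder().encode("some-seed")],
//         programId,
//     );
//     pda.isOnCurve();   // false: PDAs are off the ed25519 curve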
/**
*/
export class SendTransactionResult {
static __wrap(ptr) {
const obj = Object.create(SendTransactionResult.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_sendtransactionresult_free(ptr);
}
/**
* @returns {string | undefined}
*/
error() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.sendtransactionresult_error(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
let v0;
if (r0 !== 0) {
v0 = getStringFromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 1);
}
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* @returns {string}
*/
txHash() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.sendtransactionresult_txHash(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
return getStringFromWasm0(r0, r1);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
wasm.__wbindgen_free(r0, r1);
}
}
}
/**
*/
export class SimulateTransactionResult {
static __wrap(ptr) {
const obj = Object.create(SimulateTransactionResult.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_simulatetransactionresult_free(ptr);
}
/**
* @returns {string | undefined}
*/
error() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.simulatetransactionresult_error(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
let v0;
if (r0 !== 0) {
v0 = getStringFromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 1);
}
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* @returns {any[]}
*/
logs() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.simulatetransactionresult_logs(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var v0 = getArrayJsValueFromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 4);
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* @returns {bigint}
*/
    unitsConsumed() {
        // Identical u64 getters are deduplicated in the generated wasm, so this
        // reuses the export named after `TransactionStatus.slot`.
        const ret = wasm.__wbg_get_transactionstatus_slot(this.ptr);
        return BigInt.asUintN(64, ret);
    }
/**
* @returns {WasmTransactionReturnData | undefined}
*/
returnData() {
const ret = wasm.simulatetransactionresult_returnData(this.ptr);
return ret === 0 ? undefined : WasmTransactionReturnData.__wrap(ret);
}
}
export class SystemInstruction {
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
    // `SystemInstruction` only exposes static factory methods and is never
    // instantiated, so no `free()` binding is generated for it.
/**
* @param {Pubkey} from_pubkey
* @param {Pubkey} to_pubkey
* @param {bigint} lamports
* @param {bigint} space
* @param {Pubkey} owner
* @returns {Instruction}
*/
static createAccount(from_pubkey, to_pubkey, lamports, space, owner) {
_assertClass(from_pubkey, Pubkey);
_assertClass(to_pubkey, Pubkey);
_assertClass(owner, Pubkey);
const ret = wasm.systeminstruction_createAccount(from_pubkey.ptr, to_pubkey.ptr, lamports, space, owner.ptr);
return Instruction.__wrap(ret);
}
/**
* @param {Pubkey} from_pubkey
* @param {Pubkey} to_pubkey
* @param {Pubkey} base
* @param {string} seed
* @param {bigint} lamports
* @param {bigint} space
* @param {Pubkey} owner
* @returns {Instruction}
*/
static createAccountWithSeed(from_pubkey, to_pubkey, base, seed, lamports, space, owner) {
_assertClass(from_pubkey, Pubkey);
_assertClass(to_pubkey, Pubkey);
_assertClass(base, Pubkey);
const ptr0 = passStringToWasm0(seed, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
_assertClass(owner, Pubkey);
const ret = wasm.systeminstruction_createAccountWithSeed(from_pubkey.ptr, to_pubkey.ptr, base.ptr, ptr0, len0, lamports, space, owner.ptr);
return Instruction.__wrap(ret);
}
/**
* @param {Pubkey} pubkey
* @param {Pubkey} owner
* @returns {Instruction}
*/
static assign(pubkey, owner) {
_assertClass(pubkey, Pubkey);
_assertClass(owner, Pubkey);
const ret = wasm.systeminstruction_assign(pubkey.ptr, owner.ptr);
return Instruction.__wrap(ret);
}
/**
* @param {Pubkey} pubkey
* @param {Pubkey} base
* @param {string} seed
* @param {Pubkey} owner
* @returns {Instruction}
*/
static assignWithSeed(pubkey, base, seed, owner) {
_assertClass(pubkey, Pubkey);
_assertClass(base, Pubkey);
const ptr0 = passStringToWasm0(seed, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
_assertClass(owner, Pubkey);
const ret = wasm.systeminstruction_assignWithSeed(pubkey.ptr, base.ptr, ptr0, len0, owner.ptr);
return Instruction.__wrap(ret);
}
/**
* @param {Pubkey} from_pubkey
* @param {Pubkey} to_pubkey
* @param {bigint} lamports
* @returns {Instruction}
*/
static transfer(from_pubkey, to_pubkey, lamports) {
_assertClass(from_pubkey, Pubkey);
_assertClass(to_pubkey, Pubkey);
const ret = wasm.systeminstruction_transfer(from_pubkey.ptr, to_pubkey.ptr, lamports);
return Instruction.__wrap(ret);
}
/**
* @param {Pubkey} from_pubkey
* @param {Pubkey} from_base
* @param {string} from_seed
* @param {Pubkey} from_owner
* @param {Pubkey} to_pubkey
* @param {bigint} lamports
* @returns {Instruction}
*/
static transferWithSeed(from_pubkey, from_base, from_seed, from_owner, to_pubkey, lamports) {
_assertClass(from_pubkey, Pubkey);
_assertClass(from_base, Pubkey);
const ptr0 = passStringToWasm0(from_seed, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
_assertClass(from_owner, Pubkey);
_assertClass(to_pubkey, Pubkey);
const ret = wasm.systeminstruction_transferWithSeed(from_pubkey.ptr, from_base.ptr, ptr0, len0, from_owner.ptr, to_pubkey.ptr, lamports);
return Instruction.__wrap(ret);
}
/**
* @param {Pubkey} pubkey
* @param {bigint} space
* @returns {Instruction}
*/
static allocate(pubkey, space) {
_assertClass(pubkey, Pubkey);
const ret = wasm.systeminstruction_allocate(pubkey.ptr, space);
return Instruction.__wrap(ret);
}
/**
* @param {Pubkey} address
* @param {Pubkey} base
* @param {string} seed
* @param {bigint} space
* @param {Pubkey} owner
* @returns {Instruction}
*/
static allocateWithSeed(address, base, seed, space, owner) {
_assertClass(address, Pubkey);
_assertClass(base, Pubkey);
const ptr0 = passStringToWasm0(seed, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
_assertClass(owner, Pubkey);
const ret = wasm.systeminstruction_allocateWithSeed(address.ptr, base.ptr, ptr0, len0, space, owner.ptr);
return Instruction.__wrap(ret);
}
/**
* @param {Pubkey} from_pubkey
* @param {Pubkey} nonce_pubkey
* @param {Pubkey} authority
* @param {bigint} lamports
* @returns {Array<any>}
*/
static createNonceAccount(from_pubkey, nonce_pubkey, authority, lamports) {
_assertClass(from_pubkey, Pubkey);
_assertClass(nonce_pubkey, Pubkey);
_assertClass(authority, Pubkey);
const ret = wasm.systeminstruction_createNonceAccount(from_pubkey.ptr, nonce_pubkey.ptr, authority.ptr, lamports);
return takeObject(ret);
}
/**
* @param {Pubkey} nonce_pubkey
* @param {Pubkey} authorized_pubkey
* @returns {Instruction}
*/
static advanceNonceAccount(nonce_pubkey, authorized_pubkey) {
_assertClass(nonce_pubkey, Pubkey);
_assertClass(authorized_pubkey, Pubkey);
const ret = wasm.systeminstruction_advanceNonceAccount(nonce_pubkey.ptr, authorized_pubkey.ptr);
return Instruction.__wrap(ret);
}
/**
* @param {Pubkey} nonce_pubkey
* @param {Pubkey} authorized_pubkey
* @param {Pubkey} to_pubkey
* @param {bigint} lamports
* @returns {Instruction}
*/
static withdrawNonceAccount(nonce_pubkey, authorized_pubkey, to_pubkey, lamports) {
_assertClass(nonce_pubkey, Pubkey);
_assertClass(authorized_pubkey, Pubkey);
_assertClass(to_pubkey, Pubkey);
const ret = wasm.systeminstruction_withdrawNonceAccount(nonce_pubkey.ptr, authorized_pubkey.ptr, to_pubkey.ptr, lamports);
return Instruction.__wrap(ret);
}
/**
* @param {Pubkey} nonce_pubkey
* @param {Pubkey} authorized_pubkey
* @param {Pubkey} new_authority
* @returns {Instruction}
*/
static authorizeNonceAccount(nonce_pubkey, authorized_pubkey, new_authority) {
_assertClass(nonce_pubkey, Pubkey);
_assertClass(authorized_pubkey, Pubkey);
_assertClass(new_authority, Pubkey);
const ret = wasm.systeminstruction_authorizeNonceAccount(nonce_pubkey.ptr, authorized_pubkey.ptr, new_authority.ptr);
return Instruction.__wrap(ret);
}
}
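
// Sketch: building system-program instructions (assumes initialized bindings;
// lamport and space values are illustrative):
//
//     const transferIx = SystemInstruction.transfer(payer, recipient, 1_000_000n);
//     const createIx = SystemInstruction.createAccount(
//         payer,          // funding account
//         fresh,          // account to create
//         rentLamports,   // e.g. from rpc.getMinimumBalanceForRentExemption(space)
//         165n,           // space in bytes
//         owner,          // program that will own the new account
//     );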
/**
 * An atomically committed sequence of instructions.
*
* While [`Instruction`]s are the basic unit of computation in Solana,
* they are submitted by clients in [`Transaction`]s containing one or
* more instructions, and signed by one or more [`Signer`]s.
*
* [`Signer`]: crate::signer::Signer
*
* See the [module documentation] for more details about transactions.
*
* [module documentation]: self
*
* Some constructors accept an optional `payer`, the account responsible for
* paying the cost of executing a transaction. In most cases, callers should
* specify the payer explicitly in these constructors. In some cases though,
* the caller is not _required_ to specify the payer, but is still allowed to:
* in the [`Message`] structure, the first account is always the fee-payer, so
* if the caller has knowledge that the first account of the constructed
* transaction's `Message` is both a signer and the expected fee-payer, then
* redundantly specifying the fee-payer is not strictly required.
*/
export class Transaction {
static __wrap(ptr) {
const obj = Object.create(Transaction.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_transaction_free(ptr);
}
/**
* Create a new `Transaction`
* @param {Instructions} instructions
* @param {Pubkey | undefined} payer
*/
constructor(instructions, payer) {
_assertClass(instructions, Instructions);
var ptr0 = instructions.ptr;
instructions.ptr = 0;
let ptr1 = 0;
if (!isLikeNone(payer)) {
_assertClass(payer, Pubkey);
ptr1 = payer.ptr;
payer.ptr = 0;
}
const ret = wasm.transaction_constructor(ptr0, ptr1);
return Transaction.__wrap(ret);
}
/**
* Return a message containing all data that should be signed.
* @returns {Message}
*/
message() {
const ret = wasm.transaction_message(this.ptr);
return Message.__wrap(ret);
}
/**
* Return the serialized message data to sign.
* @returns {Uint8Array}
*/
messageData() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.transaction_messageData(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var v0 = getArrayU8FromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 1);
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* Verify the transaction
*/
verify() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.transaction_verify(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
if (r1) {
throw takeObject(r0);
}
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* @param {Keypair} keypair
* @param {Hash} recent_blockhash
*/
partialSign(keypair, recent_blockhash) {
_assertClass(keypair, Keypair);
_assertClass(recent_blockhash, Hash);
wasm.transaction_partialSign(this.ptr, keypair.ptr, recent_blockhash.ptr);
}
/**
* @returns {boolean}
*/
isSigned() {
const ret = wasm.transaction_isSigned(this.ptr);
return ret !== 0;
}
/**
* @returns {Uint8Array}
*/
toBytes() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.transaction_toBytes(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var v0 = getArrayU8FromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 1);
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* @param {Uint8Array} bytes
* @returns {Transaction}
*/
static fromBytes(bytes) {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
const ptr0 = passArray8ToWasm0(bytes, wasm.__wbindgen_malloc);
const len0 = WASM_VECTOR_LEN;
wasm.transaction_fromBytes(retptr, ptr0, len0);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var r2 = getInt32Memory0()[retptr / 4 + 2];
if (r2) {
throw takeObject(r1);
}
return Transaction.__wrap(r0);
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
}
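
// End-to-end sketch: build, sign, and serialize (assumes initialized bindings
// and a recent `blockhash: Hash` fetched via RPC):
//
//     const ixs = new Instructions();
//     ixs.push(SystemInstruction.transfer(payer.pubkey(), recipient, 1000n));
//     const tx = new Transaction(ixs, payer.pubkey());
//     tx.partialSign(payer, blockhash);
//     tx.isSigned();               // true once all required signers have signed
//     const wire = tx.toBytes();   // ready for rpc.sendTransaction(wire)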
/**
*/
export class TransactionStatus {
static __wrap(ptr) {
const obj = Object.create(TransactionStatus.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_transactionstatus_free(ptr);
}
/**
* @returns {number | undefined}
*/
get confirmationStatus() {
const ret = wasm.__wbg_get_transactionstatus_confirmationStatus(this.ptr);
return ret === 3 ? undefined : ret;
}
/**
* @param {number | undefined} arg0
*/
set confirmationStatus(arg0) {
wasm.__wbg_set_transactionstatus_confirmationStatus(this.ptr, isLikeNone(arg0) ? 3 : arg0);
}
/**
* @returns {number | undefined}
*/
get confirmations() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.__wbg_get_transactionstatus_confirmations(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
return r0 === 0 ? undefined : r1 >>> 0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* @param {number | undefined} arg0
*/
set confirmations(arg0) {
wasm.__wbg_set_transactionstatus_confirmations(this.ptr, !isLikeNone(arg0), isLikeNone(arg0) ? 0 : arg0);
}
/**
* @returns {bigint}
*/
get slot() {
const ret = wasm.__wbg_get_transactionstatus_slot(this.ptr);
return BigInt.asUintN(64, ret);
}
/**
* @param {bigint} arg0
*/
set slot(arg0) {
wasm.__wbg_set_transactionstatus_slot(this.ptr, arg0);
}
/**
* @returns {string | undefined}
*/
error() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.transactionstatus_error(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
let v0;
if (r0 !== 0) {
v0 = getStringFromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 1);
}
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
}
/**
*/
export class WasmAccount {
static __wrap(ptr) {
const obj = Object.create(WasmAccount.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_wasmaccount_free(ptr);
}
/**
* Lamports in the account
* @returns {bigint}
*/
    get lamports() {
        // Shares the deduplicated u64 getter export with `TransactionStatus.slot`.
        const ret = wasm.__wbg_get_transactionstatus_slot(this.ptr);
        return BigInt.asUintN(64, ret);
    }
/**
* Lamports in the account
* @param {bigint} arg0
*/
set lamports(arg0) {
wasm.__wbg_set_wasmaccount_lamports(this.ptr, arg0);
}
/**
* Data held in this account
* @returns {Uint8Array}
*/
get data() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.__wbg_get_wasmaccount_data(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var v0 = getArrayU8FromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 1);
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* Data held in this account
* @param {Uint8Array} arg0
*/
set data(arg0) {
const ptr0 = passArray8ToWasm0(arg0, wasm.__wbindgen_malloc);
const len0 = WASM_VECTOR_LEN;
wasm.__wbg_set_wasmaccount_data(this.ptr, ptr0, len0);
}
/**
* The program that owns this account. If executable, the program that loads this account.
* @returns {Pubkey}
*/
get owner() {
const ret = wasm.__wbg_get_wasmaccount_owner(this.ptr);
return Pubkey.__wrap(ret);
}
/**
* The program that owns this account. If executable, the program that loads this account.
* @param {Pubkey} arg0
*/
set owner(arg0) {
_assertClass(arg0, Pubkey);
var ptr0 = arg0.ptr;
arg0.ptr = 0;
wasm.__wbg_set_wasmaccount_owner(this.ptr, ptr0);
}
/**
* This account's data contains a loaded program (and is now read-only)
* @returns {boolean}
*/
get executable() {
const ret = wasm.__wbg_get_wasmaccount_executable(this.ptr);
return ret !== 0;
}
/**
* This account's data contains a loaded program (and is now read-only)
* @param {boolean} arg0
*/
set executable(arg0) {
wasm.__wbg_set_wasmaccount_executable(this.ptr, arg0);
}
/**
* The epoch at which this account will next owe rent
* @returns {bigint}
*/
get rentEpoch() {
const ret = wasm.__wbg_get_wasmaccount_rentEpoch(this.ptr);
return BigInt.asUintN(64, ret);
}
/**
* The epoch at which this account will next owe rent
* @param {bigint} arg0
*/
set rentEpoch(arg0) {
wasm.__wbg_set_wasmaccount_rentEpoch(this.ptr, arg0);
}
}
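The `slot`, `lamports`, and `rentEpoch` getters all convert a wasm i64 (which JavaScript receives as a signed `BigInt`) into the unsigned value the Rust side means, since these fields are `u64`. `BigInt.asUintN(64, x)` does that reinterpretation:

```javascript
// Reinterpret a signed 64-bit BigInt as unsigned, as the glue does for
// u64 fields like lamports and slot.
const asU64 = (x) => BigInt.asUintN(64, x);

console.log(asU64(5n));  // 5n
console.log(asU64(-1n)); // 18446744073709551615n (u64::MAX)
```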
/**
* Return data at the end of a transaction
*/
export class WasmTransactionReturnData {
static __wrap(ptr) {
const obj = Object.create(WasmTransactionReturnData.prototype);
obj.ptr = ptr;
return obj;
}
__destroy_into_raw() {
const ptr = this.ptr;
this.ptr = 0;
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_wasmtransactionreturndata_free(ptr);
}
/**
* @returns {Pubkey}
*/
get programId() {
const ret = wasm.__wbg_get_wasmtransactionreturndata_programId(this.ptr);
return Pubkey.__wrap(ret);
}
/**
* @param {Pubkey} arg0
*/
set programId(arg0) {
_assertClass(arg0, Pubkey);
var ptr0 = arg0.ptr;
arg0.ptr = 0;
wasm.__wbg_set_wasmtransactionreturndata_programId(this.ptr, ptr0);
}
/**
* @returns {Uint8Array}
*/
get data() {
try {
const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
wasm.__wbg_get_wasmtransactionreturndata_data(retptr, this.ptr);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
var v0 = getArrayU8FromWasm0(r0, r1).slice();
wasm.__wbindgen_free(r0, r1 * 1);
return v0;
} finally {
wasm.__wbindgen_add_to_stack_pointer(16);
}
}
/**
* @param {Uint8Array} arg0
*/
set data(arg0) {
const ptr0 = passArray8ToWasm0(arg0, wasm.__wbindgen_malloc);
const len0 = WASM_VECTOR_LEN;
wasm.__wbg_set_wasmtransactionreturndata_data(this.ptr, ptr0, len0);
}
}
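The `data` setters above rely on `passArray8ToWasm0`, which copies a `Uint8Array` into wasm linear memory and hands the wasm side a `(ptr, len)` pair. A sketch of that copy, simulating `wasm.memory` with a plain `ArrayBuffer` and using a toy bump allocator in place of `__wbindgen_malloc` (both stand-ins are illustrative):

```javascript
// Simulated linear memory and bump allocator.
const memory = new ArrayBuffer(1024);
let offset = 0;
function malloc(len) { const p = offset; offset += len; return p; }

// Copy a byte array into "linear memory", returning its location.
function passArray8(arg) {
  const ptr = malloc(arg.length);
  new Uint8Array(memory).set(arg, ptr);
  return { ptr, len: arg.length };
}

const { ptr, len } = passArray8(new Uint8Array([1, 2, 3]));
const view = new Uint8Array(memory, ptr, len);
console.log(len); // 3
```

The matching getter path is the mirror image: read `(ptr, len)` back, `.slice()` the bytes out so JS owns a copy, then call `__wbindgen_free` on the wasm-side buffer, exactly as in `get data()` above.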
export function __wbindgen_object_drop_ref(arg0) {
takeObject(arg0);
};
export function __wbindgen_string_new(arg0, arg1) {
const ret = getStringFromWasm0(arg0, arg1);
return addHeapObject(ret);
};
export function __wbg_transactionstatus_new(arg0) {
const ret = TransactionStatus.__wrap(arg0);
return addHeapObject(ret);
};
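The `addHeapObject`/`getObject`/`takeObject` helpers referenced throughout these shims manage a JS-side object table: wasm holds integer handles, JS holds the real objects. A minimal sketch of how such a table typically works in wasm-bindgen output (this is an assumed reconstruction of the helpers, which are defined elsewhere in the generated file):

```javascript
// Slot table with a free list threaded through unused entries.
// The first 36 slots are reserved (undefined/null/true/false live at 32-35).
const heap = new Array(32).fill(undefined);
heap.push(undefined, null, true, false);
let heap_next = heap.length;

function addHeapObject(obj) {
  if (heap_next === heap.length) heap.push(heap.length + 1); // grow free list
  const idx = heap_next;
  heap_next = heap[idx]; // pop the free list
  heap[idx] = obj;
  return idx;            // this integer handle crosses into wasm
}

function getObject(idx) { return heap[idx]; }

function dropObject(idx) {
  if (idx < 36) return;  // never recycle the reserved slots
  heap[idx] = heap_next; // push the slot back on the free list
  heap_next = idx;
}

function takeObject(idx) {
  const ret = getObject(idx);
  dropObject(idx);
  return ret;
}

const i = addHeapObject("hello");
console.log(takeObject(i)); // "hello"
```

This is why shims like `__wbindgen_object_drop_ref` exist: when Rust drops a `JsValue`, the corresponding slot must be returned to the free list or the object leaks.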
export function __wbindgen_string_get(arg0, arg1) {
const obj = getObject(arg1);
const ret = typeof(obj) === 'string' ? obj : undefined;
var ptr0 = isLikeNone(ret) ? 0 : passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
var len0 = WASM_VECTOR_LEN;
getInt32Memory0()[arg0 / 4 + 1] = len0;
getInt32Memory0()[arg0 / 4 + 0] = ptr0;
};
export function __wbg_instruction_new(arg0) {
const ret = Instruction.__wrap(arg0);
return addHeapObject(ret);
};
export function __wbindgen_is_undefined(arg0) {
const ret = getObject(arg0) === undefined;
return ret;
};
export function __wbindgen_number_get(arg0, arg1) {
const obj = getObject(arg1);
const ret = typeof(obj) === 'number' ? obj : undefined;
getFloat64Memory0()[arg0 / 8 + 1] = isLikeNone(ret) ? 0 : ret;
getInt32Memory0()[arg0 / 4 + 0] = !isLikeNone(ret);
};
export function __wbg_pubkey_new(arg0) {
const ret = Pubkey.__wrap(arg0);
return addHeapObject(ret);
};
export function __wbindgen_number_new(arg0) {
const ret = arg0;
return addHeapObject(ret);
};
export function __wbg_debug_f15cb542ea509609(arg0) {
console.debug(getObject(arg0));
};
export function __wbg_error_ef9a0be47931175f(arg0) {
console.error(getObject(arg0));
};
export function __wbg_info_2874fdd5393f35ce(arg0) {
console.info(getObject(arg0));
};
export function __wbg_log_4b5638ad60bdc54a(arg0) {
console.log(getObject(arg0));
};
export function __wbg_warn_58110c4a199df084(arg0) {
console.warn(getObject(arg0));
};
export function __wbg_new_abda76e883ba8a5f() {
const ret = new Error();
return addHeapObject(ret);
};
export function __wbg_stack_658279fe44541cf6(arg0, arg1) {
const ret = getObject(arg1).stack;
const ptr0 = passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
getInt32Memory0()[arg0 / 4 + 1] = len0;
getInt32Memory0()[arg0 / 4 + 0] = ptr0;
};
export function __wbg_error_f851667af71bcfc6(arg0, arg1) {
try {
console.error(getStringFromWasm0(arg0, arg1));
} finally {
wasm.__wbindgen_free(arg0, arg1);
}
};
export function __wbg_self_7eede1f4488bf346() { return handleError(function () {
const ret = self.self;
return addHeapObject(ret);
}, arguments) };
export function __wbg_crypto_c909fb428dcbddb6(arg0) {
const ret = getObject(arg0).crypto;
return addHeapObject(ret);
};
export function __wbg_msCrypto_511eefefbfc70ae4(arg0) {
const ret = getObject(arg0).msCrypto;
return addHeapObject(ret);
};
export function __wbg_static_accessor_MODULE_ef3aa2eb251158a5() {
const ret = module;
return addHeapObject(ret);
};
export function __wbg_require_900d5c3984fe7703(arg0, arg1, arg2) {
const ret = getObject(arg0).require(getStringFromWasm0(arg1, arg2));
return addHeapObject(ret);
};
export function __wbg_getRandomValues_307049345d0bd88c(arg0) {
const ret = getObject(arg0).getRandomValues;
return addHeapObject(ret);
};
export function __wbg_getRandomValues_cd175915511f705e(arg0, arg1) {
getObject(arg0).getRandomValues(getObject(arg1));
};
export function __wbg_randomFillSync_85b3f4c52c56c313(arg0, arg1, arg2) {
getObject(arg0).randomFillSync(getArrayU8FromWasm0(arg1, arg2));
};
export function __wbindgen_is_object(arg0) {
const val = getObject(arg0);
const ret = typeof(val) === 'object' && val !== null;
return ret;
};
export function __wbindgen_is_function(arg0) {
const ret = typeof(getObject(arg0)) === 'function';
return ret;
};
export function __wbg_new_1d9a920c6bfc44a8() {
const ret = new Array();
return addHeapObject(ret);
};
export function __wbg_next_579e583d33566a86(arg0) {
const ret = getObject(arg0).next;
return addHeapObject(ret);
};
export function __wbg_next_aaef7c8aa5e212ac() { return handleError(function (arg0) {
const ret = getObject(arg0).next();
return addHeapObject(ret);
}, arguments) };
export function __wbg_done_1b73b0672e15f234(arg0) {
const ret = getObject(arg0).done;
return ret;
};
export function __wbg_value_1ccc36bc03462d71(arg0) {
const ret = getObject(arg0).value;
return addHeapObject(ret);
};
export function __wbg_iterator_6f9d4f28845f426c() {
const ret = Symbol.iterator;
return addHeapObject(ret);
};
export function __wbg_get_765201544a2b6869() { return handleError(function (arg0, arg1) {
const ret = Reflect.get(getObject(arg0), getObject(arg1));
return addHeapObject(ret);
}, arguments) };
export function __wbg_call_97ae9d8645dc388b() { return handleError(function (arg0, arg1) {
const ret = getObject(arg0).call(getObject(arg1));
return addHeapObject(ret);
}, arguments) };
export function __wbg_newwithlength_7c42f7e738a9d5d3(arg0) {
const ret = new Array(arg0 >>> 0);
return addHeapObject(ret);
};
export function __wbg_set_a68214f35c417fa9(arg0, arg1, arg2) {
getObject(arg0)[arg1 >>> 0] = takeObject(arg2);
};
export function __wbg_isArray_27c46c67f498e15d(arg0) {
const ret = Array.isArray(getObject(arg0));
return ret;
};
export function __wbg_push_740e4b286702d964(arg0, arg1) {
const ret = getObject(arg0).push(getObject(arg1));
return ret;
};
export function __wbg_values_e42671acbf11ec04(arg0) {
const ret = getObject(arg0).values();
return addHeapObject(ret);
};
export function __wbg_buffer_3f3d764d4747d564(arg0) {
const ret = getObject(arg0).buffer;
return addHeapObject(ret);
};
export function __wbg_new_8c3f0052272a457a(arg0) {
const ret = new Uint8Array(getObject(arg0));
return addHeapObject(ret);
};
export function __wbg_set_83db9690f9353e79(arg0, arg1, arg2) {
getObject(arg0).set(getObject(arg1), arg2 >>> 0);
};
export function __wbg_length_9e1ae1900cb0fbd5(arg0) {
const ret = getObject(arg0).length;
return ret;
};
export function __wbg_instanceof_Uint8Array_971eeda69eb75003(arg0) {
let result;
try {
result = getObject(arg0) instanceof Uint8Array;
} catch {
result = false;
}
const ret = result;
return ret;
};
export function __wbg_newwithlength_f5933855e4f48a19(arg0) {
const ret = new Uint8Array(arg0 >>> 0);
return addHeapObject(ret);
};
export function __wbg_subarray_58ad4efbb5bcb886(arg0, arg1, arg2) {
const ret = getObject(arg0).subarray(arg1 >>> 0, arg2 >>> 0);
return addHeapObject(ret);
};
export function __wbindgen_debug_string(arg0, arg1) {
const ret = debugString(getObject(arg1));
const ptr0 = passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
getInt32Memory0()[arg0 / 4 + 1] = len0;
getInt32Memory0()[arg0 / 4 + 0] = ptr0;
};
export function __wbindgen_throw(arg0, arg1) {
throw new Error(getStringFromWasm0(arg0, arg1));
};
export function __wbindgen_memory() {
const ret = wasm.memory;
return addHeapObject(ret);
};
solana_public_repos/solana-playground/solana-playground/wasm/pkgs/playnet/playnet.js
import * as wasm from "./playnet_bg.wasm";
export * from "./playnet_bg.js";
solana_public_repos/solana-playground/solana-playground/wasm/pkgs/playnet/playnet.d.ts
/* tslint:disable */
/* eslint-disable */
/**
* Initialize Javascript logging and panic handler
*/
export function solana_program_init(): void;
/**
*/
export enum WasmCommitmentLevel {
Processed,
Confirmed,
Finalized,
}
/**
* Metadata for a confirmed transaction on the ledger
*/
export class ConfirmedTransactionMeta {
free(): void;
/**
* @returns {bigint}
*/
fee(): bigint;
/**
* TODO:
* @returns {number | undefined}
*/
innerInstructions(): number | undefined;
/**
* @returns {BigUint64Array}
*/
preBalances(): BigUint64Array;
/**
* @returns {BigUint64Array}
*/
postBalances(): BigUint64Array;
/**
* @returns {any[] | undefined}
*/
logs(): any[] | undefined;
/**
* TODO:
* @returns {number | undefined}
*/
preTokenBalances(): number | undefined;
/**
* TODO:
* @returns {number | undefined}
*/
postTokenBalances(): number | undefined;
/**
* @returns {string | undefined}
*/
err(): string | undefined;
/**
* TODO:
* @returns {number | undefined}
*/
loadedAddresses(): number | undefined;
/**
* @returns {bigint | undefined}
*/
computeUnitsConsumed(): bigint | undefined;
}
/**
*/
export class GetLatestBlockhashResult {
free(): void;
/**
* @returns {string}
*/
blockhash(): string;
/**
* @returns {bigint}
*/
lastValidBlockHeight(): bigint;
}
/**
*/
export class GetSignatureStatusesResult {
free(): void;
/**
* @returns {any[]}
*/
statuses(): any[];
}
/**
*/
export class GetTransactionResult {
free(): void;
/**
* NOTE: This method should be called before accessing any other data
* @returns {boolean}
*/
exists(): boolean;
/**
* @returns {bigint | undefined}
*/
blockTime(): bigint | undefined;
/**
* Returns the transaction version or `None` for legacy transactions
* @returns {number | undefined}
*/
version(): number | undefined;
/**
* @returns {ConfirmedTransactionMeta}
*/
meta(): ConfirmedTransactionMeta;
/**
* Returns the base64 encoded tx string
* @returns {string}
*/
transaction(): string;
}
/**
* A hash; the 32-byte output of a hashing algorithm.
*
* This struct is used most often in `solana-sdk` and related crates to contain
* a [SHA-256] hash, but may instead contain a [blake3] hash, as created by the
* [`blake3`] module (and used in [`Message::hash`]).
*
* [SHA-256]: https://en.wikipedia.org/wiki/SHA-2
* [blake3]: https://github.com/BLAKE3-team/BLAKE3
* [`blake3`]: crate::blake3
* [`Message::hash`]: crate::message::Message::hash
*/
export class Hash {
free(): void;
/**
* Create a new Hash object
*
* * `value` - optional hash as a base58 encoded string, `Uint8Array`, `[number]`
* @param {any} value
*/
constructor(value: any);
/**
* Return the base58 string representation of the hash
* @returns {string}
*/
toString(): string;
/**
* Checks if two `Hash`s are equal
* @param {Hash} other
* @returns {boolean}
*/
equals(other: Hash): boolean;
/**
* Return the `Uint8Array` representation of the hash
* @returns {Uint8Array}
*/
toBytes(): Uint8Array;
}
/**
* A directive for a single invocation of a Solana program.
*
* An instruction specifies which program it is calling, which accounts it may
* read or modify, and additional data that serves as input to the program. One
* or more instructions are included in transactions submitted by Solana
* clients. Instructions are also used to describe [cross-program
* invocations][cpi].
*
* [cpi]: https://docs.solana.com/developing/programming-model/calling-between-programs
*
* During execution, a program will receive a list of account data as one of
* its arguments, in the same order as specified during `Instruction`
* construction.
*
* While Solana is agnostic to the format of the instruction data, it has
* built-in support for serialization via [`borsh`] and [`bincode`].
*
* [`borsh`]: https://docs.rs/borsh/latest/borsh/
* [`bincode`]: https://docs.rs/bincode/latest/bincode/
*
* # Specifying account metadata
*
* When constructing an [`Instruction`], a list of all accounts that may be
* read or written during the execution of that instruction must be supplied as
* [`AccountMeta`] values.
*
* Any account whose data may be mutated by the program during execution must
* be specified as writable. During execution, writing to an account that was
* not specified as writable will cause the transaction to fail. Writing to an
* account that is not owned by the program will cause the transaction to fail.
*
* Any account whose lamport balance may be mutated by the program during
* execution must be specified as writable. During execution, mutating the
* lamports of an account that was not specified as writable will cause the
* transaction to fail. While _subtracting_ lamports from an account not owned
* by the program will cause the transaction to fail, _adding_ lamports to any
 * account is allowed, as long as it is mutable.
*
* Accounts that are not read or written by the program may still be specified
* in an `Instruction`'s account list. These will affect scheduling of program
* execution by the runtime, but will otherwise be ignored.
*
* When building a transaction, the Solana runtime coalesces all accounts used
* by all instructions in that transaction, along with accounts and permissions
* required by the runtime, into a single account list. Some accounts and
* account permissions required by the runtime to process a transaction are
* _not_ required to be included in an `Instruction`s account list. These
* include:
*
* - The program ID — it is a separate field of `Instruction`
* - The transaction's fee-paying account — it is added during [`Message`]
* construction. A program may still require the fee payer as part of the
* account list if it directly references it.
*
* [`Message`]: crate::message::Message
*
* Programs may require signatures from some accounts, in which case they
* should be specified as signers during `Instruction` construction. The
* program must still validate during execution that the account is a signer.
*/
export class Instruction {
free(): void;
}
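The account-metadata rules described above (writable vs. read-only, signer vs. non-signer) can be illustrated with plain objects whose fields mirror solana-sdk's `AccountMeta`. This is a hypothetical sketch, not the real bindings; the pubkey strings are placeholders:

```javascript
// Illustrative account list for a transfer-style instruction.
// Field names mirror solana-sdk's AccountMeta, but these are plain sketches.
const metas = [
  { pubkey: "FeePayer111", isSigner: true,  isWritable: true  }, // pays, signs
  { pubkey: "Recipient11", isSigner: false, isWritable: true  }, // lamports change
  { pubkey: "ReadOnly111", isSigner: false, isWritable: false }, // only read
];

// The runtime fails a transaction that writes to an account not marked
// writable; a client-side sanity check can mirror that rule:
function assertWritable(metas, pubkey) {
  const meta = metas.find((m) => m.pubkey === pubkey);
  if (!meta || !meta.isWritable) throw new Error(`${pubkey} is not writable`);
}

assertWritable(metas, "Recipient11"); // ok
console.log(metas.filter((m) => m.isSigner).length); // 1
```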
/**
*/
export class Instructions {
free(): void;
/**
*/
constructor();
/**
* @param {Instruction} instruction
*/
push(instruction: Instruction): void;
}
/**
* A vanilla Ed25519 key pair
*/
export class Keypair {
free(): void;
/**
 * Create a new `Keypair`
*/
constructor();
/**
* Convert a `Keypair` to a `Uint8Array`
* @returns {Uint8Array}
*/
toBytes(): Uint8Array;
/**
* Recover a `Keypair` from a `Uint8Array`
* @param {Uint8Array} bytes
* @returns {Keypair}
*/
static fromBytes(bytes: Uint8Array): Keypair;
/**
* Return the `Pubkey` for this `Keypair`
* @returns {Pubkey}
*/
pubkey(): Pubkey;
}
/**
* A Solana transaction message (legacy).
*
* See the [`message`] module documentation for further description.
*
* [`message`]: crate::message
*
* Some constructors accept an optional `payer`, the account responsible for
* paying the cost of executing a transaction. In most cases, callers should
* specify the payer explicitly in these constructors. In some cases though,
* the caller is not _required_ to specify the payer, but is still allowed to:
* in the `Message` structure, the first account is always the fee-payer, so if
* the caller has knowledge that the first account of the constructed
* transaction's `Message` is both a signer and the expected fee-payer, then
* redundantly specifying the fee-payer is not strictly required.
*/
export class Message {
free(): void;
/**
* The id of a recent ledger entry.
*/
recent_blockhash: Hash;
}
/**
*/
export class PgRpc {
free(): void;
/**
* @param {string} pubkey_str
* @returns {WasmAccount}
*/
getAccountInfo(pubkey_str: string): WasmAccount;
/**
* @returns {bigint}
*/
getSlot(): bigint;
/**
* @returns {bigint}
*/
getBlockHeight(): bigint;
/**
* @returns {string}
*/
getGenesisHash(): string;
/**
* @returns {GetLatestBlockhashResult}
*/
getLatestBlockhash(): GetLatestBlockhashResult;
/**
* @param {number} data_len
* @returns {bigint}
*/
getMinimumBalanceForRentExemption(data_len: number): bigint;
/**
* @param {Uint8Array} serialized_msg
* @returns {bigint | undefined}
*/
getFeeForMessage(serialized_msg: Uint8Array): bigint | undefined;
/**
* @param {Uint8Array} serialized_tx
* @returns {SimulateTransactionResult}
*/
simulateTransaction(serialized_tx: Uint8Array): SimulateTransactionResult;
/**
* @param {Uint8Array} serialized_tx
* @returns {SendTransactionResult}
*/
sendTransaction(serialized_tx: Uint8Array): SendTransactionResult;
/**
* @param {any[]} signatures
* @returns {GetSignatureStatusesResult}
*/
getSignatureStatuses(signatures: any[]): GetSignatureStatusesResult;
/**
* @param {string} signature_str
* @returns {GetTransactionResult}
*/
getTransaction(signature_str: string): GetTransactionResult;
/**
* @param {string} pubkey_str
* @param {bigint} lamports
* @returns {SendTransactionResult}
*/
requestAirdrop(pubkey_str: string, lamports: bigint): SendTransactionResult;
}
/**
*/
export class Playnet {
free(): void;
/**
* Playnet lifecycle starts after constructing a Playnet instance
* @param {string | undefined} maybe_bank_string
*/
constructor(maybe_bank_string?: string);
/**
 * Get the save data necessary to restore state the next time a Playnet instance gets created
* @returns {string}
*/
getSaveData(): string;
/**
* RPC methods to interact with the Playnet
*/
rpc: PgRpc;
}
/**
* The address of a [Solana account][acc].
*
* Some account addresses are [ed25519] public keys, with corresponding secret
* keys that are managed off-chain. Often, though, account addresses do not
* have corresponding secret keys — as with [_program derived
* addresses_][pdas] — or the secret key is not relevant to the operation
* of a program, and may have even been disposed of. As running Solana programs
* can not safely create or manage secret keys, the full [`Keypair`] is not
* defined in `solana-program` but in `solana-sdk`.
*
* [acc]: https://docs.solana.com/developing/programming-model/accounts
* [ed25519]: https://ed25519.cr.yp.to/
* [pdas]: https://docs.solana.com/developing/programming-model/calling-between-programs#program-derived-addresses
* [`Keypair`]: https://docs.rs/solana-sdk/latest/solana_sdk/signer/keypair/struct.Keypair.html
*/
export class Pubkey {
free(): void;
/**
* Create a new Pubkey object
*
* * `value` - optional public key as a base58 encoded string, `Uint8Array`, `[number]`
* @param {any} value
*/
constructor(value: any);
/**
* Return the base58 string representation of the public key
* @returns {string}
*/
toString(): string;
/**
* Check if a `Pubkey` is on the ed25519 curve.
* @returns {boolean}
*/
isOnCurve(): boolean;
/**
* Checks if two `Pubkey`s are equal
* @param {Pubkey} other
* @returns {boolean}
*/
equals(other: Pubkey): boolean;
/**
* Return the `Uint8Array` representation of the public key
* @returns {Uint8Array}
*/
toBytes(): Uint8Array;
/**
* Derive a Pubkey from another Pubkey, string seed, and a program id
* @param {Pubkey} base
* @param {string} seed
* @param {Pubkey} owner
* @returns {Pubkey}
*/
static createWithSeed(base: Pubkey, seed: string, owner: Pubkey): Pubkey;
/**
* Derive a program address from seeds and a program id
* @param {any[]} seeds
* @param {Pubkey} program_id
* @returns {Pubkey}
*/
static createProgramAddress(seeds: any[], program_id: Pubkey): Pubkey;
/**
* Find a valid program address
*
* Returns:
* * `[PubKey, number]` - the program address and bump seed
* @param {any[]} seeds
* @param {Pubkey} program_id
* @returns {any}
*/
static findProgramAddress(seeds: any[], program_id: Pubkey): any;
}
/**
*/
export class SendTransactionResult {
free(): void;
/**
* @returns {string | undefined}
*/
error(): string | undefined;
/**
* @returns {string}
*/
txHash(): string;
}
/**
*/
export class SimulateTransactionResult {
free(): void;
/**
* @returns {string | undefined}
*/
error(): string | undefined;
/**
* @returns {any[]}
*/
logs(): any[];
/**
* @returns {bigint}
*/
unitsConsumed(): bigint;
/**
* @returns {WasmTransactionReturnData | undefined}
*/
returnData(): WasmTransactionReturnData | undefined;
}
/**
 * An atomically-committed sequence of instructions.
*
* While [`Instruction`]s are the basic unit of computation in Solana,
* they are submitted by clients in [`Transaction`]s containing one or
* more instructions, and signed by one or more [`Signer`]s.
*
* [`Signer`]: crate::signer::Signer
*
* See the [module documentation] for more details about transactions.
*
* [module documentation]: self
*
* Some constructors accept an optional `payer`, the account responsible for
* paying the cost of executing a transaction. In most cases, callers should
* specify the payer explicitly in these constructors. In some cases though,
* the caller is not _required_ to specify the payer, but is still allowed to:
* in the [`Message`] structure, the first account is always the fee-payer, so
* if the caller has knowledge that the first account of the constructed
* transaction's `Message` is both a signer and the expected fee-payer, then
* redundantly specifying the fee-payer is not strictly required.
*/
export class Transaction {
free(): void;
/**
* Create a new `Transaction`
* @param {Instructions} instructions
* @param {Pubkey | undefined} payer
*/
constructor(instructions: Instructions, payer?: Pubkey);
/**
* Return a message containing all data that should be signed.
* @returns {Message}
*/
message(): Message;
/**
* Return the serialized message data to sign.
* @returns {Uint8Array}
*/
messageData(): Uint8Array;
/**
* Verify the transaction
*/
verify(): void;
/**
* @param {Keypair} keypair
* @param {Hash} recent_blockhash
*/
partialSign(keypair: Keypair, recent_blockhash: Hash): void;
/**
* @returns {boolean}
*/
isSigned(): boolean;
/**
* @returns {Uint8Array}
*/
toBytes(): Uint8Array;
/**
* @param {Uint8Array} bytes
* @returns {Transaction}
*/
static fromBytes(bytes: Uint8Array): Transaction;
}
/**
*/
export class TransactionStatus {
free(): void;
/**
* @returns {string | undefined}
*/
error(): string | undefined;
/**
*/
confirmationStatus?: number;
/**
*/
confirmations?: number;
/**
*/
slot: bigint;
}
/**
*/
export class WasmAccount {
free(): void;
/**
* Data held in this account
*/
data: Uint8Array;
/**
* This account's data contains a loaded program (and is now read-only)
*/
executable: boolean;
/**
* Lamports in the account
*/
lamports: bigint;
/**
* The program that owns this account. If executable, the program that loads this account.
*/
owner: Pubkey;
/**
* The epoch at which this account will next owe rent
*/
rentEpoch: bigint;
}
/**
* Return data at the end of a transaction
*/
export class WasmTransactionReturnData {
free(): void;
/**
*/
data: Uint8Array;
/**
*/
programId: Pubkey;
}
solana_public_repos/solana-playground/solana-playground/wasm/pkgs/playnet/package.json
{
"name": "playnet",
"collaborators": [
"Acheron <acheroncrypto@gmail.com>"
],
"description": "A minimal runtime to execute Solana programs",
"version": "0.1.0",
"license": "GPL-3.0",
"repository": {
"type": "git",
"url": "https://github.com/solana-playground/solana-playground"
},
"files": [
"playnet_bg.wasm",
"playnet.js",
"playnet_bg.js",
"playnet.d.ts"
],
"module": "playnet.js",
"homepage": "https://beta.solpg.io",
"types": "playnet.d.ts",
"sideEffects": false,
"keywords": [
"playnet",
"solana",
"playground",
"wasm"
]
}
solana_public_repos/solana-playground/solana-playground/wasm/pkgs/playnet/playnet_bg.wasm.d.ts
/* tslint:disable */
/* eslint-disable */
export const memory: WebAssembly.Memory;
export function __wbg_pgrpc_free(a: number): void;
export function pgrpc_getAccountInfo(a: number, b: number, c: number): number;
export function pgrpc_getSlot(a: number): number;
export function pgrpc_getBlockHeight(a: number): number;
export function pgrpc_getGenesisHash(a: number, b: number): void;
export function pgrpc_getLatestBlockhash(a: number): number;
export function pgrpc_getMinimumBalanceForRentExemption(a: number, b: number): number;
export function pgrpc_getFeeForMessage(a: number, b: number, c: number, d: number): void;
export function pgrpc_simulateTransaction(a: number, b: number, c: number): number;
export function pgrpc_sendTransaction(a: number, b: number, c: number): number;
export function pgrpc_getSignatureStatuses(a: number, b: number, c: number): number;
export function pgrpc_getTransaction(a: number, b: number, c: number): number;
export function pgrpc_requestAirdrop(a: number, b: number, c: number, d: number): number;
export function __wbg_confirmedtransactionmeta_free(a: number): void;
export function confirmedtransactionmeta_fee(a: number): number;
export function confirmedtransactionmeta_innerInstructions(a: number): number;
export function confirmedtransactionmeta_preBalances(a: number, b: number): void;
export function confirmedtransactionmeta_postBalances(a: number, b: number): void;
export function confirmedtransactionmeta_logs(a: number, b: number): void;
export function confirmedtransactionmeta_err(a: number, b: number): void;
export function confirmedtransactionmeta_computeUnitsConsumed(a: number, b: number): void;
export function confirmedtransactionmeta_preTokenBalances(a: number): number;
export function confirmedtransactionmeta_postTokenBalances(a: number): number;
export function confirmedtransactionmeta_loadedAddresses(a: number): number;
export function __wbg_wasmaccount_free(a: number): void;
export function __wbg_get_wasmaccount_data(a: number, b: number): void;
export function __wbg_set_wasmaccount_data(a: number, b: number, c: number): void;
export function __wbg_get_wasmaccount_owner(a: number): number;
export function __wbg_set_wasmaccount_owner(a: number, b: number): void;
export function __wbg_get_wasmaccount_executable(a: number): number;
export function __wbg_set_wasmaccount_executable(a: number, b: number): void;
export function __wbg_get_wasmaccount_rentEpoch(a: number): number;
export function __wbg_set_wasmaccount_rentEpoch(a: number, b: number): void;
export function __wbg_getlatestblockhashresult_free(a: number): void;
export function getlatestblockhashresult_blockhash(a: number, b: number): void;
export function __wbg_wasmtransactionreturndata_free(a: number): void;
export function __wbg_get_wasmtransactionreturndata_programId(a: number): number;
export function __wbg_set_wasmtransactionreturndata_programId(a: number, b: number): void;
export function __wbg_get_wasmtransactionreturndata_data(a: number, b: number): void;
export function __wbg_set_wasmtransactionreturndata_data(a: number, b: number, c: number): void;
export function __wbg_simulatetransactionresult_free(a: number): void;
export function simulatetransactionresult_error(a: number, b: number): void;
export function simulatetransactionresult_logs(a: number, b: number): void;
export function simulatetransactionresult_returnData(a: number): number;
export function __wbg_sendtransactionresult_free(a: number): void;
export function sendtransactionresult_error(a: number, b: number): void;
export function sendtransactionresult_txHash(a: number, b: number): void;
export function __wbg_getsignaturestatusesresult_free(a: number): void;
export function getsignaturestatusesresult_statuses(a: number, b: number): void;
export function __wbg_transactionstatus_free(a: number): void;
export function __wbg_get_transactionstatus_confirmationStatus(a: number): number;
export function __wbg_set_transactionstatus_confirmationStatus(a: number, b: number): void;
export function __wbg_get_transactionstatus_confirmations(a: number, b: number): void;
export function __wbg_set_transactionstatus_confirmations(a: number, b: number, c: number): void;
export function __wbg_get_transactionstatus_slot(a: number): number;
export function __wbg_set_transactionstatus_slot(a: number, b: number): void;
export function transactionstatus_error(a: number, b: number): void;
export function __wbg_gettransactionresult_free(a: number): void;
export function gettransactionresult_exists(a: number): number;
export function gettransactionresult_blockTime(a: number, b: number): void;
export function gettransactionresult_version(a: number): number;
export function gettransactionresult_meta(a: number): number;
export function gettransactionresult_transaction(a: number, b: number): void;
export function __wbg_set_wasmaccount_lamports(a: number, b: number): void;
export function getlatestblockhashresult_lastValidBlockHeight(a: number): number;
export function simulatetransactionresult_unitsConsumed(a: number): number;
export function __wbg_get_wasmaccount_lamports(a: number): number;
export function __wbg_playnet_free(a: number): void;
export function __wbg_get_playnet_rpc(a: number): number;
export function __wbg_set_playnet_rpc(a: number, b: number): void;
export function playnet_new(a: number, b: number): number;
export function playnet_getSaveData(a: number, b: number): void;
export function __wbg_transaction_free(a: number): void;
export function transaction_constructor(a: number, b: number): number;
export function transaction_message(a: number): number;
export function transaction_messageData(a: number, b: number): void;
export function transaction_verify(a: number, b: number): void;
export function transaction_partialSign(a: number, b: number, c: number): void;
export function transaction_isSigned(a: number): number;
export function transaction_toBytes(a: number, b: number): void;
export function transaction_fromBytes(a: number, b: number, c: number): void;
export function __wbg_keypair_free(a: number): void;
export function keypair_constructor(): number;
export function keypair_toBytes(a: number, b: number): void;
export function keypair_fromBytes(a: number, b: number, c: number): void;
export function keypair_pubkey(a: number): number;
export function __wbg_instruction_free(a: number): void;
export function __wbg_hash_free(a: number): void;
export function __wbg_pubkey_free(a: number): void;
export function __wbg_instructions_free(a: number): void;
export function instructions_constructor(): number;
export function instructions_push(a: number, b: number): void;
export function __wbg_message_free(a: number): void;
export function __wbg_get_message_recent_blockhash(a: number): number;
export function __wbg_set_message_recent_blockhash(a: number, b: number): void;
export function systeminstruction_createAccount(a: number, b: number, c: number, d: number, e: number): number;
export function systeminstruction_createAccountWithSeed(a: number, b: number, c: number, d: number, e: number, f: number, g: number, h: number): number;
export function systeminstruction_assign(a: number, b: number): number;
export function systeminstruction_assignWithSeed(a: number, b: number, c: number, d: number, e: number): number;
export function systeminstruction_transfer(a: number, b: number, c: number): number;
export function systeminstruction_transferWithSeed(a: number, b: number, c: number, d: number, e: number, f: number, g: number): number;
export function systeminstruction_allocate(a: number, b: number): number;
export function systeminstruction_allocateWithSeed(a: number, b: number, c: number, d: number, e: number, f: number): number;
export function systeminstruction_createNonceAccount(a: number, b: number, c: number, d: number): number;
export function systeminstruction_advanceNonceAccount(a: number, b: number): number;
export function systeminstruction_withdrawNonceAccount(a: number, b: number, c: number, d: number): number;
export function systeminstruction_authorizeNonceAccount(a: number, b: number, c: number): number;
export function pubkey_constructor(a: number, b: number): void;
export function pubkey_toString(a: number, b: number): void;
export function pubkey_isOnCurve(a: number): number;
export function pubkey_equals(a: number, b: number): number;
export function pubkey_toBytes(a: number, b: number): void;
export function pubkey_createWithSeed(a: number, b: number, c: number, d: number, e: number): void;
export function pubkey_createProgramAddress(a: number, b: number, c: number, d: number): void;
export function pubkey_findProgramAddress(a: number, b: number, c: number, d: number): void;
export function hash_constructor(a: number, b: number): void;
export function hash_toString(a: number, b: number): void;
export function hash_equals(a: number, b: number): number;
export function hash_toBytes(a: number, b: number): void;
export function solana_program_init(): void;
export function __wbindgen_malloc(a: number): number;
export function __wbindgen_realloc(a: number, b: number, c: number): number;
export function __wbindgen_add_to_stack_pointer(a: number): number;
export function __wbindgen_free(a: number, b: number): void;
export function __wbindgen_exn_store(a: number): void;
solana_public_repos/solana-playground/solana-playground/wasm/pkgs/rustfmt/rustfmt_wasm.d.ts
/* tslint:disable */
export function rustfmt(arg0: string): RustfmtResult;
export class RustfmtResult {
free(): void;
code(): string;
error(): string;
}
solana_public_repos/solana-playground/solana-playground/wasm/pkgs/rustfmt/package.json
{
"name": "rustfmt-wasm",
"collaborators": [
"Acheron <acheroncrypto@gmail.com>"
],
"description": "Rust formatter for WASM",
"version": "0.99.4",
"license": "Apache-2.0",
"repository": {
"type": "git",
"url": "https://github.com/solana-playground/solana-playground"
},
"files": [
"rustfmt_wasm_bg.wasm",
"rustfmt_wasm.js",
"rustfmt_wasm_bg.js",
"rustfmt_wasm.d.ts"
],
"module": "rustfmt_wasm.js",
"homepage": "https://beta.solpg.io",
"types": "rustfmt_wasm.d.ts",
"sideEffects": false,
"keywords": [
"rustfmt",
"wasm",
"rust",
"format",
"rustwasm"
]
}
solana_public_repos/solana-playground/solana-playground/wasm/pkgs/rustfmt/rustfmt_wasm.js
/* tslint:disable */
import * as wasm from './rustfmt_wasm_bg';
let cachedTextDecoder = new TextDecoder('utf-8');
let cachegetUint8Memory = null;
function getUint8Memory() {
if (cachegetUint8Memory === null || cachegetUint8Memory.buffer !== wasm.memory.buffer) {
cachegetUint8Memory = new Uint8Array(wasm.memory.buffer);
}
return cachegetUint8Memory;
}
function getStringFromWasm(ptr, len) {
return cachedTextDecoder.decode(getUint8Memory().subarray(ptr, ptr + len));
}
export function __wbg_error_cc95a3d302735ca3(arg0, arg1) {
let varg0 = getStringFromWasm(arg0, arg1);
varg0 = varg0.slice();
wasm.__wbindgen_free(arg0, arg1 * 1);
console.error(varg0);
}
let cachedTextEncoder = new TextEncoder('utf-8');
let WASM_VECTOR_LEN = 0;
function passStringToWasm(arg) {
const buf = cachedTextEncoder.encode(arg);
const ptr = wasm.__wbindgen_malloc(buf.length);
getUint8Memory().set(buf, ptr);
WASM_VECTOR_LEN = buf.length;
return ptr;
}
/**
* @param {string} arg0
* @returns {RustfmtResult}
*/
export function rustfmt(arg0) {
const ptr0 = passStringToWasm(arg0);
const len0 = WASM_VECTOR_LEN;
try {
return RustfmtResult.__wrap(wasm.rustfmt(ptr0, len0));
} finally {
wasm.__wbindgen_free(ptr0, len0 * 1);
}
}
let cachedGlobalArgumentPtr = null;
function globalArgumentPtr() {
if (cachedGlobalArgumentPtr === null) {
cachedGlobalArgumentPtr = wasm.__wbindgen_global_argument_ptr();
}
return cachedGlobalArgumentPtr;
}
let cachegetUint32Memory = null;
function getUint32Memory() {
if (cachegetUint32Memory === null || cachegetUint32Memory.buffer !== wasm.memory.buffer) {
cachegetUint32Memory = new Uint32Array(wasm.memory.buffer);
}
return cachegetUint32Memory;
}
function freeRustfmtResult(ptr) {
wasm.__wbg_rustfmtresult_free(ptr);
}
/**
*/
export class RustfmtResult {
static __wrap(ptr) {
const obj = Object.create(RustfmtResult.prototype);
obj.ptr = ptr;
return obj;
}
free() {
const ptr = this.ptr;
this.ptr = 0;
freeRustfmtResult(ptr);
}
/**
* @returns {string}
*/
code() {
const ptr = this.ptr;
this.ptr = 0;
const retptr = globalArgumentPtr();
wasm.rustfmtresult_code(retptr, ptr);
const mem = getUint32Memory();
const rustptr = mem[retptr / 4];
const rustlen = mem[retptr / 4 + 1];
const realRet = getStringFromWasm(rustptr, rustlen).slice();
wasm.__wbindgen_free(rustptr, rustlen * 1);
return realRet;
}
/**
* @returns {string}
*/
error() {
const retptr = globalArgumentPtr();
wasm.rustfmtresult_error(retptr, this.ptr);
const mem = getUint32Memory();
const rustptr = mem[retptr / 4];
const rustlen = mem[retptr / 4 + 1];
if (rustptr === 0) return;
const realRet = getStringFromWasm(rustptr, rustlen).slice();
wasm.__wbindgen_free(rustptr, rustlen * 1);
return realRet;
}
}
export function __wbindgen_throw(ptr, len) {
throw new Error(getStringFromWasm(ptr, len));
}
solana_public_repos/solana-playground/solana-playground/wasm/pkgs/rustfmt/rustfmt_wasm_bg.d.ts
/* tslint:disable */
export const memory: WebAssembly.Memory;
export function __wbindgen_global_argument_ptr(): number;
export function __wbindgen_malloc(a: number): number;
export function __wbindgen_free(a: number, b: number): void;
export function rustfmt(a: number, b: number): number;
export function __wbg_rustfmtresult_free(a: number): void;
export function rustfmtresult_code(a: number, b: number): void;
export function rustfmtresult_error(a: number, b: number): void;
solana_public_repos/solana-playground/solana-playground/wasm/seahorse-compile/Cargo.toml
[package]
name = "seahorse-compile-wasm"
version = "0.2.0" # mirror seahorse-dev version
edition = "2021"
authors = ["Callum McIntyre <callum.mcintyre@solana.com>"]
description = "Seahorse compiler for Solana Playground with WASM."
repository = "https://github.com/solana-playground/solana-playground"
license = "GPL-3.0"
[lib]
crate-type = ["cdylib", "rlib"]
[dependencies]
console_error_panic_hook = "*"
seahorse-dev = "0.2.0"
solana-playground-utils-wasm = { path = "../utils/solana-playground-utils" }
wasm-bindgen = "*"
solana_public_repos/solana-playground/solana-playground/wasm/seahorse-compile/src/lib.rs
use std::{panic, path::PathBuf, str::FromStr};
use seahorse_dev::core::{compile, Tree};
use solana_playground_utils_wasm::js::PgTerminal;
use wasm_bindgen::prelude::*;
#[wasm_bindgen(js_name = "compileSeahorse")]
pub fn compile_seahorse(python_source: String, program_name: String) -> Vec<JsValue> {
panic::set_hook(Box::new(console_error_panic_hook::hook));
// Seahorse gives a file tree relative to the src/ directory
// Playground expects to include the src/ prefix on files
let base_path: PathBuf = PathBuf::from_str("/src").unwrap();
match compile(python_source, program_name, Some(base_path.clone())) {
Ok(out_tree) => build_src_tree(&out_tree.tree, base_path)
// we need to change from Vec<String> to Vec<JsValue> for wasm-bindgen
.iter()
.map(|s| JsValue::from_str(s))
.collect::<Vec<_>>(),
Err(e) => {
// Log the compile error to Playground terminal
PgTerminal::log_wasm(&e.to_string());
vec![]
}
}
}
/// Convert the Seahorse file tree to a flat array that we can return via wasm.
/// Seahorse gives a file tree where:
/// - the nodes are path segments (e.g. `dot` -> `mod` corresponds to `dot/mod.rs`),
/// - the leaves are the Rust contents of each file.
/// We output to wasm an array of flattened pairs: [filepath, content, filepath, content, ...]
fn build_src_tree(tree: &Tree<String>, path: PathBuf) -> Vec<String> {
match tree {
Tree::Leaf(src) => {
// We add the `.rs` extension to each file
let path = path.with_extension("rs").to_str().unwrap().to_owned();
vec![path, src.to_string()]
}
Tree::Node(node) => node
// Recursively find the leaves from this node and flatten
.iter()
.flat_map(|(subpath, subtree)| build_src_tree(subtree, path.join(subpath)))
.collect::<Vec<_>>(),
}
}
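The flattening performed by `build_src_tree` above can be sketched self-contained. Note this is an illustrative sketch only: the `Tree` enum below is a simplified stand-in for `seahorse_dev::core::Tree`, not the real type.

```rust
use std::path::PathBuf;

// Stand-in for seahorse_dev::core::Tree: leaves hold file contents,
// nodes map a path segment to a subtree.
enum Tree {
    Leaf(String),
    Node(Vec<(String, Tree)>),
}

// Depth-first flatten: each leaf contributes [path + ".rs", contents].
fn flatten(tree: &Tree, path: PathBuf) -> Vec<String> {
    match tree {
        Tree::Leaf(src) => {
            let path = path.with_extension("rs").to_str().unwrap().to_owned();
            vec![path, src.clone()]
        }
        Tree::Node(children) => children
            .iter()
            .flat_map(|(name, sub)| flatten(sub, path.join(name)))
            .collect(),
    }
}

fn main() {
    // A tree corresponding to src/lib.rs and src/dot/mod.rs.
    let tree = Tree::Node(vec![
        ("lib".into(), Tree::Leaf("// lib".into())),
        (
            "dot".into(),
            Tree::Node(vec![("mod".into(), Tree::Leaf("// mod".into()))]),
        ),
    ]);
    let flat = flatten(&tree, PathBuf::from("/src"));
    assert_eq!(flat, vec!["/src/lib.rs", "// lib", "/src/dot/mod.rs", "// mod"]);
    println!("{:?}", flat);
}
```

The flattened `[filepath, content, ...]` pairing is what lets the wasm boundary return a plain `Vec<JsValue>` instead of a nested structure.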
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/Cargo.toml
[workspace]
members = [
"deps/*",
"rustfmt-wasm",
# Not including rustfmt-nightly to the workspace because of incompatible deps
]
[profile.release]
lto = true
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/Cargo.lock
[[package]]
name = "aho-corasick"
version = "0.6.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "arena"
version = "0.0.0"
dependencies = [
"rustc_data_structures 0.0.0",
]
[[package]]
name = "arrayvec"
version = "0.4.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"nodrop 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "assert_cli"
version = "0.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"colored 1.6.1 (registry+https://github.com/rust-lang/crates.io-index)",
"difference 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"environment 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"failure 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"failure_derive 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_json 1.0.26 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "atty"
version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"termion 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "backtrace"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"backtrace-sys 0.1.24 (registry+https://github.com/rust-lang/crates.io-index)",
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-demangle 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "backtrace-sys"
version = "0.1.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cc 1.0.18 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "bitflags"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "byteorder"
version = "1.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "cargo_metadata"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"error-chain 0.12.0 (registry+https://github.com/rust-lang/crates.io-index)",
"semver 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_derive 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_json 1.0.26 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "cc"
version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "cfg-if"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "colored"
version = "1.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "console_error_panic_hook"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"wasm-bindgen 0.2.30 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "crossbeam-deque"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"crossbeam-epoch 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
"crossbeam-utils 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "crossbeam-epoch"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"arrayvec 0.4.7 (registry+https://github.com/rust-lang/crates.io-index)",
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"crossbeam-utils 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"memoffset 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"nodrop 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)",
"scopeguard 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "crossbeam-utils"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "derive-new"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.30 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.14.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "diff"
version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "difference"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "either"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "ena"
version = "0.9.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "env_logger"
version = "0.5.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"humantime 1.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"termcolor 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "environment"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "error-chain"
version = "0.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"backtrace 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "failure"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"backtrace 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
"failure_derive 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "failure_derive"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.30 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.14.4 (registry+https://github.com/rust-lang/crates.io-index)",
"synstructure 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "fuchsia-zircon"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
"fuchsia-zircon-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "fuchsia-zircon-sys"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "getopts"
version = "0.2.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"unicode-width 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "humantime"
version = "1.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"quick-error 1.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "itertools"
version = "0.7.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"either 1.5.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "itoa"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "lazy_static"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "libc"
version = "0.2.43"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "log"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "maybe-uninit"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "memchr"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "memoffset"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "nodrop"
version = "0.1.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "num_cpus"
version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "owning_ref"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"stable_deref_trait 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "parking_lot"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"owning_ref 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
"parking_lot_core 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "parking_lot_core"
version = "0.2.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"smallvec 0.6.14 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "proc-macro2"
version = "0.4.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"unicode-xid 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "quick-error"
version = "1.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "quote"
version = "0.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.30 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "redox_syscall"
version = "0.1.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "redox_termios"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"redox_syscall 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "regex"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"aho-corasick 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.6.1 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "regex-syntax"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"ucd-util 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc-demangle"
version = "0.1.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "rustc-hash"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"byteorder 1.2.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc-rayon"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"either 1.5.0 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-rayon-core 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc-rayon-core"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"crossbeam-deque 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"num_cpus 1.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc_cratesio_shim"
version = "0.0.0"
dependencies = [
"bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc_data_structures"
version = "0.0.0"
dependencies = [
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"ena 0.9.3 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"parking_lot 0.5.5 (registry+https://github.com/rust-lang/crates.io-index)",
"parking_lot_core 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-hash 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-rayon 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-rayon-core 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc_cratesio_shim 0.0.0",
"serialize 0.0.0",
"smallvec 0.6.14 (registry+https://github.com/rust-lang/crates.io-index)",
"stable_deref_trait 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc_errors"
version = "0.0.0"
dependencies = [
"atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc_data_structures 0.0.0",
"serialize 0.0.0",
"syntax_pos 0.0.0",
"termcolor 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-width 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc_target"
version = "0.0.0"
dependencies = [
"bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc_cratesio_shim 0.0.0",
"serialize 0.0.0",
]
[[package]]
name = "rustfmt-nightly"
version = "0.99.4"
dependencies = [
"assert_cli 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"cargo_metadata 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)",
"derive-new 0.5.5 (registry+https://github.com/rust-lang/crates.io-index)",
"diff 0.1.11 (registry+https://github.com/rust-lang/crates.io-index)",
"env_logger 0.5.12 (registry+https://github.com/rust-lang/crates.io-index)",
"failure 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"getopts 0.2.18 (registry+https://github.com/rust-lang/crates.io-index)",
"itertools 0.7.8 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc_target 0.0.0",
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_derive 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_json 1.0.26 (registry+https://github.com/rust-lang/crates.io-index)",
"syntax 0.0.0",
"syntax_pos 0.0.0",
"term 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"toml 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-segmentation 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustfmt-wasm"
version = "0.99.4"
dependencies = [
"console_error_panic_hook 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
"rustfmt-nightly 0.99.4",
"wasm-bindgen 0.2.30 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "ryu"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "scoped-tls"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "scopeguard"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "semver"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"semver-parser 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "semver-parser"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "serde"
version = "1.0.71"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "serde_derive"
version = "1.0.71"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.30 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.14.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "serde_json"
version = "1.0.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"itoa 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"ryu 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "serialize"
version = "0.0.0"
dependencies = [
"smallvec 0.6.14 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "smallvec"
version = "0.6.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"maybe-uninit 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "stable_deref_trait"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "syn"
version = "0.14.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.30 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-xid 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "syn"
version = "0.15.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.30 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-xid 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "synstructure"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.30 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.14.4 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-xid 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "syntax"
version = "0.0.0"
dependencies = [
"bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc_data_structures 0.0.0",
"rustc_errors 0.0.0",
"rustc_target 0.0.0",
"scoped-tls 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"serialize 0.0.0",
"smallvec 0.6.14 (registry+https://github.com/rust-lang/crates.io-index)",
"syntax_pos 0.0.0",
]
[[package]]
name = "syntax_pos"
version = "0.0.0"
dependencies = [
"arena 0.0.0",
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc_data_structures 0.0.0",
"scoped-tls 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"serialize 0.0.0",
"unicode-width 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "term"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"byteorder 1.2.3 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "termcolor"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"wincolor 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "termcolor"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"wincolor 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "termion"
version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_termios 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "thread_local"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"unreachable 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "toml"
version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "ucd-util"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "unicode-segmentation"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "unicode-width"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "unicode-xid"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "unreachable"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"void 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "utf8-ranges"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "void"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "wasm-bindgen"
version = "0.2.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"wasm-bindgen-macro 0.2.30 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "wasm-bindgen-backend"
version = "0.2.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"proc-macro2 0.4.30 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.15.44 (registry+https://github.com/rust-lang/crates.io-index)",
"wasm-bindgen-shared 0.2.30 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"wasm-bindgen-macro-support 0.2.30 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.30 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.15.44 (registry+https://github.com/rust-lang/crates.io-index)",
"wasm-bindgen-backend 0.2.30 (registry+https://github.com/rust-lang/crates.io-index)",
"wasm-bindgen-shared 0.2.30 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "wincolor"
version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "wincolor"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[metadata]
"checksum aho-corasick 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)" = "c1c6d463cbe7ed28720b5b489e7c083eeb8f90d08be2a0d6bb9e1ffea9ce1afa"
"checksum arrayvec 0.4.7 (registry+https://github.com/rust-lang/crates.io-index)" = "a1e964f9e24d588183fcb43503abda40d288c8657dfc27311516ce2f05675aef"
"checksum assert_cli 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)" = "a29ab7c0ed62970beb0534d637a8688842506d0ff9157de83286dacd065c8149"
"checksum atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)" = "9a7d5b8723950951411ee34d271d99dddcc2035a16ab25310ea2c8cfd4369652"
"checksum backtrace 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)" = "89a47830402e9981c5c41223151efcced65a0510c13097c769cede7efb34782a"
"checksum backtrace-sys 0.1.24 (registry+https://github.com/rust-lang/crates.io-index)" = "c66d56ac8dabd07f6aacdaf633f4b8262f5b3601a810a0dcddffd5c22c69daa0"
"checksum bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)" = "d0c54bb8f454c567f21197eefcdbf5679d0bd99f2ddbe52e84c77061952e6789"
"checksum byteorder 1.2.3 (registry+https://github.com/rust-lang/crates.io-index)" = "74c0b906e9446b0a2e4f760cdb3fa4b2c48cdc6db8766a845c54b6ff063fd2e9"
"checksum cargo_metadata 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)" = "2d6809b327f87369e6f3651efd2c5a96c49847a3ed2559477ecba79014751ee1"
"checksum cc 1.0.18 (registry+https://github.com/rust-lang/crates.io-index)" = "2119ea4867bd2b8ed3aecab467709720b2d55b1bcfe09f772fd68066eaf15275"
"checksum cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "efe5c877e17a9c717a0bf3613b2709f723202c4e4675cc8f12926ded29bcb17e"
"checksum colored 1.6.1 (registry+https://github.com/rust-lang/crates.io-index)" = "dc0a60679001b62fb628c4da80e574b9645ab4646056d7c9018885efffe45533"
"checksum console_error_panic_hook 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)" = "6c5dd2c094474ec60a6acaf31780af270275e3153bafff2db5995b715295762e"
"checksum crossbeam-deque 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "f739f8c5363aca78cfb059edf753d8f0d36908c348f3d8d1503f03d8b75d9cf3"
"checksum crossbeam-epoch 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)" = "927121f5407de9956180ff5e936fe3cf4324279280001cd56b669d28ee7e9150"
"checksum crossbeam-utils 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "2760899e32a1d58d5abb31129f8fae5de75220bc2176e77ff7c627ae45c918d9"
"checksum derive-new 0.5.5 (registry+https://github.com/rust-lang/crates.io-index)" = "899ec79626c14e00ccc9729b4d750bbe67fe76a8f436824c16e0233bbd9d7daa"
"checksum diff 0.1.11 (registry+https://github.com/rust-lang/crates.io-index)" = "3c2b69f912779fbb121ceb775d74d51e915af17aaebc38d28a592843a2dd0a3a"
"checksum difference 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "524cbf6897b527295dff137cec09ecf3a05f4fddffd7dfcd1585403449e74198"
"checksum either 1.5.0 (registry+https://github.com/rust-lang/crates.io-index)" = "3be565ca5c557d7f59e7cfcf1844f9e3033650c929c6566f511e8005f205c1d0"
"checksum ena 0.9.3 (registry+https://github.com/rust-lang/crates.io-index)" = "88dc8393b3c7352f94092497f6b52019643e493b6b890eb417cdb7c46117e621"
"checksum env_logger 0.5.12 (registry+https://github.com/rust-lang/crates.io-index)" = "f4d7e69c283751083d53d01eac767407343b8b69c4bd70058e08adc2637cb257"
"checksum environment 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "1f4b14e20978669064c33b4c1e0fb4083412e40fe56cbea2eae80fd7591503ee"
"checksum error-chain 0.12.0 (registry+https://github.com/rust-lang/crates.io-index)" = "07e791d3be96241c77c43846b665ef1384606da2cd2a48730abe606a12906e02"
"checksum failure 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "7efb22686e4a466b1ec1a15c2898f91fa9cb340452496dca654032de20ff95b9"
"checksum failure_derive 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "946d0e98a50d9831f5d589038d2ca7f8f455b1c21028c0db0e84116a12696426"
"checksum fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "2e9763c69ebaae630ba35f74888db465e49e259ba1bc0eda7d06f4a067615d82"
"checksum fuchsia-zircon-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "3dcaa9ae7725d12cdb85b3ad99a434db70b468c09ded17e012d86b5c1010f7a7"
"checksum getopts 0.2.18 (registry+https://github.com/rust-lang/crates.io-index)" = "0a7292d30132fb5424b354f5dc02512a86e4c516fe544bb7a25e7f266951b797"
"checksum humantime 1.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "0484fda3e7007f2a4a0d9c3a703ca38c71c54c55602ce4660c419fd32e188c9e"
"checksum itertools 0.7.8 (registry+https://github.com/rust-lang/crates.io-index)" = "f58856976b776fedd95533137617a02fb25719f40e7d9b01c7043cd65474f450"
"checksum itoa 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "5adb58558dcd1d786b5f0bd15f3226ee23486e24b7b58304b60f64dc68e62606"
"checksum lazy_static 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "fb497c35d362b6a331cfd94956a07fc2c78a4604cdbee844a81170386b996dd3"
"checksum libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)" = "76e3a3ef172f1a0b9a9ff0dd1491ae5e6c948b94479a3021819ba7d860c8645d"
"checksum log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)" = "61bd98ae7f7b754bc53dca7d44b604f733c6bba044ea6f41bc8d89272d8161d2"
"checksum maybe-uninit 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "60302e4db3a61da70c0cb7991976248362f30319e88850c487b9b95bbf059e00"
"checksum memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "796fba70e76612589ed2ce7f45282f5af869e0fdd7cc6199fa1aa1f1d591ba9d"
"checksum memoffset 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "0f9dc261e2b62d7a622bf416ea3c5245cdd5d9a7fcc428c0d06804dfce1775b3"
"checksum nodrop 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)" = "9a2228dca57108069a5262f2ed8bd2e82496d2e074a06d1ccc7ce1687b6ae0a2"
"checksum num_cpus 1.8.0 (registry+https://github.com/rust-lang/crates.io-index)" = "c51a3322e4bca9d212ad9a158a02abc6934d005490c054a2778df73a70aa0a30"
"checksum owning_ref 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "cdf84f41639e037b484f93433aa3897863b561ed65c6e59c7073d7c561710f37"
"checksum parking_lot 0.5.5 (registry+https://github.com/rust-lang/crates.io-index)" = "d4d05f1349491390b1730afba60bb20d55761bef489a954546b58b4b34e1e2ac"
"checksum parking_lot_core 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)" = "4db1a8ccf734a7bce794cc19b3df06ed87ab2f3907036b693c68f56b4d4537fa"
"checksum proc-macro2 0.4.30 (registry+https://github.com/rust-lang/crates.io-index)" = "cf3d2011ab5c909338f7887f4fc896d35932e29146c12c8d01da6b22a80ba759"
"checksum quick-error 1.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "9274b940887ce9addde99c4eee6b5c44cc494b182b97e73dc8ffdcb3397fd3f0"
"checksum quote 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)" = "e44651a0dc4cdd99f71c83b561e221f714912d11af1a4dff0631f923d53af035"
"checksum rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "eba5f8cb59cc50ed56be8880a5c7b496bfd9bd26394e176bc67884094145c2c5"
"checksum redox_syscall 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)" = "c214e91d3ecf43e9a4e41e578973adeb14b474f2bee858742d127af75a0112b1"
"checksum redox_termios 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "7e891cfe48e9100a70a3b6eb652fef28920c117d366339687bd5576160db0f76"
"checksum regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "5bbbea44c5490a1e84357ff28b7d518b4619a159fed5d25f6c1de2d19cc42814"
"checksum regex-syntax 0.6.1 (registry+https://github.com/rust-lang/crates.io-index)" = "05b06a75f5217880fc5e905952a42750bf44787e56a6c6d6852ed0992f5e1d54"
"checksum rustc-demangle 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)" = "76d7ba1feafada44f2d38eed812bd2489a03c0f5abb975799251518b68848649"
"checksum rustc-hash 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "7540fc8b0c49f096ee9c961cda096467dce8084bec6bdca2fc83895fd9b28cb8"
"checksum rustc-rayon 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "8c6d5a683c6ba4ed37959097e88d71c9e8e26659a3cb5be8b389078e7ad45306"
"checksum rustc-rayon-core 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "40f06724db71e18d68b3b946fdf890ca8c921d9edccc1404fdfdb537b0d12649"
"checksum ryu 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)" = "b96a9549dc8d48f2c283938303c4b5a77aa29bfbc5b54b084fb1630408899a8f"
"checksum scoped-tls 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "332ffa32bf586782a3efaeb58f127980944bbc8c4d6913a86107ac2a5ab24b28"
"checksum scopeguard 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "94258f53601af11e6a49f722422f6e3425c52b06245a5cf9bc09908b174f5e27"
"checksum semver 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)" = "1d7eb9ef2c18661902cc47e535f9bc51b78acd254da71d375c2f6720d9a40403"
"checksum semver-parser 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)" = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3"
"checksum serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)" = "6dfad05c8854584e5f72fb859385ecdfa03af69c3fd0572f0da2d4c95f060bdb"
"checksum serde_derive 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)" = "b719c6d5e9f73fbc37892246d5852333f040caa617b8873c6aced84bcb28e7bb"
"checksum serde_json 1.0.26 (registry+https://github.com/rust-lang/crates.io-index)" = "44dd2cfde475037451fa99b7e5df77aa3cfd1536575fa8e7a538ab36dcde49ae"
"checksum smallvec 0.6.14 (registry+https://github.com/rust-lang/crates.io-index)" = "b97fcaeba89edba30f044a10c6a3cc39df9c3f17d7cd829dd1446cab35f890e0"
"checksum stable_deref_trait 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ffbc596e092fe5f598b12ef46cc03754085ac2f4d8c739ad61c4ae266cc3b3fa"
"checksum syn 0.14.4 (registry+https://github.com/rust-lang/crates.io-index)" = "2beff8ebc3658f07512a413866875adddd20f4fd47b2a4e6c9da65cd281baaea"
"checksum syn 0.15.44 (registry+https://github.com/rust-lang/crates.io-index)" = "9ca4b3b69a77cbe1ffc9e198781b7acb0c7365a883670e8f1c1bc66fba79a5c5"
"checksum synstructure 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)" = "85bb9b7550d063ea184027c9b8c20ac167cd36d3e06b3a40bceb9d746dc1a7b7"
"checksum term 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "5e6b677dd1e8214ea1ef4297f85dbcbed8e8cdddb561040cc998ca2551c37561"
"checksum termcolor 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)" = "adc4587ead41bf016f11af03e55a624c06568b5a19db4e90fde573d805074f83"
"checksum termcolor 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "722426c4a0539da2c4ffd9b419d90ad540b4cff4a053be9069c908d4d07e2836"
"checksum termion 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "689a3bdfaab439fd92bc87df5c4c78417d3cbe537487274e9b0b2dce76e92096"
"checksum thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "279ef31c19ededf577bfd12dfae728040a21f635b06a24cd670ff510edd38963"
"checksum toml 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)" = "a0263c6c02c4db6c8f7681f9fd35e90de799ebd4cfdeab77a38f4ff6b3d8c0d9"
"checksum ucd-util 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "fd2be2d6639d0f8fe6cdda291ad456e23629558d466e2789d2c3e9892bda285d"
"checksum unicode-segmentation 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "aa6024fc12ddfd1c6dbc14a80fa2324d4568849869b779f6bd37e5e4c03344d1"
"checksum unicode-width 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)" = "882386231c45df4700b275c7ff55b6f3698780a650026380e72dabe76fa46526"
"checksum unicode-xid 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "fc72304796d0818e357ead4e000d19c9c174ab23dc11093ac919054d20a6a7fc"
"checksum unreachable 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "382810877fe448991dfc7f0dd6e3ae5d58088fd0ea5e35189655f84e6814fa56"
"checksum utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "662fab6525a98beff2921d7f61a39e7d59e0b425ebc7d0d9e66d316e55124122"
"checksum void 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "6a02e4885ed3bc0f2de90ea6dd45ebcbb66dacffe03547fadbb0eeae2770887d"
"checksum wasm-bindgen 0.2.30 (registry+https://github.com/rust-lang/crates.io-index)" = "00272b34a477f42a334a1da1ede709672c00175506456fcf97ff8be03531598f"
"checksum wasm-bindgen-backend 0.2.30 (registry+https://github.com/rust-lang/crates.io-index)" = "7688b8b25c71bb05600d007f653cf64bff3d43f094d361b623efe09f36238818"
"checksum wasm-bindgen-macro 0.2.30 (registry+https://github.com/rust-lang/crates.io-index)" = "3d71de51e662a77f03f06d2d404afc8447a116bf95633a3b8f1d25ce31394920"
"checksum wasm-bindgen-macro-support 0.2.30 (registry+https://github.com/rust-lang/crates.io-index)" = "d593bf3e9f9e3aad2ec90182c2cf0a9c1c1dc7f463457ec5ab9967d6786c908c"
"checksum wasm-bindgen-shared 0.2.30 (registry+https://github.com/rust-lang/crates.io-index)" = "44c9992a374d369415ea871edbeb9fe68a62145f933e41478933cd4a7e957d15"
"checksum winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "773ef9dcc5f24b7d850d0ff101e542ff24c3b090a9768e03ff889fdef41f00fd"
"checksum winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
"checksum winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
"checksum wincolor 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)" = "eeb06499a3a4d44302791052df005d5232b927ed1a9658146d842165c4de7767"
"checksum wincolor 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "b9dc3aa9dcda98b5a16150c54619c1ead22e3d3a5d458778ae914be760aa981a"
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rust-toolchain.toml
[toolchain]
channel = "nightly-2018-09-07"
targets = [ "wasm32-unknown-unknown" ]
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/Cargo.toml
[package]
name = "rustfmt-nightly"
version = "0.99.4"
authors = ["Nicholas Cameron <ncameron@mozilla.com>", "The Rustfmt developers"]
description = "Tool to find and fix Rust formatting issues"
repository = "https://github.com/rust-lang-nursery/rustfmt"
readme = "README.md"
license = "Apache-2.0/MIT"
build = "build.rs"
categories = ["development-tools"]
[[bin]]
name = "rustfmt"
path = "src/bin/main.rs"
[[bin]]
name = "cargo-fmt"
path = "src/cargo-fmt/main.rs"
[[bin]]
name = "rustfmt-format-diff"
path = "src/format-diff/main.rs"
[[bin]]
name = "git-rustfmt"
path = "src/git-rustfmt/main.rs"
[features]
default = ["cargo-fmt", "rustfmt-format-diff"]
cargo-fmt = []
rustfmt-format-diff = []
[dependencies]
# isatty = "=0.1.8"
itertools = "=0.7.8"
toml = "=0.4.6"
serde = "=1.0.71"
serde_derive = "=1.0.71"
serde_json = "=1.0.26"
unicode-segmentation = "=1.2.1"
regex = "=1.0.2"
term = "=0.5.1"
diff = "=0.1.11"
log = "0.4.3"
env_logger = "=0.5.12"
getopts = "=0.2.18"
derive-new = "=0.5.5"
cargo_metadata = "=0.6"
rustc_target = { path = "../deps/librustc_target" }
syntax = { path = "../deps/libsyntax" }
syntax_pos = { path = "../deps/libsyntax_pos" }
failure = "=0.1.2"
[dev-dependencies]
assert_cli = "0.6"
lazy_static = "1.0.0"
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/Cargo.lock
[[package]]
name = "aho-corasick"
version = "0.6.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "arena"
version = "0.0.0"
dependencies = [
"rustc_data_structures 0.0.0",
]
[[package]]
name = "arrayvec"
version = "0.4.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"nodrop 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "assert_cli"
version = "0.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"colored 1.6.1 (registry+https://github.com/rust-lang/crates.io-index)",
"difference 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"environment 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"failure 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"failure_derive 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_json 1.0.26 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "atty"
version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"termion 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "backtrace"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"backtrace-sys 0.1.24 (registry+https://github.com/rust-lang/crates.io-index)",
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-demangle 0.1.9 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "backtrace-sys"
version = "0.1.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cc 1.0.18 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "bitflags"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "byteorder"
version = "1.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "cargo_metadata"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"error-chain 0.12.0 (registry+https://github.com/rust-lang/crates.io-index)",
"semver 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_derive 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_json 1.0.26 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "cc"
version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "cfg-if"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "colored"
version = "1.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "crossbeam-deque"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"crossbeam-epoch 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
"crossbeam-utils 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "crossbeam-epoch"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"arrayvec 0.4.7 (registry+https://github.com/rust-lang/crates.io-index)",
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"crossbeam-utils 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"memoffset 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"nodrop 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)",
"scopeguard 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "crossbeam-utils"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "derive-new"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.14.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "diff"
version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "difference"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "either"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "ena"
version = "0.9.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "env_logger"
version = "0.5.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"humantime 1.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"termcolor 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "environment"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "error-chain"
version = "0.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"backtrace 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "failure"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"backtrace 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
"failure_derive 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "failure_derive"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.14.8 (registry+https://github.com/rust-lang/crates.io-index)",
"synstructure 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "fuchsia-zircon"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
"fuchsia-zircon-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "fuchsia-zircon-sys"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "getopts"
version = "0.2.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"unicode-width 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "humantime"
version = "1.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"quick-error 1.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "itertools"
version = "0.7.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"either 1.5.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "itoa"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "lazy_static"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"version_check 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "libc"
version = "0.2.43"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "log"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "maybe-uninit"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "memchr"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "memoffset"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "nodrop"
version = "0.1.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "num_cpus"
version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "owning_ref"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"stable_deref_trait 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "parking_lot"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"owning_ref 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
"parking_lot_core 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "parking_lot_core"
version = "0.2.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"smallvec 0.6.14 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "proc-macro2"
version = "0.4.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"unicode-xid 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "quick-error"
version = "1.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "quote"
version = "0.6.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "redox_syscall"
version = "0.1.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "redox_termios"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"redox_syscall 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "regex"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"aho-corasick 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
"utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "regex-syntax"
version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"ucd-util 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc-demangle"
version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "rustc-hash"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"byteorder 1.2.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc-rayon"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"either 1.5.0 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-rayon-core 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc-rayon-core"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"crossbeam-deque 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"num_cpus 1.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc_cratesio_shim"
version = "0.0.0"
dependencies = [
"bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc_data_structures"
version = "0.0.0"
dependencies = [
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"ena 0.9.3 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"parking_lot 0.5.5 (registry+https://github.com/rust-lang/crates.io-index)",
"parking_lot_core 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-hash 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-rayon 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-rayon-core 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc_cratesio_shim 0.0.0",
"serialize 0.0.0",
"smallvec 0.6.14 (registry+https://github.com/rust-lang/crates.io-index)",
"stable_deref_trait 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc_errors"
version = "0.0.0"
dependencies = [
"atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc_data_structures 0.0.0",
"serialize 0.0.0",
"syntax_pos 0.0.0",
"termcolor 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-width 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc_target"
version = "0.0.0"
dependencies = [
"bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc_cratesio_shim 0.0.0",
"serialize 0.0.0",
]
[[package]]
name = "rustfmt-nightly"
version = "0.99.4"
dependencies = [
"assert_cli 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"cargo_metadata 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)",
"derive-new 0.5.5 (registry+https://github.com/rust-lang/crates.io-index)",
"diff 0.1.11 (registry+https://github.com/rust-lang/crates.io-index)",
"env_logger 0.5.12 (registry+https://github.com/rust-lang/crates.io-index)",
"failure 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"getopts 0.2.18 (registry+https://github.com/rust-lang/crates.io-index)",
"itertools 0.7.8 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc_target 0.0.0",
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_derive 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_json 1.0.26 (registry+https://github.com/rust-lang/crates.io-index)",
"syntax 0.0.0",
"syntax_pos 0.0.0",
"term 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"toml 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-segmentation 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "ryu"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "scoped-tls"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "scopeguard"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "semver"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"semver-parser 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "semver-parser"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "serde"
version = "1.0.71"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "serde_derive"
version = "1.0.71"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.14.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "serde_json"
version = "1.0.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"itoa 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"ryu 0.2.4 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "serialize"
version = "0.0.0"
dependencies = [
"smallvec 0.6.14 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "smallvec"
version = "0.6.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"maybe-uninit 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "stable_deref_trait"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "syn"
version = "0.14.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-xid 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "synstructure"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"proc-macro2 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.14.8 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-xid 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "syntax"
version = "0.0.0"
dependencies = [
"bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc_data_structures 0.0.0",
"rustc_errors 0.0.0",
"rustc_target 0.0.0",
"scoped-tls 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"serialize 0.0.0",
"smallvec 0.6.14 (registry+https://github.com/rust-lang/crates.io-index)",
"syntax_pos 0.0.0",
]
[[package]]
name = "syntax_pos"
version = "0.0.0"
dependencies = [
"arena 0.0.0",
"cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc_data_structures 0.0.0",
"scoped-tls 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"serialize 0.0.0",
"unicode-width 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "term"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"byteorder 1.2.4 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "termcolor"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"wincolor 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "termcolor"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"wincolor 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "termion"
version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_termios 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "thread_local"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "toml"
version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "ucd-util"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "unicode-segmentation"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "unicode-width"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "unicode-xid"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "utf8-ranges"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "version_check"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "wincolor"
version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "wincolor"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[metadata]
"checksum aho-corasick 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)" = "c1c6d463cbe7ed28720b5b489e7c083eeb8f90d08be2a0d6bb9e1ffea9ce1afa"
"checksum arrayvec 0.4.7 (registry+https://github.com/rust-lang/crates.io-index)" = "a1e964f9e24d588183fcb43503abda40d288c8657dfc27311516ce2f05675aef"
"checksum assert_cli 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)" = "a29ab7c0ed62970beb0534d637a8688842506d0ff9157de83286dacd065c8149"
"checksum atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)" = "9a7d5b8723950951411ee34d271d99dddcc2035a16ab25310ea2c8cfd4369652"
"checksum backtrace 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)" = "89a47830402e9981c5c41223151efcced65a0510c13097c769cede7efb34782a"
"checksum backtrace-sys 0.1.24 (registry+https://github.com/rust-lang/crates.io-index)" = "c66d56ac8dabd07f6aacdaf633f4b8262f5b3601a810a0dcddffd5c22c69daa0"
"checksum bitflags 1.0.3 (registry+https://github.com/rust-lang/crates.io-index)" = "d0c54bb8f454c567f21197eefcdbf5679d0bd99f2ddbe52e84c77061952e6789"
"checksum byteorder 1.2.4 (registry+https://github.com/rust-lang/crates.io-index)" = "8389c509ec62b9fe8eca58c502a0acaf017737355615243496cde4994f8fa4f9"
"checksum cargo_metadata 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)" = "2d6809b327f87369e6f3651efd2c5a96c49847a3ed2559477ecba79014751ee1"
"checksum cc 1.0.18 (registry+https://github.com/rust-lang/crates.io-index)" = "2119ea4867bd2b8ed3aecab467709720b2d55b1bcfe09f772fd68066eaf15275"
"checksum cfg-if 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "efe5c877e17a9c717a0bf3613b2709f723202c4e4675cc8f12926ded29bcb17e"
"checksum colored 1.6.1 (registry+https://github.com/rust-lang/crates.io-index)" = "dc0a60679001b62fb628c4da80e574b9645ab4646056d7c9018885efffe45533"
"checksum crossbeam-deque 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "f739f8c5363aca78cfb059edf753d8f0d36908c348f3d8d1503f03d8b75d9cf3"
"checksum crossbeam-epoch 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)" = "927121f5407de9956180ff5e936fe3cf4324279280001cd56b669d28ee7e9150"
"checksum crossbeam-utils 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "2760899e32a1d58d5abb31129f8fae5de75220bc2176e77ff7c627ae45c918d9"
"checksum derive-new 0.5.5 (registry+https://github.com/rust-lang/crates.io-index)" = "899ec79626c14e00ccc9729b4d750bbe67fe76a8f436824c16e0233bbd9d7daa"
"checksum diff 0.1.11 (registry+https://github.com/rust-lang/crates.io-index)" = "3c2b69f912779fbb121ceb775d74d51e915af17aaebc38d28a592843a2dd0a3a"
"checksum difference 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "524cbf6897b527295dff137cec09ecf3a05f4fddffd7dfcd1585403449e74198"
"checksum either 1.5.0 (registry+https://github.com/rust-lang/crates.io-index)" = "3be565ca5c557d7f59e7cfcf1844f9e3033650c929c6566f511e8005f205c1d0"
"checksum ena 0.9.3 (registry+https://github.com/rust-lang/crates.io-index)" = "88dc8393b3c7352f94092497f6b52019643e493b6b890eb417cdb7c46117e621"
"checksum env_logger 0.5.12 (registry+https://github.com/rust-lang/crates.io-index)" = "f4d7e69c283751083d53d01eac767407343b8b69c4bd70058e08adc2637cb257"
"checksum environment 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "1f4b14e20978669064c33b4c1e0fb4083412e40fe56cbea2eae80fd7591503ee"
"checksum error-chain 0.12.0 (registry+https://github.com/rust-lang/crates.io-index)" = "07e791d3be96241c77c43846b665ef1384606da2cd2a48730abe606a12906e02"
"checksum failure 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "7efb22686e4a466b1ec1a15c2898f91fa9cb340452496dca654032de20ff95b9"
"checksum failure_derive 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "946d0e98a50d9831f5d589038d2ca7f8f455b1c21028c0db0e84116a12696426"
"checksum fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "2e9763c69ebaae630ba35f74888db465e49e259ba1bc0eda7d06f4a067615d82"
"checksum fuchsia-zircon-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "3dcaa9ae7725d12cdb85b3ad99a434db70b468c09ded17e012d86b5c1010f7a7"
"checksum getopts 0.2.18 (registry+https://github.com/rust-lang/crates.io-index)" = "0a7292d30132fb5424b354f5dc02512a86e4c516fe544bb7a25e7f266951b797"
"checksum humantime 1.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "0484fda3e7007f2a4a0d9c3a703ca38c71c54c55602ce4660c419fd32e188c9e"
"checksum itertools 0.7.8 (registry+https://github.com/rust-lang/crates.io-index)" = "f58856976b776fedd95533137617a02fb25719f40e7d9b01c7043cd65474f450"
"checksum itoa 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "5adb58558dcd1d786b5f0bd15f3226ee23486e24b7b58304b60f64dc68e62606"
"checksum lazy_static 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ca488b89a5657b0a2ecd45b95609b3e848cf1755da332a0da46e2b2b1cb371a7"
"checksum libc 0.2.43 (registry+https://github.com/rust-lang/crates.io-index)" = "76e3a3ef172f1a0b9a9ff0dd1491ae5e6c948b94479a3021819ba7d860c8645d"
"checksum log 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)" = "61bd98ae7f7b754bc53dca7d44b604f733c6bba044ea6f41bc8d89272d8161d2"
"checksum maybe-uninit 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "60302e4db3a61da70c0cb7991976248362f30319e88850c487b9b95bbf059e00"
"checksum memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "796fba70e76612589ed2ce7f45282f5af869e0fdd7cc6199fa1aa1f1d591ba9d"
"checksum memoffset 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "0f9dc261e2b62d7a622bf416ea3c5245cdd5d9a7fcc428c0d06804dfce1775b3"
"checksum nodrop 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)" = "9a2228dca57108069a5262f2ed8bd2e82496d2e074a06d1ccc7ce1687b6ae0a2"
"checksum num_cpus 1.8.0 (registry+https://github.com/rust-lang/crates.io-index)" = "c51a3322e4bca9d212ad9a158a02abc6934d005490c054a2778df73a70aa0a30"
"checksum owning_ref 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "cdf84f41639e037b484f93433aa3897863b561ed65c6e59c7073d7c561710f37"
"checksum parking_lot 0.5.5 (registry+https://github.com/rust-lang/crates.io-index)" = "d4d05f1349491390b1730afba60bb20d55761bef489a954546b58b4b34e1e2ac"
"checksum parking_lot_core 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)" = "4db1a8ccf734a7bce794cc19b3df06ed87ab2f3907036b693c68f56b4d4537fa"
"checksum proc-macro2 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)" = "ee5697238f0d893c7f0ecc59c0999f18d2af85e424de441178bcacc9f9e6cf67"
"checksum quick-error 1.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "9274b940887ce9addde99c4eee6b5c44cc494b182b97e73dc8ffdcb3397fd3f0"
"checksum quote 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)" = "ed7d650913520df631972f21e104a4fa2f9c82a14afc65d17b388a2e29731e7c"
"checksum rand 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)" = "8356f47b32624fef5b3301c1be97e5944ecdd595409cc5da11d05f211db6cfbd"
"checksum redox_syscall 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)" = "c214e91d3ecf43e9a4e41e578973adeb14b474f2bee858742d127af75a0112b1"
"checksum redox_termios 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "7e891cfe48e9100a70a3b6eb652fef28920c117d366339687bd5576160db0f76"
"checksum regex 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "5bbbea44c5490a1e84357ff28b7d518b4619a159fed5d25f6c1de2d19cc42814"
"checksum regex-syntax 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)" = "747ba3b235651f6e2f67dfa8bcdcd073ddb7c243cb21c442fc12395dfcac212d"
"checksum rustc-demangle 0.1.9 (registry+https://github.com/rust-lang/crates.io-index)" = "bcfe5b13211b4d78e5c2cadfebd7769197d95c639c35a50057eb4c05de811395"
"checksum rustc-hash 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "7540fc8b0c49f096ee9c961cda096467dce8084bec6bdca2fc83895fd9b28cb8"
"checksum rustc-rayon 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "8c6d5a683c6ba4ed37959097e88d71c9e8e26659a3cb5be8b389078e7ad45306"
"checksum rustc-rayon-core 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "40f06724db71e18d68b3b946fdf890ca8c921d9edccc1404fdfdb537b0d12649"
"checksum ryu 0.2.4 (registry+https://github.com/rust-lang/crates.io-index)" = "fd0568787116e13c652377b6846f5931454a363a8fdf8ae50463ee40935b278b"
"checksum scoped-tls 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "332ffa32bf586782a3efaeb58f127980944bbc8c4d6913a86107ac2a5ab24b28"
"checksum scopeguard 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "94258f53601af11e6a49f722422f6e3425c52b06245a5cf9bc09908b174f5e27"
"checksum semver 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)" = "1d7eb9ef2c18661902cc47e535f9bc51b78acd254da71d375c2f6720d9a40403"
"checksum semver-parser 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)" = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3"
"checksum serde 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)" = "6dfad05c8854584e5f72fb859385ecdfa03af69c3fd0572f0da2d4c95f060bdb"
"checksum serde_derive 1.0.71 (registry+https://github.com/rust-lang/crates.io-index)" = "b719c6d5e9f73fbc37892246d5852333f040caa617b8873c6aced84bcb28e7bb"
"checksum serde_json 1.0.26 (registry+https://github.com/rust-lang/crates.io-index)" = "44dd2cfde475037451fa99b7e5df77aa3cfd1536575fa8e7a538ab36dcde49ae"
"checksum smallvec 0.6.14 (registry+https://github.com/rust-lang/crates.io-index)" = "b97fcaeba89edba30f044a10c6a3cc39df9c3f17d7cd829dd1446cab35f890e0"
"checksum stable_deref_trait 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ffbc596e092fe5f598b12ef46cc03754085ac2f4d8c739ad61c4ae266cc3b3fa"
"checksum syn 0.14.8 (registry+https://github.com/rust-lang/crates.io-index)" = "b7bfcbb0c068d0f642a0ffbd5c604965a360a61f99e8add013cef23a838614f3"
"checksum synstructure 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)" = "85bb9b7550d063ea184027c9b8c20ac167cd36d3e06b3a40bceb9d746dc1a7b7"
"checksum term 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "5e6b677dd1e8214ea1ef4297f85dbcbed8e8cdddb561040cc998ca2551c37561"
"checksum termcolor 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)" = "adc4587ead41bf016f11af03e55a624c06568b5a19db4e90fde573d805074f83"
"checksum termcolor 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "722426c4a0539da2c4ffd9b419d90ad540b4cff4a053be9069c908d4d07e2836"
"checksum termion 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "689a3bdfaab439fd92bc87df5c4c78417d3cbe537487274e9b0b2dce76e92096"
"checksum thread_local 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)" = "c6b53e329000edc2b34dbe8545fd20e55a333362d0a321909685a19bd28c3f1b"
"checksum toml 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)" = "a0263c6c02c4db6c8f7681f9fd35e90de799ebd4cfdeab77a38f4ff6b3d8c0d9"
"checksum ucd-util 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "fd2be2d6639d0f8fe6cdda291ad456e23629558d466e2789d2c3e9892bda285d"
"checksum unicode-segmentation 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "aa6024fc12ddfd1c6dbc14a80fa2324d4568849869b779f6bd37e5e4c03344d1"
"checksum unicode-width 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)" = "882386231c45df4700b275c7ff55b6f3698780a650026380e72dabe76fa46526"
"checksum unicode-xid 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "fc72304796d0818e357ead4e000d19c9c174ab23dc11093ac919054d20a6a7fc"
"checksum utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "662fab6525a98beff2921d7f61a39e7d59e0b425ebc7d0d9e66d316e55124122"
"checksum version_check 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "7716c242968ee87e5542f8021178248f267f295a5c4803beae8b8b7fd9bc6051"
"checksum winapi 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "773ef9dcc5f24b7d850d0ff101e542ff24c3b090a9768e03ff889fdef41f00fd"
"checksum winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
"checksum winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
"checksum wincolor 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)" = "eeb06499a3a4d44302791052df005d5232b927ed1a9658146d842165c4de7767"
"checksum wincolor 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "b9dc3aa9dcda98b5a16150c54619c1ead22e3d3a5d458778ae914be760aa981a"
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/build.rs
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::env;
use std::fs::File;
use std::io::Write;
use std::path::{Path, PathBuf};
use std::process::Command;
fn main() {
// Only check .git/HEAD dirty status if it exists - doing so when
// building dependent crates may lead to false positives and rebuilds
if Path::new(".git/HEAD").exists() {
println!("cargo:rerun-if-changed=.git/HEAD");
}
println!("cargo:rerun-if-env-changed=CFG_RELEASE_CHANNEL");
let out_dir = PathBuf::from(env::var_os("OUT_DIR").unwrap());
File::create(out_dir.join("commit-info.txt"))
.unwrap()
.write_all(commit_info().as_bytes())
.unwrap();
}
// Try to get hash and date of the last commit on a best effort basis. If anything goes wrong
// (git not installed or if this is not a git repository) just return an empty string.
fn commit_info() -> String {
match (channel(), commit_hash(), commit_date()) {
(channel, Some(hash), Some(date)) => {
format!("{} ({} {})", channel, hash.trim_right(), date)
}
_ => String::new(),
}
}
fn channel() -> String {
if let Ok(channel) = env::var("CFG_RELEASE_CHANNEL") {
channel
} else {
"nightly".to_owned()
}
}
fn commit_hash() -> Option<String> {
Command::new("git")
.args(&["rev-parse", "--short", "HEAD"])
.output()
.ok()
.and_then(|r| String::from_utf8(r.stdout).ok())
}
fn commit_date() -> Option<String> {
Command::new("git")
.args(&["log", "-1", "--date=short", "--pretty=format:%cd"])
.output()
.ok()
.and_then(|r| String::from_utf8(r.stdout).ok())
}
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/closures.rs
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use config::lists::*;
use syntax::parse::classify;
use syntax::source_map::Span;
use syntax::{ast, ptr};
use expr::{block_contains_comment, is_simple_block, is_unsafe_block, rewrite_cond, ToExpr};
use items::{span_hi_for_arg, span_lo_for_arg};
use lists::{definitive_tactic, itemize_list, write_list, ListFormatting, Separator};
use rewrite::{Rewrite, RewriteContext};
use shape::Shape;
use source_map::SpanUtils;
use utils::{last_line_width, left_most_sub_expr, stmt_expr};
// This module is pretty messy because of the rules around closures and blocks:
// FIXME - the below is probably no longer true in full.
// * if there is a return type, then there must be braces,
// * given a closure with braces, whether that is parsed to give an inner block
// or not depends on if there is a return type and if there are statements
// in that block,
// * if the first expression in the body ends with a block (i.e., is a
// statement without needing a semi-colon), then adding or removing braces
// can change whether it is treated as an expression or statement.
pub fn rewrite_closure(
capture: ast::CaptureBy,
asyncness: ast::IsAsync,
movability: ast::Movability,
fn_decl: &ast::FnDecl,
body: &ast::Expr,
span: Span,
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
debug!("rewrite_closure {:?}", body);
let (prefix, extra_offset) = rewrite_closure_fn_decl(
capture, asyncness, movability, fn_decl, body, span, context, shape,
)?;
// 1 = space between `|...|` and body.
let body_shape = shape.offset_left(extra_offset)?;
if let ast::ExprKind::Block(ref block, _) = body.node {
// The body of the closure is an empty block.
if block.stmts.is_empty() && !block_contains_comment(block, context.source_map) {
return body
.rewrite(context, shape)
.map(|s| format!("{} {}", prefix, s));
}
let result = match fn_decl.output {
ast::FunctionRetTy::Default(_) => {
try_rewrite_without_block(body, &prefix, context, shape, body_shape)
}
_ => None,
};
result.or_else(|| {
// Either we require a block, or tried without and failed.
rewrite_closure_block(block, &prefix, context, body_shape)
})
} else {
rewrite_closure_expr(body, &prefix, context, body_shape).or_else(|| {
// The closure originally had a non-block expression, but we can't fit on
// one line, so we'll insert a block.
rewrite_closure_with_block(body, &prefix, context, body_shape)
})
}
}
fn try_rewrite_without_block(
expr: &ast::Expr,
prefix: &str,
context: &RewriteContext,
shape: Shape,
body_shape: Shape,
) -> Option<String> {
let expr = get_inner_expr(expr, prefix, context);
if is_block_closure_forced(context, expr) {
rewrite_closure_with_block(expr, prefix, context, shape)
} else {
rewrite_closure_expr(expr, prefix, context, body_shape)
}
}
fn get_inner_expr<'a>(
expr: &'a ast::Expr,
prefix: &str,
context: &RewriteContext,
) -> &'a ast::Expr {
if let ast::ExprKind::Block(ref block, _) = expr.node {
if !needs_block(block, prefix, context) {
// block.stmts.len() == 1
if let Some(expr) = stmt_expr(&block.stmts[0]) {
return get_inner_expr(expr, prefix, context);
}
}
}
expr
}
// Figure out if a block is necessary.
fn needs_block(block: &ast::Block, prefix: &str, context: &RewriteContext) -> bool {
is_unsafe_block(block)
|| block.stmts.len() > 1
|| block_contains_comment(block, context.source_map)
|| prefix.contains('\n')
}
fn veto_block(e: &ast::Expr) -> bool {
match e.node {
ast::ExprKind::Call(..)
| ast::ExprKind::Binary(..)
| ast::ExprKind::Cast(..)
| ast::ExprKind::Type(..)
| ast::ExprKind::Assign(..)
| ast::ExprKind::AssignOp(..)
| ast::ExprKind::Field(..)
| ast::ExprKind::Index(..)
| ast::ExprKind::Range(..)
| ast::ExprKind::Try(..) => true,
_ => false,
}
}
// Rewrite closure with a single expression wrapping its body with block.
fn rewrite_closure_with_block(
body: &ast::Expr,
prefix: &str,
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
let left_most = left_most_sub_expr(body);
let veto_block = veto_block(body) && !classify::expr_requires_semi_to_be_stmt(left_most);
if veto_block {
return None;
}
let block = ast::Block {
stmts: vec![ast::Stmt {
id: ast::NodeId::new(0),
node: ast::StmtKind::Expr(ptr::P(body.clone())),
span: body.span,
}],
id: ast::NodeId::new(0),
rules: ast::BlockCheckMode::Default,
span: body.span,
recovered: false,
};
let block = ::expr::rewrite_block_with_visitor(context, "", &block, None, None, shape, false)?;
Some(format!("{} {}", prefix, block))
}
// Rewrite a closure with a single expression without wrapping its body with a block.
fn rewrite_closure_expr(
expr: &ast::Expr,
prefix: &str,
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
fn allow_multi_line(expr: &ast::Expr) -> bool {
match expr.node {
ast::ExprKind::Match(..)
| ast::ExprKind::Block(..)
// | ast::ExprKind::TryBlock(..)
| ast::ExprKind::Loop(..)
| ast::ExprKind::Struct(..) => true,
ast::ExprKind::AddrOf(_, ref expr)
| ast::ExprKind::Box(ref expr)
| ast::ExprKind::Try(ref expr)
| ast::ExprKind::Unary(_, ref expr)
| ast::ExprKind::Cast(ref expr, _) => allow_multi_line(expr),
_ => false,
}
}
// When rewriting a closure's body without a block, we require it to fit on a single line
// unless it is a block-like expression or we are inside a macro call.
let veto_multiline = (!allow_multi_line(expr) && !context.inside_macro())
|| context.config.force_multiline_blocks();
expr.rewrite(context, shape)
.and_then(|rw| {
if veto_multiline && rw.contains('\n') {
None
} else {
Some(rw)
}
}).map(|rw| format!("{} {}", prefix, rw))
}
// Rewrite a closure whose body is a block.
fn rewrite_closure_block(
block: &ast::Block,
prefix: &str,
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
Some(format!("{} {}", prefix, block.rewrite(context, shape)?))
}
// Return type is (prefix, extra_offset)
fn rewrite_closure_fn_decl(
capture: ast::CaptureBy,
asyncness: ast::IsAsync,
movability: ast::Movability,
fn_decl: &ast::FnDecl,
body: &ast::Expr,
span: Span,
context: &RewriteContext,
shape: Shape,
) -> Option<(String, usize)> {
let is_async = if asyncness.is_async() { "async " } else { "" };
let mover = if capture == ast::CaptureBy::Value {
"move "
} else {
""
};
let immovable = if movability == ast::Movability::Static {
"static "
} else {
""
};
// 4 = "|| {".len(), which is overconservative when the closure consists of
// a single expression.
let nested_shape = shape
.shrink_left(is_async.len() + mover.len() + immovable.len())?
.sub_width(4)?;
// 1 = |
let argument_offset = nested_shape.indent + 1;
let arg_shape = nested_shape.offset_left(1)?.visual_indent(0);
let ret_str = fn_decl.output.rewrite(context, arg_shape)?;
let arg_items = itemize_list(
context.snippet_provider,
fn_decl.inputs.iter(),
"|",
",",
|arg| span_lo_for_arg(arg),
|arg| span_hi_for_arg(context, arg),
|arg| arg.rewrite(context, arg_shape),
context.snippet_provider.span_after(span, "|"),
body.span.lo(),
false,
);
let item_vec = arg_items.collect::<Vec<_>>();
// 1 = space between arguments and return type.
let horizontal_budget = nested_shape.width.saturating_sub(ret_str.len() + 1);
let tactic = definitive_tactic(
&item_vec,
ListTactic::HorizontalVertical,
Separator::Comma,
horizontal_budget,
);
let arg_shape = match tactic {
DefinitiveListTactic::Horizontal => arg_shape.sub_width(ret_str.len() + 1)?,
_ => arg_shape,
};
let fmt = ListFormatting::new(arg_shape, context.config)
.tactic(tactic)
.preserve_newline(true);
let list_str = write_list(&item_vec, &fmt)?;
let mut prefix = format!("{}{}{}|{}|", is_async, immovable, mover, list_str);
if !ret_str.is_empty() {
if prefix.contains('\n') {
prefix.push('\n');
prefix.push_str(&argument_offset.to_string(context.config));
} else {
prefix.push(' ');
}
prefix.push_str(&ret_str);
}
// 1 = space between `|...|` and body.
let extra_offset = last_line_width(&prefix) + 1;
Some((prefix, extra_offset))
}
// Rewrite a closure that is placed at the end of a function call's arguments.
// Returns `None` if the reformatted closure 'looks bad'.
pub fn rewrite_last_closure(
context: &RewriteContext,
expr: &ast::Expr,
shape: Shape,
) -> Option<String> {
if let ast::ExprKind::Closure(capture, asyncness, movability, ref fn_decl, ref body, _) =
expr.node
{
let body = match body.node {
ast::ExprKind::Block(ref block, _)
if !is_unsafe_block(block)
&& is_simple_block(block, Some(&body.attrs), context.source_map) =>
{
stmt_expr(&block.stmts[0]).unwrap_or(body)
}
_ => body,
};
let (prefix, extra_offset) = rewrite_closure_fn_decl(
capture, asyncness, movability, fn_decl, body, expr.span, context, shape,
)?;
// If the closure goes multi-line before its body, do not overflow the closure.
if prefix.contains('\n') {
return None;
}
let body_shape = shape.offset_left(extra_offset)?;
// We force a block for the body of the closure for certain kinds of expressions.
if is_block_closure_forced(context, body) {
return rewrite_closure_with_block(body, &prefix, context, body_shape).and_then(
|body_str| {
// If the expression can fit on a single line, we need not force a block closure.
if body_str.lines().count() <= 7 {
match rewrite_closure_expr(body, &prefix, context, shape) {
Some(ref single_line_body_str)
if !single_line_body_str.contains('\n') =>
{
Some(single_line_body_str.clone())
}
_ => Some(body_str),
}
} else {
Some(body_str)
}
},
);
}
// When overflowing a closure that consists of a single control-flow expression,
// force a block if its condition spans multiple lines.
let is_multi_lined_cond = rewrite_cond(context, body, body_shape)
.map(|cond| cond.contains('\n') || cond.len() > body_shape.width)
.unwrap_or(false);
if is_multi_lined_cond {
return rewrite_closure_with_block(body, &prefix, context, body_shape);
}
// Seems fine, just format the closure in usual manner.
return expr.rewrite(context, shape);
}
None
}
/// Returns `true` if the given slice of arguments contains more than one `ast::ExprKind::Closure`.
pub fn args_have_many_closure<T>(args: &[&T]) -> bool
where
T: ToExpr,
{
args.iter()
.filter(|arg| {
arg.to_expr()
.map(|e| match e.node {
ast::ExprKind::Closure(..) => true,
_ => false,
}).unwrap_or(false)
}).count()
> 1
}
fn is_block_closure_forced(context: &RewriteContext, expr: &ast::Expr) -> bool {
// If we are inside a macro, we do not want to add or remove a block from the closure body.
if context.inside_macro() {
false
} else {
is_block_closure_forced_inner(expr)
}
}
fn is_block_closure_forced_inner(expr: &ast::Expr) -> bool {
match expr.node {
ast::ExprKind::If(..)
| ast::ExprKind::IfLet(..)
| ast::ExprKind::While(..)
| ast::ExprKind::WhileLet(..)
| ast::ExprKind::ForLoop(..) => true,
ast::ExprKind::AddrOf(_, ref expr)
| ast::ExprKind::Box(ref expr)
| ast::ExprKind::Try(ref expr)
| ast::ExprKind::Unary(_, ref expr)
| ast::ExprKind::Cast(ref expr, _) => is_block_closure_forced_inner(expr),
_ => false,
}
}
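The recursive wrapper-peeling used by `is_block_closure_forced_inner` above (descending through `AddrOf`, `Box`, `Try`, `Unary`, and `Cast` before classifying the innermost expression) can be sketched standalone. The `Expr` enum below is a hypothetical miniature stand-in for `ast::Expr`, not the real libsyntax type:

```rust
// Miniature model of wrapper-peeling: control-flow expressions force a
// block body; wrapper expressions defer to their inner expression.
enum Expr {
    If,               // control flow: forces a block
    Lit(i64),         // leaf: does not force a block
    Try(Box<Expr>),   // wrapper: look inside
    Unary(Box<Expr>), // wrapper: look inside
}

fn is_block_forced(expr: &Expr) -> bool {
    match expr {
        Expr::If => true,
        Expr::Try(inner) | Expr::Unary(inner) => is_block_forced(inner),
        Expr::Lit(_) => false,
    }
}

fn main() {
    // `(if ...)?` still forces a block because the `Try` wrapper is peeled.
    assert!(is_block_forced(&Expr::Try(Box::new(Expr::If))));
    assert!(!is_block_forced(&Expr::Unary(Box::new(Expr::Lit(1)))));
    println!("ok");
}
```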
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/imports.rs
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::cmp::Ordering;
use config::lists::*;
use syntax::ast::{self, UseTreeKind};
use syntax::source_map::{self, BytePos, Span, DUMMY_SP};
use comment::combine_strs_with_missing_comments;
use config::IndentStyle;
use lists::{definitive_tactic, itemize_list, write_list, ListFormatting, ListItem, Separator};
use rewrite::{Rewrite, RewriteContext};
use shape::Shape;
use source_map::SpanUtils;
use spanned::Spanned;
use utils::{is_same_visibility, mk_sp, rewrite_ident};
use visitor::FmtVisitor;
use std::borrow::Cow;
use std::fmt;
/// Returns a name imported by a `use` declaration. e.g. returns `Ordering`
/// for `std::cmp::Ordering` and `self` for `std::cmp::self`.
pub fn path_to_imported_ident(path: &ast::Path) -> ast::Ident {
path.segments.last().unwrap().ident
}
impl<'a> FmtVisitor<'a> {
pub fn format_import(&mut self, item: &ast::Item, tree: &ast::UseTree) {
let span = item.span();
let shape = self.shape();
let rw = UseTree::from_ast(
&self.get_context(),
tree,
None,
Some(item.vis.clone()),
Some(item.span.lo()),
Some(item.attrs.clone()),
).rewrite_top_level(&self.get_context(), shape);
match rw {
Some(ref s) if s.is_empty() => {
// Format up to last newline
let prev_span = mk_sp(self.last_pos, source!(self, span).lo());
let trimmed_snippet = self.snippet(prev_span).trim_right();
let span_end = self.last_pos + BytePos(trimmed_snippet.len() as u32);
self.format_missing(span_end);
// We have an excessive newline from the removed import.
if self.buffer.ends_with('\n') {
self.buffer.pop();
self.line_number -= 1;
}
self.last_pos = source!(self, span).hi();
}
Some(ref s) => {
self.format_missing_with_indent(source!(self, span).lo());
self.push_str(s);
self.last_pos = source!(self, span).hi();
}
None => {
self.format_missing_with_indent(source!(self, span).lo());
self.format_missing(source!(self, span).hi());
}
}
}
}
// Ordering of imports
// We order imports by translating to our own representation and then sorting.
// The Rust AST data structures are really bad for this. Rustfmt applies a bunch
// of normalisations to imports and since we want to sort based on the result
// of these (and to maintain idempotence) we must apply the same normalisations
// to the data structures for sorting.
//
// We sort `self` and `super` before other imports, then identifier imports,
// then glob imports, then lists of imports. We do not take aliases into account
// when ordering unless the imports are identical except for the alias (rare in
// practice).
// FIXME(#2531) - we should unify the comparison code here with the formatting
// code elsewhere since we are essentially string-ifying twice. Furthermore, by
// parsing to our own format on comparison, we repeat a lot of work when
// sorting.
// FIXME we do a lot of allocation to make our own representation.
#[derive(Clone, Eq, PartialEq)]
pub enum UseSegment {
Ident(String, Option<String>),
Slf(Option<String>),
Super(Option<String>),
Glob,
List(Vec<UseTree>),
}
#[derive(Clone)]
pub struct UseTree {
pub path: Vec<UseSegment>,
pub span: Span,
// Comment information within nested use tree.
pub list_item: Option<ListItem>,
// Additional fields for top level use items.
// Should we have another struct for top-level use items rather than reusing this?
visibility: Option<ast::Visibility>,
attrs: Option<Vec<ast::Attribute>>,
}
impl PartialEq for UseTree {
fn eq(&self, other: &UseTree) -> bool {
self.path == other.path
}
}
impl Eq for UseTree {}
impl Spanned for UseTree {
fn span(&self) -> Span {
let lo = if let Some(ref attrs) = self.attrs {
attrs.iter().next().map_or(self.span.lo(), |a| a.span.lo())
} else {
self.span.lo()
};
mk_sp(lo, self.span.hi())
}
}
impl UseSegment {
// Clone a version of self with any top-level alias removed.
fn remove_alias(&self) -> UseSegment {
match *self {
UseSegment::Ident(ref s, _) => UseSegment::Ident(s.clone(), None),
UseSegment::Slf(_) => UseSegment::Slf(None),
UseSegment::Super(_) => UseSegment::Super(None),
_ => self.clone(),
}
}
fn from_path_segment(
context: &RewriteContext,
path_seg: &ast::PathSegment,
) -> Option<UseSegment> {
let name = rewrite_ident(context, path_seg.ident);
if name.is_empty() || name == "{{root}}" {
return None;
}
Some(match name {
"self" => UseSegment::Slf(None),
"super" => UseSegment::Super(None),
_ => UseSegment::Ident((*name).to_owned(), None),
})
}
}
pub fn merge_use_trees(use_trees: Vec<UseTree>) -> Vec<UseTree> {
let mut result = Vec::with_capacity(use_trees.len());
for use_tree in use_trees {
if use_tree.has_comment() || use_tree.attrs.is_some() {
result.push(use_tree);
continue;
}
for flattened in use_tree.flatten() {
merge_use_trees_inner(&mut result, flattened);
}
}
result
}
fn merge_use_trees_inner(trees: &mut Vec<UseTree>, use_tree: UseTree) {
for tree in trees.iter_mut() {
if tree.share_prefix(&use_tree) {
tree.merge(use_tree);
return;
}
}
trees.push(use_tree);
}
impl fmt::Debug for UseTree {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fmt::Display::fmt(self, f)
}
}
impl fmt::Debug for UseSegment {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fmt::Display::fmt(self, f)
}
}
impl fmt::Display for UseSegment {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
UseSegment::Glob => write!(f, "*"),
UseSegment::Ident(ref s, _) => write!(f, "{}", s),
UseSegment::Slf(..) => write!(f, "self"),
UseSegment::Super(..) => write!(f, "super"),
UseSegment::List(ref list) => {
write!(f, "{{")?;
for (i, item) in list.iter().enumerate() {
let is_last = i == list.len() - 1;
write!(f, "{}", item)?;
if !is_last {
write!(f, ", ")?;
}
}
write!(f, "}}")
}
}
}
}
impl fmt::Display for UseTree {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
for (i, segment) in self.path.iter().enumerate() {
let is_last = i == self.path.len() - 1;
write!(f, "{}", segment)?;
if !is_last {
write!(f, "::")?;
}
}
Ok(())
}
}
impl UseTree {
// Rewrite use tree with `use ` and a trailing `;`.
pub fn rewrite_top_level(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
let vis = self.visibility.as_ref().map_or(Cow::from(""), |vis| {
::utils::format_visibility(context, &vis)
});
let use_str = self
.rewrite(context, shape.offset_left(vis.len())?)
.map(|s| {
if s.is_empty() {
s.to_owned()
} else {
format!("{}use {};", vis, s)
}
})?;
if let Some(ref attrs) = self.attrs {
let attr_str = attrs.rewrite(context, shape)?;
let lo = attrs.last().as_ref()?.span().hi();
let hi = self.span.lo();
let span = mk_sp(lo, hi);
combine_strs_with_missing_comments(context, &attr_str, &use_str, span, shape, false)
} else {
Some(use_str)
}
}
// FIXME: Use correct span?
// The given span is essentially incorrect, since we are reconstructing
// use statements. This should not be a problem, though, since we have
// already tried to extract comments and observed that there are no comments
// around the given use item, and the span will not be used afterward.
fn from_path(path: Vec<UseSegment>, span: Span) -> UseTree {
UseTree {
path,
span,
list_item: None,
visibility: None,
attrs: None,
}
}
pub fn from_ast_with_normalization(
context: &RewriteContext,
item: &ast::Item,
) -> Option<UseTree> {
match item.node {
ast::ItemKind::Use(ref use_tree) => Some(
UseTree::from_ast(
context,
use_tree,
None,
Some(item.vis.clone()),
Some(item.span.lo()),
if item.attrs.is_empty() {
None
} else {
Some(item.attrs.clone())
},
).normalize(),
),
_ => None,
}
}
fn from_ast(
context: &RewriteContext,
a: &ast::UseTree,
list_item: Option<ListItem>,
visibility: Option<ast::Visibility>,
opt_lo: Option<BytePos>,
attrs: Option<Vec<ast::Attribute>>,
) -> UseTree {
let span = if let Some(lo) = opt_lo {
mk_sp(lo, a.span.hi())
} else {
a.span
};
let mut result = UseTree {
path: vec![],
span,
list_item,
visibility,
attrs,
};
for p in &a.prefix.segments {
if let Some(use_segment) = UseSegment::from_path_segment(context, p) {
result.path.push(use_segment);
}
}
match a.kind {
UseTreeKind::Glob => {
result.path.push(UseSegment::Glob);
}
UseTreeKind::Nested(ref list) => {
// Extract comments between nested use items.
// This needs to be done before sorting use items.
let items: Vec<_> = itemize_list(
context.snippet_provider,
list.iter().map(|(tree, _)| tree),
"}",
",",
|tree| tree.span.lo(),
|tree| tree.span.hi(),
|_| Some("".to_owned()), // We only need comments for now.
context.snippet_provider.span_after(a.span, "{"),
a.span.hi(),
false,
).collect();
result.path.push(UseSegment::List(
list.iter()
.zip(items.into_iter())
.map(|(t, list_item)| {
Self::from_ast(context, &t.0, Some(list_item), None, None, None)
}).collect(),
));
}
UseTreeKind::Simple(ref rename, ..) => {
let name = rewrite_ident(context, path_to_imported_ident(&a.prefix)).to_owned();
let alias = rename.and_then(|ident| {
if ident.name == "_" {
// for impl-only-use
Some("_".to_owned())
} else if ident == path_to_imported_ident(&a.prefix) {
None
} else {
Some(rewrite_ident(context, ident).to_owned())
}
});
let segment = match name.as_ref() {
"self" => UseSegment::Slf(alias),
"super" => UseSegment::Super(alias),
_ => UseSegment::Ident(name, alias),
};
// `name` is already in result.
result.path.pop();
result.path.push(segment);
}
}
result
}
// Do the adjustments that rustfmt does elsewhere to use paths.
pub fn normalize(mut self) -> UseTree {
let mut last = self.path.pop().expect("Empty use tree?");
// Hack around borrow checker.
let mut normalize_sole_list = false;
let mut aliased_self = false;
// Remove foo::{} or self without attributes.
match last {
_ if self.attrs.is_some() => (),
UseSegment::List(ref list) if list.is_empty() => {
self.path = vec![];
return self;
}
UseSegment::Slf(None) if self.path.is_empty() && self.visibility.is_some() => {
self.path = vec![];
return self;
}
_ => (),
}
// Normalise foo::self -> foo.
if let UseSegment::Slf(None) = last {
if !self.path.is_empty() {
return self;
}
}
// Normalise foo::self as bar -> foo as bar.
if let UseSegment::Slf(_) = last {
match self.path.last() {
None => {}
Some(UseSegment::Ident(_, None)) => {
aliased_self = true;
}
_ => unreachable!(),
}
}
let mut done = false;
if aliased_self {
match self.path.last_mut() {
Some(UseSegment::Ident(_, ref mut old_rename)) => {
assert!(old_rename.is_none());
if let UseSegment::Slf(Some(rename)) = last.clone() {
*old_rename = Some(rename);
done = true;
}
}
_ => unreachable!(),
}
}
if done {
return self;
}
// Normalise foo::{bar} -> foo::bar
if let UseSegment::List(ref list) = last {
if list.len() == 1 {
normalize_sole_list = true;
}
}
if normalize_sole_list {
match last {
UseSegment::List(list) => {
for seg in &list[0].path {
self.path.push(seg.clone());
}
return self.normalize();
}
_ => unreachable!(),
}
}
// Recursively normalize elements of a list use (including sorting the list).
if let UseSegment::List(list) = last {
let mut list = list
.into_iter()
.map(|ut| ut.normalize())
.collect::<Vec<_>>();
list.sort();
last = UseSegment::List(list);
}
self.path.push(last);
self
}
fn has_comment(&self) -> bool {
self.list_item.as_ref().map_or(false, ListItem::has_comment)
}
fn same_visibility(&self, other: &UseTree) -> bool {
match (&self.visibility, &other.visibility) {
(
Some(source_map::Spanned {
node: ast::VisibilityKind::Inherited,
..
}),
None,
)
| (
None,
Some(source_map::Spanned {
node: ast::VisibilityKind::Inherited,
..
}),
)
| (None, None) => true,
(Some(ref a), Some(ref b)) => is_same_visibility(a, b),
_ => false,
}
}
fn share_prefix(&self, other: &UseTree) -> bool {
if self.path.is_empty()
|| other.path.is_empty()
|| self.attrs.is_some()
|| !self.same_visibility(other)
{
false
} else {
self.path[0] == other.path[0]
}
}
fn flatten(self) -> Vec<UseTree> {
if self.path.is_empty() {
return vec![self];
}
match self.path.clone().last().unwrap() {
UseSegment::List(list) => {
let prefix = &self.path[..self.path.len() - 1];
let mut result = vec![];
for nested_use_tree in list {
for flattened in &mut nested_use_tree.clone().flatten() {
let mut new_path = prefix.to_vec();
new_path.append(&mut flattened.path);
result.push(UseTree {
path: new_path,
span: self.span,
list_item: None,
visibility: self.visibility.clone(),
attrs: None,
});
}
}
result
}
_ => vec![self],
}
}
fn merge(&mut self, other: UseTree) {
let mut new_path = vec![];
for (a, b) in self
.path
.clone()
.iter_mut()
.zip(other.path.clone().into_iter())
{
if *a == b {
new_path.push(b);
} else {
break;
}
}
if let Some(merged) = merge_rest(&self.path, &other.path, new_path.len()) {
new_path.push(merged);
self.span = self.span.to(other.span);
}
self.path = new_path;
}
}
fn merge_rest(a: &[UseSegment], b: &[UseSegment], len: usize) -> Option<UseSegment> {
let a_rest = &a[len..];
let b_rest = &b[len..];
if a_rest.is_empty() && b_rest.is_empty() {
return None;
}
if a_rest.is_empty() {
return Some(UseSegment::List(vec![
UseTree::from_path(vec![UseSegment::Slf(None)], DUMMY_SP),
UseTree::from_path(b_rest.to_vec(), DUMMY_SP),
]));
}
if b_rest.is_empty() {
return Some(UseSegment::List(vec![
UseTree::from_path(vec![UseSegment::Slf(None)], DUMMY_SP),
UseTree::from_path(a_rest.to_vec(), DUMMY_SP),
]));
}
if let UseSegment::List(mut list) = a_rest[0].clone() {
merge_use_trees_inner(&mut list, UseTree::from_path(b_rest.to_vec(), DUMMY_SP));
list.sort();
return Some(UseSegment::List(list));
}
let mut list = vec![
UseTree::from_path(a_rest.to_vec(), DUMMY_SP),
UseTree::from_path(b_rest.to_vec(), DUMMY_SP),
];
list.sort();
Some(UseSegment::List(list))
}
impl PartialOrd for UseSegment {
fn partial_cmp(&self, other: &UseSegment) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl PartialOrd for UseTree {
fn partial_cmp(&self, other: &UseTree) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Ord for UseSegment {
fn cmp(&self, other: &UseSegment) -> Ordering {
use self::UseSegment::*;
fn is_upper_snake_case(s: &str) -> bool {
s.chars()
.all(|c| c.is_uppercase() || c == '_' || c.is_numeric())
}
match (self, other) {
(&Slf(ref a), &Slf(ref b)) | (&Super(ref a), &Super(ref b)) => a.cmp(b),
(&Glob, &Glob) => Ordering::Equal,
(&Ident(ref ia, ref aa), &Ident(ref ib, ref ab)) => {
// snake_case < CamelCase < UPPER_SNAKE_CASE
if ia.starts_with(char::is_uppercase) && ib.starts_with(char::is_lowercase) {
return Ordering::Greater;
}
if ia.starts_with(char::is_lowercase) && ib.starts_with(char::is_uppercase) {
return Ordering::Less;
}
if is_upper_snake_case(ia) && !is_upper_snake_case(ib) {
return Ordering::Greater;
}
if !is_upper_snake_case(ia) && is_upper_snake_case(ib) {
return Ordering::Less;
}
let ident_ord = ia.cmp(ib);
if ident_ord != Ordering::Equal {
return ident_ord;
}
if aa.is_none() && ab.is_some() {
return Ordering::Less;
}
if aa.is_some() && ab.is_none() {
return Ordering::Greater;
}
aa.cmp(ab)
}
(&List(ref a), &List(ref b)) => {
for (a, b) in a.iter().zip(b.iter()) {
let ord = a.cmp(b);
if ord != Ordering::Equal {
return ord;
}
}
a.len().cmp(&b.len())
}
(&Slf(_), _) => Ordering::Less,
(_, &Slf(_)) => Ordering::Greater,
(&Super(_), _) => Ordering::Less,
(_, &Super(_)) => Ordering::Greater,
(&Ident(..), _) => Ordering::Less,
(_, &Ident(..)) => Ordering::Greater,
(&Glob, _) => Ordering::Less,
(_, &Glob) => Ordering::Greater,
}
}
}
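The `snake_case < CamelCase < UPPER_SNAKE_CASE` rule implemented in `Ord for UseSegment` above can be exercised in isolation. This sketch reimplements only the case-class comparison with a lexical tie-break; it is a simplified model, not the full `UseSegment` ordering (it ignores aliases, `self`/`super`, globs, and lists):

```rust
use std::cmp::Ordering;

// 0 = snake_case / lowercase, 1 = CamelCase, 2 = UPPER_SNAKE_CASE.
fn case_class(s: &str) -> u8 {
    let upper_snake = s
        .chars()
        .all(|c| c.is_uppercase() || c == '_' || c.is_numeric());
    if upper_snake && s.starts_with(char::is_uppercase) {
        2
    } else if s.starts_with(char::is_uppercase) {
        1
    } else {
        0
    }
}

fn cmp_ident(a: &str, b: &str) -> Ordering {
    // Compare case classes first; fall back to lexical order on a tie.
    case_class(a).cmp(&case_class(b)).then_with(|| a.cmp(b))
}

fn main() {
    assert_eq!(cmp_ident("foo", "Foo"), Ordering::Less); // snake < Camel
    assert_eq!(cmp_ident("Foo", "FOO_BAR"), Ordering::Less); // Camel < UPPER
    assert_eq!(cmp_ident("abc", "abd"), Ordering::Less); // lexical tie-break
    println!("ok");
}
```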
impl Ord for UseTree {
fn cmp(&self, other: &UseTree) -> Ordering {
for (a, b) in self.path.iter().zip(other.path.iter()) {
let ord = a.cmp(b);
// The comparison without aliases is a hack to avoid situations like
// comparing `a::b` to `a as c` - where the latter should be ordered
// first since it is shorter.
if ord != Ordering::Equal && a.remove_alias().cmp(&b.remove_alias()) != Ordering::Equal
{
return ord;
}
}
self.path.len().cmp(&other.path.len())
}
}
fn rewrite_nested_use_tree(
context: &RewriteContext,
use_tree_list: &[UseTree],
shape: Shape,
) -> Option<String> {
let mut list_items = Vec::with_capacity(use_tree_list.len());
let nested_shape = match context.config.imports_indent() {
IndentStyle::Block => shape
.block_indent(context.config.tab_spaces())
.with_max_width(context.config)
.sub_width(1)?,
IndentStyle::Visual => shape.visual_indent(0),
};
for use_tree in use_tree_list {
if let Some(mut list_item) = use_tree.list_item.clone() {
list_item.item = use_tree.rewrite(context, nested_shape);
list_items.push(list_item);
} else {
list_items.push(ListItem::from_str(use_tree.rewrite(context, nested_shape)?));
}
}
let has_nested_list = use_tree_list.iter().any(|use_segment| {
use_segment
.path
.last()
.map_or(false, |last_segment| match last_segment {
UseSegment::List(..) => true,
_ => false,
})
});
let remaining_width = if has_nested_list {
0
} else {
shape.width.saturating_sub(2)
};
let tactic = definitive_tactic(
&list_items,
context.config.imports_layout(),
Separator::Comma,
remaining_width,
);
let ends_with_newline = context.config.imports_indent() == IndentStyle::Block
&& tactic != DefinitiveListTactic::Horizontal;
let trailing_separator = if ends_with_newline {
context.config.trailing_comma()
} else {
SeparatorTactic::Never
};
let fmt = ListFormatting::new(nested_shape, context.config)
.tactic(tactic)
.trailing_separator(trailing_separator)
.ends_with_newline(ends_with_newline)
.preserve_newline(true)
.nested(has_nested_list);
let list_str = write_list(&list_items, &fmt)?;
let result = if (list_str.contains('\n') || list_str.len() > remaining_width)
&& context.config.imports_indent() == IndentStyle::Block
{
format!(
"{{\n{}{}\n{}}}",
nested_shape.indent.to_string(context.config),
list_str,
shape.indent.to_string(context.config)
)
} else {
format!("{{{}}}", list_str)
};
Some(result)
}
impl Rewrite for UseSegment {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
Some(match self {
UseSegment::Ident(ref ident, Some(ref rename)) => format!("{} as {}", ident, rename),
UseSegment::Ident(ref ident, None) => ident.clone(),
UseSegment::Slf(Some(ref rename)) => format!("self as {}", rename),
UseSegment::Slf(None) => "self".to_owned(),
UseSegment::Super(Some(ref rename)) => format!("super as {}", rename),
UseSegment::Super(None) => "super".to_owned(),
UseSegment::Glob => "*".to_owned(),
UseSegment::List(ref use_tree_list) => rewrite_nested_use_tree(
context,
use_tree_list,
// 1 = "{" and "}"
shape.offset_left(1)?.sub_width(1)?,
)?,
})
}
}
impl Rewrite for UseTree {
// This does NOT format attributes and visibility or add a trailing `;`.
fn rewrite(&self, context: &RewriteContext, mut shape: Shape) -> Option<String> {
let mut result = String::with_capacity(256);
let mut iter = self.path.iter().peekable();
while let Some(ref segment) = iter.next() {
let segment_str = segment.rewrite(context, shape)?;
result.push_str(&segment_str);
if iter.peek().is_some() {
result.push_str("::");
// 2 = "::"
shape = shape.offset_left(2 + segment_str.len())?;
}
}
Some(result)
}
}
#[cfg(test)]
mod test {
use super::*;
use syntax::source_map::DUMMY_SP;
// Parse the path part of an import. This parser is not robust and is only
// suitable for use in a test harness.
fn parse_use_tree(s: &str) -> UseTree {
use std::iter::Peekable;
use std::mem::swap;
use std::str::Chars;
struct Parser<'a> {
input: Peekable<Chars<'a>>,
}
impl<'a> Parser<'a> {
fn bump(&mut self) {
self.input.next().unwrap();
}
fn eat(&mut self, c: char) {
assert!(self.input.next().unwrap() == c);
}
fn push_segment(
result: &mut Vec<UseSegment>,
buf: &mut String,
alias_buf: &mut Option<String>,
) {
if !buf.is_empty() {
let mut alias = None;
swap(alias_buf, &mut alias);
if buf == "self" {
result.push(UseSegment::Slf(alias));
*buf = String::new();
*alias_buf = None;
} else if buf == "super" {
result.push(UseSegment::Super(alias));
*buf = String::new();
*alias_buf = None;
} else {
let mut name = String::new();
swap(buf, &mut name);
result.push(UseSegment::Ident(name, alias));
}
}
}
fn parse_in_list(&mut self) -> UseTree {
let mut result = vec![];
let mut buf = String::new();
let mut alias_buf = None;
while let Some(&c) = self.input.peek() {
match c {
'{' => {
assert!(buf.is_empty());
self.bump();
result.push(UseSegment::List(self.parse_list()));
self.eat('}');
}
'*' => {
assert!(buf.is_empty());
self.bump();
result.push(UseSegment::Glob);
}
':' => {
self.bump();
self.eat(':');
Self::push_segment(&mut result, &mut buf, &mut alias_buf);
}
'}' | ',' => {
Self::push_segment(&mut result, &mut buf, &mut alias_buf);
return UseTree {
path: result,
span: DUMMY_SP,
list_item: None,
visibility: None,
attrs: None,
};
}
' ' => {
self.bump();
self.eat('a');
self.eat('s');
self.eat(' ');
alias_buf = Some(String::new());
}
c => {
self.bump();
if let Some(ref mut buf) = alias_buf {
buf.push(c);
} else {
buf.push(c);
}
}
}
}
Self::push_segment(&mut result, &mut buf, &mut alias_buf);
UseTree {
path: result,
span: DUMMY_SP,
list_item: None,
visibility: None,
attrs: None,
}
}
fn parse_list(&mut self) -> Vec<UseTree> {
let mut result = vec![];
loop {
match self.input.peek().unwrap() {
',' | ' ' => self.bump(),
'}' => {
return result;
}
_ => result.push(self.parse_in_list()),
}
}
}
}
let mut parser = Parser {
input: s.chars().peekable(),
};
parser.parse_in_list()
}
macro parse_use_trees($($s:expr),* $(,)*) {
vec![
$(parse_use_tree($s),)*
]
}
#[test]
fn test_use_tree_merge() {
macro test_merge([$($input:expr),* $(,)*], [$($output:expr),* $(,)*]) {
assert_eq!(
merge_use_trees(parse_use_trees!($($input,)*)),
parse_use_trees!($($output,)*),
);
}
test_merge!(["a::b::{c, d}", "a::b::{e, f}"], ["a::b::{c, d, e, f}"]);
test_merge!(["a::b::c", "a::b"], ["a::b::{self, c}"]);
test_merge!(["a::b", "a::b"], ["a::b"]);
test_merge!(["a", "a::b", "a::b::c"], ["a::{self, b::{self, c}}"]);
test_merge!(
["a::{b::{self, c}, d::e}", "a::d::f"],
["a::{b::{self, c}, d::{e, f}}"]
);
test_merge!(
["a::d::f", "a::{b::{self, c}, d::e}"],
["a::{b::{self, c}, d::{e, f}}"]
);
test_merge!(
["a::{c, d, b}", "a::{d, e, b, a, f}", "a::{f, g, c}"],
["a::{a, b, c, d, e, f, g}"]
);
}
#[test]
fn test_use_tree_flatten() {
assert_eq!(
parse_use_tree("a::b::{c, d, e, f}").flatten(),
parse_use_trees!("a::b::c", "a::b::d", "a::b::e", "a::b::f",)
);
assert_eq!(
parse_use_tree("a::b::{c::{d, e, f}, g, h::{i, j, k}}").flatten(),
parse_use_trees![
"a::b::c::d",
"a::b::c::e",
"a::b::c::f",
"a::b::g",
"a::b::h::i",
"a::b::h::j",
"a::b::h::k",
]
);
}
#[test]
fn test_use_tree_normalize() {
assert_eq!(parse_use_tree("a::self").normalize(), parse_use_tree("a"));
assert_eq!(
parse_use_tree("a::self as foo").normalize(),
parse_use_tree("a as foo")
);
assert_eq!(parse_use_tree("a::{self}").normalize(), parse_use_tree("a"));
assert_eq!(parse_use_tree("a::{b}").normalize(), parse_use_tree("a::b"));
assert_eq!(
parse_use_tree("a::{b, c::self}").normalize(),
parse_use_tree("a::{b, c}")
);
assert_eq!(
parse_use_tree("a::{b as bar, c::self}").normalize(),
parse_use_tree("a::{b as bar, c}")
);
}
#[test]
fn test_use_tree_ord() {
assert!(parse_use_tree("a").normalize() < parse_use_tree("aa").normalize());
assert!(parse_use_tree("a").normalize() < parse_use_tree("a::a").normalize());
assert!(parse_use_tree("a").normalize() < parse_use_tree("*").normalize());
assert!(parse_use_tree("a").normalize() < parse_use_tree("{a, b}").normalize());
assert!(parse_use_tree("*").normalize() < parse_use_tree("{a, b}").normalize());
assert!(
parse_use_tree("aaaaaaaaaaaaaaa::{bb, cc, dddddddd}").normalize()
< parse_use_tree("aaaaaaaaaaaaaaa::{bb, cc, ddddddddd}").normalize()
);
assert!(
parse_use_tree("serde::de::{Deserialize}").normalize()
< parse_use_tree("serde_json").normalize()
);
assert!(parse_use_tree("a::b::c").normalize() < parse_use_tree("a::b::*").normalize());
assert!(
parse_use_tree("foo::{Bar, Baz}").normalize()
< parse_use_tree("{Bar, Baz}").normalize()
);
assert!(
parse_use_tree("foo::{self as bar}").normalize()
< parse_use_tree("foo::{qux as bar}").normalize()
);
assert!(
parse_use_tree("foo::{qux as bar}").normalize()
< parse_use_tree("foo::{baz, qux as bar}").normalize()
);
assert!(
parse_use_tree("foo::{self as bar, baz}").normalize()
< parse_use_tree("foo::{baz, qux as bar}").normalize()
);
assert!(parse_use_tree("foo").normalize() < parse_use_tree("Foo").normalize());
assert!(parse_use_tree("foo").normalize() < parse_use_tree("foo::Bar").normalize());
assert!(
parse_use_tree("std::cmp::{d, c, b, a}").normalize()
< parse_use_tree("std::cmp::{b, e, g, f}").normalize()
);
}
}
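The flatten step that `merge_use_trees` relies on (expanding `a::b::{c, d}` into `a::b::c` and `a::b::d` before re-merging shared prefixes) can be sketched over plain strings. This is a deliberately simplified model of `UseTree::flatten` — it handles only a single, non-nested trailing `{...}` group:

```rust
// Simplified, string-based model of UseTree::flatten: expand one
// trailing `{...}` list into one full path per listed item.
fn flatten(path: &str) -> Vec<String> {
    match (path.find('{'), path.rfind('}')) {
        (Some(open), Some(close)) if open < close => {
            let prefix = &path[..open];
            path[open + 1..close]
                .split(',')
                .map(|item| format!("{}{}", prefix, item.trim()))
                .collect()
        }
        // No braces: the path is already flat.
        _ => vec![path.to_owned()],
    }
}

fn main() {
    assert_eq!(
        flatten("a::b::{c, d}"),
        vec!["a::b::c".to_owned(), "a::b::d".to_owned()]
    );
    assert_eq!(flatten("a::b"), vec!["a::b".to_owned()]);
    println!("ok");
}
```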
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/formatting.rs
// High level formatting functions.
use std::collections::HashMap;
use std::io::{self, Write};
use std::panic::{catch_unwind, AssertUnwindSafe};
use std::rc::Rc;
use std::time::{Duration, Instant};
use syntax::ast;
use syntax::errors::emitter::{ColorConfig, EmitterWriter};
use syntax::errors::Handler;
use syntax::parse::{self, ParseSess};
use syntax::source_map::{FilePathMapping, SourceMap, Span};
use comment::{CharClasses, FullCodeCharKind};
use config::{Config, FileName, Verbosity};
use issues::BadIssueSeeker;
use visitor::{FmtVisitor, SnippetProvider};
use {modules, source_file, ErrorKind, FormatReport, Input, Session};
// A map of the files of a crate, with their new content
pub(crate) type SourceFile = Vec<FileRecord>;
pub(crate) type FileRecord = (FileName, String);
impl<'b, T: Write + 'b> Session<'b, T> {
pub(crate) fn format_input_inner(&mut self, input: Input) -> Result<FormatReport, ErrorKind> {
if !self.config.version_meets_requirement() {
return Err(ErrorKind::VersionMismatch);
}
syntax::with_globals(|| {
syntax_pos::hygiene::set_default_edition(
self.config.edition().to_libsyntax_pos_edition(),
);
if self.config.disable_all_formatting() {
// When the input is from stdin, echo back the input.
if let Input::Text(ref buf) = input {
if let Err(e) = io::stdout().write_all(buf.as_bytes()) {
return Err(From::from(e));
}
}
return Ok(FormatReport::new());
}
let config = &self.config.clone();
let format_result = format_project(input, config, self);
format_result.map(|report| {
{
let new_errors = &report.internal.borrow().1;
self.errors.add(new_errors);
}
report
})
})
}
}
// Format an entire crate (or subset of the module tree).
fn format_project<T: FormatHandler>(
input: Input,
config: &Config,
handler: &mut T,
) -> Result<FormatReport, ErrorKind> {
// let mut timer = Timer::Initialized(Instant::now());
let main_file = input.file_name();
let input_is_stdin = main_file == FileName::Stdin;
// Parse the crate.
let source_map = Rc::new(SourceMap::new(FilePathMapping::empty()));
let mut parse_session = make_parse_sess(source_map.clone(), config);
let mut report = FormatReport::new();
let krate = parse_crate(input, &parse_session, config, &mut report)?;
// timer = timer.done_parsing();
// Suppress error output if we have to do any further parsing.
let silent_emitter = silent_emitter(source_map);
parse_session.span_diagnostic = Handler::with_emitter(true, false, silent_emitter);
let mut context = FormatContext::new(&krate, report, parse_session, config, handler);
let files = modules::list_files(&krate, context.parse_session.source_map())?;
for (path, module) in files {
if (config.skip_children() && path != main_file) || config.ignore().skip_file(&path) {
continue;
}
should_emit_verbose(input_is_stdin, config, || println!("Formatting {}", path));
let is_root = path == main_file;
context.format_file(path, module, is_root)?;
}
// timer = timer.done_formatting();
// should_emit_verbose(input_is_stdin, config, || {
// println!(
// "Spent {0:.3} secs in the parsing phase, and {1:.3} secs in the formatting phase",
// timer.get_parse_time(),
// timer.get_format_time(),
// )
// });
Ok(context.report)
}
// Used for formatting files.
#[derive(new)]
struct FormatContext<'a, T: FormatHandler + 'a> {
krate: &'a ast::Crate,
report: FormatReport,
parse_session: ParseSess,
config: &'a Config,
handler: &'a mut T,
}
impl<'a, T: FormatHandler + 'a> FormatContext<'a, T> {
// Formats a single file/module.
fn format_file(
&mut self,
path: FileName,
module: &ast::Mod,
is_root: bool,
) -> Result<(), ErrorKind> {
let source_file = self
.parse_session
.source_map()
.lookup_char_pos(module.inner.lo())
.file;
let big_snippet = source_file.src.as_ref().unwrap();
let snippet_provider = SnippetProvider::new(source_file.start_pos, big_snippet);
let mut visitor = FmtVisitor::from_source_map(
&self.parse_session,
&self.config,
&snippet_provider,
self.report.clone(),
);
// Format inner attributes if available.
if !self.krate.attrs.is_empty() && is_root {
visitor.skip_empty_lines(source_file.end_pos);
if visitor.visit_attrs(&self.krate.attrs, ast::AttrStyle::Inner) {
visitor.push_rewrite(module.inner, None);
} else {
visitor.format_separate_mod(module, &*source_file);
}
} else {
visitor.last_pos = source_file.start_pos;
visitor.skip_empty_lines(source_file.end_pos);
visitor.format_separate_mod(module, &*source_file);
};
debug_assert_eq!(
visitor.line_number,
::utils::count_newlines(&visitor.buffer)
);
// For some reason, the source_map does not include terminating
// newlines so we must add one on for each file. This is sad.
source_file::append_newline(&mut visitor.buffer);
format_lines(
&mut visitor.buffer,
&path,
&visitor.skipped_range,
&self.config,
&self.report,
);
self.config
.newline_style()
.apply(&mut visitor.buffer, &big_snippet);
if visitor.macro_rewrite_failure {
self.report.add_macro_format_failure();
}
self.handler
.handle_formatted_file(path, visitor.buffer, &mut self.report)
}
}
// Handle the results of formatting.
trait FormatHandler {
fn handle_formatted_file(
&mut self,
path: FileName,
result: String,
report: &mut FormatReport,
) -> Result<(), ErrorKind>;
}
impl<'b, T: Write + 'b> FormatHandler for Session<'b, T> {
// Called for each formatted file.
fn handle_formatted_file(
&mut self,
path: FileName,
mut result: String,
report: &mut FormatReport,
) -> Result<(), ErrorKind> {
if let Some(ref mut out) = self.out {
match source_file::write_file(&mut result, &path, out, &self.config) {
Ok(b) if b => report.add_diff(),
Err(e) => {
// Create a new error with path_str to help users see which files failed
let err_msg = format!("{}: {}", path, e);
return Err(io::Error::new(e.kind(), err_msg).into());
}
_ => {}
}
}
self.source_file.push((path, result));
Ok(())
}
}
pub(crate) struct FormattingError {
pub(crate) line: usize,
pub(crate) kind: ErrorKind,
is_comment: bool,
is_string: bool,
pub(crate) line_buffer: String,
}
impl FormattingError {
pub(crate) fn from_span(
span: &Span,
source_map: &SourceMap,
kind: ErrorKind,
) -> FormattingError {
FormattingError {
line: source_map.lookup_char_pos(span.lo()).line,
is_comment: kind.is_comment(),
kind,
is_string: false,
line_buffer: source_map
.span_to_lines(*span)
.ok()
.and_then(|fl| {
fl.file
.get_line(fl.lines[0].line_index)
.map(|l| l.into_owned())
                }).unwrap_or_default(),
}
}
pub(crate) fn msg_prefix(&self) -> &str {
match self.kind {
ErrorKind::LineOverflow(..)
| ErrorKind::TrailingWhitespace
| ErrorKind::IoError(_)
| ErrorKind::ParseError
| ErrorKind::LostComment => "internal error:",
ErrorKind::LicenseCheck | ErrorKind::BadAttr | ErrorKind::VersionMismatch => "error:",
ErrorKind::BadIssue(_) | ErrorKind::DeprecatedAttr => "warning:",
}
}
pub(crate) fn msg_suffix(&self) -> &str {
if self.is_comment || self.is_string {
"set `error_on_unformatted = false` to suppress \
the warning against comments or string literals\n"
} else {
""
}
}
    // Returns the (offset, length) of the error span within the line.
pub(crate) fn format_len(&self) -> (usize, usize) {
match self.kind {
ErrorKind::LineOverflow(found, max) => (max, found - max),
ErrorKind::TrailingWhitespace
| ErrorKind::DeprecatedAttr
| ErrorKind::BadAttr
| ErrorKind::LostComment => {
let trailing_ws_start = self
.line_buffer
.rfind(|c: char| !c.is_whitespace())
.map(|pos| pos + 1)
.unwrap_or(0);
(
trailing_ws_start,
self.line_buffer.len() - trailing_ws_start,
)
}
_ => unreachable!(),
}
}
}
pub(crate) type FormatErrorMap = HashMap<FileName, Vec<FormattingError>>;
#[derive(Default, Debug)]
pub(crate) struct ReportedErrors {
// Encountered e.g. an IO error.
pub(crate) has_operational_errors: bool,
// Failed to reformat code because of parsing errors.
pub(crate) has_parsing_errors: bool,
// Code is valid, but it is impossible to format it properly.
pub(crate) has_formatting_errors: bool,
// Code contains macro call that was unable to format.
pub(crate) has_macro_format_failure: bool,
// Failed a check, such as the license check or other opt-in checking.
pub(crate) has_check_errors: bool,
/// Formatted code differs from existing code (--check only).
pub(crate) has_diff: bool,
}
impl ReportedErrors {
/// Combine two summaries together.
pub fn add(&mut self, other: &ReportedErrors) {
self.has_operational_errors |= other.has_operational_errors;
self.has_parsing_errors |= other.has_parsing_errors;
self.has_formatting_errors |= other.has_formatting_errors;
self.has_macro_format_failure |= other.has_macro_format_failure;
self.has_check_errors |= other.has_check_errors;
self.has_diff |= other.has_diff;
}
}
/// A single span of changed lines, with 0 or more removed lines
/// and a vector of 0 or more inserted lines.
#[derive(Debug, PartialEq, Eq)]
pub(crate) struct ModifiedChunk {
    /// The first line to be removed from the original text
pub line_number_orig: u32,
/// The number of lines which have been replaced
pub lines_removed: u32,
/// The new lines
pub lines: Vec<String>,
}
/// Set of changed sections of a file.
#[derive(Debug, PartialEq, Eq)]
pub(crate) struct ModifiedLines {
/// The set of changed chunks.
pub chunks: Vec<ModifiedChunk>,
}
#[derive(Clone, Copy, Debug)]
enum Timer {
Initialized(Instant),
DoneParsing(Instant, Instant),
DoneFormatting(Instant, Instant, Instant),
}
impl Timer {
fn done_parsing(self) -> Self {
match self {
Timer::Initialized(init_time) => Timer::DoneParsing(init_time, Instant::now()),
_ => panic!("Timer can only transition to DoneParsing from Initialized state"),
}
}
fn done_formatting(self) -> Self {
match self {
Timer::DoneParsing(init_time, parse_time) => {
Timer::DoneFormatting(init_time, parse_time, Instant::now())
}
_ => panic!("Timer can only transition to DoneFormatting from DoneParsing state"),
}
}
/// Returns the time it took to parse the source files in seconds.
fn get_parse_time(&self) -> f32 {
match *self {
Timer::DoneParsing(init, parse_time) | Timer::DoneFormatting(init, parse_time, _) => {
// This should never underflow since `Instant::now()` guarantees monotonicity.
Self::duration_to_f32(parse_time.duration_since(init))
}
Timer::Initialized(..) => unreachable!(),
}
}
/// Returns the time it took to go from the parsed AST to the formatted output. Parsing time is
/// not included.
fn get_format_time(&self) -> f32 {
match *self {
Timer::DoneFormatting(_init, parse_time, format_time) => {
Self::duration_to_f32(format_time.duration_since(parse_time))
}
Timer::DoneParsing(..) | Timer::Initialized(..) => unreachable!(),
}
}
fn duration_to_f32(d: Duration) -> f32 {
d.as_secs() as f32 + d.subsec_nanos() as f32 / 1_000_000_000f32
}
}
// Formatting done on a char-by-char or line-by-line basis.
// FIXME(#20) other stuff for parity with make tidy
fn format_lines(
text: &mut String,
name: &FileName,
skipped_range: &[(usize, usize)],
config: &Config,
report: &FormatReport,
) {
let mut formatter = FormatLines::new(name, skipped_range, config);
formatter.check_license(text);
formatter.iterate(text);
if formatter.newline_count > 1 {
debug!("track truncate: {} {}", text.len(), formatter.newline_count);
let line = text.len() - formatter.newline_count + 1;
text.truncate(line);
}
report.append(name.clone(), formatter.errors);
}
struct FormatLines<'a> {
name: &'a FileName,
skipped_range: &'a [(usize, usize)],
last_was_space: bool,
line_len: usize,
cur_line: usize,
newline_count: usize,
errors: Vec<FormattingError>,
issue_seeker: BadIssueSeeker,
line_buffer: String,
// true if the current line contains a string literal.
is_string: bool,
format_line: bool,
allow_issue_seek: bool,
config: &'a Config,
}
impl<'a> FormatLines<'a> {
fn new(
name: &'a FileName,
skipped_range: &'a [(usize, usize)],
config: &'a Config,
) -> FormatLines<'a> {
let issue_seeker = BadIssueSeeker::new(config.report_todo(), config.report_fixme());
FormatLines {
name,
skipped_range,
last_was_space: false,
line_len: 0,
cur_line: 1,
newline_count: 0,
errors: vec![],
allow_issue_seek: !issue_seeker.is_disabled(),
issue_seeker,
line_buffer: String::with_capacity(config.max_width() * 2),
is_string: false,
format_line: config.file_lines().contains_line(name, 1),
config,
}
}
fn check_license(&mut self, text: &mut String) {
if let Some(ref license_template) = self.config.license_template {
if !license_template.is_match(text) {
self.errors.push(FormattingError {
line: self.cur_line,
kind: ErrorKind::LicenseCheck,
is_comment: false,
is_string: false,
line_buffer: String::new(),
});
}
}
}
// Iterate over the chars in the file map.
fn iterate(&mut self, text: &mut String) {
for (kind, c) in CharClasses::new(text.chars()) {
if c == '\r' {
continue;
}
if self.allow_issue_seek && self.format_line {
                // Add warnings for bad todos/fixmes.
if let Some(issue) = self.issue_seeker.inspect(c) {
self.push_err(ErrorKind::BadIssue(issue), false, false);
}
}
if c == '\n' {
self.new_line(kind);
} else {
self.char(c, kind);
}
}
}
fn new_line(&mut self, kind: FullCodeCharKind) {
if self.format_line {
// Check for (and record) trailing whitespace.
if self.last_was_space {
if self.should_report_error(kind, &ErrorKind::TrailingWhitespace)
&& !self.is_skipped_line()
{
self.push_err(
ErrorKind::TrailingWhitespace,
kind.is_comment(),
kind.is_string(),
);
}
self.line_len -= 1;
}
// Check for any line width errors we couldn't correct.
let error_kind = ErrorKind::LineOverflow(self.line_len, self.config.max_width());
if self.line_len > self.config.max_width()
&& !self.is_skipped_line()
&& self.should_report_error(kind, &error_kind)
{
self.push_err(error_kind, kind.is_comment(), self.is_string);
}
}
self.line_len = 0;
self.cur_line += 1;
self.format_line = self
.config
.file_lines()
.contains_line(self.name, self.cur_line);
self.newline_count += 1;
self.last_was_space = false;
self.line_buffer.clear();
self.is_string = false;
}
fn char(&mut self, c: char, kind: FullCodeCharKind) {
self.newline_count = 0;
self.line_len += if c == '\t' {
self.config.tab_spaces()
} else {
1
};
self.last_was_space = c.is_whitespace();
self.line_buffer.push(c);
if kind.is_string() {
self.is_string = true;
}
}
fn push_err(&mut self, kind: ErrorKind, is_comment: bool, is_string: bool) {
self.errors.push(FormattingError {
line: self.cur_line,
kind,
is_comment,
is_string,
line_buffer: self.line_buffer.clone(),
});
}
fn should_report_error(&self, char_kind: FullCodeCharKind, error_kind: &ErrorKind) -> bool {
let allow_error_report =
if char_kind.is_comment() || self.is_string || error_kind.is_comment() {
self.config.error_on_unformatted()
} else {
true
};
match error_kind {
ErrorKind::LineOverflow(..) => {
self.config.error_on_line_overflow() && allow_error_report
}
ErrorKind::TrailingWhitespace | ErrorKind::LostComment => allow_error_report,
_ => true,
}
}
    /// Returns true if the current line falls within a `#[rustfmt::skip]` range.
fn is_skipped_line(&self) -> bool {
self.skipped_range
.iter()
.any(|&(lo, hi)| lo <= self.cur_line && self.cur_line <= hi)
}
}
fn parse_crate(
input: Input,
parse_session: &ParseSess,
config: &Config,
report: &mut FormatReport,
) -> Result<ast::Crate, ErrorKind> {
let input_is_stdin = input.is_text();
let mut parser = match input {
Input::File(file) => parse::new_parser_from_file(parse_session, &file),
Input::Text(text) => parse::new_parser_from_source_str(
parse_session,
syntax::source_map::FileName::Custom("stdin".to_owned()),
text,
),
};
parser.cfg_mods = false;
if config.skip_children() {
parser.recurse_into_file_modules = false;
}
let mut parser = AssertUnwindSafe(parser);
let result = catch_unwind(move || parser.0.parse_crate_mod());
match result {
Ok(Ok(c)) => {
if !parse_session.span_diagnostic.has_errors() {
return Ok(c);
}
}
Ok(Err(mut e)) => e.emit(),
Err(_) => {
// Note that if you see this message and want more information,
// then run the `parse_crate_mod` function above without
// `catch_unwind` so rustfmt panics and you can get a backtrace.
should_emit_verbose(input_is_stdin, config, || {
println!("The Rust parser panicked")
});
}
}
report.add_parsing_error();
Err(ErrorKind::ParseError)
}
fn silent_emitter(source_map: Rc<SourceMap>) -> Box<EmitterWriter> {
Box::new(EmitterWriter::new(
Box::new(Vec::new()),
Some(source_map),
false,
false,
))
}
fn make_parse_sess(source_map: Rc<SourceMap>, config: &Config) -> ParseSess {
let tty_handler = if config.hide_parse_errors() {
let silent_emitter = silent_emitter(source_map.clone());
Handler::with_emitter(true, false, silent_emitter)
} else {
let supports_color = term::stderr().map_or(false, |term| term.supports_color());
let color_cfg = if supports_color {
ColorConfig::Auto
} else {
ColorConfig::Never
};
Handler::with_tty_emitter(color_cfg, true, false, Some(source_map.clone()))
};
ParseSess::with_span_handler(tty_handler, source_map)
}
fn should_emit_verbose<F>(is_stdin: bool, config: &Config, f: F)
where
F: Fn(),
{
if config.verbose() == Verbosity::Verbose && !is_stdin {
f();
}
}
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/overflow.rs
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Rewrite a list of items with overflow.
// FIXME: Replace `ToExpr` with some enum.
use config::lists::*;
use syntax::ast;
use syntax::parse::token::DelimToken;
use syntax::source_map::Span;
use closures;
use expr::{is_every_expr_simple, is_method_call, is_nested_call, maybe_get_args_offset, ToExpr};
use lists::{definitive_tactic, itemize_list, write_list, ListFormatting, ListItem, Separator};
use rewrite::{Rewrite, RewriteContext};
use shape::Shape;
use source_map::SpanUtils;
use spanned::Spanned;
use utils::{count_newlines, extra_offset, first_line_width, last_line_width, mk_sp};
use std::cmp::min;
const SHORT_ITEM_THRESHOLD: usize = 10;
pub fn rewrite_with_parens<T>(
context: &RewriteContext,
ident: &str,
items: &[&T],
shape: Shape,
span: Span,
item_max_width: usize,
force_separator_tactic: Option<SeparatorTactic>,
) -> Option<String>
where
T: Rewrite + ToExpr + Spanned,
{
Context::new(
context,
items,
ident,
shape,
span,
"(",
")",
item_max_width,
force_separator_tactic,
None,
).rewrite(shape)
}
pub fn rewrite_with_angle_brackets<T>(
context: &RewriteContext,
ident: &str,
items: &[&T],
shape: Shape,
span: Span,
) -> Option<String>
where
T: Rewrite + ToExpr + Spanned,
{
Context::new(
context,
items,
ident,
shape,
span,
"<",
">",
context.config.max_width(),
None,
None,
).rewrite(shape)
}
pub fn rewrite_with_square_brackets<T>(
context: &RewriteContext,
name: &str,
items: &[&T],
shape: Shape,
span: Span,
force_separator_tactic: Option<SeparatorTactic>,
delim_token: Option<DelimToken>,
) -> Option<String>
where
T: Rewrite + ToExpr + Spanned,
{
let (lhs, rhs) = match delim_token {
Some(DelimToken::Paren) => ("(", ")"),
Some(DelimToken::Brace) => ("{", "}"),
_ => ("[", "]"),
};
Context::new(
context,
items,
name,
shape,
span,
lhs,
rhs,
context.config.width_heuristics().array_width,
force_separator_tactic,
Some(("[", "]")),
).rewrite(shape)
}
struct Context<'a, T: 'a> {
context: &'a RewriteContext<'a>,
items: &'a [&'a T],
ident: &'a str,
prefix: &'static str,
suffix: &'static str,
one_line_shape: Shape,
nested_shape: Shape,
span: Span,
item_max_width: usize,
one_line_width: usize,
force_separator_tactic: Option<SeparatorTactic>,
custom_delims: Option<(&'a str, &'a str)>,
}
impl<'a, T: 'a + Rewrite + ToExpr + Spanned> Context<'a, T> {
pub fn new(
context: &'a RewriteContext,
items: &'a [&'a T],
ident: &'a str,
shape: Shape,
span: Span,
prefix: &'static str,
suffix: &'static str,
item_max_width: usize,
force_separator_tactic: Option<SeparatorTactic>,
custom_delims: Option<(&'a str, &'a str)>,
) -> Context<'a, T> {
let used_width = extra_offset(ident, shape);
        // 2 = `()`
let one_line_width = shape.width.saturating_sub(used_width + 2);
// 1 = "(" or ")"
let one_line_shape = shape
.offset_left(last_line_width(ident) + 1)
.and_then(|shape| shape.sub_width(1))
.unwrap_or(Shape { width: 0, ..shape });
let nested_shape = shape_from_indent_style(context, shape, used_width + 2, used_width + 1);
Context {
context,
items,
ident,
one_line_shape,
nested_shape,
span,
prefix,
suffix,
item_max_width,
one_line_width,
force_separator_tactic,
custom_delims,
}
}
fn last_item(&self) -> Option<&&T> {
self.items.last()
}
fn items_span(&self) -> Span {
let span_lo = self
.context
.snippet_provider
.span_after(self.span, self.prefix);
mk_sp(span_lo, self.span.hi())
}
fn rewrite_last_item_with_overflow(
&self,
last_list_item: &mut ListItem,
shape: Shape,
) -> Option<String> {
let last_item = self.last_item()?;
let rewrite = if let Some(expr) = last_item.to_expr() {
match expr.node {
// When overflowing the closure which consists of a single control flow expression,
// force to use block if its condition uses multi line.
ast::ExprKind::Closure(..) => {
// If the argument consists of multiple closures, we do not overflow
// the last closure.
if closures::args_have_many_closure(self.items) {
None
} else {
closures::rewrite_last_closure(self.context, expr, shape)
}
}
_ => expr.rewrite(self.context, shape),
}
} else {
last_item.rewrite(self.context, shape)
};
if let Some(rewrite) = rewrite {
let rewrite_first_line = Some(rewrite[..first_line_width(&rewrite)].to_owned());
last_list_item.item = rewrite_first_line;
Some(rewrite)
} else {
None
}
}
fn default_tactic(&self, list_items: &[ListItem]) -> DefinitiveListTactic {
definitive_tactic(
list_items,
ListTactic::LimitedHorizontalVertical(self.item_max_width),
Separator::Comma,
self.one_line_width,
)
}
fn try_overflow_last_item(&self, list_items: &mut Vec<ListItem>) -> DefinitiveListTactic {
// 1 = "("
let combine_arg_with_callee = self.items.len() == 1
&& self.items[0].to_expr().is_some()
&& self.ident.len() + 1 <= self.context.config.tab_spaces();
let overflow_last = combine_arg_with_callee || can_be_overflowed(self.context, self.items);
// Replace the last item with its first line to see if it fits with
        // the first arguments.
let placeholder = if overflow_last {
let old_value = *self.context.force_one_line_chain.borrow();
if !combine_arg_with_callee {
if let Some(ref expr) = self.last_item().and_then(|item| item.to_expr()) {
if is_method_call(expr) {
self.context.force_one_line_chain.replace(true);
}
}
}
let result = last_item_shape(
self.items,
list_items,
self.one_line_shape,
self.item_max_width,
).and_then(|arg_shape| {
self.rewrite_last_item_with_overflow(
&mut list_items[self.items.len() - 1],
arg_shape,
)
});
self.context.force_one_line_chain.replace(old_value);
result
} else {
None
};
let mut tactic = definitive_tactic(
&*list_items,
ListTactic::LimitedHorizontalVertical(self.item_max_width),
Separator::Comma,
self.one_line_width,
);
// Replace the stub with the full overflowing last argument if the rewrite
// succeeded and its first line fits with the other arguments.
match (overflow_last, tactic, placeholder) {
(true, DefinitiveListTactic::Horizontal, Some(ref overflowed))
if self.items.len() == 1 =>
{
// When we are rewriting a nested function call, we restrict the
// budget for the inner function to avoid them being deeply nested.
                // However, when the inner function has a prefix or a suffix
                // (e.g. `foo() as u32`), this budget reduction may produce poorly
                // formatted code, with a prefix or a suffix left on its own
                // line. Here we explicitly check for those cases.
if count_newlines(overflowed) == 1 {
let rw = self
.items
.last()
.and_then(|last_item| last_item.rewrite(self.context, self.nested_shape));
let no_newline = rw.as_ref().map_or(false, |s| !s.contains('\n'));
if no_newline {
list_items[self.items.len() - 1].item = rw;
} else {
list_items[self.items.len() - 1].item = Some(overflowed.to_owned());
}
} else {
list_items[self.items.len() - 1].item = Some(overflowed.to_owned());
}
}
(true, DefinitiveListTactic::Horizontal, placeholder @ Some(..)) => {
list_items[self.items.len() - 1].item = placeholder;
}
_ if self.items.len() >= 1 => {
list_items[self.items.len() - 1].item = self
.items
.last()
.and_then(|last_item| last_item.rewrite(self.context, self.nested_shape));
// Use horizontal layout for a function with a single argument as long as
// everything fits in a single line.
// `self.one_line_width == 0` means vertical layout is forced.
if self.items.len() == 1
&& self.one_line_width != 0
&& !list_items[0].has_comment()
&& !list_items[0].inner_as_ref().contains('\n')
&& ::lists::total_item_width(&list_items[0]) <= self.one_line_width
{
tactic = DefinitiveListTactic::Horizontal;
} else {
tactic = self.default_tactic(list_items);
if tactic == DefinitiveListTactic::Vertical {
if let Some((all_simple, num_args_before)) =
maybe_get_args_offset(self.ident, self.items)
{
let one_line = all_simple
&& definitive_tactic(
&list_items[..num_args_before],
ListTactic::HorizontalVertical,
Separator::Comma,
self.nested_shape.width,
) == DefinitiveListTactic::Horizontal
&& definitive_tactic(
&list_items[num_args_before + 1..],
ListTactic::HorizontalVertical,
Separator::Comma,
self.nested_shape.width,
) == DefinitiveListTactic::Horizontal;
if one_line {
tactic = DefinitiveListTactic::SpecialMacro(num_args_before);
};
} else if is_every_expr_simple(self.items) && no_long_items(list_items) {
tactic = DefinitiveListTactic::Mixed;
}
}
}
}
_ => (),
}
tactic
}
fn rewrite_items(&self) -> Option<(bool, String)> {
let span = self.items_span();
let items = itemize_list(
self.context.snippet_provider,
self.items.iter(),
self.suffix,
",",
|item| item.span().lo(),
|item| item.span().hi(),
|item| item.rewrite(self.context, self.nested_shape),
span.lo(),
span.hi(),
true,
);
let mut list_items: Vec<_> = items.collect();
// Try letting the last argument overflow to the next line with block
// indentation. If its first line fits on one line with the other arguments,
// we format the function arguments horizontally.
let tactic = self.try_overflow_last_item(&mut list_items);
let trailing_separator = if let Some(tactic) = self.force_separator_tactic {
tactic
} else if !self.context.use_block_indent() {
SeparatorTactic::Never
} else if tactic == DefinitiveListTactic::Mixed {
// We are using mixed layout because everything did not fit within a single line.
SeparatorTactic::Always
} else {
self.context.config.trailing_comma()
};
let ends_with_newline = match tactic {
DefinitiveListTactic::Vertical | DefinitiveListTactic::Mixed => {
self.context.use_block_indent()
}
_ => false,
};
let fmt = ListFormatting::new(self.nested_shape, self.context.config)
.tactic(tactic)
.trailing_separator(trailing_separator)
.ends_with_newline(ends_with_newline);
write_list(&list_items, &fmt)
.map(|items_str| (tactic == DefinitiveListTactic::Horizontal, items_str))
}
fn wrap_items(&self, items_str: &str, shape: Shape, is_extendable: bool) -> String {
let shape = Shape {
width: shape.width.saturating_sub(last_line_width(self.ident)),
..shape
};
let (prefix, suffix) = match self.custom_delims {
Some((lhs, rhs)) => (lhs, rhs),
_ => (self.prefix, self.suffix),
};
// 2 = `()`
let fits_one_line = items_str.len() + 2 <= shape.width;
let extend_width = if items_str.is_empty() {
2
} else {
first_line_width(items_str) + 1
};
let nested_indent_str = self
.nested_shape
.indent
.to_string_with_newline(self.context.config);
let indent_str = shape
.block()
.indent
.to_string_with_newline(self.context.config);
let mut result = String::with_capacity(
self.ident.len() + items_str.len() + 2 + indent_str.len() + nested_indent_str.len(),
);
result.push_str(self.ident);
result.push_str(prefix);
if !self.context.use_block_indent()
|| (self.context.inside_macro() && !items_str.contains('\n') && fits_one_line)
|| (is_extendable && extend_width <= shape.width)
{
result.push_str(items_str);
} else {
if !items_str.is_empty() {
result.push_str(&nested_indent_str);
result.push_str(items_str);
}
result.push_str(&indent_str);
}
result.push_str(suffix);
result
}
fn rewrite(&self, shape: Shape) -> Option<String> {
let (extendable, items_str) = self.rewrite_items()?;
// If we are using visual indent style and failed to format, retry with block indent.
if !self.context.use_block_indent()
&& need_block_indent(&items_str, self.nested_shape)
&& !extendable
{
self.context.use_block.replace(true);
let result = self.rewrite(shape);
self.context.use_block.replace(false);
return result;
}
Some(self.wrap_items(&items_str, shape, extendable))
}
}
fn need_block_indent(s: &str, shape: Shape) -> bool {
s.lines().skip(1).any(|s| {
s.find(|c| !char::is_whitespace(c))
.map_or(false, |w| w + 1 < shape.indent.width())
})
}
fn can_be_overflowed<'a, T>(context: &RewriteContext, items: &[&T]) -> bool
where
T: Rewrite + Spanned + ToExpr + 'a,
{
items
.last()
.map_or(false, |x| x.can_be_overflowed(context, items.len()))
}
/// Returns a shape for the last argument which is going to be overflowed.
fn last_item_shape<T>(
lists: &[&T],
items: &[ListItem],
shape: Shape,
args_max_width: usize,
) -> Option<Shape>
where
T: Rewrite + Spanned + ToExpr,
{
let is_nested_call = lists
.iter()
.next()
.and_then(|item| item.to_expr())
.map_or(false, is_nested_call);
if items.len() == 1 && !is_nested_call {
return Some(shape);
}
let offset = items.iter().rev().skip(1).fold(0, |acc, i| {
// 2 = ", "
acc + 2 + i.inner_as_ref().len()
});
Shape {
width: min(args_max_width, shape.width),
..shape
}.offset_left(offset)
}
fn shape_from_indent_style(
context: &RewriteContext,
shape: Shape,
overhead: usize,
offset: usize,
) -> Shape {
let (shape, overhead) = if context.use_block_indent() {
let shape = shape
.block()
.block_indent(context.config.tab_spaces())
.with_max_width(context.config);
(shape, 1) // 1 = ","
} else {
(shape.visual_indent(offset), overhead)
};
Shape {
width: shape.width.saturating_sub(overhead),
..shape
}
}
fn no_long_items(list: &[ListItem]) -> bool {
list.iter()
.all(|item| item.inner_as_ref().len() <= SHORT_ITEM_THRESHOLD)
}
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/types.rs
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::iter::ExactSizeIterator;
use std::ops::Deref;
use config::lists::*;
use syntax::ast::{self, FunctionRetTy, Mutability};
use syntax::source_map::{self, BytePos, Span};
use syntax::symbol::keywords;
use config::{IndentStyle, TypeDensity};
use expr::{rewrite_assign_rhs, rewrite_tuple, rewrite_unary_prefix, ToExpr};
use lists::{definitive_tactic, itemize_list, write_list, ListFormatting, Separator};
use macros::{rewrite_macro, MacroPosition};
use overflow;
use pairs::{rewrite_pair, PairParts};
use rewrite::{Rewrite, RewriteContext};
use shape::Shape;
use source_map::SpanUtils;
use spanned::Spanned;
use utils::{
colon_spaces, extra_offset, first_line_width, format_abi, format_mutability,
last_line_extendable, last_line_width, mk_sp, rewrite_ident,
};
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
pub enum PathContext {
Expr,
Type,
Import,
}
// Does not wrap on simple segments.
pub fn rewrite_path(
context: &RewriteContext,
path_context: PathContext,
qself: Option<&ast::QSelf>,
path: &ast::Path,
shape: Shape,
) -> Option<String> {
let skip_count = qself.map_or(0, |x| x.position);
let mut result = if path.is_global() && qself.is_none() && path_context != PathContext::Import {
"::".to_owned()
} else {
String::new()
};
let mut span_lo = path.span.lo();
if let Some(qself) = qself {
result.push('<');
let fmt_ty = qself.ty.rewrite(context, shape)?;
result.push_str(&fmt_ty);
if skip_count > 0 {
result.push_str(" as ");
if path.is_global() && path_context != PathContext::Import {
result.push_str("::");
}
// 3 = ">::".len()
let shape = shape.sub_width(3)?;
result = rewrite_path_segments(
PathContext::Type,
result,
path.segments.iter().take(skip_count),
span_lo,
path.span.hi(),
context,
shape,
)?;
}
result.push_str(">::");
span_lo = qself.ty.span.hi() + BytePos(1);
}
rewrite_path_segments(
path_context,
result,
path.segments.iter().skip(skip_count),
span_lo,
path.span.hi(),
context,
shape,
)
}
fn rewrite_path_segments<'a, I>(
path_context: PathContext,
mut buffer: String,
iter: I,
mut span_lo: BytePos,
span_hi: BytePos,
context: &RewriteContext,
shape: Shape,
) -> Option<String>
where
I: Iterator<Item = &'a ast::PathSegment>,
{
let mut first = true;
let shape = shape.visual_indent(0);
for segment in iter {
// Indicates a global path, shouldn't be rendered.
if segment.ident.name == keywords::CrateRoot.name() {
continue;
}
if first {
first = false;
} else {
buffer.push_str("::");
}
let extra_offset = extra_offset(&buffer, shape);
let new_shape = shape.shrink_left(extra_offset)?;
let segment_string = rewrite_segment(
path_context,
segment,
&mut span_lo,
span_hi,
context,
new_shape,
)?;
buffer.push_str(&segment_string);
}
Some(buffer)
}
#[derive(Debug)]
enum SegmentParam<'a> {
LifeTime(&'a ast::Lifetime),
Type(&'a ast::Ty),
Binding(&'a ast::TypeBinding),
}
impl<'a> SegmentParam<'a> {
fn from_generic_arg(arg: &ast::GenericArg) -> SegmentParam {
match arg {
ast::GenericArg::Lifetime(ref lt) => SegmentParam::LifeTime(lt),
ast::GenericArg::Type(ref ty) => SegmentParam::Type(ty),
}
}
}
impl<'a> Spanned for SegmentParam<'a> {
fn span(&self) -> Span {
match *self {
SegmentParam::LifeTime(lt) => lt.ident.span,
SegmentParam::Type(ty) => ty.span,
SegmentParam::Binding(binding) => binding.span,
}
}
}
impl<'a> ToExpr for SegmentParam<'a> {
fn to_expr(&self) -> Option<&ast::Expr> {
None
}
fn can_be_overflowed(&self, context: &RewriteContext, len: usize) -> bool {
match *self {
SegmentParam::Type(ty) => ty.can_be_overflowed(context, len),
_ => false,
}
}
}
impl<'a> Rewrite for SegmentParam<'a> {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
match *self {
SegmentParam::LifeTime(lt) => lt.rewrite(context, shape),
SegmentParam::Type(ty) => ty.rewrite(context, shape),
SegmentParam::Binding(binding) => {
let mut result = match context.config.type_punctuation_density() {
TypeDensity::Wide => format!("{} = ", rewrite_ident(context, binding.ident)),
TypeDensity::Compressed => {
format!("{}=", rewrite_ident(context, binding.ident))
}
};
let budget = shape.width.checked_sub(result.len())?;
let rewrite = binding
.ty
.rewrite(context, Shape::legacy(budget, shape.indent + result.len()))?;
result.push_str(&rewrite);
Some(result)
}
}
}
}
// Formats a path segment. There are some hacks involved to correctly determine
// the segment's associated span since it's not part of the AST.
//
// The span_lo is assumed to be greater than the end of any previous segment's
// parameters and less than or equal to the start of the current segment.
//
// span_hi is assumed equal to the end of the entire path.
//
// When the segment contains a positive number of parameters, we update span_lo
// so that invariants described above will hold for the next segment.
fn rewrite_segment(
path_context: PathContext,
segment: &ast::PathSegment,
span_lo: &mut BytePos,
span_hi: BytePos,
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
let mut result = String::with_capacity(128);
result.push_str(rewrite_ident(context, segment.ident));
let ident_len = result.len();
let shape = if context.use_block_indent() {
shape.offset_left(ident_len)?
} else {
shape.shrink_left(ident_len)?
};
if let Some(ref args) = segment.args {
match **args {
ast::GenericArgs::AngleBracketed(ref data)
if !data.args.is_empty() || !data.bindings.is_empty() =>
{
let param_list = data
.args
.iter()
.map(SegmentParam::from_generic_arg)
.chain(data.bindings.iter().map(|x| SegmentParam::Binding(&*x)))
.collect::<Vec<_>>();
let separator = if path_context == PathContext::Expr {
"::"
} else {
""
};
result.push_str(separator);
let generics_str = overflow::rewrite_with_angle_brackets(
context,
"",
¶m_list.iter().map(|e| &*e).collect::<Vec<_>>(),
shape,
mk_sp(*span_lo, span_hi),
)?;
// Update position of last bracket.
*span_lo = context
.snippet_provider
.span_after(mk_sp(*span_lo, span_hi), "<");
result.push_str(&generics_str)
}
ast::GenericArgs::Parenthesized(ref data) => {
let output = match data.output {
Some(ref ty) => FunctionRetTy::Ty(ty.clone()),
None => FunctionRetTy::Default(source_map::DUMMY_SP),
};
result.push_str(&format_function_type(
data.inputs.iter().map(|x| &**x),
&output,
false,
data.span,
context,
shape,
)?);
}
_ => (),
}
}
Some(result)
}
fn format_function_type<'a, I>(
inputs: I,
output: &FunctionRetTy,
variadic: bool,
span: Span,
context: &RewriteContext,
shape: Shape,
) -> Option<String>
where
I: ExactSizeIterator,
<I as Iterator>::Item: Deref,
<I::Item as Deref>::Target: Rewrite + Spanned + 'a,
{
// Code for handling variadics is somewhat duplicated for items, but they
// are different enough to need some serious refactoring to share code.
enum ArgumentKind<T>
where
T: Deref,
<T as Deref>::Target: Rewrite + Spanned,
{
Regular(T),
Variadic(BytePos),
}
let variadic_arg = if variadic {
let variadic_start = context.snippet_provider.span_before(span, "...");
Some(ArgumentKind::Variadic(variadic_start))
} else {
None
};
// 2 for ()
let budget = shape.width.checked_sub(2)?;
// 1 for (
let offset = match context.config.indent_style() {
IndentStyle::Block => {
shape
.block()
.block_indent(context.config.tab_spaces())
.indent
}
IndentStyle::Visual => shape.indent + 1,
};
let list_shape = Shape::legacy(budget, offset);
let list_lo = context.snippet_provider.span_after(span, "(");
let items = itemize_list(
context.snippet_provider,
inputs.map(ArgumentKind::Regular).chain(variadic_arg),
")",
",",
|arg| match *arg {
ArgumentKind::Regular(ref ty) => ty.span().lo(),
ArgumentKind::Variadic(start) => start,
},
|arg| match *arg {
ArgumentKind::Regular(ref ty) => ty.span().hi(),
ArgumentKind::Variadic(start) => start + BytePos(3),
},
|arg| match *arg {
ArgumentKind::Regular(ref ty) => ty.rewrite(context, list_shape),
ArgumentKind::Variadic(_) => Some("...".to_owned()),
},
list_lo,
span.hi(),
false,
);
let item_vec: Vec<_> = items.collect();
let tactic = definitive_tactic(
&*item_vec,
ListTactic::HorizontalVertical,
Separator::Comma,
budget,
);
let trailing_separator = if !context.use_block_indent() || variadic {
SeparatorTactic::Never
} else {
context.config.trailing_comma()
};
let fmt = ListFormatting::new(list_shape, context.config)
.tactic(tactic)
.trailing_separator(trailing_separator)
.ends_with_newline(tactic.ends_with_newline(context.config.indent_style()))
.preserve_newline(true);
let list_str = write_list(&item_vec, &fmt)?;
let ty_shape = match context.config.indent_style() {
// 4 = " -> "
IndentStyle::Block => shape.offset_left(4)?,
IndentStyle::Visual => shape.block_left(4)?,
};
let output = match *output {
FunctionRetTy::Ty(ref ty) => {
let type_str = ty.rewrite(context, ty_shape)?;
format!(" -> {}", type_str)
}
FunctionRetTy::Default(..) => String::new(),
};
let args = if (!list_str.contains('\n') || list_str.is_empty()) && !output.contains('\n')
|| !context.use_block_indent()
{
format!("({})", list_str)
} else {
format!(
"({}{}{})",
offset.to_string_with_newline(context.config),
list_str,
shape.block().indent.to_string_with_newline(context.config),
)
};
if last_line_width(&args) + first_line_width(&output) <= shape.width {
Some(format!("{}{}", args, output))
} else {
Some(format!(
"{}\n{}{}",
args,
offset.to_string(context.config),
output.trim_left()
))
}
}
fn type_bound_colon(context: &RewriteContext) -> &'static str {
colon_spaces(
context.config.space_before_colon(),
context.config.space_after_colon(),
)
}
impl Rewrite for ast::WherePredicate {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
// FIXME: dead spans?
let result = match *self {
ast::WherePredicate::BoundPredicate(ast::WhereBoundPredicate {
ref bound_generic_params,
ref bounded_ty,
ref bounds,
..
}) => {
let type_str = bounded_ty.rewrite(context, shape)?;
let colon = type_bound_colon(context).trim_right();
let lhs = if let Some(lifetime_str) =
rewrite_lifetime_param(context, shape, bound_generic_params)
{
format!("for<{}> {}{}", lifetime_str, type_str, colon)
} else {
format!("{}{}", type_str, colon)
};
rewrite_assign_rhs(context, lhs, bounds, shape)?
}
ast::WherePredicate::RegionPredicate(ast::WhereRegionPredicate {
ref lifetime,
ref bounds,
..
}) => rewrite_bounded_lifetime(lifetime, bounds, context, shape)?,
ast::WherePredicate::EqPredicate(ast::WhereEqPredicate {
ref lhs_ty,
ref rhs_ty,
..
}) => {
let lhs_ty_str = lhs_ty.rewrite(context, shape).map(|lhs| lhs + " =")?;
rewrite_assign_rhs(context, lhs_ty_str, &**rhs_ty, shape)?
}
};
Some(result)
}
}
impl Rewrite for ast::GenericArg {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
match *self {
ast::GenericArg::Lifetime(ref lt) => lt.rewrite(context, shape),
ast::GenericArg::Type(ref ty) => ty.rewrite(context, shape),
}
}
}
fn rewrite_bounded_lifetime(
lt: &ast::Lifetime,
bounds: &[ast::GenericBound],
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
let result = lt.rewrite(context, shape)?;
if bounds.is_empty() {
Some(result)
} else {
let colon = type_bound_colon(context);
let overhead = last_line_width(&result) + colon.len();
let result = format!(
"{}{}{}",
result,
colon,
join_bounds(context, shape.sub_width(overhead)?, bounds, true)?
);
Some(result)
}
}
impl Rewrite for ast::Lifetime {
fn rewrite(&self, context: &RewriteContext, _: Shape) -> Option<String> {
Some(rewrite_ident(context, self.ident).to_owned())
}
}
impl Rewrite for ast::GenericBound {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
match *self {
ast::GenericBound::Trait(ref poly_trait_ref, trait_bound_modifier) => {
match trait_bound_modifier {
ast::TraitBoundModifier::None => poly_trait_ref.rewrite(context, shape),
ast::TraitBoundModifier::Maybe => {
let rw = poly_trait_ref.rewrite(context, shape.offset_left(1)?)?;
Some(format!("?{}", rw))
}
}
}
ast::GenericBound::Outlives(ref lifetime) => lifetime.rewrite(context, shape),
}
}
}
impl Rewrite for ast::GenericBounds {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
if self.is_empty() {
return Some(String::new());
}
let span = mk_sp(self.get(0)?.span().lo(), self.last()?.span().hi());
let has_paren = context.snippet(span).starts_with("(");
let bounds_shape = if has_paren {
shape.offset_left(1)?.sub_width(1)?
} else {
shape
};
join_bounds(context, bounds_shape, self, true).map(|s| {
if has_paren {
format!("({})", s)
} else {
s
}
})
}
}
impl Rewrite for ast::GenericParam {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
let mut result = String::with_capacity(128);
        // FIXME: If there is more than one attribute, this will force multiline.
match self.attrs.rewrite(context, shape) {
Some(ref rw) if !rw.is_empty() => result.push_str(&format!("{} ", rw)),
_ => (),
}
result.push_str(rewrite_ident(context, self.ident));
if !self.bounds.is_empty() {
result.push_str(type_bound_colon(context));
result.push_str(&self.bounds.rewrite(context, shape)?)
}
if let ast::GenericParamKind::Type {
default: Some(ref def),
} = self.kind
{
let eq_str = match context.config.type_punctuation_density() {
TypeDensity::Compressed => "=",
TypeDensity::Wide => " = ",
};
result.push_str(eq_str);
let budget = shape.width.checked_sub(result.len())?;
let rewrite =
def.rewrite(context, Shape::legacy(budget, shape.indent + result.len()))?;
result.push_str(&rewrite);
}
Some(result)
}
}
impl Rewrite for ast::PolyTraitRef {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
if let Some(lifetime_str) =
rewrite_lifetime_param(context, shape, &self.bound_generic_params)
{
// 6 is "for<> ".len()
let extra_offset = lifetime_str.len() + 6;
let path_str = self
.trait_ref
.rewrite(context, shape.offset_left(extra_offset)?)?;
Some(format!("for<{}> {}", lifetime_str, path_str))
} else {
self.trait_ref.rewrite(context, shape)
}
}
}
impl Rewrite for ast::TraitRef {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
rewrite_path(context, PathContext::Type, None, &self.path, shape)
}
}
impl Rewrite for ast::Ty {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
match self.node {
ast::TyKind::TraitObject(ref bounds, tobj_syntax) => {
                // We have to check whether the `dyn` keyword is used.
let is_dyn = tobj_syntax == ast::TraitObjectSyntax::Dyn;
// 4 is length of 'dyn '
let shape = if is_dyn { shape.offset_left(4)? } else { shape };
let res = bounds.rewrite(context, shape)?;
if is_dyn {
Some(format!("dyn {}", res))
} else {
Some(res)
}
}
ast::TyKind::Ptr(ref mt) => {
let prefix = match mt.mutbl {
Mutability::Mutable => "*mut ",
Mutability::Immutable => "*const ",
};
rewrite_unary_prefix(context, prefix, &*mt.ty, shape)
}
ast::TyKind::Rptr(ref lifetime, ref mt) => {
let mut_str = format_mutability(mt.mutbl);
let mut_len = mut_str.len();
Some(match *lifetime {
Some(ref lifetime) => {
let lt_budget = shape.width.checked_sub(2 + mut_len)?;
let lt_str = lifetime.rewrite(
context,
Shape::legacy(lt_budget, shape.indent + 2 + mut_len),
)?;
let lt_len = lt_str.len();
let budget = shape.width.checked_sub(2 + mut_len + lt_len)?;
format!(
"&{} {}{}",
lt_str,
mut_str,
mt.ty.rewrite(
context,
Shape::legacy(budget, shape.indent + 2 + mut_len + lt_len)
)?
)
}
None => {
let budget = shape.width.checked_sub(1 + mut_len)?;
format!(
"&{}{}",
mut_str,
mt.ty.rewrite(
context,
Shape::legacy(budget, shape.indent + 1 + mut_len)
)?
)
}
})
}
// FIXME: we drop any comments here, even though it's a silly place to put
// comments.
ast::TyKind::Paren(ref ty) => {
let budget = shape.width.checked_sub(2)?;
ty.rewrite(context, Shape::legacy(budget, shape.indent + 1))
.map(|ty_str| format!("({})", ty_str))
}
ast::TyKind::Slice(ref ty) => {
let budget = shape.width.checked_sub(4)?;
ty.rewrite(context, Shape::legacy(budget, shape.indent + 1))
.map(|ty_str| format!("[{}]", ty_str))
}
ast::TyKind::Tup(ref items) => rewrite_tuple(
context,
&::utils::ptr_vec_to_ref_vec(items),
self.span,
shape,
),
ast::TyKind::Path(ref q_self, ref path) => {
rewrite_path(context, PathContext::Type, q_self.as_ref(), path, shape)
}
ast::TyKind::Array(ref ty, ref repeats) => rewrite_pair(
&**ty,
&*repeats.value,
PairParts::new("[", "; ", "]"),
context,
shape,
SeparatorPlace::Back,
),
ast::TyKind::Infer => {
if shape.width >= 1 {
Some("_".to_owned())
} else {
None
}
}
ast::TyKind::BareFn(ref bare_fn) => rewrite_bare_fn(bare_fn, self.span, context, shape),
ast::TyKind::Never => Some(String::from("!")),
ast::TyKind::Mac(ref mac) => {
rewrite_macro(mac, None, context, shape, MacroPosition::Expression)
}
ast::TyKind::ImplicitSelf => Some(String::from("")),
ast::TyKind::ImplTrait(_, ref it) => it
.rewrite(context, shape)
.map(|it_str| format!("impl {}", it_str)),
ast::TyKind::Err | ast::TyKind::Typeof(..) => unreachable!(),
}
}
}
fn rewrite_bare_fn(
bare_fn: &ast::BareFnTy,
span: Span,
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
let mut result = String::with_capacity(128);
if let Some(ref lifetime_str) = rewrite_lifetime_param(context, shape, &bare_fn.generic_params)
{
result.push_str("for<");
        // 6 = "for<> ".len(), 4 = "for<".len()
        // This doesn't work out so nicely for the multiline situation with lots
        // of rightward drift. If that is a problem, we could use the list stuff.
result.push_str(lifetime_str);
result.push_str("> ");
}
result.push_str(::utils::format_unsafety(bare_fn.unsafety));
result.push_str(&format_abi(
bare_fn.abi,
context.config.force_explicit_abi(),
false,
));
result.push_str("fn");
let func_ty_shape = shape.offset_left(result.len())?;
let rewrite = format_function_type(
bare_fn.decl.inputs.iter(),
&bare_fn.decl.output,
bare_fn.decl.variadic,
span,
context,
func_ty_shape,
)?;
result.push_str(&rewrite);
Some(result)
}
fn is_generic_bounds_in_order(generic_bounds: &[ast::GenericBound]) -> bool {
let is_trait = |b: &ast::GenericBound| match b {
ast::GenericBound::Outlives(..) => false,
ast::GenericBound::Trait(..) => true,
};
let is_lifetime = |b: &ast::GenericBound| !is_trait(b);
let last_trait_index = generic_bounds.iter().rposition(is_trait);
let first_lifetime_index = generic_bounds.iter().position(is_lifetime);
match (last_trait_index, first_lifetime_index) {
(Some(last_trait_index), Some(first_lifetime_index)) => {
last_trait_index < first_lifetime_index
}
_ => true,
}
}
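The trait-before-lifetime ordering check above can be exercised in isolation. Below is a minimal standalone sketch with a simplified `Bound` enum standing in for `ast::GenericBound` (the enum and function names are illustrative, not part of rustfmt):

```rust
// Simplified stand-in for ast::GenericBound: a bound is either a trait or a lifetime.
#[derive(Clone, Copy)]
enum Bound {
    Trait,
    Lifetime,
}

// Returns true when every trait bound precedes every lifetime bound,
// mirroring the rposition/position comparison in is_generic_bounds_in_order.
fn in_order(bounds: &[Bound]) -> bool {
    let is_trait = |b: &Bound| matches!(b, Bound::Trait);
    let is_lifetime = |b: &Bound| !is_trait(b);
    match (
        bounds.iter().rposition(is_trait),
        bounds.iter().position(is_lifetime),
    ) {
        (Some(last_trait), Some(first_lifetime)) => last_trait < first_lifetime,
        // With only traits or only lifetimes, nothing can be out of order.
        _ => true,
    }
}

fn main() {
    use Bound::*;
    assert!(in_order(&[Trait, Trait, Lifetime]));
    assert!(!in_order(&[Lifetime, Trait]));
    assert!(in_order(&[]));
}
```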
fn join_bounds(
context: &RewriteContext,
shape: Shape,
items: &[ast::GenericBound],
need_indent: bool,
) -> Option<String> {
// Try to join types in a single line
let joiner = match context.config.type_punctuation_density() {
TypeDensity::Compressed => "+",
TypeDensity::Wide => " + ",
};
let type_strs = items
.iter()
.map(|item| item.rewrite(context, shape))
.collect::<Option<Vec<_>>>()?;
let result = type_strs.join(joiner);
if items.len() <= 1 || (!result.contains('\n') && result.len() <= shape.width) {
return Some(result);
}
// We need to use multiple lines.
let (type_strs, offset) = if need_indent {
// Rewrite with additional indentation.
let nested_shape = shape.block_indent(context.config.tab_spaces());
let type_strs = items
.iter()
.map(|item| item.rewrite(context, nested_shape))
.collect::<Option<Vec<_>>>()?;
(type_strs, nested_shape.indent)
} else {
(type_strs, shape.indent)
};
let is_bound_extendable = |s: &str, b: &ast::GenericBound| match b {
ast::GenericBound::Outlives(..) => true,
ast::GenericBound::Trait(..) => last_line_extendable(s),
};
let mut result = String::with_capacity(128);
result.push_str(&type_strs[0]);
let mut can_be_put_on_the_same_line = is_bound_extendable(&result, &items[0]);
let generic_bounds_in_order = is_generic_bounds_in_order(items);
for (bound, bound_str) in items[1..].iter().zip(type_strs[1..].iter()) {
if generic_bounds_in_order && can_be_put_on_the_same_line {
result.push_str(joiner);
} else {
result.push_str(&offset.to_string_with_newline(context.config));
result.push_str("+ ");
}
result.push_str(bound_str);
can_be_put_on_the_same_line = is_bound_extendable(bound_str, bound);
}
Some(result)
}
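The single-line-then-wrap strategy of `join_bounds` can be sketched on plain strings. This is a hedged, simplified stand-in: the real function also respects `type_punctuation_density`, re-rewrites each bound at the nested indent, and checks bound extendability.

```rust
// Joins bound strings with " + " when the result fits in `width`; otherwise
// falls back to one bound per line, each prefixed with "+ " at `indent`,
// mirroring the multiline fallback in join_bounds.
fn join_or_wrap(bounds: &[&str], width: usize, indent: &str) -> String {
    let single = bounds.join(" + ");
    if bounds.len() <= 1 || single.len() <= width {
        return single;
    }
    let mut result = String::from(bounds[0]);
    for bound in &bounds[1..] {
        result.push('\n');
        result.push_str(indent);
        result.push_str("+ ");
        result.push_str(bound);
    }
    result
}

fn main() {
    assert_eq!(join_or_wrap(&["Clone", "Send"], 20, "    "), "Clone + Send");
    assert_eq!(join_or_wrap(&["Clone", "Send"], 5, "  "), "Clone\n  + Send");
}
```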
pub fn can_be_overflowed_type(context: &RewriteContext, ty: &ast::Ty, len: usize) -> bool {
match ty.node {
ast::TyKind::Tup(..) => context.use_block_indent() && len == 1,
ast::TyKind::Rptr(_, ref mutty) | ast::TyKind::Ptr(ref mutty) => {
can_be_overflowed_type(context, &*mutty.ty, len)
}
_ => false,
}
}
/// Returns `None` if there is no `LifetimeDef` in the given generic parameters.
fn rewrite_lifetime_param(
context: &RewriteContext,
shape: Shape,
generic_params: &[ast::GenericParam],
) -> Option<String> {
let result = generic_params
.iter()
.filter(|p| match p.kind {
ast::GenericParamKind::Lifetime => true,
_ => false,
}).map(|lt| lt.rewrite(context, shape))
.collect::<Option<Vec<_>>>()?
.join(", ");
if result.is_empty() {
None
} else {
Some(result)
}
}
// ===== rustfmt-nightly/src/pairs.rs =====
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use syntax::ast;
use config::lists::*;
use config::IndentStyle;
use rewrite::{Rewrite, RewriteContext};
use shape::Shape;
use utils::{first_line_width, is_single_line, last_line_width, trimmed_last_line_width, wrap_str};
/// Sigils that decorate a binop pair.
#[derive(new, Clone, Copy)]
pub(crate) struct PairParts<'a> {
prefix: &'a str,
infix: &'a str,
suffix: &'a str,
}
impl<'a> PairParts<'a> {
pub(crate) fn infix(infix: &'a str) -> PairParts<'a> {
PairParts {
prefix: "",
infix,
suffix: "",
}
}
}
// Flattens a tree of pairs into a list and tries to rewrite them all at once.
// FIXME would be nice to reuse the lists API for this, but because each separator
// can be different, we can't.
pub(crate) fn rewrite_all_pairs(
expr: &ast::Expr,
shape: Shape,
context: &RewriteContext,
) -> Option<String> {
// First we try formatting on one line.
if let Some(list) = expr.flatten(context, false) {
if let Some(r) = rewrite_pairs_one_line(&list, shape, context) {
return Some(r);
}
}
    // We can't format on one line, so try many. When we flatten here we make sure
// to only flatten pairs with the same operator, that way we don't
// necessarily need one line per sub-expression, but we don't do anything
// too funny wrt precedence.
expr.flatten(context, true)
.and_then(|list| rewrite_pairs_multiline(list, shape, context))
}
// This may return a multi-line result since we allow the last expression to go
// multiline in a 'single line' formatting.
fn rewrite_pairs_one_line<T: Rewrite>(
list: &PairList<T>,
shape: Shape,
context: &RewriteContext,
) -> Option<String> {
assert!(list.list.len() >= 2, "Not a pair?");
let mut result = String::new();
let base_shape = shape.block();
for (e, s) in list.list.iter().zip(list.separators.iter()) {
let cur_shape = base_shape.offset_left(last_line_width(&result))?;
let rewrite = e.rewrite(context, cur_shape)?;
if !is_single_line(&rewrite) || result.len() > shape.width {
return None;
}
result.push_str(&rewrite);
result.push(' ');
result.push_str(s);
result.push(' ');
}
let last = list.list.last().unwrap();
let cur_shape = base_shape.offset_left(last_line_width(&result))?;
let rewrite = last.rewrite(context, cur_shape)?;
result.push_str(&rewrite);
if first_line_width(&result) > shape.width {
return None;
}
// Check the last expression in the list. We let this expression go over
// multiple lines, but we check that if this is necessary, then we can't
// do better using multi-line formatting.
if !is_single_line(&result) {
let multiline_shape = shape.offset_left(list.separators.last().unwrap().len() + 1)?;
let multiline_list: PairList<T> = PairList {
list: vec![last],
separators: vec![],
separator_place: list.separator_place,
};
// Format as if we were multi-line.
if let Some(rewrite) = rewrite_pairs_multiline(multiline_list, multiline_shape, context) {
            // Also, don't let expressions surrounded by parens go multi-line;
            // that looks really bad.
if rewrite.starts_with('(') || is_single_line(&rewrite) {
return None;
}
}
}
wrap_str(result, context.config.max_width(), shape)
}
fn rewrite_pairs_multiline<T: Rewrite>(
list: PairList<T>,
shape: Shape,
context: &RewriteContext,
) -> Option<String> {
let rhs_offset = shape.rhs_overhead(&context.config);
let nested_shape = (match context.config.indent_style() {
IndentStyle::Visual => shape.visual_indent(0),
IndentStyle::Block => shape.block_indent(context.config.tab_spaces()),
}).with_max_width(&context.config)
.sub_width(rhs_offset)?;
let indent_str = nested_shape.indent.to_string_with_newline(context.config);
let mut result = String::new();
let rewrite = list.list[0].rewrite(context, shape)?;
result.push_str(&rewrite);
for (e, s) in list.list[1..].iter().zip(list.separators.iter()) {
// The following test checks if we should keep two subexprs on the same
// line. We do this if not doing so would create an orphan and there is
// enough space to do so.
let offset = if result.contains('\n') {
0
} else {
shape.used_width()
};
if last_line_width(&result) + offset <= nested_shape.used_width() {
// We must snuggle the next line onto the previous line to avoid an orphan.
if let Some(line_shape) =
shape.offset_left(s.len() + 2 + trimmed_last_line_width(&result))
{
if let Some(rewrite) = e.rewrite(context, line_shape) {
result.push(' ');
result.push_str(s);
result.push(' ');
result.push_str(&rewrite);
continue;
}
}
}
let nested_overhead = s.len() + 1;
let line_shape = match context.config.binop_separator() {
SeparatorPlace::Back => {
result.push(' ');
result.push_str(s);
result.push_str(&indent_str);
nested_shape.sub_width(nested_overhead)?
}
SeparatorPlace::Front => {
result.push_str(&indent_str);
result.push_str(s);
result.push(' ');
nested_shape.offset_left(nested_overhead)?
}
};
let rewrite = e.rewrite(context, line_shape)?;
result.push_str(&rewrite);
}
Some(result)
}
// Rewrites a single pair.
pub(crate) fn rewrite_pair<LHS, RHS>(
lhs: &LHS,
rhs: &RHS,
pp: PairParts,
context: &RewriteContext,
shape: Shape,
separator_place: SeparatorPlace,
) -> Option<String>
where
LHS: Rewrite,
RHS: Rewrite,
{
let tab_spaces = context.config.tab_spaces();
let lhs_overhead = match separator_place {
SeparatorPlace::Back => shape.used_width() + pp.prefix.len() + pp.infix.trim_right().len(),
SeparatorPlace::Front => shape.used_width(),
};
let lhs_shape = Shape {
width: context.budget(lhs_overhead),
..shape
};
let lhs_result = lhs
.rewrite(context, lhs_shape)
.map(|lhs_str| format!("{}{}", pp.prefix, lhs_str))?;
// Try to put both lhs and rhs on the same line.
let rhs_orig_result = shape
.offset_left(last_line_width(&lhs_result) + pp.infix.len())
.and_then(|s| s.sub_width(pp.suffix.len()))
.and_then(|rhs_shape| rhs.rewrite(context, rhs_shape));
if let Some(ref rhs_result) = rhs_orig_result {
        // If the length of the lhs is equal to or shorter than the tab width or
        // the rhs looks like a block expression, we put the rhs on the same
        // line as the lhs even if the rhs spans multiple lines.
let allow_same_line = lhs_result.len() <= tab_spaces || rhs_result
.lines()
.next()
.map(|first_line| first_line.ends_with('{'))
.unwrap_or(false);
if !rhs_result.contains('\n') || allow_same_line {
let one_line_width = last_line_width(&lhs_result)
+ pp.infix.len()
+ first_line_width(rhs_result)
+ pp.suffix.len();
if one_line_width <= shape.width {
return Some(format!(
"{}{}{}{}",
lhs_result, pp.infix, rhs_result, pp.suffix
));
}
}
}
// We have to use multiple lines.
// Re-evaluate the rhs because we have more space now:
let mut rhs_shape = match context.config.indent_style() {
IndentStyle::Visual => shape
.sub_width(pp.suffix.len() + pp.prefix.len())?
.visual_indent(pp.prefix.len()),
IndentStyle::Block => {
// Try to calculate the initial constraint on the right hand side.
let rhs_overhead = shape.rhs_overhead(context.config);
Shape::indented(shape.indent.block_indent(context.config), context.config)
.sub_width(rhs_overhead)?
}
};
let infix = match separator_place {
SeparatorPlace::Back => pp.infix.trim_right(),
SeparatorPlace::Front => pp.infix.trim_left(),
};
if separator_place == SeparatorPlace::Front {
rhs_shape = rhs_shape.offset_left(infix.len())?;
}
let rhs_result = rhs.rewrite(context, rhs_shape)?;
let indent_str = rhs_shape.indent.to_string_with_newline(context.config);
let infix_with_sep = match separator_place {
SeparatorPlace::Back => format!("{}{}", infix, indent_str),
SeparatorPlace::Front => format!("{}{}", indent_str, infix),
};
Some(format!(
"{}{}{}{}",
lhs_result, infix_with_sep, rhs_result, pp.suffix
))
}
// A pair which forms a tree and can be flattened (e.g., binops).
trait FlattenPair: Rewrite + Sized {
    // If `_same_op` is `true`, then we only combine binops with the same
    // operator into the list. E.g., if the source is `a * b + c` and `_same_op`
    // is true, we make `[(a * b), c]`; if `_same_op` is false, we make
    // `[a, b, c]`.
fn flatten(&self, _context: &RewriteContext, _same_op: bool) -> Option<PairList<Self>> {
None
}
}
struct PairList<'a, 'b, T: Rewrite + 'b> {
list: Vec<&'b T>,
separators: Vec<&'a str>,
separator_place: SeparatorPlace,
}
impl FlattenPair for ast::Expr {
fn flatten(&self, context: &RewriteContext, same_op: bool) -> Option<PairList<ast::Expr>> {
let top_op = match self.node {
ast::ExprKind::Binary(op, _, _) => op.node,
_ => return None,
};
// Turn a tree of binop expressions into a list using a depth-first,
// in-order traversal.
let mut stack = vec![];
let mut list = vec![];
let mut separators = vec![];
let mut node = self;
loop {
match node.node {
ast::ExprKind::Binary(op, ref lhs, _) if !same_op || op.node == top_op => {
stack.push(node);
node = lhs;
}
_ => {
list.push(node);
if let Some(pop) = stack.pop() {
match pop.node {
ast::ExprKind::Binary(op, _, ref rhs) => {
separators.push(op.node.to_string());
node = rhs;
}
_ => unreachable!(),
}
} else {
break;
}
}
}
}
assert_eq!(list.len() - 1, separators.len());
Some(PairList {
list,
separators,
separator_place: context.config.binop_separator(),
})
}
}
impl FlattenPair for ast::Ty {}
impl FlattenPair for ast::Pat {}
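The explicit-stack, in-order traversal used by `FlattenPair::flatten` can be demonstrated on a toy expression tree. This is a minimal sketch; the `Expr` enum here is illustrative, not the real `ast::Expr`, and it skips the same-operator check.

```rust
// Minimal stand-in for a binary-expression AST node.
enum Expr {
    Lit(i32),
    Binary(char, Box<Expr>, Box<Expr>),
}

// Depth-first, in-order flattening with an explicit stack, mirroring the
// loop in FlattenPair::flatten: descend left edges, then pop and emit
// operators while walking right subtrees.
fn flatten(expr: &Expr) -> (Vec<i32>, Vec<char>) {
    let mut stack: Vec<&Expr> = vec![];
    let mut list = vec![];
    let mut seps = vec![];
    let mut node = expr;
    loop {
        match node {
            Expr::Binary(_, lhs, _) => {
                stack.push(node);
                node = &**lhs;
            }
            Expr::Lit(n) => {
                list.push(*n);
                match stack.pop() {
                    Some(Expr::Binary(op, _, rhs)) => {
                        seps.push(*op);
                        node = &**rhs;
                    }
                    Some(Expr::Lit(_)) => unreachable!(),
                    None => break,
                }
            }
        }
    }
    (list, seps)
}

fn main() {
    // (1 + 2) + 3
    let e = Expr::Binary(
        '+',
        Box::new(Expr::Binary('+', Box::new(Expr::Lit(1)), Box::new(Expr::Lit(2)))),
        Box::new(Expr::Lit(3)),
    );
    let (list, seps) = flatten(&e);
    assert_eq!(list, vec![1, 2, 3]);
    assert_eq!(seps, vec!['+', '+']);
}
```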
// ===== rustfmt-nightly/src/source_map.rs =====
// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! This module contains utilities that work with the `SourceMap` from `libsyntax`/`syntex_syntax`.
//! This includes extension traits and methods for looking up spans and line ranges for AST nodes.
use config::file_lines::LineRange;
use syntax::source_map::{BytePos, SourceMap, Span};
use visitor::SnippetProvider;
use comment::FindUncommented;
pub trait SpanUtils {
fn span_after(&self, original: Span, needle: &str) -> BytePos;
fn span_after_last(&self, original: Span, needle: &str) -> BytePos;
fn span_before(&self, original: Span, needle: &str) -> BytePos;
fn opt_span_after(&self, original: Span, needle: &str) -> Option<BytePos>;
fn opt_span_before(&self, original: Span, needle: &str) -> Option<BytePos>;
}
pub trait LineRangeUtils {
/// Returns the `LineRange` that corresponds to `span` in `self`.
///
/// # Panics
///
/// Panics if `span` crosses a file boundary, which shouldn't happen.
fn lookup_line_range(&self, span: Span) -> LineRange;
}
impl<'a> SpanUtils for SnippetProvider<'a> {
fn span_after(&self, original: Span, needle: &str) -> BytePos {
self.opt_span_after(original, needle).expect("bad span")
}
fn span_after_last(&self, original: Span, needle: &str) -> BytePos {
let snippet = self.span_to_snippet(original).unwrap();
let mut offset = 0;
while let Some(additional_offset) = snippet[offset..].find_uncommented(needle) {
offset += additional_offset + needle.len();
}
original.lo() + BytePos(offset as u32)
}
fn span_before(&self, original: Span, needle: &str) -> BytePos {
self.opt_span_before(original, needle).expect(&format!(
"bad span: {}: {}",
needle,
self.span_to_snippet(original).unwrap()
))
}
fn opt_span_after(&self, original: Span, needle: &str) -> Option<BytePos> {
self.opt_span_before(original, needle)
.map(|bytepos| bytepos + BytePos(needle.len() as u32))
}
fn opt_span_before(&self, original: Span, needle: &str) -> Option<BytePos> {
let snippet = self.span_to_snippet(original)?;
let offset = snippet.find_uncommented(needle)?;
Some(original.lo() + BytePos(offset as u32))
}
}
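The offset scan in `span_after_last` can be sketched on a plain string. This is a simplified stand-in: the real method uses `find_uncommented` so that matches inside comments are skipped, and it adds the offset to the span's `lo()` position.

```rust
// Returns the byte offset just past the last occurrence of `needle` in
// `snippet` (0 when there is no occurrence), mirroring the repeated-find
// loop in span_after_last.
fn offset_after_last(snippet: &str, needle: &str) -> usize {
    let mut offset = 0;
    while let Some(found) = snippet[offset..].find(needle) {
        offset += found + needle.len();
    }
    offset
}

fn main() {
    // "a::b::c": the last "::" ends at byte offset 6.
    assert_eq!(offset_after_last("a::b::c", "::"), 6);
    assert_eq!(offset_after_last("abc", "::"), 0);
}
```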
impl LineRangeUtils for SourceMap {
fn lookup_line_range(&self, span: Span) -> LineRange {
let lo = self.lookup_line(span.lo()).unwrap();
let hi = self.lookup_line(span.hi()).unwrap();
debug_assert_eq!(
lo.fm.name, hi.fm.name,
"span crossed file boundary: lo: {:?}, hi: {:?}",
lo, hi
);
// Line numbers start at 1
LineRange {
file: lo.fm.clone(),
lo: lo.line + 1,
hi: hi.line + 1,
}
}
}
// ===== rustfmt-nightly/src/patterns.rs =====
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use config::lists::*;
use syntax::ast::{self, BindingMode, FieldPat, Pat, PatKind, RangeEnd, RangeSyntax};
use syntax::ptr;
use syntax::source_map::{self, BytePos, Span};
use comment::FindUncommented;
use expr::{can_be_overflowed_expr, rewrite_unary_prefix, wrap_struct_field};
use lists::{
itemize_list, shape_for_tactic, struct_lit_formatting, struct_lit_shape, struct_lit_tactic,
write_list,
};
use macros::{rewrite_macro, MacroPosition};
use overflow;
use pairs::{rewrite_pair, PairParts};
use rewrite::{Rewrite, RewriteContext};
use shape::Shape;
use source_map::SpanUtils;
use spanned::Spanned;
use types::{rewrite_path, PathContext};
use utils::{format_mutability, mk_sp, rewrite_ident};
/// Returns true if the given pattern is short. A short pattern is defined by the following grammar:
///
/// [small, ntp]:
/// - single token
/// - `&[single-line, ntp]`
///
/// [small]:
/// - `[small, ntp]`
/// - unary tuple constructor `([small, ntp])`
/// - `&[small]`
pub fn is_short_pattern(pat: &ast::Pat, pat_str: &str) -> bool {
// We also require that the pattern is reasonably 'small' with its literal width.
pat_str.len() <= 20 && !pat_str.contains('\n') && is_short_pattern_inner(pat)
}
fn is_short_pattern_inner(pat: &ast::Pat) -> bool {
match pat.node {
ast::PatKind::Wild | ast::PatKind::Lit(_) => true,
ast::PatKind::Ident(_, _, ref pat) => pat.is_none(),
ast::PatKind::Struct(..)
| ast::PatKind::Mac(..)
| ast::PatKind::Slice(..)
| ast::PatKind::Path(..)
| ast::PatKind::Range(..) => false,
ast::PatKind::Tuple(ref subpats, _) => subpats.len() <= 1,
ast::PatKind::TupleStruct(ref path, ref subpats, _) => {
path.segments.len() <= 1 && subpats.len() <= 1
}
ast::PatKind::Box(ref p) | ast::PatKind::Ref(ref p, _) | ast::PatKind::Paren(ref p) => {
is_short_pattern_inner(&*p)
}
}
}
impl Rewrite for Pat {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
match self.node {
PatKind::Box(ref pat) => rewrite_unary_prefix(context, "box ", &**pat, shape),
PatKind::Ident(binding_mode, ident, ref sub_pat) => {
let (prefix, mutability) = match binding_mode {
BindingMode::ByRef(mutability) => ("ref ", mutability),
BindingMode::ByValue(mutability) => ("", mutability),
};
let mut_infix = format_mutability(mutability);
let id_str = rewrite_ident(context, ident);
let sub_pat = match *sub_pat {
Some(ref p) => {
                    // 3 = ` @ `.len()
let width = shape
.width
.checked_sub(prefix.len() + mut_infix.len() + id_str.len() + 3)?;
format!(
" @ {}",
p.rewrite(context, Shape::legacy(width, shape.indent))?
)
}
None => "".to_owned(),
};
Some(format!("{}{}{}{}", prefix, mut_infix, id_str, sub_pat))
}
PatKind::Wild => {
if 1 <= shape.width {
Some("_".to_owned())
} else {
None
}
}
PatKind::Range(ref lhs, ref rhs, ref end_kind) => {
let infix = match end_kind.node {
RangeEnd::Included(RangeSyntax::DotDotDot) => "...",
RangeEnd::Included(RangeSyntax::DotDotEq) => "..=",
RangeEnd::Excluded => "..",
};
let infix = if context.config.spaces_around_ranges() {
format!(" {} ", infix)
} else {
infix.to_owned()
};
rewrite_pair(
&**lhs,
&**rhs,
PairParts::infix(&infix),
context,
shape,
SeparatorPlace::Front,
)
}
PatKind::Ref(ref pat, mutability) => {
let prefix = format!("&{}", format_mutability(mutability));
rewrite_unary_prefix(context, &prefix, &**pat, shape)
}
PatKind::Tuple(ref items, dotdot_pos) => {
rewrite_tuple_pat(items, dotdot_pos, None, self.span, context, shape)
}
PatKind::Path(ref q_self, ref path) => {
rewrite_path(context, PathContext::Expr, q_self.as_ref(), path, shape)
}
PatKind::TupleStruct(ref path, ref pat_vec, dotdot_pos) => {
let path_str = rewrite_path(context, PathContext::Expr, None, path, shape)?;
rewrite_tuple_pat(
pat_vec,
dotdot_pos,
Some(path_str),
self.span,
context,
shape,
)
}
PatKind::Lit(ref expr) => expr.rewrite(context, shape),
PatKind::Slice(ref prefix, ref slice_pat, ref suffix) => {
// Rewrite all the sub-patterns.
let prefix = prefix.iter().map(|p| p.rewrite(context, shape));
let slice_pat = slice_pat
.as_ref()
.and_then(|p| p.rewrite(context, shape))
.map(|rw| Some(format!("{}..", if rw == "_" { "" } else { &rw })));
let suffix = suffix.iter().map(|p| p.rewrite(context, shape));
// Munge them together.
let pats: Option<Vec<String>> =
prefix.chain(slice_pat.into_iter()).chain(suffix).collect();
// Check that all the rewrites succeeded, and if not return None.
let pats = pats?;
// Unwrap all the sub-strings and join them with commas.
Some(format!("[{}]", pats.join(", ")))
}
PatKind::Struct(ref path, ref fields, ellipsis) => {
rewrite_struct_pat(path, fields, ellipsis, self.span, context, shape)
}
PatKind::Mac(ref mac) => rewrite_macro(mac, None, context, shape, MacroPosition::Pat),
PatKind::Paren(ref pat) => pat
.rewrite(context, shape.offset_left(1)?.sub_width(1)?)
.map(|inner_pat| format!("({})", inner_pat)),
}
}
}
fn rewrite_struct_pat(
path: &ast::Path,
fields: &[source_map::Spanned<ast::FieldPat>],
ellipsis: bool,
span: Span,
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
// 2 = ` {`
let path_shape = shape.sub_width(2)?;
let path_str = rewrite_path(context, PathContext::Expr, None, path, path_shape)?;
if fields.is_empty() && !ellipsis {
return Some(format!("{} {{}}", path_str));
}
let (ellipsis_str, terminator) = if ellipsis { (", ..", "..") } else { ("", "}") };
// 3 = ` { `, 2 = ` }`.
let (h_shape, v_shape) =
struct_lit_shape(shape, context, path_str.len() + 3, ellipsis_str.len() + 2)?;
let items = itemize_list(
context.snippet_provider,
fields.iter(),
terminator,
",",
|f| f.span.lo(),
|f| f.span.hi(),
|f| f.node.rewrite(context, v_shape),
context.snippet_provider.span_after(span, "{"),
span.hi(),
false,
);
let item_vec = items.collect::<Vec<_>>();
let tactic = struct_lit_tactic(h_shape, context, &item_vec);
let nested_shape = shape_for_tactic(tactic, h_shape, v_shape);
let fmt = struct_lit_formatting(nested_shape, tactic, context, false);
let mut fields_str = write_list(&item_vec, &fmt)?;
let one_line_width = h_shape.map_or(0, |shape| shape.width);
if ellipsis {
if fields_str.contains('\n') || fields_str.len() > one_line_width {
// Add a missing trailing comma.
if context.config.trailing_comma() == SeparatorTactic::Never {
fields_str.push_str(",");
}
fields_str.push_str("\n");
fields_str.push_str(&nested_shape.indent.to_string(context.config));
fields_str.push_str("..");
} else {
if !fields_str.is_empty() {
// there are preceding struct fields being matched on
if tactic == DefinitiveListTactic::Vertical {
// if the tactic is Vertical, write_list already added a trailing ,
fields_str.push_str(" ");
} else {
fields_str.push_str(", ");
}
}
fields_str.push_str("..");
}
}
let fields_str = wrap_struct_field(context, &fields_str, shape, v_shape, one_line_width);
Some(format!("{} {{{}}}", path_str, fields_str))
}
impl Rewrite for FieldPat {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
let pat = self.pat.rewrite(context, shape);
if self.is_shorthand {
pat
} else {
let pat_str = pat?;
let id_str = rewrite_ident(context, self.ident);
let one_line_width = id_str.len() + 2 + pat_str.len();
if one_line_width <= shape.width {
Some(format!("{}: {}", id_str, pat_str))
} else {
let nested_shape = shape.block_indent(context.config.tab_spaces());
let pat_str = self.pat.rewrite(context, nested_shape)?;
Some(format!(
"{}:\n{}{}",
id_str,
nested_shape.indent.to_string(context.config),
pat_str,
))
}
}
}
}
pub enum TuplePatField<'a> {
Pat(&'a ptr::P<ast::Pat>),
Dotdot(Span),
}
impl<'a> Rewrite for TuplePatField<'a> {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
match *self {
TuplePatField::Pat(p) => p.rewrite(context, shape),
TuplePatField::Dotdot(_) => Some("..".to_string()),
}
}
}
impl<'a> Spanned for TuplePatField<'a> {
fn span(&self) -> Span {
match *self {
TuplePatField::Pat(p) => p.span(),
TuplePatField::Dotdot(span) => span,
}
}
}
pub fn can_be_overflowed_pat(context: &RewriteContext, pat: &TuplePatField, len: usize) -> bool {
match *pat {
TuplePatField::Pat(pat) => match pat.node {
ast::PatKind::Path(..)
| ast::PatKind::Tuple(..)
| ast::PatKind::Struct(..)
| ast::PatKind::TupleStruct(..) => context.use_block_indent() && len == 1,
ast::PatKind::Ref(ref p, _) | ast::PatKind::Box(ref p) => {
can_be_overflowed_pat(context, &TuplePatField::Pat(p), len)
}
ast::PatKind::Lit(ref expr) => can_be_overflowed_expr(context, expr, len),
_ => false,
},
TuplePatField::Dotdot(..) => false,
}
}
fn rewrite_tuple_pat(
pats: &[ptr::P<ast::Pat>],
dotdot_pos: Option<usize>,
path_str: Option<String>,
span: Span,
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
let mut pat_vec: Vec<_> = pats.into_iter().map(|x| TuplePatField::Pat(x)).collect();
if let Some(pos) = dotdot_pos {
let prev = if pos == 0 {
span.lo()
} else {
pats[pos - 1].span().hi()
};
let next = if pos + 1 >= pats.len() {
span.hi()
} else {
pats[pos + 1].span().lo()
};
let dot_span = mk_sp(prev, next);
let snippet = context.snippet(dot_span);
let lo = dot_span.lo() + BytePos(snippet.find_uncommented("..").unwrap() as u32);
let dotdot = TuplePatField::Dotdot(Span::new(
lo,
// 2 == "..".len()
lo + BytePos(2),
source_map::NO_EXPANSION,
));
pat_vec.insert(pos, dotdot);
}
if pat_vec.is_empty() {
return Some(format!("{}()", path_str.unwrap_or_default()));
}
let wildcard_suffix_len = count_wildcard_suffix_len(context, &pat_vec, span, shape);
let (pat_vec, span) = if context.config.condense_wildcard_suffixes() && wildcard_suffix_len >= 2
{
let new_item_count = 1 + pat_vec.len() - wildcard_suffix_len;
let sp = pat_vec[new_item_count - 1].span();
let snippet = context.snippet(sp);
let lo = sp.lo() + BytePos(snippet.find_uncommented("_").unwrap() as u32);
pat_vec[new_item_count - 1] = TuplePatField::Dotdot(mk_sp(lo, lo + BytePos(1)));
(
&pat_vec[..new_item_count],
mk_sp(span.lo(), lo + BytePos(1)),
)
} else {
(&pat_vec[..], span)
};
// add comma if `(x,)`
let add_comma = path_str.is_none() && pat_vec.len() == 1 && dotdot_pos.is_none();
let path_str = path_str.unwrap_or_default();
let pat_ref_vec = pat_vec.iter().collect::<Vec<_>>();
overflow::rewrite_with_parens(
&context,
&path_str,
&pat_ref_vec,
shape,
span,
context.config.max_width(),
if dotdot_pos.is_some() {
Some(SeparatorTactic::Never)
} else if add_comma {
Some(SeparatorTactic::Always)
} else {
None
},
)
}
fn count_wildcard_suffix_len(
context: &RewriteContext,
patterns: &[TuplePatField],
span: Span,
shape: Shape,
) -> usize {
let mut suffix_len = 0;
let items: Vec<_> = itemize_list(
context.snippet_provider,
patterns.iter(),
")",
",",
|item| item.span().lo(),
|item| item.span().hi(),
|item| item.rewrite(context, shape),
context.snippet_provider.span_after(span, "("),
span.hi() - BytePos(1),
false,
).collect();
for item in items.iter().rev().take_while(|i| match i.item {
Some(ref internal_string) if internal_string == "_" => true,
_ => false,
}) {
suffix_len += 1;
if item.has_comment() {
break;
}
}
suffix_len
}
// ---- solana-playground/wasm/rustfmt/rustfmt-nightly/src/comment.rs ----
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Formatting and tools for comments.
use std::{self, borrow::Cow, iter};
use itertools::{multipeek, MultiPeek};
use syntax::source_map::Span;
use config::Config;
use rewrite::RewriteContext;
use shape::{Indent, Shape};
use string::{rewrite_string, StringFormat};
use utils::{count_newlines, first_line_width, last_line_width};
use {ErrorKind, FormattingError};
fn is_custom_comment(comment: &str) -> bool {
if !comment.starts_with("//") {
false
} else if let Some(c) = comment.chars().nth(2) {
!c.is_alphanumeric() && !c.is_whitespace()
} else {
false
}
}
#[derive(Copy, Clone, PartialEq, Eq)]
pub enum CommentStyle<'a> {
DoubleSlash,
TripleSlash,
Doc,
SingleBullet,
DoubleBullet,
Exclamation,
Custom(&'a str),
}
fn custom_opener(s: &str) -> &str {
s.lines().next().map_or("", |first_line| {
first_line
.find(' ')
.map_or(first_line, |space_index| &first_line[0..space_index + 1])
})
}
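The two helpers above work together: `is_custom_comment` recognizes a `//` opener whose third character is punctuation, and `custom_opener` extracts that opener (up to and including the first space of the first line). A minimal standalone sketch, reproducing both functions outside the crate so the behavior can be checked directly:

```rust
// Standalone copies of `is_custom_comment` and `custom_opener` for illustration.
fn is_custom_comment(comment: &str) -> bool {
    if !comment.starts_with("//") {
        false
    } else if let Some(c) = comment.chars().nth(2) {
        // The third character must be punctuation: not alphanumeric, not whitespace.
        !c.is_alphanumeric() && !c.is_whitespace()
    } else {
        false
    }
}

fn custom_opener(s: &str) -> &str {
    s.lines().next().map_or("", |first_line| {
        first_line
            .find(' ')
            .map_or(first_line, |space_index| &first_line[0..space_index + 1])
    })
}

fn main() {
    assert!(is_custom_comment("//@ custom opener"));
    assert!(!is_custom_comment("// normal comment"));
    assert!(!is_custom_comment("//")); // no third character
    assert_eq!(custom_opener("//@ foo\nbar"), "//@ ");
}
```

Note that the opener keeps the trailing space, which is why `line_start()` can return it unchanged for `CommentStyle::Custom`.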
impl<'a> CommentStyle<'a> {
pub fn is_doc_comment(&self) -> bool {
match *self {
CommentStyle::TripleSlash | CommentStyle::Doc => true,
_ => false,
}
}
pub fn opener(&self) -> &'a str {
match *self {
CommentStyle::DoubleSlash => "// ",
CommentStyle::TripleSlash => "/// ",
CommentStyle::Doc => "//! ",
CommentStyle::SingleBullet => "/* ",
CommentStyle::DoubleBullet => "/** ",
CommentStyle::Exclamation => "/*! ",
CommentStyle::Custom(opener) => opener,
}
}
pub fn closer(&self) -> &'a str {
match *self {
CommentStyle::DoubleSlash
| CommentStyle::TripleSlash
| CommentStyle::Custom(..)
| CommentStyle::Doc => "",
CommentStyle::DoubleBullet => " **/",
CommentStyle::SingleBullet | CommentStyle::Exclamation => " */",
}
}
pub fn line_start(&self) -> &'a str {
match *self {
CommentStyle::DoubleSlash => "// ",
CommentStyle::TripleSlash => "/// ",
CommentStyle::Doc => "//! ",
CommentStyle::SingleBullet | CommentStyle::Exclamation => " * ",
CommentStyle::DoubleBullet => " ** ",
CommentStyle::Custom(opener) => opener,
}
}
pub fn to_str_tuplet(&self) -> (&'a str, &'a str, &'a str) {
(self.opener(), self.closer(), self.line_start())
}
}
fn comment_style(orig: &str, normalize_comments: bool) -> CommentStyle {
if !normalize_comments {
if orig.starts_with("/**") && !orig.starts_with("/**/") {
CommentStyle::DoubleBullet
} else if orig.starts_with("/*!") {
CommentStyle::Exclamation
} else if orig.starts_with("/*") {
CommentStyle::SingleBullet
} else if orig.starts_with("///") && orig.chars().nth(3).map_or(true, |c| c != '/') {
CommentStyle::TripleSlash
} else if orig.starts_with("//!") {
CommentStyle::Doc
} else if is_custom_comment(orig) {
CommentStyle::Custom(custom_opener(orig))
} else {
CommentStyle::DoubleSlash
}
} else if (orig.starts_with("///") && orig.chars().nth(3).map_or(true, |c| c != '/'))
|| (orig.starts_with("/**") && !orig.starts_with("/**/"))
{
CommentStyle::TripleSlash
} else if orig.starts_with("//!") || orig.starts_with("/*!") {
CommentStyle::Doc
} else if is_custom_comment(orig) {
CommentStyle::Custom(custom_opener(orig))
} else {
CommentStyle::DoubleSlash
}
}
/// Combine `prev_str` and `next_str` into a single `String`. `span` may contain
/// comments between two strings. If there are such comments, then that will be
/// recovered. If `allow_extend` is true and there is no comment between the two
/// strings, then they will be put on a single line as long as doing so does not
/// exceed max width.
pub fn combine_strs_with_missing_comments(
context: &RewriteContext,
prev_str: &str,
next_str: &str,
span: Span,
shape: Shape,
allow_extend: bool,
) -> Option<String> {
let mut result =
String::with_capacity(prev_str.len() + next_str.len() + shape.indent.width() + 128);
result.push_str(prev_str);
let mut allow_one_line = !prev_str.contains('\n') && !next_str.contains('\n');
let first_sep = if prev_str.is_empty() || next_str.is_empty() {
""
} else {
" "
};
let mut one_line_width =
last_line_width(prev_str) + first_line_width(next_str) + first_sep.len();
let config = context.config;
let indent = shape.indent;
let missing_comment = rewrite_missing_comment(span, shape, context)?;
if missing_comment.is_empty() {
if allow_extend && prev_str.len() + first_sep.len() + next_str.len() <= shape.width {
result.push_str(first_sep);
} else if !prev_str.is_empty() {
result.push_str(&indent.to_string_with_newline(config))
}
result.push_str(next_str);
return Some(result);
}
// We have a missing comment between the first expression and the second expression.
// Peek the original source code and find out whether there is a newline between the first
// expression and the second expression or the missing comment. We will preserve the original
// layout whenever possible.
let original_snippet = context.snippet(span);
let prefer_same_line = if let Some(pos) = original_snippet.find('/') {
!original_snippet[..pos].contains('\n')
} else {
!original_snippet.contains('\n')
};
one_line_width -= first_sep.len();
let first_sep = if prev_str.is_empty() || missing_comment.is_empty() {
Cow::from("")
} else {
let one_line_width = last_line_width(prev_str) + first_line_width(&missing_comment) + 1;
if prefer_same_line && one_line_width <= shape.width {
Cow::from(" ")
} else {
indent.to_string_with_newline(config)
}
};
result.push_str(&first_sep);
result.push_str(&missing_comment);
let second_sep = if missing_comment.is_empty() || next_str.is_empty() {
Cow::from("")
} else if missing_comment.starts_with("//") {
indent.to_string_with_newline(config)
} else {
one_line_width += missing_comment.len() + first_sep.len() + 1;
allow_one_line &= !missing_comment.starts_with("//") && !missing_comment.contains('\n');
if prefer_same_line && allow_one_line && one_line_width <= shape.width {
Cow::from(" ")
} else {
indent.to_string_with_newline(config)
}
};
result.push_str(&second_sep);
result.push_str(next_str);
Some(result)
}
pub fn rewrite_doc_comment(orig: &str, shape: Shape, config: &Config) -> Option<String> {
_rewrite_comment(orig, false, shape, config, true)
}
pub fn rewrite_comment(
orig: &str,
block_style: bool,
shape: Shape,
config: &Config,
) -> Option<String> {
_rewrite_comment(orig, block_style, shape, config, false)
}
fn _rewrite_comment(
orig: &str,
block_style: bool,
shape: Shape,
config: &Config,
is_doc_comment: bool,
) -> Option<String> {
// If there are lines without a starting sigil, we won't format them correctly
// so in that case we won't even re-align (if !config.normalize_comments()) and
// we should stop now.
let num_bare_lines = orig
.lines()
.map(|line| line.trim())
.filter(|l| !(l.starts_with('*') || l.starts_with("//") || l.starts_with("/*")))
.count();
if num_bare_lines > 0 && !config.normalize_comments() {
return Some(orig.to_owned());
}
if !config.normalize_comments() && !config.wrap_comments() {
return light_rewrite_comment(orig, shape.indent, config, is_doc_comment);
}
identify_comment(orig, block_style, shape, config, is_doc_comment)
}
fn identify_comment(
orig: &str,
block_style: bool,
shape: Shape,
config: &Config,
is_doc_comment: bool,
) -> Option<String> {
let style = comment_style(orig, false);
let mut first_group_ending = 0;
fn compute_len(orig: &str, line: &str) -> usize {
if orig.len() > line.len() {
if orig.as_bytes()[line.len()] == b'\r' {
line.len() + 2
} else {
line.len() + 1
}
} else {
line.len()
}
}
match style {
CommentStyle::DoubleSlash | CommentStyle::TripleSlash | CommentStyle::Doc => {
let line_start = style.line_start().trim_left();
for line in orig.lines() {
if line.trim_left().starts_with(line_start) || comment_style(line, false) == style {
first_group_ending += compute_len(&orig[first_group_ending..], line);
} else {
break;
}
}
}
CommentStyle::Custom(opener) => {
let trimmed_opener = opener.trim_right();
for line in orig.lines() {
if line.trim_left().starts_with(trimmed_opener) {
first_group_ending += compute_len(&orig[first_group_ending..], line);
} else {
break;
}
}
}
// for a block comment, search for the closing symbol
CommentStyle::DoubleBullet | CommentStyle::SingleBullet | CommentStyle::Exclamation => {
let closer = style.closer().trim_left();
for line in orig.lines() {
first_group_ending += compute_len(&orig[first_group_ending..], line);
if line.trim_left().ends_with(closer) {
break;
}
}
}
}
let (first_group, rest) = orig.split_at(first_group_ending);
let first_group_str = rewrite_comment_inner(
first_group,
block_style,
style,
shape,
config,
is_doc_comment || style.is_doc_comment(),
)?;
if rest.is_empty() {
Some(first_group_str)
} else {
identify_comment(rest, block_style, shape, config, is_doc_comment).map(|rest_str| {
format!(
"{}\n{}{}",
first_group_str,
shape.indent.to_string(config),
rest_str
)
})
}
}
fn rewrite_comment_inner(
orig: &str,
block_style: bool,
style: CommentStyle,
shape: Shape,
config: &Config,
is_doc_comment: bool,
) -> Option<String> {
let (opener, closer, line_start) = if block_style {
CommentStyle::SingleBullet.to_str_tuplet()
} else {
comment_style(orig, config.normalize_comments()).to_str_tuplet()
};
let max_chars = shape
.width
.checked_sub(closer.len() + opener.len())
.unwrap_or(1);
let indent_str = shape.indent.to_string_with_newline(config);
let fmt_indent = shape.indent + (opener.len() - line_start.len());
let mut fmt = StringFormat {
opener: "",
closer: "",
line_start,
line_end: "",
shape: Shape::legacy(max_chars, fmt_indent),
trim_end: true,
config,
};
let line_breaks = count_newlines(orig.trim_right());
let lines = orig
.lines()
.enumerate()
.map(|(i, mut line)| {
line = trim_right_unless_two_whitespaces(line.trim_left(), is_doc_comment);
// Drop old closer.
if i == line_breaks && line.ends_with("*/") && !line.starts_with("//") {
line = line[..(line.len() - 2)].trim_right();
}
line
}).map(|s| left_trim_comment_line(s, &style))
.map(|(line, has_leading_whitespace)| {
if orig.starts_with("/*") && line_breaks == 0 {
(
line.trim_left(),
has_leading_whitespace || config.normalize_comments(),
)
} else {
(line, has_leading_whitespace || config.normalize_comments())
}
});
let mut result = String::with_capacity(orig.len() * 2);
result.push_str(opener);
let mut code_block_buffer = String::with_capacity(128);
let mut is_prev_line_multi_line = false;
let mut inside_code_block = false;
let comment_line_separator = format!("{}{}", indent_str, line_start);
let join_code_block_with_comment_line_separator = |s: &str| {
let mut result = String::with_capacity(s.len() + 128);
let mut iter = s.lines().peekable();
while let Some(line) = iter.next() {
result.push_str(line);
result.push_str(match iter.peek() {
Some(next_line) if next_line.is_empty() => comment_line_separator.trim_right(),
Some(..) => &comment_line_separator,
None => "",
});
}
result
};
for (i, (line, has_leading_whitespace)) in lines.enumerate() {
let is_last = i == count_newlines(orig);
if inside_code_block {
if line.starts_with("```") {
inside_code_block = false;
result.push_str(&comment_line_separator);
let code_block = {
let mut config = config.clone();
config.set().wrap_comments(false);
match ::format_code_block(&code_block_buffer, &config) {
Some(ref s) => trim_custom_comment_prefix(s),
None => trim_custom_comment_prefix(&code_block_buffer),
}
};
result.push_str(&join_code_block_with_comment_line_separator(&code_block));
code_block_buffer.clear();
result.push_str(&comment_line_separator);
result.push_str(line);
} else {
code_block_buffer.push_str(&hide_sharp_behind_comment(line));
code_block_buffer.push('\n');
if is_last {
                    // There is a code block that is not properly enclosed by
                    // backticks. We will leave it untouched.
result.push_str(&comment_line_separator);
result.push_str(&join_code_block_with_comment_line_separator(
&trim_custom_comment_prefix(&code_block_buffer),
));
}
}
continue;
} else {
inside_code_block = line.starts_with("```");
if result == opener {
let force_leading_whitespace = opener == "/* " && count_newlines(orig) == 0;
if !has_leading_whitespace && !force_leading_whitespace && result.ends_with(' ') {
result.pop();
}
if line.is_empty() {
continue;
}
} else if is_prev_line_multi_line && !line.is_empty() {
result.push(' ')
} else if is_last && line.is_empty() {
// trailing blank lines are unwanted
if !closer.is_empty() {
result.push_str(&indent_str);
}
break;
} else {
result.push_str(&comment_line_separator);
if !has_leading_whitespace && result.ends_with(' ') {
result.pop();
}
}
}
if config.wrap_comments() && line.len() > fmt.shape.width && !has_url(line) {
match rewrite_string(line, &fmt) {
Some(ref s) => {
is_prev_line_multi_line = s.contains('\n');
result.push_str(s);
}
None if is_prev_line_multi_line => {
// We failed to put the current `line` next to the previous `line`.
// Remove the trailing space, then start rewrite on the next line.
result.pop();
result.push_str(&comment_line_separator);
fmt.shape = Shape::legacy(max_chars, fmt_indent);
match rewrite_string(line, &fmt) {
Some(ref s) => {
is_prev_line_multi_line = s.contains('\n');
result.push_str(s);
}
None => {
is_prev_line_multi_line = false;
result.push_str(line);
}
}
}
None => {
is_prev_line_multi_line = false;
result.push_str(line);
}
}
fmt.shape = if is_prev_line_multi_line {
// 1 = " "
let offset = 1 + last_line_width(&result) - line_start.len();
Shape {
width: max_chars.saturating_sub(offset),
indent: fmt_indent,
offset: fmt.shape.offset + offset,
}
} else {
Shape::legacy(max_chars, fmt_indent)
};
} else {
if line.is_empty() && result.ends_with(' ') && !is_last {
// Remove space if this is an empty comment or a doc comment.
result.pop();
}
result.push_str(line);
fmt.shape = Shape::legacy(max_chars, fmt_indent);
is_prev_line_multi_line = false;
}
}
result.push_str(closer);
if result.ends_with(opener) && opener.ends_with(' ') {
// Trailing space.
result.pop();
}
Some(result)
}
const RUSTFMT_CUSTOM_COMMENT_PREFIX: &str = "//#### ";
fn hide_sharp_behind_comment<'a>(s: &'a str) -> Cow<'a, str> {
if s.trim_left().starts_with("# ") {
Cow::from(format!("{}{}", RUSTFMT_CUSTOM_COMMENT_PREFIX, s))
} else {
Cow::from(s)
}
}
fn trim_custom_comment_prefix(s: &str) -> String {
s.lines()
.map(|line| {
let left_trimmed = line.trim_left();
if left_trimmed.starts_with(RUSTFMT_CUSTOM_COMMENT_PREFIX) {
left_trimmed.trim_left_matches(RUSTFMT_CUSTOM_COMMENT_PREFIX)
} else {
line
}
}).collect::<Vec<_>>()
.join("\n")
}
/// Returns true if the given string MAY include URLs or the like.
fn has_url(s: &str) -> bool {
    // This function may return false positives, but should get its job done in most cases.
s.contains("https://") || s.contains("http://") || s.contains("ftp://") || s.contains("file://")
}
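`has_url` is a deliberately cheap substring check used to decide whether a comment line is safe to wrap; a line containing a URL is left alone so the link is not broken across lines. A standalone sketch of the same check:

```rust
// Standalone copy of `has_url` for illustration: a cheap, false-positive-prone
// substring test for common URL schemes.
fn has_url(s: &str) -> bool {
    s.contains("https://") || s.contains("http://") || s.contains("ftp://") || s.contains("file://")
}

fn main() {
    assert!(has_url("see https://example.com for details"));
    assert!(!has_url("plain comment text"));
    // False positive by design: the scheme may appear in prose, not an actual link.
    assert!(has_url("the string \"http://\" itself"));
}
```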
/// Given the span, rewrite the missing comment inside it if available.
/// Note that the given span must only include comments (or leading/trailing whitespaces).
pub fn rewrite_missing_comment(
span: Span,
shape: Shape,
context: &RewriteContext,
) -> Option<String> {
let missing_snippet = context.snippet(span);
let trimmed_snippet = missing_snippet.trim();
if !trimmed_snippet.is_empty() {
rewrite_comment(trimmed_snippet, false, shape, context.config)
} else {
Some(String::new())
}
}
/// Recover the missing comments in the specified span, if available.
/// The layout of the comments will be preserved as long as it does not break the code
/// and its total width does not exceed the max width.
pub fn recover_missing_comment_in_span(
span: Span,
shape: Shape,
context: &RewriteContext,
used_width: usize,
) -> Option<String> {
let missing_comment = rewrite_missing_comment(span, shape, context)?;
if missing_comment.is_empty() {
Some(String::new())
} else {
let missing_snippet = context.snippet(span);
let pos = missing_snippet.find('/').unwrap_or(0);
// 1 = ` `
let total_width = missing_comment.len() + used_width + 1;
let force_new_line_before_comment =
missing_snippet[..pos].contains('\n') || total_width > context.config.max_width();
let sep = if force_new_line_before_comment {
shape.indent.to_string_with_newline(context.config)
} else {
Cow::from(" ")
};
Some(format!("{}{}", sep, missing_comment))
}
}
/// Trims trailing whitespace unless it consists of two or more spaces
/// (markdown's hard line break syntax in doc comments).
fn trim_right_unless_two_whitespaces(s: &str, is_doc_comment: bool) -> &str {
    if is_doc_comment && s.ends_with("  ") {
s
} else {
s.trim_right()
}
}
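The function above exists because in markdown (and therefore in doc comments), two trailing spaces are a hard line break and must survive formatting, while ordinary trailing whitespace is stripped. A standalone sketch of that rule, using the stable `trim_end` in place of the older `trim_right`:

```rust
// Standalone sketch: keep a two-space markdown line break in doc comments,
// strip trailing whitespace everywhere else.
fn trim_right_unless_two_whitespaces(s: &str, is_doc_comment: bool) -> &str {
    if is_doc_comment && s.ends_with("  ") {
        s
    } else {
        s.trim_end()
    }
}

fn main() {
    // Two trailing spaces in a doc comment are preserved (markdown line break).
    assert_eq!(
        trim_right_unless_two_whitespaces("line break  ", true),
        "line break  "
    );
    // A single trailing space is still trimmed, even in a doc comment.
    assert_eq!(trim_right_unless_two_whitespaces("trailing ", true), "trailing");
    // Outside doc comments, everything is trimmed.
    assert_eq!(trim_right_unless_two_whitespaces("trailing  ", false), "trailing");
}
```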
/// Trims whitespace and aligns to indent, but otherwise does not change comments.
fn light_rewrite_comment(
orig: &str,
offset: Indent,
config: &Config,
is_doc_comment: bool,
) -> Option<String> {
let lines: Vec<&str> = orig
.lines()
.map(|l| {
// This is basically just l.trim(), but in the case that a line starts
// with `*` we want to leave one space before it, so it aligns with the
// `*` in `/*`.
let first_non_whitespace = l.find(|c| !char::is_whitespace(c));
let left_trimmed = if let Some(fnw) = first_non_whitespace {
if l.as_bytes()[fnw] == b'*' && fnw > 0 {
&l[fnw - 1..]
} else {
&l[fnw..]
}
} else {
""
};
// Preserve markdown's double-space line break syntax in doc comment.
trim_right_unless_two_whitespaces(left_trimmed, is_doc_comment)
}).collect();
Some(lines.join(&format!("\n{}", offset.to_string(config))))
}
/// Trims comment characters and possibly a single space from the left of a string.
/// Does not trim all whitespace. If a single space is trimmed from the left of the string,
/// this function returns true.
fn left_trim_comment_line<'a>(line: &'a str, style: &CommentStyle) -> (&'a str, bool) {
if line.starts_with("//! ")
|| line.starts_with("/// ")
|| line.starts_with("/*! ")
|| line.starts_with("/** ")
{
(&line[4..], true)
} else if let CommentStyle::Custom(opener) = *style {
if line.starts_with(opener) {
(&line[opener.len()..], true)
} else {
(&line[opener.trim_right().len()..], false)
}
} else if line.starts_with("/* ")
|| line.starts_with("// ")
|| line.starts_with("//!")
|| line.starts_with("///")
|| line.starts_with("** ")
|| line.starts_with("/*!")
|| (line.starts_with("/**") && !line.starts_with("/**/"))
{
(&line[3..], line.chars().nth(2).unwrap() == ' ')
} else if line.starts_with("/*")
|| line.starts_with("* ")
|| line.starts_with("//")
|| line.starts_with("**")
{
(&line[2..], line.chars().nth(1).unwrap() == ' ')
} else if line.starts_with('*') {
(&line[1..], false)
} else {
(line, line.starts_with(' '))
}
}
pub trait FindUncommented {
fn find_uncommented(&self, pat: &str) -> Option<usize>;
}
impl FindUncommented for str {
fn find_uncommented(&self, pat: &str) -> Option<usize> {
let mut needle_iter = pat.chars();
for (kind, (i, b)) in CharClasses::new(self.char_indices()) {
match needle_iter.next() {
None => {
return Some(i - pat.len());
}
Some(c) => match kind {
FullCodeCharKind::Normal | FullCodeCharKind::InString if b == c => {}
_ => {
needle_iter = pat.chars();
}
},
}
}
// Handle case where the pattern is a suffix of the search string
match needle_iter.next() {
Some(_) => None,
None => Some(self.len() - pat.len()),
}
}
}
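`find_uncommented` returns the byte offset of the first occurrence of `pat` that lies in functional code, skipping matches inside comments and strings via `CharClasses`. The full version depends on that classifier; the sketch below is a simplified standalone variant that only skips `//` line comments (ASCII input assumed), just to illustrate the contract:

```rust
// Simplified sketch of `find_uncommented`: finds `pat` outside `//` line
// comments only. The real implementation also handles block comments and
// string literals via `CharClasses`. ASCII input is assumed here.
fn find_uncommented(s: &str, pat: &str) -> Option<usize> {
    let mut in_line_comment = false;
    let bytes = s.as_bytes();
    let mut i = 0;
    while i < s.len() {
        if in_line_comment {
            if bytes[i] == b'\n' {
                in_line_comment = false;
            }
        } else if s[i..].starts_with("//") {
            in_line_comment = true;
        } else if s[i..].starts_with(pat) {
            return Some(i);
        }
        i += 1;
    }
    None
}

fn main() {
    // 'x' at byte 4 is code; the 'x' inside the trailing comment is ignored.
    assert_eq!(find_uncommented("let x = 1; // x marks", "x"), Some(4));
    // The only 'x' is inside a comment, so there is no match.
    assert_eq!(find_uncommented("// only x here", "x"), None);
}
```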
// Returns the first byte position after the first comment. The given string
// is expected to be prefixed by a comment, including delimiters.
// Good: "/* /* inner */ outer */ code();"
// Bad: "code(); // hello\n world!"
pub fn find_comment_end(s: &str) -> Option<usize> {
let mut iter = CharClasses::new(s.char_indices());
for (kind, (i, _c)) in &mut iter {
if kind == FullCodeCharKind::Normal || kind == FullCodeCharKind::InString {
return Some(i);
}
}
// Handle case where the comment ends at the end of s.
if iter.status == CharClassesStatus::Normal {
Some(s.len())
} else {
None
}
}
/// Returns true if text contains any comment.
pub fn contains_comment(text: &str) -> bool {
CharClasses::new(text.chars()).any(|(kind, _)| kind.is_comment())
}
/// Remove trailing spaces from the specified snippet. We do not remove spaces
/// inside strings or comments.
pub fn remove_trailing_white_spaces(text: &str) -> String {
let mut buffer = String::with_capacity(text.len());
let mut space_buffer = String::with_capacity(128);
for (char_kind, c) in CharClasses::new(text.chars()) {
match c {
'\n' => {
if char_kind == FullCodeCharKind::InString {
buffer.push_str(&space_buffer);
}
space_buffer.clear();
buffer.push('\n');
}
_ if c.is_whitespace() => {
space_buffer.push(c);
}
_ => {
if !space_buffer.is_empty() {
buffer.push_str(&space_buffer);
space_buffer.clear();
}
buffer.push(c);
}
}
}
buffer
}
pub struct CharClasses<T>
where
T: Iterator,
T::Item: RichChar,
{
base: MultiPeek<T>,
status: CharClassesStatus,
}
pub trait RichChar {
fn get_char(&self) -> char;
}
impl RichChar for char {
fn get_char(&self) -> char {
*self
}
}
impl RichChar for (usize, char) {
fn get_char(&self) -> char {
self.1
}
}
#[derive(PartialEq, Eq, Debug, Clone, Copy)]
enum CharClassesStatus {
Normal,
LitString,
LitStringEscape,
LitChar,
LitCharEscape,
    // The u32 is the nesting depth of the comment
BlockComment(u32),
    // Status when the '/' has been consumed, but not yet the '*'; the value is
    // the new depth (after the comment opening).
    BlockCommentOpening(u32),
    // Status when the '*' has been consumed, but not yet the '/'; the value is
    // the new depth (after the comment closing).
    BlockCommentClosing(u32),
LineComment,
}
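For reference, the nesting bookkeeping that `BlockComment(u32)` / `BlockCommentOpening(u32)` / `BlockCommentClosing(u32)` encode can be sketched as a standalone depth counter. This is a simplified illustration only: unlike the full `CharClasses` state machine, it does not skip string or char literals.

```rust
/// Returns the block-comment nesting depth after scanning `s`, treating
/// `/*` and `*/` as open/close markers. 0 means every block comment closed.
/// Simplified sketch: string and char literals are not skipped.
fn block_comment_depth(s: &str) -> u32 {
    let mut depth = 0u32;
    let bytes = s.as_bytes();
    let mut i = 0;
    while i + 1 < bytes.len() {
        match (bytes[i], bytes[i + 1]) {
            (b'/', b'*') => {
                depth += 1;
                i += 2; // consume both marker bytes
            }
            (b'*', b'/') if depth > 0 => {
                depth -= 1;
                i += 2;
            }
            _ => i += 1,
        }
    }
    depth
}

fn main() {
    assert_eq!(block_comment_depth("/* /* inner */ outer */ code();"), 0);
    assert_eq!(block_comment_depth("/* unclosed /* nested */"), 1);
    println!("ok");
}
```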
/// Distinguish between functional part of code and comments
#[derive(PartialEq, Eq, Debug, Clone, Copy)]
pub enum CodeCharKind {
Normal,
Comment,
}
/// Distinguish between functional part of code and comments,
/// describing opening and closing of comments for ease when chunking
/// code from tagged characters
#[derive(PartialEq, Eq, Debug, Clone, Copy)]
pub enum FullCodeCharKind {
Normal,
/// The first character of a comment; there is only one per comment (always '/')
StartComment,
/// Any character inside a comment including the second character of comment
/// marks ("//", "/*")
InComment,
/// Last character of a comment, '\n' for a line comment, '/' for a block comment.
EndComment,
/// Inside a string.
InString,
}
impl FullCodeCharKind {
pub fn is_comment(&self) -> bool {
match *self {
FullCodeCharKind::StartComment
| FullCodeCharKind::InComment
| FullCodeCharKind::EndComment => true,
_ => false,
}
}
pub fn is_string(&self) -> bool {
*self == FullCodeCharKind::InString
}
fn to_codecharkind(&self) -> CodeCharKind {
if self.is_comment() {
CodeCharKind::Comment
} else {
CodeCharKind::Normal
}
}
}
impl<T> CharClasses<T>
where
T: Iterator,
T::Item: RichChar,
{
pub fn new(base: T) -> CharClasses<T> {
CharClasses {
base: multipeek(base),
status: CharClassesStatus::Normal,
}
}
}
impl<T> Iterator for CharClasses<T>
where
T: Iterator,
T::Item: RichChar,
{
type Item = (FullCodeCharKind, T::Item);
fn next(&mut self) -> Option<(FullCodeCharKind, T::Item)> {
let item = self.base.next()?;
let chr = item.get_char();
let mut char_kind = FullCodeCharKind::Normal;
self.status = match self.status {
CharClassesStatus::LitString => match chr {
'"' => CharClassesStatus::Normal,
'\\' => {
char_kind = FullCodeCharKind::InString;
CharClassesStatus::LitStringEscape
}
_ => {
char_kind = FullCodeCharKind::InString;
CharClassesStatus::LitString
}
},
CharClassesStatus::LitStringEscape => {
char_kind = FullCodeCharKind::InString;
CharClassesStatus::LitString
}
CharClassesStatus::LitChar => match chr {
'\\' => CharClassesStatus::LitCharEscape,
'\'' => CharClassesStatus::Normal,
_ => CharClassesStatus::LitChar,
},
CharClassesStatus::LitCharEscape => CharClassesStatus::LitChar,
CharClassesStatus::Normal => match chr {
'"' => {
char_kind = FullCodeCharKind::InString;
CharClassesStatus::LitString
}
'\'' => {
// HACK: Work around mut borrow.
match self.base.peek() {
Some(next) if next.get_char() == '\\' => {
self.status = CharClassesStatus::LitChar;
return Some((char_kind, item));
}
_ => (),
}
match self.base.peek() {
Some(next) if next.get_char() == '\'' => CharClassesStatus::LitChar,
_ => CharClassesStatus::Normal,
}
}
'/' => match self.base.peek() {
Some(next) if next.get_char() == '*' => {
self.status = CharClassesStatus::BlockCommentOpening(1);
return Some((FullCodeCharKind::StartComment, item));
}
Some(next) if next.get_char() == '/' => {
self.status = CharClassesStatus::LineComment;
return Some((FullCodeCharKind::StartComment, item));
}
_ => CharClassesStatus::Normal,
},
_ => CharClassesStatus::Normal,
},
CharClassesStatus::BlockComment(deepness) => {
assert_ne!(deepness, 0);
self.status = match self.base.peek() {
Some(next) if next.get_char() == '/' && chr == '*' => {
CharClassesStatus::BlockCommentClosing(deepness - 1)
}
Some(next) if next.get_char() == '*' && chr == '/' => {
CharClassesStatus::BlockCommentOpening(deepness + 1)
}
_ => CharClassesStatus::BlockComment(deepness),
};
return Some((FullCodeCharKind::InComment, item));
}
CharClassesStatus::BlockCommentOpening(deepness) => {
assert_eq!(chr, '*');
self.status = CharClassesStatus::BlockComment(deepness);
return Some((FullCodeCharKind::InComment, item));
}
CharClassesStatus::BlockCommentClosing(deepness) => {
assert_eq!(chr, '/');
if deepness == 0 {
self.status = CharClassesStatus::Normal;
return Some((FullCodeCharKind::EndComment, item));
} else {
self.status = CharClassesStatus::BlockComment(deepness);
return Some((FullCodeCharKind::InComment, item));
}
}
CharClassesStatus::LineComment => match chr {
'\n' => {
self.status = CharClassesStatus::Normal;
return Some((FullCodeCharKind::EndComment, item));
}
_ => {
self.status = CharClassesStatus::LineComment;
return Some((FullCodeCharKind::InComment, item));
}
},
};
Some((char_kind, item))
}
}
/// An iterator over the lines of a string, paired with the char kind at the
/// end of the line.
pub struct LineClasses<'a> {
base: iter::Peekable<CharClasses<std::str::Chars<'a>>>,
kind: FullCodeCharKind,
}
impl<'a> LineClasses<'a> {
pub fn new(s: &'a str) -> Self {
LineClasses {
base: CharClasses::new(s.chars()).peekable(),
kind: FullCodeCharKind::Normal,
}
}
}
impl<'a> Iterator for LineClasses<'a> {
type Item = (FullCodeCharKind, String);
fn next(&mut self) -> Option<Self::Item> {
if self.base.peek().is_none() {
return None;
}
let mut line = String::new();
while let Some((kind, c)) = self.base.next() {
self.kind = kind;
if c == '\n' {
break;
} else {
line.push(c);
}
}
Some((self.kind, line))
}
}
/// Iterator over functional and commented parts of a string. Any part of the
/// string is either functional code, exactly *one* block comment, or exactly
/// *one* line comment. Whitespace between comments is functional code. Line
/// comments contain their ending newlines.
struct UngroupedCommentCodeSlices<'a> {
slice: &'a str,
iter: iter::Peekable<CharClasses<std::str::CharIndices<'a>>>,
}
impl<'a> UngroupedCommentCodeSlices<'a> {
fn new(code: &'a str) -> UngroupedCommentCodeSlices<'a> {
UngroupedCommentCodeSlices {
slice: code,
iter: CharClasses::new(code.char_indices()).peekable(),
}
}
}
impl<'a> Iterator for UngroupedCommentCodeSlices<'a> {
type Item = (CodeCharKind, usize, &'a str);
fn next(&mut self) -> Option<Self::Item> {
let (kind, (start_idx, _)) = self.iter.next()?;
match kind {
FullCodeCharKind::Normal | FullCodeCharKind::InString => {
// Consume all the Normal code
while let Some(&(char_kind, _)) = self.iter.peek() {
if char_kind.is_comment() {
break;
}
let _ = self.iter.next();
}
}
FullCodeCharKind::StartComment => {
// Consume the whole comment
while let Some((FullCodeCharKind::InComment, (_, _))) = self.iter.next() {}
}
_ => panic!("unexpected character kind"),
}
let slice = match self.iter.peek() {
Some(&(_, (end_idx, _))) => &self.slice[start_idx..end_idx],
None => &self.slice[start_idx..],
};
Some((
if kind.is_comment() {
CodeCharKind::Comment
} else {
CodeCharKind::Normal
},
start_idx,
slice,
))
}
}
/// Iterator over an alternating sequence of functional and commented parts of
/// a string. The first item is always a (possibly zero-length) subslice of
/// functional text. Line-style comments contain their ending newlines.
pub struct CommentCodeSlices<'a> {
slice: &'a str,
last_slice_kind: CodeCharKind,
last_slice_end: usize,
}
impl<'a> CommentCodeSlices<'a> {
pub fn new(slice: &'a str) -> CommentCodeSlices<'a> {
CommentCodeSlices {
slice,
last_slice_kind: CodeCharKind::Comment,
last_slice_end: 0,
}
}
}
impl<'a> Iterator for CommentCodeSlices<'a> {
type Item = (CodeCharKind, usize, &'a str);
fn next(&mut self) -> Option<Self::Item> {
if self.last_slice_end == self.slice.len() {
return None;
}
let mut sub_slice_end = self.last_slice_end;
let mut first_whitespace = None;
let subslice = &self.slice[self.last_slice_end..];
let mut iter = CharClasses::new(subslice.char_indices());
for (kind, (i, c)) in &mut iter {
let is_comment_connector = self.last_slice_kind == CodeCharKind::Normal
&& subslice.starts_with("//")
&& [' ', '\t'].contains(&c);
if is_comment_connector && first_whitespace.is_none() {
first_whitespace = Some(i);
}
if kind.to_codecharkind() == self.last_slice_kind && !is_comment_connector {
let last_index = match first_whitespace {
Some(j) => j,
None => i,
};
sub_slice_end = self.last_slice_end + last_index;
break;
}
if !is_comment_connector {
first_whitespace = None;
}
}
if let (None, true) = (iter.next(), sub_slice_end == self.last_slice_end) {
// This was the last subslice.
sub_slice_end = match first_whitespace {
Some(i) => self.last_slice_end + i,
None => self.slice.len(),
};
}
let kind = match self.last_slice_kind {
CodeCharKind::Comment => CodeCharKind::Normal,
CodeCharKind::Normal => CodeCharKind::Comment,
};
let res = (
kind,
self.last_slice_end,
&self.slice[self.last_slice_end..sub_slice_end],
);
self.last_slice_end = sub_slice_end;
self.last_slice_kind = kind;
Some(res)
}
}
/// Checks that `new` didn't miss any comment from `span`; if it removed any,
/// returns the previous text (if it fits in the width/offset, else `None`),
/// otherwise returns `new`.
pub fn recover_comment_removed(
new: String,
span: Span,
context: &RewriteContext,
) -> Option<String> {
let snippet = context.snippet(span);
if snippet != new && changed_comment_content(snippet, &new) {
// We missed some comments. Warn and keep the original text.
if context.config.error_on_unformatted() {
context.report.append(
context.source_map.span_to_filename(span).into(),
vec![FormattingError::from_span(
&span,
&context.source_map,
ErrorKind::LostComment,
)],
);
}
Some(snippet.to_owned())
} else {
Some(new)
}
}
pub fn filter_normal_code(code: &str) -> String {
let mut buffer = String::with_capacity(code.len());
LineClasses::new(code).for_each(|(kind, line)| match kind {
FullCodeCharKind::Normal | FullCodeCharKind::InString => {
buffer.push_str(&line);
buffer.push('\n');
}
_ => (),
});
if !code.ends_with('\n') && buffer.ends_with('\n') {
buffer.pop();
}
buffer
}
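The line-by-line filtering above can be illustrated with a much simpler standalone sketch. Unlike `filter_normal_code`, this hypothetical version only drops `//` line comments and knows nothing about block comments or string literals:

```rust
/// Simplified line filter: drop lines whose first non-space characters start
/// a line comment. Unlike `filter_normal_code`, block comments and string
/// literals are ignored.
fn drop_line_comments(code: &str) -> String {
    code.lines()
        .filter(|line| !line.trim_start().starts_with("//"))
        .map(|line| format!("{}\n", line))
        .collect()
}

fn main() {
    let src = "fn main() {\n    // hello\n    println!(\"hi\");\n}\n";
    assert_eq!(
        drop_line_comments(src),
        "fn main() {\n    println!(\"hi\");\n}\n"
    );
    println!("ok");
}
```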
/// Returns `true` if the two strings of code differ in their payload of comments.
/// The payload of comments is everything in the string except:
/// - actual code (not comments)
/// - comment start/end marks
/// - whitespace
/// - '*' at the beginning of lines in block comments
fn changed_comment_content(orig: &str, new: &str) -> bool {
// Cannot write this as a fn since we cannot return types containing closures
let code_comment_content = |code| {
let slices = UngroupedCommentCodeSlices::new(code);
slices
.filter(|&(ref kind, _, _)| *kind == CodeCharKind::Comment)
.flat_map(|(_, _, s)| CommentReducer::new(s))
};
let res = code_comment_content(orig).ne(code_comment_content(new));
debug!(
"comment::changed_comment_content: {}\norig: '{}'\nnew: '{}'\nraw_old: {}\nraw_new: {}",
res,
orig,
new,
code_comment_content(orig).collect::<String>(),
code_comment_content(new).collect::<String>()
);
res
}
/// Iterator over the 'payload' characters of a comment.
/// It skips whitespace, comment start/end marks, and '*' at the beginning of lines.
/// The comment must be a single comment, i.e. contain no more than one start
/// mark (not several line comments in a row, for example).
struct CommentReducer<'a> {
is_block: bool,
at_start_line: bool,
iter: std::str::Chars<'a>,
}
impl<'a> CommentReducer<'a> {
fn new(comment: &'a str) -> CommentReducer<'a> {
let is_block = comment.starts_with("/*");
let comment = remove_comment_header(comment);
CommentReducer {
is_block,
at_start_line: false, // There are no supplementary '*' on the first line
iter: comment.chars(),
}
}
}
impl<'a> Iterator for CommentReducer<'a> {
type Item = char;
fn next(&mut self) -> Option<Self::Item> {
loop {
let mut c = self.iter.next()?;
if self.is_block && self.at_start_line {
while c.is_whitespace() {
c = self.iter.next()?;
}
// Ignore leading '*'
if c == '*' {
c = self.iter.next()?;
}
} else if c == '\n' {
self.at_start_line = true;
}
if !c.is_whitespace() {
return Some(c);
}
}
}
}
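The payload extraction that `CommentReducer` performs character by character can be approximated with plain string methods. This is a simplified, hypothetical sketch for a single block comment; the real iterator handles line comments and per-line `*` decorations more carefully:

```rust
/// Simplified payload extraction for one block comment: strip the comment
/// delimiters, drop whitespace, and drop a leading `*` decoration per line,
/// keeping only the content characters.
fn comment_payload(comment: &str) -> String {
    comment
        .trim_start_matches("/*")
        .trim_end_matches("*/")
        .lines()
        .flat_map(|line| line.trim_start().trim_start_matches('*').chars())
        .filter(|c| !c.is_whitespace())
        .collect()
}

fn main() {
    assert_eq!(comment_payload("/* abc\n * def\n */"), "abcdef");
    println!("ok");
}
```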
fn remove_comment_header(comment: &str) -> &str {
if comment.starts_with("///") || comment.starts_with("//!") {
&comment[3..]
} else if comment.starts_with("//") {
&comment[2..]
} else if (comment.starts_with("/**") && !comment.starts_with("/**/"))
|| comment.starts_with("/*!")
{
&comment[3..comment.len() - 2]
} else {
assert!(
comment.starts_with("/*"),
"string '{}' is not a comment",
comment
);
&comment[2..comment.len() - 2]
}
}
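The header-stripping rules above can be exercised in isolation; here is a standalone copy of the function (duplicated, with the `assert!` in its string-literal message form, so the snippet compiles on its own):

```rust
/// Strips the comment delimiters from a single comment, mirroring the rules
/// of `remove_comment_header` above.
fn remove_comment_header(comment: &str) -> &str {
    if comment.starts_with("///") || comment.starts_with("//!") {
        &comment[3..]
    } else if comment.starts_with("//") {
        &comment[2..]
    } else if (comment.starts_with("/**") && !comment.starts_with("/**/"))
        || comment.starts_with("/*!")
    {
        &comment[3..comment.len() - 2]
    } else {
        assert!(
            comment.starts_with("/*"),
            "string '{}' is not a comment",
            comment
        );
        &comment[2..comment.len() - 2]
    }
}

fn main() {
    assert_eq!(remove_comment_header("/// doc"), " doc");
    assert_eq!(remove_comment_header("// plain"), " plain");
    assert_eq!(remove_comment_header("/* block */"), " block ");
    assert_eq!(remove_comment_header("/*! inner doc */"), " inner doc ");
    println!("ok");
}
```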
#[cfg(test)]
mod test {
use super::*;
use shape::{Indent, Shape};
#[test]
fn char_classes() {
let mut iter = CharClasses::new("//\n\n".chars());
assert_eq!((FullCodeCharKind::StartComment, '/'), iter.next().unwrap());
assert_eq!((FullCodeCharKind::InComment, '/'), iter.next().unwrap());
assert_eq!((FullCodeCharKind::EndComment, '\n'), iter.next().unwrap());
assert_eq!((FullCodeCharKind::Normal, '\n'), iter.next().unwrap());
assert_eq!(None, iter.next());
}
#[test]
fn comment_code_slices() {
let input = "code(); /* test */ 1 + 1";
let mut iter = CommentCodeSlices::new(input);
assert_eq!((CodeCharKind::Normal, 0, "code(); "), iter.next().unwrap());
assert_eq!(
(CodeCharKind::Comment, 8, "/* test */"),
iter.next().unwrap()
);
assert_eq!((CodeCharKind::Normal, 18, " 1 + 1"), iter.next().unwrap());
assert_eq!(None, iter.next());
}
#[test]
fn comment_code_slices_two() {
let input = "// comment\n test();";
let mut iter = CommentCodeSlices::new(input);
assert_eq!((CodeCharKind::Normal, 0, ""), iter.next().unwrap());
assert_eq!(
(CodeCharKind::Comment, 0, "// comment\n"),
iter.next().unwrap()
);
assert_eq!(
(CodeCharKind::Normal, 11, " test();"),
iter.next().unwrap()
);
assert_eq!(None, iter.next());
}
#[test]
fn comment_code_slices_three() {
let input = "1 // comment\n // comment2\n\n";
let mut iter = CommentCodeSlices::new(input);
assert_eq!((CodeCharKind::Normal, 0, "1 "), iter.next().unwrap());
assert_eq!(
(CodeCharKind::Comment, 2, "// comment\n // comment2\n"),
iter.next().unwrap()
);
assert_eq!((CodeCharKind::Normal, 29, "\n"), iter.next().unwrap());
assert_eq!(None, iter.next());
}
#[test]
#[rustfmt::skip]
fn format_comments() {
let mut config: ::config::Config = Default::default();
config.set().wrap_comments(true);
config.set().normalize_comments(true);
let comment = rewrite_comment(" //test",
true,
Shape::legacy(100, Indent::new(0, 100)),
&config).unwrap();
assert_eq!("/* test */", comment);
let comment = rewrite_comment("// comment on a",
false,
Shape::legacy(10, Indent::empty()),
&config).unwrap();
assert_eq!("// comment\n// on a", comment);
let comment = rewrite_comment("// A multi line comment\n // between args.",
false,
Shape::legacy(60, Indent::new(0, 12)),
&config).unwrap();
assert_eq!("// A multi line comment\n // between args.", comment);
let input = "// comment";
let expected =
"/* comment */";
let comment = rewrite_comment(input,
true,
Shape::legacy(9, Indent::new(0, 69)),
&config).unwrap();
assert_eq!(expected, comment);
let comment = rewrite_comment("/* trimmed */",
true,
Shape::legacy(100, Indent::new(0, 100)),
&config).unwrap();
assert_eq!("/* trimmed */", comment);
}
// This is probably intended to be a non-test fn, but it is not used. It is
// kept around in case it helps us test things.
fn uncommented(text: &str) -> String {
CharClasses::new(text.chars())
.filter_map(|(s, c)| match s {
FullCodeCharKind::Normal | FullCodeCharKind::InString => Some(c),
_ => None,
}).collect()
}
#[test]
fn test_uncommented() {
assert_eq!(&uncommented("abc/*...*/"), "abc");
assert_eq!(
&uncommented("// .... /* \n../* /* *** / */ */a/* // */c\n"),
"..ac\n"
);
assert_eq!(&uncommented("abc \" /* */\" qsdf"), "abc \" /* */\" qsdf");
}
#[test]
fn test_contains_comment() {
assert_eq!(contains_comment("abc"), false);
assert_eq!(contains_comment("abc // qsdf"), true);
assert_eq!(contains_comment("abc /* kqsdf"), true);
assert_eq!(contains_comment("abc \" /* */\" qsdf"), false);
}
#[test]
fn test_find_uncommented() {
fn check(haystack: &str, needle: &str, expected: Option<usize>) {
assert_eq!(expected, haystack.find_uncommented(needle));
}
check("/*/ */test", "test", Some(6));
check("//test\ntest", "test", Some(7));
check("/* comment only */", "whatever", None);
check(
"/* comment */ some text /* more commentary */ result",
"result",
Some(46),
);
check("sup // sup", "p", Some(2));
check("sup", "x", None);
check(r#"π? /**/ π is nice!"#, r#"π is nice"#, Some(9));
check("/*sup yo? \n sup*/ sup", "p", Some(20));
check("hel/*lohello*/lo", "hello", None);
check("acb", "ab", None);
check(",/*A*/ ", ",", Some(0));
check("abc", "abc", Some(0));
check("/* abc */", "abc", None);
check("/**/abc/* */", "abc", Some(4));
check("\"/* abc */\"", "abc", Some(4));
check("\"/* abc", "abc", Some(4));
}
#[test]
fn test_remove_trailing_white_spaces() {
let s = " r#\"\n test\n \"#".to_string();
assert_eq!(remove_trailing_white_spaces(&s), s);
}
#[test]
fn test_filter_normal_code() {
let s = r#"
fn main() {
println!("hello, world");
}
"#;
assert_eq!(s, filter_normal_code(s));
let s_with_comment = r#"
fn main() {
// hello, world
println!("hello, world");
}
"#;
assert_eq!(s, filter_normal_code(s_with_comment));
}
}
// rustfmt-nightly/src/lists.rs
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Format list-like expressions and items.
use std::cmp;
use std::iter::Peekable;
use config::lists::*;
use syntax::source_map::BytePos;
use comment::{find_comment_end, rewrite_comment, FindUncommented};
use config::{Config, IndentStyle};
use rewrite::RewriteContext;
use shape::{Indent, Shape};
use utils::{count_newlines, first_line_width, last_line_width, mk_sp, starts_with_newline};
use visitor::SnippetProvider;
pub struct ListFormatting<'a> {
tactic: DefinitiveListTactic,
separator: &'a str,
trailing_separator: SeparatorTactic,
separator_place: SeparatorPlace,
shape: Shape,
// Non-expressions, e.g. items, will have a new line at the end of the list.
// Important for comment styles.
ends_with_newline: bool,
// Remove newlines between list elements for expressions.
preserve_newline: bool,
// Nested import lists get some special handling for the "Mixed" list type
nested: bool,
config: &'a Config,
}
impl<'a> ListFormatting<'a> {
pub fn new(shape: Shape, config: &'a Config) -> Self {
ListFormatting {
tactic: DefinitiveListTactic::Vertical,
separator: ",",
trailing_separator: SeparatorTactic::Never,
separator_place: SeparatorPlace::Back,
shape,
ends_with_newline: true,
preserve_newline: false,
nested: false,
config,
}
}
pub fn tactic(mut self, tactic: DefinitiveListTactic) -> Self {
self.tactic = tactic;
self
}
pub fn separator(mut self, separator: &'a str) -> Self {
self.separator = separator;
self
}
pub fn trailing_separator(mut self, trailing_separator: SeparatorTactic) -> Self {
self.trailing_separator = trailing_separator;
self
}
pub fn separator_place(mut self, separator_place: SeparatorPlace) -> Self {
self.separator_place = separator_place;
self
}
pub fn ends_with_newline(mut self, ends_with_newline: bool) -> Self {
self.ends_with_newline = ends_with_newline;
self
}
pub fn preserve_newline(mut self, preserve_newline: bool) -> Self {
self.preserve_newline = preserve_newline;
self
}
pub fn nested(mut self, nested: bool) -> Self {
self.nested = nested;
self
}
pub fn needs_trailing_separator(&self) -> bool {
match self.trailing_separator {
// We always put separator in front.
SeparatorTactic::Always => true,
SeparatorTactic::Vertical => self.tactic == DefinitiveListTactic::Vertical,
SeparatorTactic::Never => {
self.tactic == DefinitiveListTactic::Vertical && self.separator_place.is_front()
}
}
}
}
impl AsRef<ListItem> for ListItem {
fn as_ref(&self) -> &ListItem {
self
}
}
#[derive(PartialEq, Eq, Debug, Copy, Clone)]
pub enum ListItemCommentStyle {
// Try to keep the comment on the same line with the item.
SameLine,
// Put the comment on the previous or the next line of the item.
DifferentLine,
// No comment available.
None,
}
#[derive(Debug, Clone)]
pub struct ListItem {
// None for comments mean that they are not present.
pub pre_comment: Option<String>,
pub pre_comment_style: ListItemCommentStyle,
// Item should include attributes and doc comments. None indicates a failed
// rewrite.
pub item: Option<String>,
pub post_comment: Option<String>,
// Whether there is extra whitespace before this item.
pub new_lines: bool,
}
impl ListItem {
pub fn empty() -> ListItem {
ListItem {
pre_comment: None,
pre_comment_style: ListItemCommentStyle::None,
item: None,
post_comment: None,
new_lines: false,
}
}
pub fn inner_as_ref(&self) -> &str {
self.item.as_ref().map_or("", |s| s)
}
pub fn is_different_group(&self) -> bool {
self.inner_as_ref().contains('\n') || self.pre_comment.is_some() || self
.post_comment
.as_ref()
.map_or(false, |s| s.contains('\n'))
}
pub fn is_multiline(&self) -> bool {
self.inner_as_ref().contains('\n')
|| self
.pre_comment
.as_ref()
.map_or(false, |s| s.contains('\n'))
|| self
.post_comment
.as_ref()
.map_or(false, |s| s.contains('\n'))
}
pub fn has_single_line_comment(&self) -> bool {
self.pre_comment
.as_ref()
.map_or(false, |comment| comment.trim_start().starts_with("//"))
|| self
.post_comment
.as_ref()
.map_or(false, |comment| comment.trim_start().starts_with("//"))
}
pub fn has_comment(&self) -> bool {
self.pre_comment.is_some() || self.post_comment.is_some()
}
pub fn from_str<S: Into<String>>(s: S) -> ListItem {
ListItem {
pre_comment: None,
pre_comment_style: ListItemCommentStyle::None,
item: Some(s.into()),
post_comment: None,
new_lines: false,
}
}
// true if the item causes something to be written.
fn is_substantial(&self) -> bool {
fn empty(s: &Option<String>) -> bool {
match *s {
Some(ref s) if !s.is_empty() => false,
_ => true,
}
}
!(empty(&self.pre_comment) && empty(&self.item) && empty(&self.post_comment))
}
}
/// The type of separator for lists.
#[derive(Copy, Clone, Eq, PartialEq, Debug)]
pub enum Separator {
Comma,
VerticalBar,
}
impl Separator {
pub fn len(&self) -> usize {
match *self {
// 2 = `, `
Separator::Comma => 2,
// 3 = ` | `
Separator::VerticalBar => 3,
}
}
}
pub fn definitive_tactic<I, T>(
items: I,
tactic: ListTactic,
sep: Separator,
width: usize,
) -> DefinitiveListTactic
where
I: IntoIterator<Item = T> + Clone,
T: AsRef<ListItem>,
{
let pre_line_comments = items
.clone()
.into_iter()
.any(|item| item.as_ref().has_single_line_comment());
let limit = match tactic {
_ if pre_line_comments => return DefinitiveListTactic::Vertical,
ListTactic::Horizontal => return DefinitiveListTactic::Horizontal,
ListTactic::Vertical => return DefinitiveListTactic::Vertical,
ListTactic::LimitedHorizontalVertical(limit) => ::std::cmp::min(width, limit),
ListTactic::Mixed | ListTactic::HorizontalVertical => width,
};
let (sep_count, total_width) = calculate_width(items.clone());
let total_sep_len = sep.len() * sep_count.saturating_sub(1);
let real_total = total_width + total_sep_len;
if real_total <= limit
&& !pre_line_comments
&& !items.into_iter().any(|item| item.as_ref().is_multiline())
{
DefinitiveListTactic::Horizontal
} else {
match tactic {
ListTactic::Mixed => DefinitiveListTactic::Mixed,
_ => DefinitiveListTactic::Vertical,
}
}
}
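The width arithmetic in `definitive_tactic` (sum of item widths plus `sep.len()` per gap, compared against the limit) reduces to a simple check. A hypothetical standalone version over plain item widths, ignoring comments and the `Mixed`/`SpecialMacro` cases:

```rust
#[derive(Debug, PartialEq)]
enum Tactic {
    Horizontal,
    Vertical,
}

/// Choose horizontal layout iff all items plus separators fit in `width`.
/// `sep_len` mirrors `Separator::Comma.len()` (2, for `", "`).
fn choose_tactic(item_widths: &[usize], sep_len: usize, width: usize) -> Tactic {
    let total: usize = item_widths.iter().sum();
    // One separator per gap between items, as in `definitive_tactic`.
    let seps = sep_len * item_widths.len().saturating_sub(1);
    if total + seps <= width {
        Tactic::Horizontal
    } else {
        Tactic::Vertical
    }
}

fn main() {
    // "a, bb, ccc" = 1 + 2 + 3 item chars + 2 * 2 separator chars = 10 columns.
    assert_eq!(choose_tactic(&[1, 2, 3], 2, 10), Tactic::Horizontal);
    assert_eq!(choose_tactic(&[1, 2, 3], 2, 9), Tactic::Vertical);
    println!("ok");
}
```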
// Format a list of commented items into a string.
pub fn write_list<I, T>(items: I, formatting: &ListFormatting) -> Option<String>
where
I: IntoIterator<Item = T> + Clone,
T: AsRef<ListItem>,
{
let tactic = formatting.tactic;
let sep_len = formatting.separator.len();
// Now that we know how we will layout, we can decide for sure if there
// will be a trailing separator.
let mut trailing_separator = formatting.needs_trailing_separator();
let mut result = String::with_capacity(128);
let cloned_items = items.clone();
let mut iter = items.into_iter().enumerate().peekable();
let mut item_max_width: Option<usize> = None;
let sep_place =
SeparatorPlace::from_tactic(formatting.separator_place, tactic, formatting.separator);
let mut prev_item_had_post_comment = false;
let mut prev_item_is_nested_import = false;
let mut line_len = 0;
let indent_str = &formatting.shape.indent.to_string(formatting.config);
while let Some((i, item)) = iter.next() {
let item = item.as_ref();
let inner_item = item.item.as_ref()?;
let first = i == 0;
let last = iter.peek().is_none();
let mut separate = match sep_place {
SeparatorPlace::Front => !first,
SeparatorPlace::Back => !last || trailing_separator,
};
let item_sep_len = if separate { sep_len } else { 0 };
// Item string may be multi-line. Its length (used for block comment alignment)
// should be only the length of the last line.
let item_last_line = if item.is_multiline() {
inner_item.lines().last().unwrap_or("")
} else {
inner_item.as_ref()
};
let mut item_last_line_width = item_last_line.len() + item_sep_len;
if item_last_line.starts_with(&**indent_str) {
item_last_line_width -= indent_str.len();
}
if !item.is_substantial() {
continue;
}
match tactic {
DefinitiveListTactic::Horizontal if !first => {
result.push(' ');
}
DefinitiveListTactic::SpecialMacro(num_args_before) => {
if i == 0 {
// Nothing
} else if i < num_args_before {
result.push(' ');
} else if i <= num_args_before + 1 {
result.push('\n');
result.push_str(indent_str);
} else {
result.push(' ');
}
}
DefinitiveListTactic::Vertical
if !first && !inner_item.is_empty() && !result.is_empty() =>
{
result.push('\n');
result.push_str(indent_str);
}
DefinitiveListTactic::Mixed => {
let total_width = total_item_width(item) + item_sep_len;
// 1 is space between separator and item.
if (line_len > 0 && line_len + 1 + total_width > formatting.shape.width)
|| prev_item_had_post_comment
|| (formatting.nested
&& (prev_item_is_nested_import || (!first && inner_item.contains("::"))))
{
result.push('\n');
result.push_str(indent_str);
line_len = 0;
if formatting.ends_with_newline {
trailing_separator = true;
}
} else if line_len > 0 {
result.push(' ');
line_len += 1;
}
if last && formatting.ends_with_newline {
separate = formatting.trailing_separator != SeparatorTactic::Never;
}
line_len += total_width;
}
_ => {}
}
// Pre-comments
if let Some(ref comment) = item.pre_comment {
// Block style in non-vertical mode.
let block_mode = tactic == DefinitiveListTactic::Horizontal;
// Width restriction is only relevant in vertical mode.
let comment =
rewrite_comment(comment, block_mode, formatting.shape, formatting.config)?;
result.push_str(&comment);
if !inner_item.is_empty() {
if tactic == DefinitiveListTactic::Vertical || tactic == DefinitiveListTactic::Mixed
{
// We cannot keep pre-comments on the same line if the comment is normalized.
let keep_comment = if formatting.config.normalize_comments()
|| item.pre_comment_style == ListItemCommentStyle::DifferentLine
{
false
} else {
// We will try to keep the comment on the same line with the item here.
// 1 = ` `
let total_width = total_item_width(item) + item_sep_len + 1;
total_width <= formatting.shape.width
};
if keep_comment {
result.push(' ');
} else {
result.push('\n');
result.push_str(indent_str);
// This is the width of the item (without comments).
line_len = item.item.as_ref().map_or(0, |str| str.len());
}
} else {
result.push(' ');
}
}
item_max_width = None;
}
if separate && sep_place.is_front() && !first {
result.push_str(formatting.separator.trim());
result.push(' ');
}
result.push_str(inner_item);
// Post-comments
if tactic == DefinitiveListTactic::Horizontal && item.post_comment.is_some() {
let comment = item.post_comment.as_ref().unwrap();
let formatted_comment = rewrite_comment(
comment,
true,
Shape::legacy(formatting.shape.width, Indent::empty()),
formatting.config,
)?;
result.push(' ');
result.push_str(&formatted_comment);
}
if separate && sep_place.is_back() {
result.push_str(formatting.separator);
}
if tactic != DefinitiveListTactic::Horizontal && item.post_comment.is_some() {
let comment = item.post_comment.as_ref().unwrap();
let overhead = last_line_width(&result) + first_line_width(comment.trim());
let rewrite_post_comment = |item_max_width: &mut Option<usize>| {
if item_max_width.is_none() && !last && !inner_item.contains('\n') {
*item_max_width = Some(max_width_of_item_with_post_comment(
&cloned_items,
i,
overhead,
formatting.config.max_width(),
));
}
let overhead = if starts_with_newline(comment) {
0
} else if let Some(max_width) = *item_max_width {
max_width + 2
} else {
// 1 = space between item and comment.
item_last_line_width + 1
};
let width = formatting.shape.width.checked_sub(overhead).unwrap_or(1);
let offset = formatting.shape.indent + overhead;
let comment_shape = Shape::legacy(width, offset);
// Use block-style only for the last item or multiline comments.
let block_style = !formatting.ends_with_newline && last
|| comment.trim().contains('\n')
|| comment.trim().len() > width;
rewrite_comment(
comment.trim_start(),
block_style,
comment_shape,
formatting.config,
)
};
let mut formatted_comment = rewrite_post_comment(&mut item_max_width)?;
if !starts_with_newline(comment) {
let mut comment_alignment =
post_comment_alignment(item_max_width, inner_item.len());
if first_line_width(&formatted_comment)
+ last_line_width(&result)
+ comment_alignment
+ 1
> formatting.config.max_width()
{
item_max_width = None;
formatted_comment = rewrite_post_comment(&mut item_max_width)?;
comment_alignment = post_comment_alignment(item_max_width, inner_item.len());
}
for _ in 0..(comment_alignment + 1) {
result.push(' ');
}
// An additional space for the missing trailing separator.
if last && item_max_width.is_some() && !separate && !formatting.separator.is_empty()
{
result.push(' ');
}
} else {
result.push('\n');
result.push_str(indent_str);
}
if formatted_comment.contains('\n') {
item_max_width = None;
}
result.push_str(&formatted_comment);
} else {
item_max_width = None;
}
if formatting.preserve_newline
&& !last
&& tactic == DefinitiveListTactic::Vertical
&& item.new_lines
{
item_max_width = None;
result.push('\n');
}
prev_item_had_post_comment = item.post_comment.is_some();
prev_item_is_nested_import = inner_item.contains("::");
}
Some(result)
}
fn max_width_of_item_with_post_comment<I, T>(
items: &I,
i: usize,
overhead: usize,
max_budget: usize,
) -> usize
where
I: IntoIterator<Item = T> + Clone,
T: AsRef<ListItem>,
{
let mut max_width = 0;
let mut first = true;
for item in items.clone().into_iter().skip(i) {
let item = item.as_ref();
let inner_item_width = item.inner_as_ref().len();
if !first
&& (item.is_different_group()
|| item.post_comment.is_none()
|| inner_item_width + overhead > max_budget)
{
return max_width;
}
if max_width < inner_item_width {
max_width = inner_item_width;
}
if item.new_lines {
return max_width;
}
first = false;
}
max_width
}
fn post_comment_alignment(item_max_width: Option<usize>, inner_item_len: usize) -> usize {
item_max_width.unwrap_or(0).saturating_sub(inner_item_len)
}
pub struct ListItems<'a, I, F1, F2, F3>
where
I: Iterator,
{
snippet_provider: &'a SnippetProvider<'a>,
inner: Peekable<I>,
get_lo: F1,
get_hi: F2,
get_item_string: F3,
prev_span_end: BytePos,
next_span_start: BytePos,
terminator: &'a str,
separator: &'a str,
leave_last: bool,
}
pub fn extract_pre_comment(pre_snippet: &str) -> (Option<String>, ListItemCommentStyle) {
let trimmed_pre_snippet = pre_snippet.trim();
let has_single_line_comment = trimmed_pre_snippet.starts_with("//");
let has_block_comment = trimmed_pre_snippet.starts_with("/*");
if has_single_line_comment {
(
Some(trimmed_pre_snippet.to_owned()),
ListItemCommentStyle::DifferentLine,
)
} else if has_block_comment {
let comment_end = pre_snippet.chars().rev().position(|c| c == '/').unwrap();
if pre_snippet
.chars()
.rev()
.take(comment_end + 1)
.any(|c| c == '\n')
{
(
Some(trimmed_pre_snippet.to_owned()),
ListItemCommentStyle::DifferentLine,
)
} else {
(
Some(trimmed_pre_snippet.to_owned()),
ListItemCommentStyle::SameLine,
)
}
} else {
(None, ListItemCommentStyle::None)
}
}
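The same-line versus different-line decision above hinges on whether a newline follows the block comment's closing `*/`. A standalone copy of that classification (with a local enum so the snippet compiles on its own):

```rust
#[derive(Debug, PartialEq)]
enum CommentStyle {
    SameLine,
    DifferentLine,
    None,
}

/// Standalone copy of the pre-comment classification: a block comment stays
/// on the same line as the item only if no newline follows its closing `*/`.
fn classify_pre_comment(pre_snippet: &str) -> CommentStyle {
    let trimmed = pre_snippet.trim();
    if trimmed.starts_with("//") {
        CommentStyle::DifferentLine
    } else if trimmed.starts_with("/*") {
        // Distance from the end of the snippet back to the last '/'.
        let comment_end = pre_snippet.chars().rev().position(|c| c == '/').unwrap();
        if pre_snippet
            .chars()
            .rev()
            .take(comment_end + 1)
            .any(|c| c == '\n')
        {
            CommentStyle::DifferentLine
        } else {
            CommentStyle::SameLine
        }
    } else {
        CommentStyle::None
    }
}

fn main() {
    assert_eq!(classify_pre_comment("/* doc */ "), CommentStyle::SameLine);
    assert_eq!(classify_pre_comment("/* doc */\n    "), CommentStyle::DifferentLine);
    assert_eq!(classify_pre_comment("// doc\n"), CommentStyle::DifferentLine);
    println!("ok");
}
```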
pub fn extract_post_comment(
post_snippet: &str,
comment_end: usize,
separator: &str,
) -> Option<String> {
let white_space: &[_] = &[' ', '\t'];
// Cleanup post-comment: strip separators and whitespace.
let post_snippet = post_snippet[..comment_end].trim();
let post_snippet_trimmed = if post_snippet.starts_with(|c| c == ',' || c == ':') {
post_snippet[1..].trim_matches(white_space)
} else if post_snippet.starts_with(separator) {
post_snippet[separator.len()..].trim_matches(white_space)
} else if post_snippet.ends_with(',') {
post_snippet[..(post_snippet.len() - 1)].trim_matches(white_space)
} else {
post_snippet
};
if !post_snippet_trimmed.is_empty() {
Some(post_snippet_trimmed.to_owned())
} else {
None
}
}
pub fn get_comment_end(
post_snippet: &str,
separator: &str,
terminator: &str,
is_last: bool,
) -> usize {
if is_last {
return post_snippet
.find_uncommented(terminator)
.unwrap_or_else(|| post_snippet.len());
}
let mut block_open_index = post_snippet.find("/*");
// check if it really is a block comment (and not `//*` or a nested comment)
if let Some(i) = block_open_index {
match post_snippet.find('/') {
Some(j) if j < i => block_open_index = None,
_ if i > 0 && &post_snippet[i - 1..i] == "/" => block_open_index = None,
_ => (),
}
}
let newline_index = post_snippet.find('\n');
if let Some(separator_index) = post_snippet.find_uncommented(separator) {
match (block_open_index, newline_index) {
// Separator before comment, with the next item on same line.
// Comment belongs to next item.
(Some(i), None) if i > separator_index => separator_index + 1,
// Block-style post-comment before the separator.
(Some(i), None) => cmp::max(
find_comment_end(&post_snippet[i..]).unwrap() + i,
separator_index + 1,
),
// Block-style post-comment. Either before or after the separator.
(Some(i), Some(j)) if i < j => cmp::max(
find_comment_end(&post_snippet[i..]).unwrap() + i,
separator_index + 1,
),
// Potential *single* line comment.
(_, Some(j)) if j > separator_index => j + 1,
_ => post_snippet.len(),
}
} else if let Some(newline_index) = newline_index {
// Match arms may not have trailing comma. In any case, for match arms,
// we will assume that the post comment belongs to the next arm if they
// do not end with trailing comma.
newline_index + 1
} else {
0
}
}
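The core question `get_comment_end` answers is where the current item's post-snippet ends, i.e. whether a trailing comment belongs to this item or the next. A heavily simplified, hypothetical sketch of just the single-line-comment case (this is not rustfmt's API; it ignores block comments and `find_uncommented`):

```rust
// Simplified ownership rule: if a newline follows the separator, the
// post-comment (if any) runs to the end of that line and belongs to
// the current item; otherwise the item ends right after the separator.
fn comment_end_simple(post: &str, sep: char) -> usize {
    match (post.find(sep), post.find('\n')) {
        (Some(s), Some(n)) if n > s => n + 1, // e.g. ", // trailing\n"
        (Some(s), _) => s + 1,
        (None, Some(n)) => n + 1,
        (None, None) => 0,
    }
}

fn main() {
    assert_eq!(comment_end_simple(", // trailing\nnext", ','), 14);
    assert_eq!(comment_end_simple(",\nnext", ','), 2);
}
```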
// Account for extra whitespace between items. This is fiddly
// because of the way we divide pre- and post- comments.
fn has_extra_newline(post_snippet: &str, comment_end: usize) -> bool {
if post_snippet.is_empty() || comment_end == 0 {
return false;
}
// Everything from the separator to the next item.
let test_snippet = &post_snippet[comment_end - 1..];
let first_newline = test_snippet
.find('\n')
.unwrap_or_else(|| test_snippet.len());
// From the end of the first line of comments.
let test_snippet = &test_snippet[first_newline..];
let first = test_snippet
.find(|c: char| !c.is_whitespace())
.unwrap_or_else(|| test_snippet.len());
// From the end of the first line of comments to the next non-whitespace char.
let test_snippet = &test_snippet[..first];
// There were multiple line breaks which got trimmed to nothing.
count_newlines(test_snippet) > 1
}
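The detection above boils down to: more than one `'\n'` between the end of an item's comment and the next non-whitespace character means the user left a blank line that should be preserved. A standalone sketch of that rule (hypothetical helper, stdlib only):

```rust
// Two or more newlines before the next non-whitespace character
// indicate a deliberate blank line between list items.
fn has_blank_line_between(snippet: &str) -> bool {
    let first_non_ws = snippet
        .find(|c: char| !c.is_whitespace())
        .unwrap_or(snippet.len());
    snippet[..first_non_ws].matches('\n').count() > 1
}

fn main() {
    assert!(!has_blank_line_between("\n    next_item")); // adjacent items
    assert!(has_blank_line_between("\n\n    next_item")); // blank line kept
}
```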
impl<'a, T, I, F1, F2, F3> Iterator for ListItems<'a, I, F1, F2, F3>
where
I: Iterator<Item = T>,
F1: Fn(&T) -> BytePos,
F2: Fn(&T) -> BytePos,
F3: Fn(&T) -> Option<String>,
{
type Item = ListItem;
fn next(&mut self) -> Option<Self::Item> {
self.inner.next().map(|item| {
// Pre-comment
let pre_snippet = self
.snippet_provider
.span_to_snippet(mk_sp(self.prev_span_end, (self.get_lo)(&item)))
.unwrap_or("");
let (pre_comment, pre_comment_style) = extract_pre_comment(pre_snippet);
// Post-comment
let next_start = match self.inner.peek() {
Some(next_item) => (self.get_lo)(next_item),
None => self.next_span_start,
};
let post_snippet = self
.snippet_provider
.span_to_snippet(mk_sp((self.get_hi)(&item), next_start))
.unwrap_or("");
let comment_end = get_comment_end(
post_snippet,
self.separator,
self.terminator,
self.inner.peek().is_none(),
);
let new_lines = has_extra_newline(post_snippet, comment_end);
let post_comment = extract_post_comment(post_snippet, comment_end, self.separator);
self.prev_span_end = (self.get_hi)(&item) + BytePos(comment_end as u32);
ListItem {
pre_comment,
pre_comment_style,
item: if self.inner.peek().is_none() && self.leave_last {
None
} else {
(self.get_item_string)(&item)
},
post_comment,
new_lines,
}
})
}
}
#[cfg_attr(feature = "cargo-clippy", allow(too_many_arguments))]
// Creates an iterator over a list's items with associated comments.
pub fn itemize_list<'a, T, I, F1, F2, F3>(
snippet_provider: &'a SnippetProvider,
inner: I,
terminator: &'a str,
separator: &'a str,
get_lo: F1,
get_hi: F2,
get_item_string: F3,
prev_span_end: BytePos,
next_span_start: BytePos,
leave_last: bool,
) -> ListItems<'a, I, F1, F2, F3>
where
I: Iterator<Item = T>,
F1: Fn(&T) -> BytePos,
F2: Fn(&T) -> BytePos,
F3: Fn(&T) -> Option<String>,
{
ListItems {
snippet_provider,
inner: inner.peekable(),
get_lo,
get_hi,
get_item_string,
prev_span_end,
next_span_start,
terminator,
separator,
leave_last,
}
}
/// Returns the count and total width of the list items.
fn calculate_width<I, T>(items: I) -> (usize, usize)
where
I: IntoIterator<Item = T>,
T: AsRef<ListItem>,
{
items
.into_iter()
.map(|item| total_item_width(item.as_ref()))
.fold((0, 0), |acc, l| (acc.0 + 1, acc.1 + l))
}
pub fn total_item_width(item: &ListItem) -> usize {
comment_len(item.pre_comment.as_ref().map(|x| &(*x)[..]))
+ comment_len(item.post_comment.as_ref().map(|x| &(*x)[..]))
+ item.item.as_ref().map_or(0, |str| str.len())
}
fn comment_len(comment: Option<&str>) -> usize {
match comment {
Some(s) => {
let text_len = s.trim().len();
if text_len > 0 {
// We'll put " /*" before and " */" after inline comments.
text_len + 6
} else {
text_len
}
}
None => 0,
}
}
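The constant `6` above is the fixed cost of rendering an inline comment as `" /*" ... " */"` (three characters on each side). The accounting can be checked in isolation (function body copied verbatim):

```rust
// A non-empty comment costs its trimmed length plus 6 for the
// " /*" prefix and " */" suffix; whitespace-only comments are free.
fn comment_len(comment: Option<&str>) -> usize {
    match comment {
        Some(s) => {
            let text_len = s.trim().len();
            if text_len > 0 { text_len + 6 } else { text_len }
        }
        None => 0,
    }
}

fn main() {
    assert_eq!(comment_len(Some("note")), 10); // 4 + 6
    assert_eq!(comment_len(Some("   ")), 0); // whitespace-only
    assert_eq!(comment_len(None), 0);
}
```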
// Compute horizontal and vertical shapes for a struct-lit-like thing.
pub fn struct_lit_shape(
shape: Shape,
context: &RewriteContext,
prefix_width: usize,
suffix_width: usize,
) -> Option<(Option<Shape>, Shape)> {
let v_shape = match context.config.indent_style() {
IndentStyle::Visual => shape
.visual_indent(0)
.shrink_left(prefix_width)?
.sub_width(suffix_width)?,
IndentStyle::Block => {
let shape = shape.block_indent(context.config.tab_spaces());
Shape {
width: context.budget(shape.indent.width()),
..shape
}
}
};
let shape_width = shape.width.checked_sub(prefix_width + suffix_width);
if let Some(w) = shape_width {
let shape_width = cmp::min(w, context.config.width_heuristics().struct_lit_width);
Some((Some(Shape::legacy(shape_width, shape.indent)), v_shape))
} else {
Some((None, v_shape))
}
}
// Compute the tactic for the internals of a struct-lit-like thing.
pub fn struct_lit_tactic(
h_shape: Option<Shape>,
context: &RewriteContext,
items: &[ListItem],
) -> DefinitiveListTactic {
if let Some(h_shape) = h_shape {
let prelim_tactic = match (context.config.indent_style(), items.len()) {
(IndentStyle::Visual, 1) => ListTactic::HorizontalVertical,
_ if context.config.struct_lit_single_line() => ListTactic::HorizontalVertical,
_ => ListTactic::Vertical,
};
definitive_tactic(items, prelim_tactic, Separator::Comma, h_shape.width)
} else {
DefinitiveListTactic::Vertical
}
}
// Given a tactic and possible shapes for horizontal and vertical layout,
// come up with the actual shape to use.
pub fn shape_for_tactic(
tactic: DefinitiveListTactic,
h_shape: Option<Shape>,
v_shape: Shape,
) -> Shape {
match tactic {
DefinitiveListTactic::Horizontal => h_shape.unwrap(),
_ => v_shape,
}
}
// Create a ListFormatting object for formatting the internals of a
// struct-lit-like thing, that is a series of fields.
pub fn struct_lit_formatting<'a>(
shape: Shape,
tactic: DefinitiveListTactic,
context: &'a RewriteContext,
force_no_trailing_comma: bool,
) -> ListFormatting<'a> {
let ends_with_newline = context.config.indent_style() != IndentStyle::Visual
&& tactic == DefinitiveListTactic::Vertical;
ListFormatting {
tactic,
separator: ",",
trailing_separator: if force_no_trailing_comma {
SeparatorTactic::Never
} else {
context.config.trailing_comma()
},
separator_place: SeparatorPlace::Back,
shape,
ends_with_newline,
preserve_newline: true,
nested: false,
config: context.config,
}
}
// ==== solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/lib.rs ====
// Copyright 2015-2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![feature(decl_macro)]
#![allow(unused_attributes)]
#![feature(type_ascription)]
#![feature(unicode_internals)]
#![feature(extern_prelude)]
#![feature(nll)]
#[macro_use]
extern crate derive_new;
extern crate diff;
extern crate failure;
// extern crate isatty;
extern crate itertools;
#[cfg(test)]
#[macro_use]
extern crate lazy_static;
#[macro_use]
extern crate log;
extern crate regex;
extern crate rustc_target;
extern crate serde;
#[macro_use]
extern crate serde_derive;
extern crate serde_json;
extern crate syntax;
extern crate syntax_pos;
extern crate toml;
extern crate unicode_segmentation;
use std::cell::RefCell;
use std::collections::HashMap;
use std::fmt;
use std::io::{self, Write};
use std::mem;
use std::path::PathBuf;
use std::rc::Rc;
use syntax::ast;
use comment::LineClasses;
use failure::Fail;
use formatting::{FormatErrorMap, FormattingError, ReportedErrors, SourceFile};
use issues::Issue;
use shape::Indent;
pub use config::{
load_config, CliOptions, Color, Config, EmitMode, FileLines, FileName, NewlineStyle, Range,
Verbosity,
};
#[macro_use]
mod utils;
mod attr;
mod chains;
pub(crate) mod checkstyle;
mod closures;
mod comment;
pub(crate) mod config;
mod expr;
pub(crate) mod formatting;
mod imports;
mod issues;
mod items;
mod lists;
mod macros;
mod matches;
mod missed_spans;
pub(crate) mod modules;
mod overflow;
mod pairs;
mod patterns;
mod reorder;
mod rewrite;
pub(crate) mod rustfmt_diff;
mod shape;
pub(crate) mod source_file;
pub(crate) mod source_map;
mod spanned;
mod string;
#[cfg(test)]
mod test;
mod types;
mod vertical;
pub(crate) mod visitor;
/// The various errors that can occur during formatting. Note that not all of
/// these can currently be propagated to clients.
#[derive(Fail, Debug)]
pub enum ErrorKind {
/// Line has exceeded character limit (found, maximum).
#[fail(
display = "line formatted, but exceeded maximum width \
(maximum: {} (see `max_width` option), found: {})",
_0,
_1
)]
LineOverflow(usize, usize),
/// Line ends in whitespace.
#[fail(display = "left behind trailing whitespace")]
TrailingWhitespace,
/// TODO or FIXME item without an issue number.
#[fail(display = "found {}", _0)]
BadIssue(Issue),
/// License check has failed.
#[fail(display = "license check failed")]
LicenseCheck,
/// Used deprecated skip attribute.
#[fail(display = "`rustfmt_skip` is deprecated; use `rustfmt::skip`")]
DeprecatedAttr,
/// Used a rustfmt:: attribute other than skip.
#[fail(display = "invalid attribute")]
BadAttr,
/// An io error during reading or writing.
#[fail(display = "io error: {}", _0)]
IoError(io::Error),
/// Parse error occurred when parsing the Input.
#[fail(display = "parse error")]
ParseError,
/// The user mandated a version and the current version of Rustfmt does not
/// satisfy that requirement.
#[fail(display = "version mismatch")]
VersionMismatch,
/// If we had formatted the given node, then we would have lost a comment.
#[fail(display = "not formatted because a comment would be lost")]
LostComment,
}
impl ErrorKind {
fn is_comment(&self) -> bool {
match self {
ErrorKind::LostComment => true,
_ => false,
}
}
}
impl From<io::Error> for ErrorKind {
fn from(e: io::Error) -> ErrorKind {
ErrorKind::IoError(e)
}
}
/// Reports on any issues that occurred during a run of Rustfmt.
///
/// Can be reported to the user via its `Display` implementation or `fancy_print`.
#[derive(Clone)]
pub struct FormatReport {
// Maps stringified file paths to their associated formatting errors.
internal: Rc<RefCell<(FormatErrorMap, ReportedErrors)>>,
}
impl FormatReport {
fn new() -> FormatReport {
FormatReport {
internal: Rc::new(RefCell::new((HashMap::new(), ReportedErrors::default()))),
}
}
fn append(&self, f: FileName, mut v: Vec<FormattingError>) {
self.track_errors(&v);
self.internal
.borrow_mut()
.0
.entry(f)
.and_modify(|fe| fe.append(&mut v))
.or_insert(v);
}
fn track_errors(&self, new_errors: &[FormattingError]) {
let errs = &mut self.internal.borrow_mut().1;
if !new_errors.is_empty() {
errs.has_formatting_errors = true;
}
if errs.has_operational_errors && errs.has_check_errors {
return;
}
for err in new_errors {
match err.kind {
ErrorKind::LineOverflow(..) | ErrorKind::TrailingWhitespace => {
errs.has_operational_errors = true;
}
ErrorKind::BadIssue(_)
| ErrorKind::LicenseCheck
| ErrorKind::DeprecatedAttr
| ErrorKind::BadAttr
| ErrorKind::VersionMismatch => {
errs.has_check_errors = true;
}
_ => {}
}
}
}
fn add_diff(&mut self) {
self.internal.borrow_mut().1.has_diff = true;
}
fn add_macro_format_failure(&mut self) {
self.internal.borrow_mut().1.has_macro_format_failure = true;
}
fn add_parsing_error(&mut self) {
self.internal.borrow_mut().1.has_parsing_errors = true;
}
fn warning_count(&self) -> usize {
self.internal
.borrow()
.0
.iter()
.map(|(_, errors)| errors.len())
.sum()
}
/// Whether any warnings or errors are present in the report.
pub fn has_warnings(&self) -> bool {
self.internal.borrow().1.has_formatting_errors
}
/// Print the report to a terminal using colours and potentially other
/// fancy output.
pub fn fancy_print(
&self,
mut t: Box<term::Terminal<Output = io::Stderr>>,
) -> Result<(), term::Error> {
for (file, errors) in &self.internal.borrow().0 {
for error in errors {
let prefix_space_len = error.line.to_string().len();
let prefix_spaces = " ".repeat(1 + prefix_space_len);
// First line: the overview of error
t.fg(term::color::RED)?;
t.attr(term::Attr::Bold)?;
write!(t, "{} ", error.msg_prefix())?;
t.reset()?;
t.attr(term::Attr::Bold)?;
writeln!(t, "{}", error.kind)?;
// Second line: file info
write!(t, "{}--> ", &prefix_spaces[1..])?;
t.reset()?;
writeln!(t, "{}:{}", file, error.line)?;
// Third to fifth lines: show the line which triggered error, if available.
if !error.line_buffer.is_empty() {
let (space_len, target_len) = error.format_len();
t.attr(term::Attr::Bold)?;
write!(t, "{}|\n{} | ", prefix_spaces, error.line)?;
t.reset()?;
writeln!(t, "{}", error.line_buffer)?;
t.attr(term::Attr::Bold)?;
write!(t, "{}| ", prefix_spaces)?;
t.fg(term::color::RED)?;
writeln!(t, "{}", FormatReport::target_str(space_len, target_len))?;
t.reset()?;
}
// The last line: show note if available.
let msg_suffix = error.msg_suffix();
if !msg_suffix.is_empty() {
t.attr(term::Attr::Bold)?;
write!(t, "{}= note: ", prefix_spaces)?;
t.reset()?;
writeln!(t, "{}", error.msg_suffix())?;
} else {
writeln!(t)?;
}
t.reset()?;
}
}
if !self.internal.borrow().0.is_empty() {
t.attr(term::Attr::Bold)?;
write!(t, "warning: ")?;
t.reset()?;
write!(
t,
"rustfmt may have failed to format. See previous {} errors.\n\n",
self.warning_count(),
)?;
}
Ok(())
}
fn target_str(space_len: usize, target_len: usize) -> String {
let empty_line = " ".repeat(space_len);
let overflowed = "^".repeat(target_len);
empty_line + &overflowed
}
}
impl fmt::Display for FormatReport {
// Prints all the formatting errors.
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
for (file, errors) in &self.internal.borrow().0 {
for error in errors {
let prefix_space_len = error.line.to_string().len();
let prefix_spaces = " ".repeat(1 + prefix_space_len);
let error_line_buffer = if error.line_buffer.is_empty() {
String::from(" ")
} else {
let (space_len, target_len) = error.format_len();
format!(
"{}|\n{} | {}\n{}| {}",
prefix_spaces,
error.line,
error.line_buffer,
prefix_spaces,
FormatReport::target_str(space_len, target_len)
)
};
let error_info = format!("{} {}", error.msg_prefix(), error.kind);
let file_info = format!("{}--> {}:{}", &prefix_spaces[1..], file, error.line);
let msg_suffix = error.msg_suffix();
let note = if msg_suffix.is_empty() {
String::new()
} else {
format!("{}note= ", prefix_spaces)
};
writeln!(
fmt,
"{}\n{}\n{}\n{}{}",
error_info,
file_info,
error_line_buffer,
note,
error.msg_suffix()
)?;
}
}
if !self.internal.borrow().0.is_empty() {
writeln!(
fmt,
"warning: rustfmt may have failed to format. See previous {} errors.",
self.warning_count(),
)?;
}
Ok(())
}
}
/// Format the given snippet. The snippet is expected to be *complete* code.
/// When we cannot parse the given snippet, this function returns `None`.
fn format_snippet(snippet: &str, config: &Config) -> Option<String> {
let mut out: Vec<u8> = Vec::with_capacity(snippet.len() * 2);
let input = Input::Text(snippet.into());
let mut config = config.clone();
config.set().emit_mode(config::EmitMode::Stdout);
config.set().verbose(Verbosity::Quiet);
config.set().hide_parse_errors(true);
{
let mut session = Session::new(config, Some(&mut out));
let result = session.format(input);
let formatting_error = session.errors.has_macro_format_failure
|| session.out.as_ref().unwrap().is_empty() && !snippet.is_empty();
if formatting_error || result.is_err() {
return None;
}
}
String::from_utf8(out).ok()
}
/// Format the given code block. Mainly targeted at code blocks in comments.
/// The code block may be incomplete (i.e. the parser may be unable to parse it).
/// To avoid a panic in the parser, we wrap the code block in a dummy function.
/// The returned code block does *not* end with a newline.
fn format_code_block(code_snippet: &str, config: &Config) -> Option<String> {
const FN_MAIN_PREFIX: &str = "fn main() {\n";
fn enclose_in_main_block(s: &str, config: &Config) -> String {
let indent = Indent::from_width(config, config.tab_spaces());
let mut result = String::with_capacity(s.len() * 2);
result.push_str(FN_MAIN_PREFIX);
let mut need_indent = true;
for (kind, line) in LineClasses::new(s) {
if need_indent {
result.push_str(&indent.to_string(config));
}
result.push_str(&line);
result.push('\n');
need_indent = !kind.is_string() || line.ends_with('\\');
}
result.push('}');
result
}
    // Wrap the given code block in `fn main()` so that it parses as a
    // complete program.
    let snippet = enclose_in_main_block(code_snippet, config);
let mut result = String::with_capacity(snippet.len());
let mut is_first = true;
// While formatting the code, ignore the config's newline style setting and always use "\n"
// instead of "\r\n" for the newline characters. This is okay because the output here is
// not directly outputted by rustfmt command, but used by the comment formatter's input.
// We have output-file-wide "\n" ==> "\r\n" conversion process after here if it's necessary.
let mut config_with_unix_newline = config.clone();
config_with_unix_newline
.set()
.newline_style(NewlineStyle::Unix);
let formatted = format_snippet(&snippet, &config_with_unix_newline)?;
// Trim "fn main() {" on the first line and "}" on the last line,
// then unindent the whole code block.
let block_len = formatted.rfind('}').unwrap_or(formatted.len());
let mut is_indented = true;
for (kind, ref line) in LineClasses::new(&formatted[FN_MAIN_PREFIX.len()..block_len]) {
if !is_first {
result.push('\n');
} else {
is_first = false;
}
let trimmed_line = if !is_indented {
line
} else if line.len() > config.max_width() {
// If there are lines that are larger than max width, we cannot tell
// whether we have succeeded but have some comments or strings that
// are too long, or we have failed to format code block. We will be
// conservative and just return `None` in this case.
return None;
} else if line.len() > config.tab_spaces() {
// Make sure that the line has leading whitespaces.
let indent_str = Indent::from_width(config, config.tab_spaces()).to_string(config);
if line.starts_with(indent_str.as_ref()) {
let offset = if config.hard_tabs() {
1
} else {
config.tab_spaces()
};
&line[offset..]
} else {
line
}
} else {
line
};
result.push_str(trimmed_line);
is_indented = !kind.is_string() || line.ends_with('\\');
}
Some(result)
}
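The wrap/unwrap round-trip performed by `format_code_block` can be sketched with the stdlib alone: indent each line, prepend `fn main() {`, append `}`, then strip the wrapper and one indentation level. This is a hedged simplification (helper names are hypothetical; the real code also handles string continuations, hard tabs, and over-long lines):

```rust
// Wrap a snippet in `fn main()` with one level of indentation.
fn enclose(s: &str, tab: usize) -> String {
    let indent = " ".repeat(tab);
    let body: String = s.lines().map(|l| format!("{indent}{l}\n")).collect();
    format!("fn main() {{\n{body}}}")
}

// Strip the `fn main()` wrapper and unindent the body again.
fn unwrap_main(s: &str, tab: usize) -> String {
    let inner = &s["fn main() {\n".len()..s.rfind('}').unwrap()];
    inner
        .lines()
        .map(|l| l.get(tab..).unwrap_or(l))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let snippet = "let x = 3;";
    let wrapped = enclose(snippet, 4);
    assert_eq!(wrapped, "fn main() {\n    let x = 3;\n}");
    assert_eq!(unwrap_main(&wrapped, 4), "let x = 3;");
}
```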
/// A session is a run of rustfmt across a single or multiple inputs.
pub struct Session<'b, T: Write + 'b> {
pub config: Config,
pub out: Option<&'b mut T>,
pub(crate) errors: ReportedErrors,
source_file: SourceFile,
}
impl<'b, T: Write + 'b> Session<'b, T> {
pub fn new(config: Config, out: Option<&'b mut T>) -> Session<'b, T> {
if config.emit_mode() == EmitMode::Checkstyle {
println!("{}", checkstyle::header());
}
Session {
config,
out,
errors: ReportedErrors::default(),
source_file: SourceFile::new(),
}
}
/// The main entry point for Rustfmt. Formats the given input according to the
/// given config. `out` is only necessary if required by the configuration.
pub fn format(&mut self, input: Input) -> Result<FormatReport, ErrorKind> {
self.format_input_inner(input)
}
pub fn override_config<F, U>(&mut self, mut config: Config, f: F) -> U
where
F: FnOnce(&mut Session<'b, T>) -> U,
{
mem::swap(&mut config, &mut self.config);
let result = f(self);
mem::swap(&mut config, &mut self.config);
result
}
pub fn add_operational_error(&mut self) {
self.errors.has_operational_errors = true;
}
pub fn has_operational_errors(&self) -> bool {
self.errors.has_operational_errors
}
pub fn has_parsing_errors(&self) -> bool {
self.errors.has_parsing_errors
}
pub fn has_formatting_errors(&self) -> bool {
self.errors.has_formatting_errors
}
pub fn has_check_errors(&self) -> bool {
self.errors.has_check_errors
}
pub fn has_diff(&self) -> bool {
self.errors.has_diff
}
pub fn has_no_errors(&self) -> bool {
!(self.has_operational_errors()
|| self.has_parsing_errors()
|| self.has_formatting_errors()
|| self.has_check_errors()
|| self.has_diff())
|| self.errors.has_macro_format_failure
}
}
impl<'b, T: Write + 'b> Drop for Session<'b, T> {
fn drop(&mut self) {
if self.config.emit_mode() == EmitMode::Checkstyle {
println!("{}", checkstyle::footer());
}
}
}
#[derive(Debug)]
pub enum Input {
File(PathBuf),
Text(String),
}
impl Input {
fn is_text(&self) -> bool {
match *self {
Input::File(_) => false,
Input::Text(_) => true,
}
}
fn file_name(&self) -> FileName {
match *self {
Input::File(ref file) => FileName::Real(file.clone()),
Input::Text(..) => FileName::Stdin,
}
}
}
#[cfg(test)]
mod unit_tests {
use super::*;
#[test]
fn test_no_panic_on_format_snippet_and_format_code_block() {
// `format_snippet()` and `format_code_block()` should not panic
// even when we cannot parse the given snippet.
let snippet = "let";
assert!(format_snippet(snippet, &Config::default()).is_none());
assert!(format_code_block(snippet, &Config::default()).is_none());
}
fn test_format_inner<F>(formatter: F, input: &str, expected: &str) -> bool
where
F: Fn(&str, &Config) -> Option<String>,
{
let output = formatter(input, &Config::default());
output.is_some() && output.unwrap() == expected
}
#[test]
fn test_format_snippet() {
let snippet = "fn main() { println!(\"hello, world\"); }";
#[cfg(not(windows))]
let expected = "fn main() {\n \
println!(\"hello, world\");\n\
}\n";
#[cfg(windows)]
let expected = "fn main() {\r\n \
println!(\"hello, world\");\r\n\
}\r\n";
assert!(test_format_inner(format_snippet, snippet, expected));
}
#[test]
fn test_format_code_block_fail() {
#[rustfmt::skip]
let code_block = "this_line_is_100_characters_long_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx(x, y, z);";
assert!(format_code_block(code_block, &Config::default()).is_none());
}
#[test]
fn test_format_code_block() {
// simple code block
let code_block = "let x=3;";
let expected = "let x = 3;";
assert!(test_format_inner(format_code_block, code_block, expected));
// more complex code block, taken from chains.rs.
let code_block =
"let (nested_shape, extend) = if !parent_rewrite_contains_newline && is_continuable(&parent) {
(
chain_indent(context, shape.add_offset(parent_rewrite.len())),
context.config.indent_style() == IndentStyle::Visual || is_small_parent,
)
} else if is_block_expr(context, &parent, &parent_rewrite) {
match context.config.indent_style() {
// Try to put the first child on the same line with parent's last line
IndentStyle::Block => (parent_shape.block_indent(context.config.tab_spaces()), true),
// The parent is a block, so align the rest of the chain with the closing
// brace.
IndentStyle::Visual => (parent_shape, false),
}
} else {
(
chain_indent(context, shape.add_offset(parent_rewrite.len())),
false,
)
};
";
let expected =
"let (nested_shape, extend) = if !parent_rewrite_contains_newline && is_continuable(&parent) {
(
chain_indent(context, shape.add_offset(parent_rewrite.len())),
context.config.indent_style() == IndentStyle::Visual || is_small_parent,
)
} else if is_block_expr(context, &parent, &parent_rewrite) {
match context.config.indent_style() {
// Try to put the first child on the same line with parent's last line
IndentStyle::Block => (parent_shape.block_indent(context.config.tab_spaces()), true),
// The parent is a block, so align the rest of the chain with the closing
// brace.
IndentStyle::Visual => (parent_shape, false),
}
} else {
(
chain_indent(context, shape.add_offset(parent_rewrite.len())),
false,
)
};";
assert!(test_format_inner(format_code_block, code_block, expected));
}
}
// ==== solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/vertical.rs ====
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Format with vertical alignment.
use std::cmp;
use config::lists::*;
use syntax::ast;
use syntax::source_map::{BytePos, Span};
use comment::{combine_strs_with_missing_comments, contains_comment};
use expr::rewrite_field;
use items::{rewrite_struct_field, rewrite_struct_field_prefix};
use lists::{definitive_tactic, itemize_list, write_list, ListFormatting, Separator};
use rewrite::{Rewrite, RewriteContext};
use shape::{Indent, Shape};
use source_map::SpanUtils;
use spanned::Spanned;
use utils::{contains_skip, is_attributes_extendable, mk_sp, rewrite_ident};
pub trait AlignedItem {
fn skip(&self) -> bool;
fn get_span(&self) -> Span;
fn rewrite_prefix(&self, context: &RewriteContext, shape: Shape) -> Option<String>;
fn rewrite_aligned_item(
&self,
context: &RewriteContext,
shape: Shape,
prefix_max_width: usize,
) -> Option<String>;
}
impl AlignedItem for ast::StructField {
fn skip(&self) -> bool {
contains_skip(&self.attrs)
}
fn get_span(&self) -> Span {
self.span()
}
fn rewrite_prefix(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
let attrs_str = self.attrs.rewrite(context, shape)?;
let missing_span = if self.attrs.is_empty() {
mk_sp(self.span.lo(), self.span.lo())
} else {
mk_sp(self.attrs.last().unwrap().span.hi(), self.span.lo())
};
let attrs_extendable = self.ident.is_none() && is_attributes_extendable(&attrs_str);
rewrite_struct_field_prefix(context, self).and_then(|field_str| {
combine_strs_with_missing_comments(
context,
&attrs_str,
&field_str,
missing_span,
shape,
attrs_extendable,
)
})
}
fn rewrite_aligned_item(
&self,
context: &RewriteContext,
shape: Shape,
prefix_max_width: usize,
) -> Option<String> {
rewrite_struct_field(context, self, shape, prefix_max_width)
}
}
impl AlignedItem for ast::Field {
fn skip(&self) -> bool {
contains_skip(&self.attrs)
}
fn get_span(&self) -> Span {
self.span()
}
fn rewrite_prefix(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
let attrs_str = self.attrs.rewrite(context, shape)?;
let name = rewrite_ident(context, self.ident);
let missing_span = if self.attrs.is_empty() {
mk_sp(self.span.lo(), self.span.lo())
} else {
mk_sp(self.attrs.last().unwrap().span.hi(), self.span.lo())
};
combine_strs_with_missing_comments(
context,
&attrs_str,
name,
missing_span,
shape,
is_attributes_extendable(&attrs_str),
)
}
fn rewrite_aligned_item(
&self,
context: &RewriteContext,
shape: Shape,
prefix_max_width: usize,
) -> Option<String> {
rewrite_field(context, self, shape, prefix_max_width)
}
}
pub fn rewrite_with_alignment<T: AlignedItem>(
fields: &[T],
context: &RewriteContext,
shape: Shape,
span: Span,
one_line_width: usize,
) -> Option<String> {
let (spaces, group_index) = if context.config.struct_field_align_threshold() > 0 {
group_aligned_items(context, fields)
} else {
("", fields.len() - 1)
};
let init = &fields[0..group_index + 1];
let rest = &fields[group_index + 1..];
let init_last_pos = if rest.is_empty() {
span.hi()
} else {
// Decide whether the missing comments should stick to init or rest.
let init_hi = init[init.len() - 1].get_span().hi();
let rest_lo = rest[0].get_span().lo();
let missing_span = mk_sp(init_hi, rest_lo);
let missing_span = mk_sp(
context.snippet_provider.span_after(missing_span, ","),
missing_span.hi(),
);
let snippet = context.snippet(missing_span);
if snippet.trim_left().starts_with("//") {
let offset = snippet.lines().next().map_or(0, |l| l.len());
// 2 = "," + "\n"
init_hi + BytePos(offset as u32 + 2)
} else if snippet.trim_left().starts_with("/*") {
let comment_lines = snippet
.lines()
.position(|line| line.trim_right().ends_with("*/"))
.unwrap_or(0);
let offset = snippet
.lines()
.take(comment_lines + 1)
.collect::<Vec<_>>()
.join("\n")
.len();
init_hi + BytePos(offset as u32 + 2)
} else {
missing_span.lo()
}
};
let init_span = mk_sp(span.lo(), init_last_pos);
let one_line_width = if rest.is_empty() { one_line_width } else { 0 };
let result =
rewrite_aligned_items_inner(context, init, init_span, shape.indent, one_line_width)?;
if rest.is_empty() {
Some(result + spaces)
} else {
let rest_span = mk_sp(init_last_pos, span.hi());
let rest_str = rewrite_with_alignment(rest, context, shape, rest_span, one_line_width)?;
Some(
result
+ spaces
+ "\n"
+ &shape
.indent
.block_indent(context.config)
.to_string(context.config)
+ &rest_str,
)
}
}
fn struct_field_prefix_max_min_width<T: AlignedItem>(
context: &RewriteContext,
fields: &[T],
shape: Shape,
) -> (usize, usize) {
fields
.iter()
.map(|field| {
field.rewrite_prefix(context, shape).and_then(|field_str| {
if field_str.contains('\n') {
None
} else {
Some(field_str.len())
}
})
}).fold(Some((0, ::std::usize::MAX)), |acc, len| match (acc, len) {
(Some((max_len, min_len)), Some(len)) => {
Some((cmp::max(max_len, len), cmp::min(min_len, len)))
}
_ => None,
}).unwrap_or((0, 0))
}
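Downstream, `rewrite_aligned_items_inner` uses the (max, min) pair to decide whether aligning is worthwhile: if the spread between the widest and narrowest prefix exceeds `struct_field_align_threshold`, the max width is zeroed and no alignment happens. A minimal sketch of that decision under those assumptions (hypothetical helper, not rustfmt's API):

```rust
// If the fields are too ragged (spread beyond the threshold), give up
// on alignment by reporting a max width of 0; otherwise align to the
// widest prefix.
fn effective_max_width(widths: &[usize], threshold: usize) -> usize {
    let max = widths.iter().copied().max().unwrap_or(0);
    let min = widths.iter().copied().min().unwrap_or(0);
    if max - min > threshold { 0 } else { max }
}

fn main() {
    assert_eq!(effective_max_width(&[3, 5, 4], 20), 5); // align to widest
    assert_eq!(effective_max_width(&[3, 40], 20), 0); // too ragged: skip
}
```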
fn rewrite_aligned_items_inner<T: AlignedItem>(
context: &RewriteContext,
fields: &[T],
span: Span,
offset: Indent,
one_line_width: usize,
) -> Option<String> {
let item_indent = offset.block_indent(context.config);
// 1 = ","
let item_shape = Shape::indented(item_indent, context.config).sub_width(1)?;
let (mut field_prefix_max_width, field_prefix_min_width) =
struct_field_prefix_max_min_width(context, fields, item_shape);
let max_diff = field_prefix_max_width.saturating_sub(field_prefix_min_width);
if max_diff > context.config.struct_field_align_threshold() {
field_prefix_max_width = 0;
}
let items = itemize_list(
context.snippet_provider,
fields.iter(),
"}",
",",
|field| field.get_span().lo(),
|field| field.get_span().hi(),
|field| field.rewrite_aligned_item(context, item_shape, field_prefix_max_width),
span.lo(),
span.hi(),
false,
).collect::<Vec<_>>();
let tactic = definitive_tactic(
&items,
ListTactic::HorizontalVertical,
Separator::Comma,
one_line_width,
);
let fmt = ListFormatting::new(item_shape, context.config)
.tactic(tactic)
.trailing_separator(context.config.trailing_comma())
.preserve_newline(true);
write_list(&items, &fmt)
}
fn group_aligned_items<T: AlignedItem>(
context: &RewriteContext,
fields: &[T],
) -> (&'static str, usize) {
let mut index = 0;
for i in 0..fields.len() - 1 {
if fields[i].skip() {
return ("", index);
}
// See if there are comments or empty lines between fields.
let span = mk_sp(fields[i].get_span().hi(), fields[i + 1].get_span().lo());
let snippet = context
.snippet(span)
.lines()
.skip(1)
.collect::<Vec<_>>()
.join("\n");
let spacings = if snippet.lines().rev().skip(1).any(|l| l.trim().is_empty()) {
"\n"
} else {
""
};
if contains_comment(&snippet) || snippet.lines().count() > 1 {
return (spacings, index);
}
index += 1;
}
("", index)
}
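A hypothetical standalone sketch (not rustfmt's API; the function name is invented) of the threshold rule used by `rewrite_aligned_items_inner` above: fields are aligned to the widest prefix, unless the spread between the widest and narrowest prefix exceeds the configured threshold, in which case alignment is abandoned by zeroing the width.

```rust
// Hypothetical sketch of the alignment threshold rule above: align to the
// widest field prefix, unless the spread between widest and narrowest
// exceeds the threshold (cf. `struct_field_align_threshold`).
fn aligned_prefix_width(prefix_widths: &[usize], threshold: usize) -> usize {
    let max = prefix_widths.iter().copied().max().unwrap_or(0);
    let min = prefix_widths.iter().copied().min().unwrap_or(0);
    if max.saturating_sub(min) > threshold {
        0 // give up on alignment entirely
    } else {
        max
    }
}

fn main() {
    assert_eq!(aligned_prefix_width(&[3, 5, 4], 2), 5);
    assert_eq!(aligned_prefix_width(&[1, 20], 2), 0);
    println!("ok");
}
```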
// ======================================================================
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/issues.rs
// ======================================================================
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Objects for seeking through a char stream for occurrences of TODO and FIXME.
// Depending on the loaded configuration, may also check that these have an
// associated issue number.
use std::fmt;
use config::ReportTactic;
const TO_DO_CHARS: &[char] = &['t', 'o', 'd', 'o'];
const FIX_ME_CHARS: &[char] = &['f', 'i', 'x', 'm', 'e'];
// Enabled implementation detail is here because it is
// irrelevant outside the issues module
fn is_enabled(report_tactic: ReportTactic) -> bool {
report_tactic != ReportTactic::Never
}
#[derive(Clone, Copy)]
enum Seeking {
Issue { todo_idx: usize, fixme_idx: usize },
Number { issue: Issue, part: NumberPart },
}
#[derive(Clone, Copy)]
enum NumberPart {
OpenParen,
Pound,
Number,
CloseParen,
}
#[derive(PartialEq, Eq, Debug, Clone, Copy)]
pub struct Issue {
issue_type: IssueType,
// Indicates whether we're looking for issues with missing numbers, or
// all issues of this type.
missing_number: bool,
}
impl fmt::Display for Issue {
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
let msg = match self.issue_type {
IssueType::Todo => "TODO",
IssueType::Fixme => "FIXME",
};
let details = if self.missing_number {
" without issue number"
} else {
""
};
write!(fmt, "{}{}", msg, details)
}
}
#[derive(PartialEq, Eq, Debug, Clone, Copy)]
enum IssueType {
Todo,
Fixme,
}
enum IssueClassification {
Good,
Bad(Issue),
None,
}
pub struct BadIssueSeeker {
state: Seeking,
report_todo: ReportTactic,
report_fixme: ReportTactic,
}
impl BadIssueSeeker {
pub fn new(report_todo: ReportTactic, report_fixme: ReportTactic) -> BadIssueSeeker {
BadIssueSeeker {
state: Seeking::Issue {
todo_idx: 0,
fixme_idx: 0,
},
report_todo,
report_fixme,
}
}
pub fn is_disabled(&self) -> bool {
!is_enabled(self.report_todo) && !is_enabled(self.report_fixme)
}
// Check whether or not the current char is conclusive evidence for an
// unnumbered TO-DO or FIX-ME.
pub fn inspect(&mut self, c: char) -> Option<Issue> {
match self.state {
Seeking::Issue {
todo_idx,
fixme_idx,
} => {
self.state = self.inspect_issue(c, todo_idx, fixme_idx);
}
Seeking::Number { issue, part } => {
let result = self.inspect_number(c, issue, part);
if let IssueClassification::None = result {
return None;
}
self.state = Seeking::Issue {
todo_idx: 0,
fixme_idx: 0,
};
if let IssueClassification::Bad(issue) = result {
return Some(issue);
}
}
}
None
}
fn inspect_issue(&mut self, c: char, mut todo_idx: usize, mut fixme_idx: usize) -> Seeking {
if let Some(lower_case_c) = c.to_lowercase().next() {
if is_enabled(self.report_todo) && lower_case_c == TO_DO_CHARS[todo_idx] {
todo_idx += 1;
if todo_idx == TO_DO_CHARS.len() {
return Seeking::Number {
issue: Issue {
issue_type: IssueType::Todo,
                            missing_number: matches!(self.report_todo, ReportTactic::Unnumbered),
},
part: NumberPart::OpenParen,
};
}
fixme_idx = 0;
} else if is_enabled(self.report_fixme) && lower_case_c == FIX_ME_CHARS[fixme_idx] {
// Exploit the fact that the character sets of todo and fixme
// are disjoint by adding else.
fixme_idx += 1;
if fixme_idx == FIX_ME_CHARS.len() {
return Seeking::Number {
issue: Issue {
issue_type: IssueType::Fixme,
                            missing_number: matches!(self.report_fixme, ReportTactic::Unnumbered),
},
part: NumberPart::OpenParen,
};
}
todo_idx = 0;
} else {
todo_idx = 0;
fixme_idx = 0;
}
}
Seeking::Issue {
todo_idx,
fixme_idx,
}
}
fn inspect_number(
&mut self,
c: char,
issue: Issue,
mut part: NumberPart,
) -> IssueClassification {
if !issue.missing_number || c == '\n' {
return IssueClassification::Bad(issue);
} else if c == ')' {
return if let NumberPart::CloseParen = part {
IssueClassification::Good
} else {
IssueClassification::Bad(issue)
};
}
match part {
NumberPart::OpenParen => {
if c != '(' {
return IssueClassification::Bad(issue);
} else {
part = NumberPart::Pound;
}
}
NumberPart::Pound => {
if c == '#' {
part = NumberPart::Number;
}
}
NumberPart::Number => {
                if c.is_ascii_digit() {
part = NumberPart::CloseParen;
} else {
return IssueClassification::Bad(issue);
}
}
NumberPart::CloseParen => {}
}
self.state = Seeking::Number { part, issue };
IssueClassification::None
}
}
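A hypothetical, minimal sketch (not rustfmt's API; `stream_contains` is an invented name) of the streaming-match idea behind `BadIssueSeeker::inspect_issue` above: advance an index through a lowercase pattern one character at a time, resetting on mismatch, so the whole input can be scanned in a single pass without backtracking.

```rust
// Hypothetical sketch of the single-pass, case-insensitive pattern seeker
// used by `BadIssueSeeker`: one index per pattern, reset on mismatch.
fn stream_contains(haystack: &str, pattern: &[char]) -> bool {
    let mut idx = 0;
    for c in haystack.chars().flat_map(char::to_lowercase) {
        if c == pattern[idx] {
            idx += 1;
            if idx == pattern.len() {
                return true;
            }
        } else {
            // Restart, but keep a match that begins at the current char.
            idx = if c == pattern[0] { 1 } else { 0 };
        }
    }
    false
}

fn main() {
    const TODO: &[char] = &['t', 'o', 'd', 'o'];
    assert!(stream_contains("// ToDo: fix this", TODO));
    assert!(!stream_contains("// to do later", TODO));
    println!("ok");
}
```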
#[test]
fn find_unnumbered_issue() {
fn check_fail(text: &str, failing_pos: usize) {
let mut seeker = BadIssueSeeker::new(ReportTactic::Unnumbered, ReportTactic::Unnumbered);
assert_eq!(
Some(failing_pos),
text.find(|c| seeker.inspect(c).is_some())
);
}
fn check_pass(text: &str) {
let mut seeker = BadIssueSeeker::new(ReportTactic::Unnumbered, ReportTactic::Unnumbered);
assert_eq!(None, text.find(|c| seeker.inspect(c).is_some()));
}
check_fail("TODO\n", 4);
check_pass(" TO FIX DOME\n");
check_fail(" \n FIXME\n", 8);
check_fail("FIXME(\n", 6);
check_fail("FIXME(#\n", 7);
check_fail("FIXME(#1\n", 8);
check_fail("FIXME(#)1\n", 7);
check_pass("FIXME(#1222)\n");
check_fail("FIXME(#12\n22)\n", 9);
check_pass("FIXME(@maintainer, #1222, hello)\n");
check_fail("TODO(#22) FIXME\n", 15);
}
#[test]
fn find_issue() {
fn is_bad_issue(text: &str, report_todo: ReportTactic, report_fixme: ReportTactic) -> bool {
let mut seeker = BadIssueSeeker::new(report_todo, report_fixme);
text.chars().any(|c| seeker.inspect(c).is_some())
}
assert!(is_bad_issue(
"TODO(@maintainer, #1222, hello)\n",
ReportTactic::Always,
ReportTactic::Never,
));
assert!(!is_bad_issue(
"TODO: no number\n",
ReportTactic::Never,
ReportTactic::Always,
));
assert!(!is_bad_issue(
"Todo: mixed case\n",
ReportTactic::Never,
ReportTactic::Always,
));
assert!(is_bad_issue(
"This is a FIXME(#1)\n",
ReportTactic::Never,
ReportTactic::Always,
));
assert!(is_bad_issue(
"This is a FixMe(#1) mixed case\n",
ReportTactic::Never,
ReportTactic::Always,
));
assert!(!is_bad_issue(
"bad FIXME\n",
ReportTactic::Always,
ReportTactic::Never,
));
}
#[test]
fn issue_type() {
let mut seeker = BadIssueSeeker::new(ReportTactic::Always, ReportTactic::Never);
let expected = Some(Issue {
issue_type: IssueType::Todo,
missing_number: false,
});
assert_eq!(
expected,
"TODO(#100): more awesomeness"
.chars()
.map(|c| seeker.inspect(c))
.find(Option::is_some)
.unwrap()
);
let mut seeker = BadIssueSeeker::new(ReportTactic::Never, ReportTactic::Unnumbered);
let expected = Some(Issue {
issue_type: IssueType::Fixme,
missing_number: true,
});
assert_eq!(
expected,
"Test. FIXME: bad, bad, not good"
.chars()
.map(|c| seeker.inspect(c))
.find(Option::is_some)
.unwrap()
);
}
// ======================================================================
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/matches.rs
// ======================================================================
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Format match expression.
use std::iter::repeat;
use config::lists::*;
use syntax::source_map::{BytePos, Span};
use syntax::{ast, ptr};
use comment::{combine_strs_with_missing_comments, rewrite_comment};
use config::{Config, ControlBraceStyle, IndentStyle};
use expr::{
format_expr, is_empty_block, is_simple_block, is_unsafe_block, prefer_next_line,
rewrite_multiple_patterns, ExprType, RhsTactics, ToExpr,
};
use lists::{itemize_list, write_list, ListFormatting};
use rewrite::{Rewrite, RewriteContext};
use shape::Shape;
use source_map::SpanUtils;
use spanned::Spanned;
use utils::{
contains_skip, extra_offset, first_line_width, inner_attributes, last_line_extendable, mk_sp,
ptr_vec_to_ref_vec, trimmed_last_line_width,
};
/// A simple wrapper type against `ast::Arm`. Used inside `write_list()`.
struct ArmWrapper<'a> {
pub arm: &'a ast::Arm,
/// True if the arm is the last one in match expression. Used to decide on whether we should add
/// trailing comma to the match arm when `config.trailing_comma() == Never`.
pub is_last: bool,
/// Holds a byte position of `|` at the beginning of the arm pattern, if available.
pub beginning_vert: Option<BytePos>,
}
impl<'a> ArmWrapper<'a> {
pub fn new(
arm: &'a ast::Arm,
is_last: bool,
beginning_vert: Option<BytePos>,
) -> ArmWrapper<'a> {
ArmWrapper {
arm,
is_last,
beginning_vert,
}
}
}
impl<'a> Spanned for ArmWrapper<'a> {
fn span(&self) -> Span {
if let Some(lo) = self.beginning_vert {
mk_sp(lo, self.arm.span().hi())
} else {
self.arm.span()
}
}
}
impl<'a> Rewrite for ArmWrapper<'a> {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
rewrite_match_arm(context, self.arm, shape, self.is_last)
}
}
pub fn rewrite_match(
context: &RewriteContext,
cond: &ast::Expr,
arms: &[ast::Arm],
shape: Shape,
span: Span,
attrs: &[ast::Attribute],
) -> Option<String> {
// Do not take the rhs overhead from the upper expressions into account
// when rewriting match condition.
let cond_shape = Shape {
width: context.budget(shape.used_width()),
..shape
};
// 6 = `match `
let cond_shape = match context.config.indent_style() {
IndentStyle::Visual => cond_shape.shrink_left(6)?,
IndentStyle::Block => cond_shape.offset_left(6)?,
};
let cond_str = cond.rewrite(context, cond_shape)?;
let alt_block_sep = &shape.indent.to_string_with_newline(context.config);
let block_sep = match context.config.control_brace_style() {
ControlBraceStyle::AlwaysNextLine => alt_block_sep,
_ if last_line_extendable(&cond_str) => " ",
// 2 = ` {`
_ if cond_str.contains('\n') || cond_str.len() + 2 > cond_shape.width => alt_block_sep,
_ => " ",
};
let nested_indent_str = shape
.indent
.block_indent(context.config)
.to_string(context.config);
// Inner attributes.
let inner_attrs = &inner_attributes(attrs);
let inner_attrs_str = if inner_attrs.is_empty() {
String::new()
} else {
inner_attrs
.rewrite(context, shape)
.map(|s| format!("{}{}\n", nested_indent_str, s))?
};
let open_brace_pos = if inner_attrs.is_empty() {
let hi = if arms.is_empty() {
span.hi()
} else {
arms[0].span().lo()
};
context
.snippet_provider
.span_after(mk_sp(cond.span.hi(), hi), "{")
} else {
inner_attrs[inner_attrs.len() - 1].span().hi()
};
if arms.is_empty() {
let snippet = context.snippet(mk_sp(open_brace_pos, span.hi() - BytePos(1)));
if snippet.trim().is_empty() {
Some(format!("match {} {{}}", cond_str))
} else {
// Empty match with comments or inner attributes? We are not going to bother, sorry ;)
Some(context.snippet(span).to_owned())
}
} else {
let span_after_cond = mk_sp(cond.span.hi(), span.hi());
Some(format!(
"match {}{}{{\n{}{}{}\n{}}}",
cond_str,
block_sep,
inner_attrs_str,
nested_indent_str,
rewrite_match_arms(context, arms, shape, span_after_cond, open_brace_pos)?,
shape.indent.to_string(context.config),
))
}
}
fn arm_comma(config: &Config, body: &ast::Expr, is_last: bool) -> &'static str {
if is_last && config.trailing_comma() == SeparatorTactic::Never {
""
} else if config.match_block_trailing_comma() {
","
} else if let ast::ExprKind::Block(ref block, _) = body.node {
if let ast::BlockCheckMode::Default = block.rules {
""
} else {
","
}
} else {
","
}
}
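A hypothetical boolean restatement (invented name, config flags passed explicitly instead of a `Config`) of the `arm_comma` decision table above: the last arm with `trailing_comma = Never` gets no comma, `match_block_trailing_comma` forces one, and otherwise only plain (`Default`-rules) block bodies omit it.

```rust
// Hypothetical restatement of `arm_comma`: does a match arm need a
// trailing comma, given the relevant config flags and body kind?
fn needs_comma(
    is_last: bool,
    trailing_comma_never: bool,
    match_block_trailing_comma: bool,
    body_is_default_block: bool,
) -> bool {
    if is_last && trailing_comma_never {
        false
    } else if match_block_trailing_comma {
        true
    } else {
        // A plain (`Default`) block body needs no comma; anything else
        // (an unsafe block or a non-block expression) does.
        !body_is_default_block
    }
}

fn main() {
    assert!(!needs_comma(true, true, false, false)); // last arm, `trailing_comma = Never`
    assert!(needs_comma(false, false, true, true)); // `match_block_trailing_comma = true`
    assert!(!needs_comma(false, false, false, true)); // ordinary block body
    assert!(needs_comma(false, false, false, false)); // expression body
    println!("ok");
}
```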
/// Collect a byte position of the beginning `|` for each arm, if available.
fn collect_beginning_verts(
context: &RewriteContext,
arms: &[ast::Arm],
span: Span,
) -> Vec<Option<BytePos>> {
let mut beginning_verts = Vec::with_capacity(arms.len());
let mut lo = context.snippet_provider.span_after(span, "{");
for arm in arms {
let hi = arm.pats[0].span.lo();
let missing_span = mk_sp(lo, hi);
beginning_verts.push(context.snippet_provider.opt_span_before(missing_span, "|"));
lo = arm.span().hi();
}
beginning_verts
}
fn rewrite_match_arms(
context: &RewriteContext,
arms: &[ast::Arm],
shape: Shape,
span: Span,
open_brace_pos: BytePos,
) -> Option<String> {
let arm_shape = shape
.block_indent(context.config.tab_spaces())
.with_max_width(context.config);
let arm_len = arms.len();
let is_last_iter = repeat(false)
.take(arm_len.saturating_sub(1))
.chain(repeat(true));
let beginning_verts = collect_beginning_verts(context, arms, span);
let items = itemize_list(
context.snippet_provider,
arms.iter()
.zip(is_last_iter)
.zip(beginning_verts.into_iter())
.map(|((arm, is_last), beginning_vert)| ArmWrapper::new(arm, is_last, beginning_vert)),
"}",
"|",
|arm| arm.span().lo(),
|arm| arm.span().hi(),
|arm| arm.rewrite(context, arm_shape),
open_brace_pos,
span.hi(),
false,
);
let arms_vec: Vec<_> = items.collect();
// We will add/remove commas inside `arm.rewrite()`, and hence no separator here.
let fmt = ListFormatting::new(arm_shape, context.config)
.separator("")
.preserve_newline(true);
write_list(&arms_vec, &fmt)
}
fn rewrite_match_arm(
context: &RewriteContext,
arm: &ast::Arm,
shape: Shape,
is_last: bool,
) -> Option<String> {
let (missing_span, attrs_str) = if !arm.attrs.is_empty() {
if contains_skip(&arm.attrs) {
let (_, body) = flatten_arm_body(context, &arm.body);
// `arm.span()` does not include trailing comma, add it manually.
return Some(format!(
"{}{}",
context.snippet(arm.span()),
arm_comma(context.config, body, is_last),
));
}
let missing_span = mk_sp(
arm.attrs[arm.attrs.len() - 1].span.hi(),
arm.pats[0].span.lo(),
);
(missing_span, arm.attrs.rewrite(context, shape)?)
} else {
(mk_sp(arm.span().lo(), arm.span().lo()), String::new())
};
let pats_str =
rewrite_match_pattern(context, &ptr_vec_to_ref_vec(&arm.pats), &arm.guard, shape)
.and_then(|pats_str| {
combine_strs_with_missing_comments(
context,
&attrs_str,
&pats_str,
missing_span,
shape,
false,
)
})?;
let arrow_span = mk_sp(arm.pats.last().unwrap().span.hi(), arm.body.span.lo());
rewrite_match_body(
context,
&arm.body,
&pats_str,
shape,
arm.guard.is_some(),
arrow_span,
is_last,
)
}
fn rewrite_match_pattern(
context: &RewriteContext,
pats: &[&ast::Pat],
guard: &Option<ptr::P<ast::Expr>>,
shape: Shape,
) -> Option<String> {
// Patterns
// 5 = ` => {`
let pat_shape = shape.sub_width(5)?;
let pats_str = rewrite_multiple_patterns(context, pats, pat_shape)?;
// Guard
let guard_str = rewrite_guard(
context,
guard,
shape,
trimmed_last_line_width(&pats_str),
        pats_str.contains('\n'),
)?;
Some(format!("{}{}", pats_str, guard_str))
}
fn block_can_be_flattened<'a>(
context: &RewriteContext,
expr: &'a ast::Expr,
) -> Option<&'a ast::Block> {
match expr.node {
ast::ExprKind::Block(ref block, _)
if !is_unsafe_block(block)
&& is_simple_block(block, Some(&expr.attrs), context.source_map) =>
{
Some(&*block)
}
_ => None,
}
}
// (extend, body)
// @extend: true if the arm body can be put next to `=>`
// @body: flattened body, if the body is block with a single expression
fn flatten_arm_body<'a>(context: &'a RewriteContext, body: &'a ast::Expr) -> (bool, &'a ast::Expr) {
if let Some(ref block) = block_can_be_flattened(context, body) {
if let ast::StmtKind::Expr(ref expr) = block.stmts[0].node {
if let ast::ExprKind::Block(..) = expr.node {
flatten_arm_body(context, expr)
} else {
let can_extend_expr =
!context.config.force_multiline_blocks() && can_flatten_block_around_this(expr);
(can_extend_expr, &*expr)
}
} else {
(false, &*body)
}
} else {
(
!context.config.force_multiline_blocks() && body.can_be_overflowed(context, 1),
&*body,
)
}
}
fn rewrite_match_body(
context: &RewriteContext,
body: &ptr::P<ast::Expr>,
pats_str: &str,
shape: Shape,
has_guard: bool,
arrow_span: Span,
is_last: bool,
) -> Option<String> {
let (extend, body) = flatten_arm_body(context, body);
let (is_block, is_empty_block) = if let ast::ExprKind::Block(ref block, _) = body.node {
(
true,
is_empty_block(block, Some(&body.attrs), context.source_map),
)
} else {
(false, false)
};
let comma = arm_comma(context.config, body, is_last);
let alt_block_sep = &shape.indent.to_string_with_newline(context.config);
let combine_orig_body = |body_str: &str| {
let block_sep = match context.config.control_brace_style() {
ControlBraceStyle::AlwaysNextLine if is_block => alt_block_sep,
_ => " ",
};
Some(format!("{} =>{}{}{}", pats_str, block_sep, body_str, comma))
};
let next_line_indent = if !is_block || is_empty_block {
shape.indent.block_indent(context.config)
} else {
shape.indent
};
let forbid_same_line = has_guard && pats_str.contains('\n') && !is_empty_block;
// Look for comments between `=>` and the start of the body.
let arrow_comment = {
let arrow_snippet = context.snippet(arrow_span).trim();
let arrow_index = arrow_snippet.find("=>").unwrap();
// 2 = `=>`
let comment_str = arrow_snippet[arrow_index + 2..].trim();
if comment_str.is_empty() {
String::new()
} else {
rewrite_comment(comment_str, false, shape, &context.config)?
}
};
let combine_next_line_body = |body_str: &str| {
let nested_indent_str = next_line_indent.to_string_with_newline(context.config);
if is_block {
let mut result = pats_str.to_owned();
result.push_str(" =>");
if !arrow_comment.is_empty() {
result.push_str(&nested_indent_str);
result.push_str(&arrow_comment);
}
result.push_str(&nested_indent_str);
result.push_str(&body_str);
return Some(result);
}
let indent_str = shape.indent.to_string_with_newline(context.config);
let (body_prefix, body_suffix) = if context.config.match_arm_blocks() {
let comma = if context.config.match_block_trailing_comma() {
","
} else {
""
};
("{", format!("{}}}{}", indent_str, comma))
} else {
("", String::from(","))
};
let block_sep = match context.config.control_brace_style() {
ControlBraceStyle::AlwaysNextLine => format!("{}{}", alt_block_sep, body_prefix),
_ if body_prefix.is_empty() => "".to_owned(),
_ if forbid_same_line || !arrow_comment.is_empty() => {
format!("{}{}", alt_block_sep, body_prefix)
}
_ => format!(" {}", body_prefix),
} + &nested_indent_str;
let mut result = pats_str.to_owned();
result.push_str(" =>");
if !arrow_comment.is_empty() {
result.push_str(&indent_str);
result.push_str(&arrow_comment);
}
result.push_str(&block_sep);
result.push_str(&body_str);
result.push_str(&body_suffix);
Some(result)
};
// Let's try and get the arm body on the same line as the condition.
// 4 = ` => `.len()
let orig_body_shape = shape
.offset_left(extra_offset(pats_str, shape) + 4)
.and_then(|shape| shape.sub_width(comma.len()));
let orig_body = if forbid_same_line || !arrow_comment.is_empty() {
None
} else if let Some(body_shape) = orig_body_shape {
let rewrite = nop_block_collapse(
format_expr(body, ExprType::Statement, context, body_shape),
body_shape.width,
);
match rewrite {
Some(ref body_str)
if is_block || (!body_str.contains('\n') && body_str.len() <= body_shape.width) =>
{
return combine_orig_body(body_str);
}
_ => rewrite,
}
} else {
None
};
let orig_budget = orig_body_shape.map_or(0, |shape| shape.width);
// Try putting body on the next line and see if it looks better.
let next_line_body_shape = Shape::indented(next_line_indent, context.config);
let next_line_body = nop_block_collapse(
format_expr(body, ExprType::Statement, context, next_line_body_shape),
next_line_body_shape.width,
);
match (orig_body, next_line_body) {
(Some(ref orig_str), Some(ref next_line_str))
if prefer_next_line(orig_str, next_line_str, RhsTactics::Default) =>
{
combine_next_line_body(next_line_str)
}
(Some(ref orig_str), _) if extend && first_line_width(orig_str) <= orig_budget => {
combine_orig_body(orig_str)
}
(Some(ref orig_str), Some(ref next_line_str)) if orig_str.contains('\n') => {
combine_next_line_body(next_line_str)
}
(None, Some(ref next_line_str)) => combine_next_line_body(next_line_str),
(None, None) => None,
(Some(ref orig_str), _) => combine_orig_body(orig_str),
}
}
// The `if ...` guard on a match arm.
fn rewrite_guard(
context: &RewriteContext,
guard: &Option<ptr::P<ast::Expr>>,
shape: Shape,
// The amount of space used up on this line for the pattern in
// the arm (excludes offset).
pattern_width: usize,
multiline_pattern: bool,
) -> Option<String> {
if let Some(ref guard) = *guard {
// First try to fit the guard string on the same line as the pattern.
// 4 = ` if `, 5 = ` => {`
let cond_shape = shape
.offset_left(pattern_width + 4)
.and_then(|s| s.sub_width(5));
if !multiline_pattern {
if let Some(cond_shape) = cond_shape {
if let Some(cond_str) = guard.rewrite(context, cond_shape) {
if !cond_str.contains('\n') || pattern_width <= context.config.tab_spaces() {
return Some(format!(" if {}", cond_str));
}
}
}
}
// Not enough space to put the guard after the pattern, try a newline.
// 3 = `if `, 5 = ` => {`
let cond_shape = Shape::indented(shape.indent.block_indent(context.config), context.config)
.offset_left(3)
.and_then(|s| s.sub_width(5));
if let Some(cond_shape) = cond_shape {
if let Some(cond_str) = guard.rewrite(context, cond_shape) {
return Some(format!(
"{}if {}",
cond_shape.indent.to_string_with_newline(context.config),
cond_str
));
}
}
None
} else {
Some(String::new())
}
}
fn nop_block_collapse(block_str: Option<String>, budget: usize) -> Option<String> {
debug!("nop_block_collapse {:?} {}", block_str, budget);
block_str.map(|block_str| {
if block_str.starts_with('{')
&& budget >= 2
&& (block_str[1..].find(|c: char| !c.is_whitespace()).unwrap() == block_str.len() - 2)
{
"{}".to_owned()
} else {
block_str.to_owned()
}
})
}
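A hypothetical standalone restatement (invented name) of `nop_block_collapse` above: a block whose body is nothing but whitespace collapses to `{}`. The real function additionally checks that the budget allows at least two columns and operates on an `Option<String>`.

```rust
// Hypothetical restatement of `nop_block_collapse`: collapse a block whose
// body is all whitespace down to the two-character form "{}".
fn collapse_empty_block(block_str: &str) -> String {
    if block_str.len() >= 2
        && block_str.starts_with('{')
        && block_str.ends_with('}')
        && block_str[1..block_str.len() - 1].trim().is_empty()
    {
        "{}".to_owned()
    } else {
        block_str.to_owned()
    }
}

fn main() {
    assert_eq!(collapse_empty_block("{    }"), "{}");
    assert_eq!(collapse_empty_block("{ 1 }"), "{ 1 }");
    println!("ok");
}
```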
fn can_flatten_block_around_this(body: &ast::Expr) -> bool {
match body.node {
// We do not allow `if` to stay on the same line, since we could easily mistake
// `pat => if cond { ... }` and `pat if cond => { ... }`.
ast::ExprKind::If(..) | ast::ExprKind::IfLet(..) => false,
// We do not allow collapsing a block around expression with condition
// to avoid it being cluttered with match arm.
ast::ExprKind::ForLoop(..) | ast::ExprKind::While(..) | ast::ExprKind::WhileLet(..) => {
false
}
ast::ExprKind::Loop(..)
| ast::ExprKind::Match(..)
| ast::ExprKind::Block(..)
| ast::ExprKind::Closure(..)
| ast::ExprKind::Array(..)
| ast::ExprKind::Call(..)
| ast::ExprKind::MethodCall(..)
| ast::ExprKind::Mac(..)
| ast::ExprKind::Struct(..)
| ast::ExprKind::Tup(..) => true,
ast::ExprKind::AddrOf(_, ref expr)
| ast::ExprKind::Box(ref expr)
| ast::ExprKind::Try(ref expr)
| ast::ExprKind::Unary(_, ref expr)
| ast::ExprKind::Cast(ref expr, _) => can_flatten_block_around_this(expr),
_ => false,
}
}
// ======================================================================
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/chains.rs
// ======================================================================
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Formatting of chained expressions, i.e. expressions which are chained by
//! dots: struct and enum field access, method calls, and try shorthand (?).
//!
//! Instead of walking these subexpressions one-by-one, as is our usual strategy
//! for expression formatting, we collect maximal sequences of these expressions
//! and handle them simultaneously.
//!
//! Whenever possible, the entire chain is put on a single line. If that fails,
//! we put each subexpression on a separate line, much like the (default)
//! function argument strategy.
//!
//! Depends on config options: `chain_indent` is the indent to use for
//! blocks in the parent/root/base of the chain (and the rest of the chain's
//! alignment).
//! E.g., `let foo = { aaaa; bbb; ccc }.bar.baz();` would be laid out as
//! follows for each value of `chain_indent`:
//! Block:
//!
//! ```ignore
//! let foo = {
//! aaaa;
//! bbb;
//! ccc
//! }.bar
//! .baz();
//! ```
//!
//! Visual:
//!
//! ```ignore
//! let foo = {
//! aaaa;
//! bbb;
//! ccc
//! }
//! .bar
//! .baz();
//! ```
//!
//! If the first item in the chain is a block expression, we align the dots with
//! the braces.
//! Block:
//!
//! ```ignore
//! let a = foo.bar
//! .baz()
//! .qux
//! ```
//!
//! Visual:
//!
//! ```ignore
//! let a = foo.bar
//! .baz()
//! .qux
//! ```
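A hypothetical, purely textual sketch (invented name; the real code walks the AST via `make_subexpr_list`, not strings) of the "maximal sequence" idea described above: split a dotted chain into its segments, ignoring dots nested inside brackets. Dots inside literals such as `1.5` are not handled; this only illustrates the grouping.

```rust
// Hypothetical textual sketch of chain collection: split on top-level
// dots, tracking bracket depth so `bar(x.y)` stays one segment.
fn chain_segments(expr: &str) -> Vec<String> {
    let mut depth = 0usize;
    let mut segments = Vec::new();
    let mut current = String::new();
    for c in expr.chars() {
        match c {
            '(' | '[' | '{' => {
                depth += 1;
                current.push(c);
            }
            ')' | ']' | '}' => {
                depth = depth.saturating_sub(1);
                current.push(c);
            }
            '.' if depth == 0 => segments.push(std::mem::take(&mut current)),
            _ => current.push(c),
        }
    }
    segments.push(current);
    segments
}

fn main() {
    assert_eq!(
        chain_segments("foo.bar(x.y).baz()"),
        vec!["foo", "bar(x.y)", "baz()"]
    );
    println!("ok");
}
```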
use comment::rewrite_comment;
use config::IndentStyle;
use expr::rewrite_call;
use lists::{extract_post_comment, extract_pre_comment, get_comment_end};
use macros::convert_try_mac;
use rewrite::{Rewrite, RewriteContext};
use shape::Shape;
use source_map::SpanUtils;
use utils::{
first_line_width, last_line_extendable, last_line_width, mk_sp, trimmed_last_line_width,
wrap_str,
};
use std::borrow::Cow;
use std::cmp::min;
use std::iter;
use syntax::source_map::{BytePos, Span};
use syntax::{ast, ptr};
pub fn rewrite_chain(expr: &ast::Expr, context: &RewriteContext, shape: Shape) -> Option<String> {
let chain = Chain::from_ast(expr, context);
debug!("rewrite_chain {:?} {:?}", chain, shape);
// If this is just an expression with some `?`s, then format it trivially and
// return early.
if chain.children.is_empty() {
return chain.parent.rewrite(context, shape);
}
chain.rewrite(context, shape)
}
#[derive(Debug)]
enum CommentPosition {
Back,
Top,
}
// An expression plus trailing `?`s to be formatted together.
#[derive(Debug)]
struct ChainItem {
kind: ChainItemKind,
tries: usize,
span: Span,
}
// FIXME: we can't use a reference here because to convert `try!` to `?` we
// synthesise the AST node. However, I think we could use `Cow` and that
// would remove a lot of cloning.
#[derive(Debug)]
enum ChainItemKind {
Parent(ast::Expr),
MethodCall(
ast::PathSegment,
Vec<ast::GenericArg>,
Vec<ptr::P<ast::Expr>>,
),
StructField(ast::Ident),
TupleField(ast::Ident, bool),
Comment(String, CommentPosition),
}
impl ChainItemKind {
fn is_block_like(&self, context: &RewriteContext, reps: &str) -> bool {
match self {
ChainItemKind::Parent(ref expr) => is_block_expr(context, expr, reps),
ChainItemKind::MethodCall(..) => reps.contains('\n'),
ChainItemKind::StructField(..)
| ChainItemKind::TupleField(..)
| ChainItemKind::Comment(..) => false,
}
}
fn is_tup_field_access(expr: &ast::Expr) -> bool {
match expr.node {
ast::ExprKind::Field(_, ref field) => {
field.name.to_string().chars().all(|c| c.is_digit(10))
}
_ => false,
}
}
fn from_ast(context: &RewriteContext, expr: &ast::Expr) -> (ChainItemKind, Span) {
let (kind, span) = match expr.node {
ast::ExprKind::MethodCall(ref segment, ref expressions) => {
let types = if let Some(ref generic_args) = segment.args {
if let ast::GenericArgs::AngleBracketed(ref data) = **generic_args {
data.args.clone()
} else {
vec![]
}
} else {
vec![]
};
let span = mk_sp(expressions[0].span.hi(), expr.span.hi());
let kind = ChainItemKind::MethodCall(segment.clone(), types, expressions.clone());
(kind, span)
}
ast::ExprKind::Field(ref nested, field) => {
let kind = if Self::is_tup_field_access(expr) {
ChainItemKind::TupleField(field, Self::is_tup_field_access(nested))
} else {
ChainItemKind::StructField(field)
};
let span = mk_sp(nested.span.hi(), field.span.hi());
(kind, span)
}
_ => return (ChainItemKind::Parent(expr.clone()), expr.span),
};
// Remove comments from the span.
let lo = context.snippet_provider.span_before(span, ".");
(kind, mk_sp(lo, span.hi()))
}
}
impl Rewrite for ChainItem {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
let shape = shape.sub_width(self.tries)?;
let rewrite = match self.kind {
ChainItemKind::Parent(ref expr) => expr.rewrite(context, shape)?,
ChainItemKind::MethodCall(ref segment, ref types, ref exprs) => {
Self::rewrite_method_call(segment.ident, types, exprs, self.span, context, shape)?
}
ChainItemKind::StructField(ident) => format!(".{}", ident.name),
ChainItemKind::TupleField(ident, nested) => {
format!("{}.{}", if nested { " " } else { "" }, ident.name)
}
ChainItemKind::Comment(ref comment, _) => {
rewrite_comment(comment, false, shape, context.config)?
}
};
Some(format!("{}{}", rewrite, "?".repeat(self.tries)))
}
}
impl ChainItem {
fn new(context: &RewriteContext, expr: &ast::Expr, tries: usize) -> ChainItem {
let (kind, span) = ChainItemKind::from_ast(context, expr);
ChainItem { kind, tries, span }
}
fn comment(span: Span, comment: String, pos: CommentPosition) -> ChainItem {
ChainItem {
kind: ChainItemKind::Comment(comment, pos),
tries: 0,
span,
}
}
fn is_comment(&self) -> bool {
match self.kind {
ChainItemKind::Comment(..) => true,
_ => false,
}
}
fn rewrite_method_call(
method_name: ast::Ident,
types: &[ast::GenericArg],
args: &[ptr::P<ast::Expr>],
span: Span,
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
let type_str = if types.is_empty() {
String::new()
} else {
let type_list = types
.iter()
.map(|ty| ty.rewrite(context, shape))
.collect::<Option<Vec<_>>>()?;
format!("::<{}>", type_list.join(", "))
};
let callee_str = format!(".{}{}", method_name, type_str);
rewrite_call(context, &callee_str, &args[1..], span, shape)
}
}
#[derive(Debug)]
struct Chain {
parent: ChainItem,
children: Vec<ChainItem>,
}
impl Chain {
fn from_ast(expr: &ast::Expr, context: &RewriteContext) -> Chain {
let subexpr_list = Self::make_subexpr_list(expr, context);
// Un-parse the expression tree into ChainItems
let mut rev_children = vec![];
let mut sub_tries = 0;
for subexpr in &subexpr_list {
match subexpr.node {
ast::ExprKind::Try(_) => sub_tries += 1,
_ => {
rev_children.push(ChainItem::new(context, subexpr, sub_tries));
sub_tries = 0;
}
}
}
fn is_tries(s: &str) -> bool {
s.chars().all(|c| c == '?')
}
fn handle_post_comment(
post_comment_span: Span,
post_comment_snippet: &str,
prev_span_end: &mut BytePos,
children: &mut Vec<ChainItem>,
) {
let white_spaces: &[_] = &[' ', '\t'];
if post_comment_snippet
.trim_matches(white_spaces)
.starts_with('\n')
{
// No post comment.
return;
}
// HACK: Treat `?`s as separators.
let trimmed_snippet = post_comment_snippet.trim_matches('?');
let comment_end = get_comment_end(trimmed_snippet, "?", "", false);
let maybe_post_comment = extract_post_comment(trimmed_snippet, comment_end, "?")
.and_then(|comment| {
if comment.is_empty() {
None
} else {
Some((comment, comment_end))
}
});
if let Some((post_comment, comment_end)) = maybe_post_comment {
children.push(ChainItem::comment(
post_comment_span,
post_comment,
CommentPosition::Back,
));
*prev_span_end = *prev_span_end + BytePos(comment_end as u32);
}
}
let parent = rev_children.pop().unwrap();
let mut children = vec![];
let mut prev_span_end = parent.span.hi();
let mut iter = rev_children.into_iter().rev().peekable();
if let Some(first_chain_item) = iter.peek() {
let comment_span = mk_sp(prev_span_end, first_chain_item.span.lo());
let comment_snippet = context.snippet(comment_span);
if !is_tries(comment_snippet.trim()) {
handle_post_comment(
comment_span,
comment_snippet,
&mut prev_span_end,
&mut children,
);
}
}
while let Some(chain_item) = iter.next() {
let comment_snippet = context.snippet(chain_item.span);
// FIXME: Figure out the way to get a correct span when converting `try!` to `?`.
let handle_comment =
!(context.config.use_try_shorthand() || is_tries(comment_snippet.trim()));
// Pre-comment
if handle_comment {
let pre_comment_span = mk_sp(prev_span_end, chain_item.span.lo());
let pre_comment_snippet = context.snippet(pre_comment_span);
let pre_comment_snippet = pre_comment_snippet.trim().trim_matches('?');
let (pre_comment, _) = extract_pre_comment(pre_comment_snippet);
match pre_comment {
Some(ref comment) if !comment.is_empty() => {
children.push(ChainItem::comment(
pre_comment_span,
comment.to_owned(),
CommentPosition::Top,
));
}
_ => (),
}
}
prev_span_end = chain_item.span.hi();
children.push(chain_item);
// Post-comment
if !handle_comment || iter.peek().is_none() {
continue;
}
let next_lo = iter.peek().unwrap().span.lo();
let post_comment_span = mk_sp(prev_span_end, next_lo);
let post_comment_snippet = context.snippet(post_comment_span);
handle_post_comment(
post_comment_span,
post_comment_snippet,
&mut prev_span_end,
&mut children,
);
}
Chain { parent, children }
}
// Returns a Vec of the prefixes of the chain.
// E.g., for input `a.b.c` we return [`a.b.c`, `a.b`, `a`]
fn make_subexpr_list(expr: &ast::Expr, context: &RewriteContext) -> Vec<ast::Expr> {
let mut subexpr_list = vec![expr.clone()];
while let Some(subexpr) = Self::pop_expr_chain(subexpr_list.last().unwrap(), context) {
subexpr_list.push(subexpr.clone());
}
subexpr_list
}
// Returns the expression's subexpression, if it exists. When the subexpr
// is a try! macro, we'll convert it to shorthand if the option is set.
fn pop_expr_chain(expr: &ast::Expr, context: &RewriteContext) -> Option<ast::Expr> {
match expr.node {
ast::ExprKind::MethodCall(_, ref expressions) => {
Some(Self::convert_try(&expressions[0], context))
}
ast::ExprKind::Field(ref subexpr, _) | ast::ExprKind::Try(ref subexpr) => {
Some(Self::convert_try(subexpr, context))
}
_ => None,
}
}
fn convert_try(expr: &ast::Expr, context: &RewriteContext) -> ast::Expr {
match expr.node {
ast::ExprKind::Mac(ref mac) if context.config.use_try_shorthand() => {
if let Some(subexpr) = convert_try_mac(mac, context) {
subexpr
} else {
expr.clone()
}
}
_ => expr.clone(),
}
}
}
impl Rewrite for Chain {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
debug!("rewrite chain {:?} {:?}", self, shape);
let mut formatter = match context.config.indent_style() {
IndentStyle::Block => Box::new(ChainFormatterBlock::new(self)) as Box<ChainFormatter>,
IndentStyle::Visual => Box::new(ChainFormatterVisual::new(self)) as Box<ChainFormatter>,
};
formatter.format_root(&self.parent, context, shape)?;
if let Some(result) = formatter.pure_root() {
return wrap_str(result, context.config.max_width(), shape);
}
// Decide how to layout the rest of the chain.
let child_shape = formatter.child_shape(context, shape)?;
formatter.format_children(context, child_shape)?;
formatter.format_last_child(context, shape, child_shape)?;
let result = formatter.join_rewrites(context, child_shape)?;
wrap_str(result, context.config.max_width(), shape)
}
}
// There are a few types for formatting chains. This is because there is a lot
// in common between formatting with block vs visual indent, but they are
// different enough that branching on the indent all over the place gets ugly.
// Anything that can format a chain is a ChainFormatter.
trait ChainFormatter {
// Parent is the first item in the chain, e.g., `foo` in `foo.bar.baz()`.
// Root is the parent plus any other chain items placed on the first line to
// avoid an orphan. E.g.,
// ```
// foo.bar
// .baz()
// ```
// If `bar` were not part of the root, then foo would be orphaned and 'float'.
fn format_root(
&mut self,
parent: &ChainItem,
context: &RewriteContext,
shape: Shape,
) -> Option<()>;
fn child_shape(&self, context: &RewriteContext, shape: Shape) -> Option<Shape>;
fn format_children(&mut self, context: &RewriteContext, child_shape: Shape) -> Option<()>;
fn format_last_child(
&mut self,
context: &RewriteContext,
shape: Shape,
child_shape: Shape,
) -> Option<()>;
fn join_rewrites(&self, context: &RewriteContext, child_shape: Shape) -> Option<String>;
// Returns `Some` if the chain is only a root, `None` otherwise.
fn pure_root(&mut self) -> Option<String>;
}
// Data and behaviour that is shared by both chain formatters. The concrete
// formatters can delegate much behaviour to `ChainFormatterShared`.
struct ChainFormatterShared<'a> {
// The current working set of child items.
children: &'a [ChainItem],
// The current rewrites of items (includes trailing `?`s, but not any way to
// connect the rewrites together).
rewrites: Vec<String>,
// Whether the chain can fit on one line.
fits_single_line: bool,
// The number of children in the chain. This is not equal to `self.children.len()`
// because `self.children` will change size as we process the chain.
child_count: usize,
}
impl<'a> ChainFormatterShared<'a> {
fn new(chain: &'a Chain) -> ChainFormatterShared<'a> {
ChainFormatterShared {
children: &chain.children,
rewrites: Vec::with_capacity(chain.children.len() + 1),
fits_single_line: false,
child_count: chain.children.len(),
}
}
fn pure_root(&mut self) -> Option<String> {
if self.children.is_empty() {
assert_eq!(self.rewrites.len(), 1);
Some(self.rewrites.pop().unwrap())
} else {
None
}
}
// Rewrite the last child. The last child of a chain requires special treatment. We need to
// know whether 'overflowing' the last child makes for better formatting:
//
// A chain with overflowing the last child:
// ```
// parent.child1.child2.last_child(
// a,
// b,
// c,
// )
// ```
//
// A chain without overflowing the last child (in vertical layout):
// ```
// parent
// .child1
// .child2
// .last_child(a, b, c)
// ```
//
// In particular, overflowing is effective when the last child is a method call with a
// multi-line block-like argument (e.g. a closure):
// ```
// parent.child1.child2.last_child(|a, b, c| {
// let x = foo(a, b, c);
// let y = bar(a, b, c);
//
// // ...
//
// result
// })
// ```
fn format_last_child(
&mut self,
may_extend: bool,
context: &RewriteContext,
shape: Shape,
child_shape: Shape,
) -> Option<()> {
let last = self.children.last()?;
let extendable = may_extend && last_line_extendable(&self.rewrites[0]);
let prev_last_line_width = last_line_width(&self.rewrites[0]);
// Total of all items excluding the last.
let almost_total = if extendable {
prev_last_line_width
} else {
self.rewrites.iter().fold(0, |a, b| a + b.len())
} + last.tries;
let one_line_budget = if self.child_count == 1 {
shape.width
} else {
min(shape.width, context.config.width_heuristics().chain_width)
}.saturating_sub(almost_total);
let all_in_one_line = !self.children.iter().any(ChainItem::is_comment)
&& self.rewrites.iter().all(|s| !s.contains('\n'))
&& one_line_budget > 0;
let last_shape = if all_in_one_line {
shape.sub_width(last.tries)?
} else if extendable {
child_shape.sub_width(last.tries)?
} else {
child_shape.sub_width(shape.rhs_overhead(context.config) + last.tries)?
};
let mut last_subexpr_str = None;
if all_in_one_line || extendable {
// First we try to 'overflow' the last child and see if it looks better than using
// vertical layout.
if let Some(one_line_shape) = last_shape.offset_left(almost_total) {
if let Some(rw) = last.rewrite(context, one_line_shape) {
// We allow overflowing here only if both of the following conditions match:
// 1. The entire chain fits in a single line except the last child.
// 2. `last_child_str.lines().count() >= 5`.
let line_count = rw.lines().count();
let could_fit_single_line = first_line_width(&rw) <= one_line_budget;
if could_fit_single_line && line_count >= 5 {
last_subexpr_str = Some(rw);
self.fits_single_line = all_in_one_line;
} else {
// We cannot tell whether overflowing is better than using vertical
// layout just by looking at the overflowed rewrite. Now we rewrite the
// last child on its own line, and compare the two rewrites to choose which is
// better.
let last_shape = child_shape
.sub_width(shape.rhs_overhead(context.config) + last.tries)?;
match last.rewrite(context, last_shape) {
Some(ref new_rw) if !could_fit_single_line => {
last_subexpr_str = Some(new_rw.clone());
}
Some(ref new_rw) if new_rw.lines().count() >= line_count => {
last_subexpr_str = Some(rw);
self.fits_single_line = could_fit_single_line && all_in_one_line;
}
new_rw @ Some(..) => {
last_subexpr_str = new_rw;
}
_ => {
last_subexpr_str = Some(rw);
self.fits_single_line = could_fit_single_line && all_in_one_line;
}
}
}
}
}
}
last_subexpr_str = last_subexpr_str.or_else(|| last.rewrite(context, last_shape));
self.rewrites.push(last_subexpr_str?);
Some(())
}
fn join_rewrites(
&self,
context: &RewriteContext,
child_shape: Shape,
block_like_iter: impl Iterator<Item = bool>,
) -> Option<String> {
let connector = if self.fits_single_line {
// Yay, we can put everything on one line.
Cow::from("")
} else {
// Use new lines.
if *context.force_one_line_chain.borrow() {
return None;
}
child_shape.to_string_with_newline(context.config)
};
let mut rewrite_iter = self.rewrites.iter();
let mut result = rewrite_iter.next().unwrap().clone();
let children_iter = self.children.iter();
let iter = rewrite_iter.zip(block_like_iter).zip(children_iter);
for ((rewrite, prev_is_block_like), chain_item) in iter {
match chain_item.kind {
ChainItemKind::Comment(_, CommentPosition::Back) => result.push(' '),
ChainItemKind::Comment(_, CommentPosition::Top) => result.push_str(&connector),
_ => {
if !prev_is_block_like {
result.push_str(&connector);
}
}
}
result.push_str(&rewrite);
}
Some(result)
}
}
// Formats a chain using block indent.
struct ChainFormatterBlock<'a> {
shared: ChainFormatterShared<'a>,
// For each rewrite, whether the corresponding item is block-like.
is_block_like: Vec<bool>,
}
impl<'a> ChainFormatterBlock<'a> {
fn new(chain: &'a Chain) -> ChainFormatterBlock<'a> {
ChainFormatterBlock {
shared: ChainFormatterShared::new(chain),
is_block_like: Vec::with_capacity(chain.children.len() + 1),
}
}
}
impl<'a> ChainFormatter for ChainFormatterBlock<'a> {
fn format_root(
&mut self,
parent: &ChainItem,
context: &RewriteContext,
shape: Shape,
) -> Option<()> {
let mut root_rewrite: String = parent.rewrite(context, shape)?;
let mut root_ends_with_block = parent.kind.is_block_like(context, &root_rewrite);
let tab_width = context.config.tab_spaces().saturating_sub(shape.offset);
while root_rewrite.len() <= tab_width && !root_rewrite.contains('\n') {
let item = &self.shared.children[0];
if let ChainItemKind::Comment(..) = item.kind {
break;
}
let shape = shape.offset_left(root_rewrite.len())?;
match &item.rewrite(context, shape) {
Some(rewrite) => root_rewrite.push_str(rewrite),
None => break,
}
root_ends_with_block = item.kind.is_block_like(context, &root_rewrite);
self.shared.children = &self.shared.children[1..];
if self.shared.children.is_empty() {
break;
}
}
self.is_block_like.push(root_ends_with_block);
self.shared.rewrites.push(root_rewrite);
Some(())
}
fn child_shape(&self, context: &RewriteContext, shape: Shape) -> Option<Shape> {
Some(
if self.is_block_like[0] {
shape.block_indent(0)
} else {
shape.block_indent(context.config.tab_spaces())
}.with_max_width(context.config),
)
}
fn format_children(&mut self, context: &RewriteContext, child_shape: Shape) -> Option<()> {
for item in &self.shared.children[..self.shared.children.len() - 1] {
let rewrite = item.rewrite(context, child_shape)?;
self.is_block_like
.push(item.kind.is_block_like(context, &rewrite));
self.shared.rewrites.push(rewrite);
}
Some(())
}
fn format_last_child(
&mut self,
context: &RewriteContext,
shape: Shape,
child_shape: Shape,
) -> Option<()> {
self.shared
.format_last_child(true, context, shape, child_shape)
}
fn join_rewrites(&self, context: &RewriteContext, child_shape: Shape) -> Option<String> {
self.shared
.join_rewrites(context, child_shape, self.is_block_like.iter().cloned())
}
fn pure_root(&mut self) -> Option<String> {
self.shared.pure_root()
}
}
// Format a chain using visual indent.
struct ChainFormatterVisual<'a> {
shared: ChainFormatterShared<'a>,
// The extra offset from the chain's shape to the position of the `.`
offset: usize,
}
impl<'a> ChainFormatterVisual<'a> {
fn new(chain: &'a Chain) -> ChainFormatterVisual<'a> {
ChainFormatterVisual {
shared: ChainFormatterShared::new(chain),
offset: 0,
}
}
}
impl<'a> ChainFormatter for ChainFormatterVisual<'a> {
fn format_root(
&mut self,
parent: &ChainItem,
context: &RewriteContext,
shape: Shape,
) -> Option<()> {
let parent_shape = shape.visual_indent(0);
let mut root_rewrite = parent.rewrite(context, parent_shape)?;
let multiline = root_rewrite.contains('\n');
self.offset = if multiline {
last_line_width(&root_rewrite).saturating_sub(shape.used_width())
} else {
trimmed_last_line_width(&root_rewrite)
};
if !multiline || parent.kind.is_block_like(context, &root_rewrite) {
let item = &self.shared.children[0];
if let ChainItemKind::Comment(..) = item.kind {
self.shared.rewrites.push(root_rewrite);
return Some(());
}
let child_shape = parent_shape
.visual_indent(self.offset)
.sub_width(self.offset)?;
let rewrite = item.rewrite(context, child_shape)?;
match wrap_str(rewrite, context.config.max_width(), shape) {
Some(rewrite) => root_rewrite.push_str(&rewrite),
None => {
// We couldn't fit in at the visual indent, try the last
// indent.
let rewrite = item.rewrite(context, parent_shape)?;
root_rewrite.push_str(&rewrite);
self.offset = 0;
}
}
self.shared.children = &self.shared.children[1..];
}
self.shared.rewrites.push(root_rewrite);
Some(())
}
fn child_shape(&self, context: &RewriteContext, shape: Shape) -> Option<Shape> {
shape
.with_max_width(context.config)
.offset_left(self.offset)
.map(|s| s.visual_indent(0))
}
fn format_children(&mut self, context: &RewriteContext, child_shape: Shape) -> Option<()> {
for item in &self.shared.children[..self.shared.children.len() - 1] {
let rewrite = item.rewrite(context, child_shape)?;
self.shared.rewrites.push(rewrite);
}
Some(())
}
fn format_last_child(
&mut self,
context: &RewriteContext,
shape: Shape,
child_shape: Shape,
) -> Option<()> {
self.shared
.format_last_child(false, context, shape, child_shape)
}
fn join_rewrites(&self, context: &RewriteContext, child_shape: Shape) -> Option<String> {
self.shared
.join_rewrites(context, child_shape, iter::repeat(false))
}
fn pure_root(&mut self) -> Option<String> {
self.shared.pure_root()
}
}
// States whether an expression's last line exclusively consists of closing
// parens, braces, and brackets in its idiomatic formatting.
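// For illustration (assumed example, not from the original source): a
// multi-line call such as
//
//     foo(
//         bar,
//     )
//
// has a last line consisting only of `)`, so this returns true for it,
// whereas a single-line `foo(bar)` contains no newline and returns false.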
fn is_block_expr(context: &RewriteContext, expr: &ast::Expr, repr: &str) -> bool {
match expr.node {
ast::ExprKind::Mac(..)
| ast::ExprKind::Call(..)
| ast::ExprKind::MethodCall(..)
| ast::ExprKind::Struct(..)
| ast::ExprKind::While(..)
| ast::ExprKind::WhileLet(..)
| ast::ExprKind::If(..)
| ast::ExprKind::IfLet(..)
| ast::ExprKind::Block(..)
| ast::ExprKind::Loop(..)
| ast::ExprKind::ForLoop(..)
| ast::ExprKind::Match(..) => repr.contains('\n'),
ast::ExprKind::Paren(ref expr)
| ast::ExprKind::Binary(_, _, ref expr)
| ast::ExprKind::Index(_, ref expr)
| ast::ExprKind::Unary(_, ref expr)
| ast::ExprKind::Closure(_, _, _, _, ref expr, _)
| ast::ExprKind::Try(ref expr)
| ast::ExprKind::Yield(Some(ref expr)) => is_block_expr(context, expr, repr),
// This can only be a string lit
ast::ExprKind::Lit(_) => {
repr.contains('\n') && trimmed_last_line_width(repr) <= context.config.tab_spaces()
}
_ => false,
}
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/spanned.rs
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use syntax::ast;
use syntax::source_map::Span;
use macros::MacroArg;
use utils::{mk_sp, outer_attributes};
use std::cmp::max;
/// Spanned returns a span including attributes, if available.
pub trait Spanned {
fn span(&self) -> Span;
}
macro_rules! span_with_attrs_lo_hi {
($this:ident, $lo:expr, $hi:expr) => {{
let attrs = outer_attributes(&$this.attrs);
if attrs.is_empty() {
mk_sp($lo, $hi)
} else {
mk_sp(attrs[0].span.lo(), $hi)
}
}};
}
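// For illustration (assumed example, not from the source): given
// `#[cfg(test)] fn f() {}`, the span produced by this macro starts at the `#`
// of the attribute rather than at `fn`, so attributes are retained when the
// item is reprinted from its span.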
macro_rules! span_with_attrs {
($this:ident) => {
span_with_attrs_lo_hi!($this, $this.span.lo(), $this.span.hi())
};
}
macro_rules! implement_spanned {
($this:ty) => {
impl Spanned for $this {
fn span(&self) -> Span {
span_with_attrs!(self)
}
}
};
}
// Implement `Spanned` for structs with `attrs` field.
implement_spanned!(ast::Expr);
implement_spanned!(ast::Field);
implement_spanned!(ast::ForeignItem);
implement_spanned!(ast::Item);
implement_spanned!(ast::Local);
implement_spanned!(ast::TraitItem);
implement_spanned!(ast::ImplItem);
impl Spanned for ast::Stmt {
fn span(&self) -> Span {
match self.node {
ast::StmtKind::Local(ref local) => mk_sp(local.span().lo(), self.span.hi()),
ast::StmtKind::Item(ref item) => mk_sp(item.span().lo(), self.span.hi()),
ast::StmtKind::Expr(ref expr) | ast::StmtKind::Semi(ref expr) => {
mk_sp(expr.span().lo(), self.span.hi())
}
ast::StmtKind::Mac(ref mac) => {
let (_, _, ref attrs) = **mac;
if attrs.is_empty() {
self.span
} else {
mk_sp(attrs[0].span.lo(), self.span.hi())
}
}
}
}
}
impl Spanned for ast::Pat {
fn span(&self) -> Span {
self.span
}
}
impl Spanned for ast::Ty {
fn span(&self) -> Span {
self.span
}
}
impl Spanned for ast::Arm {
fn span(&self) -> Span {
let lo = if self.attrs.is_empty() {
self.pats[0].span.lo()
} else {
self.attrs[0].span.lo()
};
span_with_attrs_lo_hi!(self, lo, self.body.span.hi())
}
}
impl Spanned for ast::Arg {
fn span(&self) -> Span {
if ::items::is_named_arg(self) {
mk_sp(self.pat.span.lo(), self.ty.span.hi())
} else {
self.ty.span
}
}
}
impl Spanned for ast::GenericParam {
fn span(&self) -> Span {
let lo = if self.attrs.is_empty() {
self.ident.span.lo()
} else {
self.attrs[0].span.lo()
};
let hi = if self.bounds.is_empty() {
self.ident.span.hi()
} else {
self.bounds.last().unwrap().span().hi()
};
let ty_hi = if let ast::GenericParamKind::Type {
default: Some(ref ty),
} = self.kind
{
ty.span().hi()
} else {
hi
};
mk_sp(lo, max(hi, ty_hi))
}
}
impl Spanned for ast::StructField {
fn span(&self) -> Span {
span_with_attrs_lo_hi!(self, self.span.lo(), self.ty.span.hi())
}
}
impl Spanned for ast::WherePredicate {
fn span(&self) -> Span {
match *self {
ast::WherePredicate::BoundPredicate(ref p) => p.span,
ast::WherePredicate::RegionPredicate(ref p) => p.span,
ast::WherePredicate::EqPredicate(ref p) => p.span,
}
}
}
impl Spanned for ast::FunctionRetTy {
fn span(&self) -> Span {
match *self {
ast::FunctionRetTy::Default(span) => span,
ast::FunctionRetTy::Ty(ref ty) => ty.span,
}
}
}
impl Spanned for ast::GenericArg {
fn span(&self) -> Span {
match *self {
ast::GenericArg::Lifetime(ref lt) => lt.ident.span,
ast::GenericArg::Type(ref ty) => ty.span(),
}
}
}
impl Spanned for ast::GenericBound {
fn span(&self) -> Span {
match *self {
ast::GenericBound::Trait(ref ptr, _) => ptr.span,
ast::GenericBound::Outlives(ref l) => l.ident.span,
}
}
}
impl Spanned for MacroArg {
fn span(&self) -> Span {
match *self {
MacroArg::Expr(ref expr) => expr.span(),
MacroArg::Ty(ref ty) => ty.span(),
MacroArg::Pat(ref pat) => pat.span(),
MacroArg::Item(ref item) => item.span(),
}
}
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/attr.rs
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Format attributes and meta items.
use comment::{contains_comment, rewrite_doc_comment};
use config::lists::*;
use config::IndentStyle;
use expr::rewrite_literal;
use lists::{definitive_tactic, itemize_list, write_list, ListFormatting, Separator};
use rewrite::{Rewrite, RewriteContext};
use shape::Shape;
use types::{rewrite_path, PathContext};
use utils::{count_newlines, mk_sp};
use std::borrow::Cow;
use syntax::ast;
use syntax::source_map::{BytePos, Span, DUMMY_SP};
/// Returns attributes on the given statement.
pub fn get_attrs_from_stmt(stmt: &ast::Stmt) -> &[ast::Attribute] {
match stmt.node {
ast::StmtKind::Local(ref local) => &local.attrs,
ast::StmtKind::Item(ref item) => &item.attrs,
ast::StmtKind::Expr(ref expr) | ast::StmtKind::Semi(ref expr) => &expr.attrs,
ast::StmtKind::Mac(ref mac) => &mac.2,
}
}
/// Returns attributes that are within `outer_span`.
pub fn filter_inline_attrs(attrs: &[ast::Attribute], outer_span: Span) -> Vec<ast::Attribute> {
attrs
.iter()
.filter(|a| outer_span.lo() <= a.span.lo() && a.span.hi() <= outer_span.hi())
.cloned()
.collect()
}
fn is_derive(attr: &ast::Attribute) -> bool {
attr.check_name("derive")
}
/// Returns the arguments of `#[derive(...)]`.
fn get_derive_spans<'a>(attr: &ast::Attribute) -> Option<Vec<Span>> {
attr.meta_item_list().map(|meta_item_list| {
meta_item_list
.iter()
.map(|nested_meta_item| nested_meta_item.span)
.collect()
})
}
// The shape of the arguments to a function-like attribute.
fn argument_shape(
left: usize,
right: usize,
combine: bool,
shape: Shape,
context: &RewriteContext,
) -> Option<Shape> {
match context.config.indent_style() {
IndentStyle::Block => {
if combine {
shape.offset_left(left)
} else {
Some(
shape
.block_indent(context.config.tab_spaces())
.with_max_width(context.config),
)
}
}
IndentStyle::Visual => shape
.visual_indent(0)
.shrink_left(left)
.and_then(|s| s.sub_width(right)),
}
}
fn format_derive(
derive_args: &[Span],
prefix: &str,
shape: Shape,
context: &RewriteContext,
) -> Option<String> {
let mut result = String::with_capacity(128);
result.push_str(prefix);
result.push_str("[derive(");
let argument_shape = argument_shape(10 + prefix.len(), 2, false, shape, context)?;
let item_str = format_arg_list(
derive_args.iter(),
|_| DUMMY_SP.lo(),
|_| DUMMY_SP.hi(),
|sp| Some(context.snippet(**sp).to_owned()),
DUMMY_SP,
context,
argument_shape,
// 10 = "[derive()]", 3 = "()" and "]"
shape.offset_left(10 + prefix.len())?.sub_width(3)?,
None,
false,
)?;
result.push_str(&item_str);
if item_str.starts_with('\n') {
result.push(',');
result.push_str(&shape.indent.to_string_with_newline(context.config));
}
result.push_str(")]");
Some(result)
}
/// Returns the first group of attributes that fulfills the given predicate.
/// Two doc comments are considered to be in different groups if they are separated by a
/// normal comment.
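// For illustration (assumed example, not from the source):
//
//     /// doc A
//     /// doc B
//     // a plain comment ends the group
//     /// doc C
//
// With a predicate matching doc comments, only the first two attributes are
// returned: the plain comment between `doc B` and `doc C` breaks the group.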
fn take_while_with_pred<'a, P>(
context: &RewriteContext,
attrs: &'a [ast::Attribute],
pred: P,
) -> &'a [ast::Attribute]
where
P: Fn(&ast::Attribute) -> bool,
{
let mut len = 0;
let mut iter = attrs.iter().peekable();
while let Some(attr) = iter.next() {
if pred(attr) {
len += 1;
} else {
break;
}
if let Some(next_attr) = iter.peek() {
// Extract comments between two attributes.
let span_between_attr = mk_sp(attr.span.hi(), next_attr.span.lo());
let snippet = context.snippet(span_between_attr);
if count_newlines(snippet) >= 2 || snippet.contains('/') {
break;
}
}
}
&attrs[..len]
}
/// Rewrite any doc comments which come before any other attributes.
fn rewrite_initial_doc_comments(
context: &RewriteContext,
attrs: &[ast::Attribute],
shape: Shape,
) -> Option<(usize, Option<String>)> {
if attrs.is_empty() {
return Some((0, None));
}
// Rewrite doc comments
let sugared_docs = take_while_with_pred(context, attrs, |a| a.is_sugared_doc);
if !sugared_docs.is_empty() {
let snippet = sugared_docs
.iter()
.map(|a| context.snippet(a.span))
.collect::<Vec<_>>()
.join("\n");
return Some((
sugared_docs.len(),
Some(rewrite_doc_comment(
&snippet,
shape.comment(context.config),
context.config,
)?),
));
}
Some((0, None))
}
impl Rewrite for ast::NestedMetaItem {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
match self.node {
ast::NestedMetaItemKind::MetaItem(ref meta_item) => meta_item.rewrite(context, shape),
ast::NestedMetaItemKind::Literal(ref l) => rewrite_literal(context, l, shape),
}
}
}
fn has_newlines_before_after_comment(comment: &str) -> (&str, &str) {
// Look at before and after comment and see if there are any empty lines.
let comment_begin = comment.find('/');
let len = comment_begin.unwrap_or_else(|| comment.len());
let mlb = count_newlines(&comment[..len]) > 1;
let mla = if comment_begin.is_none() {
mlb
} else {
comment
.chars()
.rev()
.take_while(|c| c.is_whitespace())
.filter(|&c| c == '\n')
.count()
> 1
};
(if mlb { "\n" } else { "" }, if mla { "\n" } else { "" })
}
impl Rewrite for ast::MetaItem {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
Some(match self.node {
ast::MetaItemKind::Word => {
rewrite_path(context, PathContext::Type, None, &self.ident, shape)?
}
ast::MetaItemKind::List(ref list) => {
let path = rewrite_path(context, PathContext::Type, None, &self.ident, shape)?;
let snippet = context.snippet(self.span);
// 2 = )] (this might go wrong if there is whitespace between the brackets, but
// it's close enough).
let snippet = snippet[..snippet.len() - 2].trim();
let trailing_comma = if snippet.ends_with(',') { "," } else { "" };
let combine = list.len() == 1 && match list[0].node {
ast::NestedMetaItemKind::Literal(..) => false,
ast::NestedMetaItemKind::MetaItem(ref inner_meta_item) => {
match inner_meta_item.node {
ast::MetaItemKind::List(..) => rewrite_path(
context,
PathContext::Type,
None,
&inner_meta_item.ident,
shape,
).map_or(false, |s| s.len() + path.len() + 2 <= shape.width),
_ => false,
}
}
};
let argument_shape = argument_shape(
path.len() + 1,
2 + trailing_comma.len(),
combine,
shape,
context,
)?;
let item_str = format_arg_list(
list.iter(),
|nested_meta_item| nested_meta_item.span.lo(),
|nested_meta_item| nested_meta_item.span.hi(),
|nested_meta_item| nested_meta_item.rewrite(context, argument_shape),
self.span,
context,
argument_shape,
// 3 = "()" and "]"
shape
.offset_left(path.len())?
.sub_width(3 + trailing_comma.len())?,
Some(context.config.width_heuristics().fn_call_width),
combine,
)?;
let indent = if item_str.starts_with('\n') {
shape.indent.to_string_with_newline(context.config)
} else {
Cow::Borrowed("")
};
format!("{}({}{}{})", path, item_str, trailing_comma, indent)
}
ast::MetaItemKind::NameValue(ref literal) => {
let path = rewrite_path(context, PathContext::Type, None, &self.ident, shape)?;
// 3 = ` = `
let lit_shape = shape.shrink_left(path.len() + 3)?;
// `rewrite_literal` returns `None` when `literal` exceeds max
// width. Since a literal is basically unformattable unless it
// is a string literal (and only if `format_strings` is set),
// we might be better off ignoring the fact that the attribute
// is longer than the max width and continuing to format.
// See #2479 for example.
let value = rewrite_literal(context, literal, lit_shape)
.unwrap_or_else(|| context.snippet(literal.span).to_owned());
format!("{} = {}", path, value)
}
})
}
}
fn format_arg_list<I, T, F1, F2, F3>(
list: I,
get_lo: F1,
get_hi: F2,
get_item_string: F3,
span: Span,
context: &RewriteContext,
shape: Shape,
one_line_shape: Shape,
one_line_limit: Option<usize>,
combine: bool,
) -> Option<String>
where
I: Iterator<Item = T>,
F1: Fn(&T) -> BytePos,
F2: Fn(&T) -> BytePos,
F3: Fn(&T) -> Option<String>,
{
let items = itemize_list(
context.snippet_provider,
list,
")",
",",
get_lo,
get_hi,
get_item_string,
span.lo(),
span.hi(),
false,
);
let item_vec = items.collect::<Vec<_>>();
let tactic = if let Some(limit) = one_line_limit {
ListTactic::LimitedHorizontalVertical(limit)
} else {
ListTactic::HorizontalVertical
};
let tactic = definitive_tactic(&item_vec, tactic, Separator::Comma, shape.width);
let fmt = ListFormatting::new(shape, context.config)
.tactic(tactic)
.ends_with_newline(false);
let item_str = write_list(&item_vec, &fmt)?;
let one_line_budget = one_line_shape.width;
if context.config.indent_style() == IndentStyle::Visual
|| combine
|| (!item_str.contains('\n') && item_str.len() <= one_line_budget)
{
Some(item_str)
} else {
let nested_indent = shape.indent.to_string_with_newline(context.config);
Some(format!("{}{}", nested_indent, item_str))
}
}
impl Rewrite for ast::Attribute {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
let snippet = context.snippet(self.span);
if self.is_sugared_doc {
rewrite_doc_comment(snippet, shape.comment(context.config), context.config)
} else {
let prefix = attr_prefix(self);
if contains_comment(snippet) {
return Some(snippet.to_owned());
}
// 1 = `[`
let shape = shape.offset_left(prefix.len() + 1)?;
Some(
self.meta()
.and_then(|meta| meta.rewrite(context, shape))
.map_or_else(|| snippet.to_owned(), |rw| format!("{}[{}]", prefix, rw)),
)
}
}
}
impl<'a> Rewrite for [ast::Attribute] {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
if self.is_empty() {
return Some(String::new());
}
// The current remaining attributes.
let mut attrs = self;
let mut result = String::new();
// This is not just a simple map because we need to handle doc comments
// (where we take as many doc comment attributes as possible) and possibly
// merging derives into a single attribute.
loop {
if attrs.is_empty() {
return Some(result);
}
// Handle doc comments.
let (doc_comment_len, doc_comment_str) =
rewrite_initial_doc_comments(context, attrs, shape)?;
if doc_comment_len > 0 {
let doc_comment_str = doc_comment_str.expect("doc comments, but no result");
result.push_str(&doc_comment_str);
let missing_span = attrs
.get(doc_comment_len)
.map(|next| mk_sp(attrs[doc_comment_len - 1].span.hi(), next.span.lo()));
if let Some(missing_span) = missing_span {
let snippet = context.snippet(missing_span);
let (mla, mlb) = has_newlines_before_after_comment(snippet);
let comment = ::comment::recover_missing_comment_in_span(
missing_span,
shape.with_max_width(context.config),
context,
0,
)?;
let comment = if comment.is_empty() {
format!("\n{}", mlb)
} else {
format!("{}{}\n{}", mla, comment, mlb)
};
result.push_str(&comment);
result.push_str(&shape.indent.to_string(context.config));
}
attrs = &attrs[doc_comment_len..];
continue;
}
// Handle derives if we will merge them.
if context.config.merge_derives() && is_derive(&attrs[0]) {
let derives = take_while_with_pred(context, attrs, is_derive);
let mut derive_spans = vec![];
for derive in derives {
derive_spans.append(&mut get_derive_spans(derive)?);
}
let derive_str =
format_derive(&derive_spans, attr_prefix(&attrs[0]), shape, context)?;
result.push_str(&derive_str);
let missing_span = attrs
.get(derives.len())
.map(|next| mk_sp(attrs[derives.len() - 1].span.hi(), next.span.lo()));
if let Some(missing_span) = missing_span {
let comment = ::comment::recover_missing_comment_in_span(
missing_span,
shape.with_max_width(context.config),
context,
0,
)?;
result.push_str(&comment);
if let Some(next) = attrs.get(derives.len()) {
if next.is_sugared_doc {
let snippet = context.snippet(missing_span);
let (_, mlb) = has_newlines_before_after_comment(snippet);
result.push_str(&mlb);
}
}
result.push('\n');
result.push_str(&shape.indent.to_string(context.config));
}
attrs = &attrs[derives.len()..];
continue;
}
// If we get here, then we have a regular attribute, just handle one
// at a time.
let formatted_attr = attrs[0].rewrite(context, shape)?;
result.push_str(&formatted_attr);
let missing_span = attrs
.get(1)
.map(|next| mk_sp(attrs[0].span.hi(), next.span.lo()));
if let Some(missing_span) = missing_span {
let comment = ::comment::recover_missing_comment_in_span(
missing_span,
shape.with_max_width(context.config),
context,
0,
)?;
result.push_str(&comment);
if let Some(next) = attrs.get(1) {
if next.is_sugared_doc {
let snippet = context.snippet(missing_span);
let (_, mlb) = has_newlines_before_after_comment(snippet);
result.push_str(&mlb);
}
}
result.push('\n');
result.push_str(&shape.indent.to_string(context.config));
}
attrs = &attrs[1..];
}
}
}
fn attr_prefix(attr: &ast::Attribute) -> &'static str {
match attr.style {
ast::AttrStyle::Inner => "#!",
ast::AttrStyle::Outer => "#",
}
}
// ---------------------------------------------------------------------------
// solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/items.rs
// ---------------------------------------------------------------------------
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Formatting top-level items - functions, structs, enums, traits, impls.
use std::borrow::Cow;
use std::cmp::{min, Ordering};
use config::lists::*;
use regex::Regex;
use rustc_target::spec::abi;
use syntax::source_map::{self, BytePos, Span};
use syntax::visit;
use syntax::{ast, ptr, symbol};
use comment::{
combine_strs_with_missing_comments, contains_comment, recover_comment_removed,
recover_missing_comment_in_span, rewrite_missing_comment, FindUncommented,
};
use config::{BraceStyle, Config, Density, IndentStyle};
use expr::{
format_expr, is_empty_block, is_simple_block_stmt, rewrite_assign_rhs, rewrite_assign_rhs_with,
ExprType, RhsTactics,
};
use lists::{definitive_tactic, itemize_list, write_list, ListFormatting, ListItem, Separator};
use macros::{rewrite_macro, MacroPosition};
use overflow;
use rewrite::{Rewrite, RewriteContext};
use shape::{Indent, Shape};
use source_map::{LineRangeUtils, SpanUtils};
use spanned::Spanned;
use utils::*;
use vertical::rewrite_with_alignment;
use visitor::FmtVisitor;
const DEFAULT_VISIBILITY: ast::Visibility = source_map::Spanned {
node: ast::VisibilityKind::Inherited,
span: source_map::DUMMY_SP,
};
fn type_annotation_separator(config: &Config) -> &str {
colon_spaces(config.space_before_colon(), config.space_after_colon())
}
// Statements of the form
// let pat: ty = init;
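// For example, a binding written as `let x:u32=1;` is rewritten here as
// `let x: u32 = 1;`; the spacing around the colon comes from
// `type_annotation_separator` below, i.e. from the `space_before_colon`
// and `space_after_colon` config options.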
impl Rewrite for ast::Local {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
debug!(
"Local::rewrite {:?} {} {:?}",
self, shape.width, shape.indent
);
skip_out_of_file_lines_range!(context, self.span);
if contains_skip(&self.attrs) {
return None;
}
let attrs_str = self.attrs.rewrite(context, shape)?;
let mut result = if attrs_str.is_empty() {
"let ".to_owned()
} else {
combine_strs_with_missing_comments(
context,
&attrs_str,
"let ",
mk_sp(
self.attrs.last().map(|a| a.span.hi()).unwrap(),
self.span.lo(),
),
shape,
false,
)?
};
// 4 = "let ".len()
let pat_shape = shape.offset_left(4)?;
// 1 = ;
let pat_shape = pat_shape.sub_width(1)?;
let pat_str = self.pat.rewrite(context, pat_shape)?;
result.push_str(&pat_str);
// String placed between the assignment pattern and the expression
// (the type annotation and the `=`).
let infix = {
let mut infix = String::with_capacity(32);
if let Some(ref ty) = self.ty {
let separator = type_annotation_separator(context.config);
let indent = shape.indent + last_line_width(&result) + separator.len();
// 1 = ;
let budget = shape.width.checked_sub(indent.width() + 1)?;
let rewrite = ty.rewrite(context, Shape::legacy(budget, indent))?;
infix.push_str(separator);
infix.push_str(&rewrite);
}
if self.init.is_some() {
infix.push_str(" =");
}
infix
};
result.push_str(&infix);
if let Some(ref ex) = self.init {
// 1 = trailing semicolon;
let nested_shape = shape.sub_width(1)?;
result = rewrite_assign_rhs(context, result, &**ex, nested_shape)?;
}
result.push(';');
Some(result)
}
}
// FIXME convert to using rewrite style rather than visitor
// FIXME format modules in this style
#[allow(dead_code)]
struct Item<'a> {
keyword: &'static str,
abi: Cow<'static, str>,
vis: Option<&'a ast::Visibility>,
body: Vec<BodyElement<'a>>,
span: Span,
}
impl<'a> Item<'a> {
fn from_foreign_mod(fm: &'a ast::ForeignMod, span: Span, config: &Config) -> Item<'a> {
Item {
keyword: "",
abi: format_abi(fm.abi, config.force_explicit_abi(), true),
vis: None,
body: fm
.items
.iter()
.map(|i| BodyElement::ForeignItem(i))
.collect(),
span,
}
}
}
enum BodyElement<'a> {
// Stmt(&'a ast::Stmt),
// Field(&'a ast::Field),
// Variant(&'a ast::Variant),
// Item(&'a ast::Item),
ForeignItem(&'a ast::ForeignItem),
}
/// Represents a fn's signature.
pub struct FnSig<'a> {
decl: &'a ast::FnDecl,
generics: &'a ast::Generics,
abi: abi::Abi,
is_async: ast::IsAsync,
constness: ast::Constness,
defaultness: ast::Defaultness,
unsafety: ast::Unsafety,
visibility: ast::Visibility,
}
impl<'a> FnSig<'a> {
pub fn new(
decl: &'a ast::FnDecl,
generics: &'a ast::Generics,
vis: ast::Visibility,
) -> FnSig<'a> {
FnSig {
decl,
generics,
abi: abi::Abi::Rust,
is_async: ast::IsAsync::NotAsync,
constness: ast::Constness::NotConst,
defaultness: ast::Defaultness::Final,
unsafety: ast::Unsafety::Normal,
visibility: vis,
}
}
pub fn from_method_sig(
method_sig: &'a ast::MethodSig,
generics: &'a ast::Generics,
) -> FnSig<'a> {
FnSig {
unsafety: method_sig.header.unsafety,
is_async: method_sig.header.asyncness,
constness: method_sig.header.constness.node,
defaultness: ast::Defaultness::Final,
abi: method_sig.header.abi,
decl: &*method_sig.decl,
generics,
visibility: DEFAULT_VISIBILITY,
}
}
pub fn from_fn_kind(
fn_kind: &'a visit::FnKind,
generics: &'a ast::Generics,
decl: &'a ast::FnDecl,
defaultness: ast::Defaultness,
) -> FnSig<'a> {
match *fn_kind {
visit::FnKind::ItemFn(_, fn_header, visibility, _) => FnSig {
decl,
generics,
abi: fn_header.abi,
constness: fn_header.constness.node,
is_async: fn_header.asyncness,
defaultness,
unsafety: fn_header.unsafety,
visibility: visibility.clone(),
},
visit::FnKind::Method(_, method_sig, vis, _) => {
let mut fn_sig = FnSig::from_method_sig(method_sig, generics);
fn_sig.defaultness = defaultness;
if let Some(vis) = vis {
fn_sig.visibility = vis.clone();
}
fn_sig
}
_ => unreachable!(),
}
}
fn to_str(&self, context: &RewriteContext) -> String {
let mut result = String::with_capacity(128);
// Visibility, defaultness, constness, unsafety, ABI.
result.push_str(&*format_visibility(context, &self.visibility));
result.push_str(format_defaultness(self.defaultness));
result.push_str(format_constness(self.constness));
result.push_str(format_unsafety(self.unsafety));
result.push_str(format_async(self.is_async));
result.push_str(&format_abi(
self.abi,
context.config.force_explicit_abi(),
false,
));
result
}
}
impl<'a> FmtVisitor<'a> {
fn format_item(&mut self, item: &Item) {
self.buffer.push_str(&item.abi);
let snippet = self.snippet(item.span);
let brace_pos = snippet.find_uncommented("{").unwrap();
self.push_str("{");
if !item.body.is_empty() || contains_comment(&snippet[brace_pos..]) {
// FIXME: this skips comments between the extern keyword and the opening
// brace.
self.last_pos = item.span.lo() + BytePos(brace_pos as u32 + 1);
self.block_indent = self.block_indent.block_indent(self.config);
if item.body.is_empty() {
self.format_missing_no_indent(item.span.hi() - BytePos(1));
self.block_indent = self.block_indent.block_unindent(self.config);
let indent_str = self.block_indent.to_string(self.config);
self.push_str(&indent_str);
} else {
for item in &item.body {
self.format_body_element(item);
}
self.block_indent = self.block_indent.block_unindent(self.config);
self.format_missing_with_indent(item.span.hi() - BytePos(1));
}
}
self.push_str("}");
self.last_pos = item.span.hi();
}
fn format_body_element(&mut self, element: &BodyElement) {
match *element {
BodyElement::ForeignItem(item) => self.format_foreign_item(item),
}
}
pub fn format_foreign_mod(&mut self, fm: &ast::ForeignMod, span: Span) {
let item = Item::from_foreign_mod(fm, span, self.config);
self.format_item(&item);
}
fn format_foreign_item(&mut self, item: &ast::ForeignItem) {
let rewrite = item.rewrite(&self.get_context(), self.shape());
self.push_rewrite(item.span(), rewrite);
self.last_pos = item.span.hi();
}
pub fn rewrite_fn(
&mut self,
indent: Indent,
ident: ast::Ident,
fn_sig: &FnSig,
span: Span,
block: &ast::Block,
inner_attrs: Option<&[ast::Attribute]>,
) -> Option<String> {
let context = self.get_context();
let mut newline_brace = newline_for_brace(self.config, &fn_sig.generics.where_clause);
let (mut result, force_newline_brace) =
rewrite_fn_base(&context, indent, ident, fn_sig, span, newline_brace, true)?;
// 2 = ` {`
if self.config.brace_style() == BraceStyle::AlwaysNextLine
|| force_newline_brace
|| last_line_width(&result) + 2 > self.shape().width
{
newline_brace = true;
} else if !result.contains('\n') {
newline_brace = false;
}
// Prepare for the function body by possibly adding a newline and
// indent.
// FIXME we'll miss anything between the end of the signature and the
// start of the body, but we need more spans from the compiler to solve
// this.
if newline_brace {
result.push_str(&indent.to_string_with_newline(self.config));
} else {
result.push(' ');
}
self.single_line_fn(&result, block, inner_attrs)
.or_else(|| Some(result))
}
pub fn rewrite_required_fn(
&mut self,
indent: Indent,
ident: ast::Ident,
sig: &ast::MethodSig,
generics: &ast::Generics,
span: Span,
) -> Option<String> {
// Drop the semicolon or it will be interpreted as a comment.
let span = mk_sp(span.lo(), span.hi() - BytePos(1));
let context = self.get_context();
let (mut result, _) = rewrite_fn_base(
&context,
indent,
ident,
&FnSig::from_method_sig(sig, generics),
span,
false,
false,
)?;
// Re-attach semicolon
result.push(';');
Some(result)
}
fn single_line_fn(
&self,
fn_str: &str,
block: &ast::Block,
inner_attrs: Option<&[ast::Attribute]>,
) -> Option<String> {
if fn_str.contains('\n') || inner_attrs.map_or(false, |a| !a.is_empty()) {
return None;
}
let source_map = self.get_context().source_map;
if self.config.empty_item_single_line()
&& is_empty_block(block, None, source_map)
&& self.block_indent.width() + fn_str.len() + 2 <= self.config.max_width()
{
return Some(format!("{}{{}}", fn_str));
}
if self.config.fn_single_line() && is_simple_block_stmt(block, None, source_map) {
let rewrite = {
if let Some(stmt) = block.stmts.first() {
match stmt_expr(stmt) {
Some(e) => {
let suffix = if semicolon_for_expr(&self.get_context(), e) {
";"
} else {
""
};
format_expr(e, ExprType::Statement, &self.get_context(), self.shape())
.map(|s| s + suffix)
.or_else(|| Some(self.snippet(e.span).to_owned()))
}
None => stmt.rewrite(&self.get_context(), self.shape()),
}
} else {
None
}
};
if let Some(res) = rewrite {
let width = self.block_indent.width() + fn_str.len() + res.len() + 4;
if !res.contains('\n') && width <= self.config.max_width() {
return Some(format!("{}{{ {} }}", fn_str, res));
}
}
}
None
}
pub fn visit_static(&mut self, static_parts: &StaticParts) {
let rewrite = rewrite_static(&self.get_context(), static_parts, self.block_indent);
self.push_rewrite(static_parts.span, rewrite);
}
pub fn visit_struct(&mut self, struct_parts: &StructParts) {
let is_tuple = struct_parts.def.is_tuple();
let rewrite = format_struct(&self.get_context(), struct_parts, self.block_indent, None)
.map(|s| if is_tuple { s + ";" } else { s });
self.push_rewrite(struct_parts.span, rewrite);
}
pub fn visit_enum(
&mut self,
ident: ast::Ident,
vis: &ast::Visibility,
enum_def: &ast::EnumDef,
generics: &ast::Generics,
span: Span,
) {
let enum_header = format_header(&self.get_context(), "enum ", ident, vis);
self.push_str(&enum_header);
let enum_snippet = self.snippet(span);
let brace_pos = enum_snippet.find_uncommented("{").unwrap();
let body_start = span.lo() + BytePos(brace_pos as u32 + 1);
let generics_str = format_generics(
&self.get_context(),
generics,
self.config.brace_style(),
if enum_def.variants.is_empty() {
BracePos::ForceSameLine
} else {
BracePos::Auto
},
self.block_indent,
mk_sp(span.lo(), body_start),
last_line_width(&enum_header),
).unwrap();
self.push_str(&generics_str);
self.last_pos = body_start;
match self.format_variant_list(enum_def, body_start, span.hi()) {
Some(ref s) if enum_def.variants.is_empty() => self.push_str(s),
rw => {
self.push_rewrite(mk_sp(body_start, span.hi()), rw);
self.block_indent = self.block_indent.block_unindent(self.config);
}
}
}
// Format the body of an enum definition
fn format_variant_list(
&mut self,
enum_def: &ast::EnumDef,
body_lo: BytePos,
body_hi: BytePos,
) -> Option<String> {
if enum_def.variants.is_empty() {
let mut buffer = String::with_capacity(128);
// 1 = "}"
let span = mk_sp(body_lo, body_hi - BytePos(1));
format_empty_struct_or_tuple(
&self.get_context(),
span,
self.block_indent,
&mut buffer,
"",
"}",
);
return Some(buffer);
}
let mut result = String::with_capacity(1024);
let original_offset = self.block_indent;
self.block_indent = self.block_indent.block_indent(self.config);
let itemize_list_with = |one_line_width: usize| {
itemize_list(
self.snippet_provider,
enum_def.variants.iter(),
"}",
",",
|f| {
if !f.node.attrs.is_empty() {
f.node.attrs[0].span.lo()
} else {
f.span.lo()
}
},
|f| f.span.hi(),
|f| self.format_variant(f, one_line_width),
body_lo,
body_hi,
false,
).collect()
};
let mut items: Vec<_> =
itemize_list_with(self.config.width_heuristics().struct_variant_width);
// If one of the variants uses multiple lines, use multi-line formatting for all variants.
let has_multiline_variant = items.iter().any(|item| item.inner_as_ref().contains('\n'));
let has_single_line_variant = items.iter().any(|item| !item.inner_as_ref().contains('\n'));
if has_multiline_variant && has_single_line_variant {
items = itemize_list_with(0);
}
let shape = self.shape().sub_width(2)?;
let fmt = ListFormatting::new(shape, self.config)
.trailing_separator(self.config.trailing_comma())
.preserve_newline(true);
let list = write_list(&items, &fmt)?;
result.push_str(&list);
result.push_str(&original_offset.to_string_with_newline(self.config));
result.push('}');
Some(result)
}
// Variant of an enum.
fn format_variant(&self, field: &ast::Variant, one_line_width: usize) -> Option<String> {
if contains_skip(&field.node.attrs) {
let lo = field.node.attrs[0].span.lo();
let span = mk_sp(lo, field.span.hi());
return Some(self.snippet(span).to_owned());
}
let context = self.get_context();
// 1 = ','
let shape = self.shape().sub_width(1)?;
let attrs_str = field.node.attrs.rewrite(&context, shape)?;
let lo = field
.node
.attrs
.last()
.map_or(field.span.lo(), |attr| attr.span.hi());
let span = mk_sp(lo, field.span.lo());
let variant_body = match field.node.data {
ast::VariantData::Tuple(..) | ast::VariantData::Struct(..) => format_struct(
&context,
&StructParts::from_variant(field),
self.block_indent,
Some(one_line_width),
)?,
ast::VariantData::Unit(..) => {
if let Some(ref expr) = field.node.disr_expr {
let lhs = format!("{} =", rewrite_ident(&context, field.node.ident));
rewrite_assign_rhs(&context, lhs, &*expr.value, shape)?
} else {
rewrite_ident(&context, field.node.ident).to_owned()
}
}
};
combine_strs_with_missing_comments(&context, &attrs_str, &variant_body, span, shape, false)
}
fn visit_impl_items(&mut self, items: &[ast::ImplItem]) {
if self.get_context().config.reorder_impl_items() {
// Create a visitor for each item, then reorder them.
let mut buffer = vec![];
for item in items {
self.visit_impl_item(item);
buffer.push((self.buffer.clone(), item.clone()));
self.buffer.clear();
}
// type -> existential -> const -> macro -> method
use ast::ImplItemKind::*;
fn need_empty_line(a: &ast::ImplItemKind, b: &ast::ImplItemKind) -> bool {
match (a, b) {
(Type(..), Type(..))
| (Const(..), Const(..))
| (Existential(..), Existential(..)) => false,
_ => true,
}
}
buffer.sort_by(|(_, a), (_, b)| match (&a.node, &b.node) {
(Type(..), Type(..))
| (Const(..), Const(..))
| (Macro(..), Macro(..))
| (Existential(..), Existential(..)) => a.ident.as_str().cmp(&b.ident.as_str()),
(Method(..), Method(..)) => a.span.lo().cmp(&b.span.lo()),
(Type(..), _) => Ordering::Less,
(_, Type(..)) => Ordering::Greater,
(Existential(..), _) => Ordering::Less,
(_, Existential(..)) => Ordering::Greater,
(Const(..), _) => Ordering::Less,
(_, Const(..)) => Ordering::Greater,
(Macro(..), _) => Ordering::Less,
(_, Macro(..)) => Ordering::Greater,
});
let mut prev_kind = None;
for (buf, item) in buffer {
// Make sure that there is at least one empty line between
// impl items of different kinds.
if prev_kind
.as_ref()
.map_or(false, |prev_kind| need_empty_line(prev_kind, &item.node))
{
self.push_str("\n");
}
let indent_str = self.block_indent.to_string_with_newline(self.config);
self.push_str(&indent_str);
self.push_str(buf.trim());
prev_kind = Some(item.node.clone());
}
} else {
for item in items {
self.visit_impl_item(item);
}
}
}
}
pub fn format_impl(
context: &RewriteContext,
item: &ast::Item,
offset: Indent,
where_span_end: Option<BytePos>,
) -> Option<String> {
if let ast::ItemKind::Impl(_, _, _, ref generics, _, ref self_ty, ref items) = item.node {
let mut result = String::with_capacity(128);
let ref_and_type = format_impl_ref_and_type(context, item, offset)?;
let sep = offset.to_string_with_newline(context.config);
result.push_str(&ref_and_type);
let where_budget = if result.contains('\n') {
context.config.max_width()
} else {
context.budget(last_line_width(&result))
};
let mut option = WhereClauseOption::snuggled(&ref_and_type);
let snippet = context.snippet(item.span);
let open_pos = snippet.find_uncommented("{")? + 1;
if !contains_comment(&snippet[open_pos..])
&& items.is_empty()
&& generics.where_clause.predicates.len() == 1
&& !result.contains('\n')
{
option.suppress_comma();
option.snuggle();
option.compress_where();
}
let where_clause_str = rewrite_where_clause(
context,
&generics.where_clause,
context.config.brace_style(),
Shape::legacy(where_budget, offset.block_only()),
Density::Vertical,
"{",
where_span_end,
self_ty.span.hi(),
option,
false,
)?;
// If there is no where clause, we may have missing comments between the trait name and
// the opening brace.
if generics.where_clause.predicates.is_empty() {
if let Some(hi) = where_span_end {
match recover_missing_comment_in_span(
mk_sp(self_ty.span.hi(), hi),
Shape::indented(offset, context.config),
context,
last_line_width(&result),
) {
Some(ref missing_comment) if !missing_comment.is_empty() => {
result.push_str(missing_comment);
}
_ => (),
}
}
}
if is_impl_single_line(context, items, &result, &where_clause_str, item)? {
result.push_str(&where_clause_str);
if where_clause_str.contains('\n') || last_line_contains_single_line_comment(&result) {
// If the where clause contains extra comments AND there is only one
// where-clause predicate, recover the comma that was suppressed by
// single-line where-clause formatting.
if generics.where_clause.predicates.len() == 1 {
result.push_str(",");
}
result.push_str(&format!("{}{{{}}}", &sep, &sep));
} else {
result.push_str(" {}");
}
return Some(result);
}
result.push_str(&where_clause_str);
let need_newline = last_line_contains_single_line_comment(&result) || result.contains('\n');
match context.config.brace_style() {
_ if need_newline => result.push_str(&sep),
BraceStyle::AlwaysNextLine => result.push_str(&sep),
BraceStyle::PreferSameLine => result.push(' '),
BraceStyle::SameLineWhere => {
if !where_clause_str.is_empty() {
result.push_str(&sep);
} else {
result.push(' ');
}
}
}
result.push('{');
let snippet = context.snippet(item.span);
let open_pos = snippet.find_uncommented("{")? + 1;
if !items.is_empty() || contains_comment(&snippet[open_pos..]) {
let mut visitor = FmtVisitor::from_context(context);
let item_indent = offset.block_only().block_indent(context.config);
visitor.block_indent = item_indent;
visitor.last_pos = item.span.lo() + BytePos(open_pos as u32);
visitor.visit_attrs(&item.attrs, ast::AttrStyle::Inner);
visitor.visit_impl_items(items);
visitor.format_missing(item.span.hi() - BytePos(1));
let inner_indent_str = visitor.block_indent.to_string_with_newline(context.config);
let outer_indent_str = offset.block_only().to_string_with_newline(context.config);
result.push_str(&inner_indent_str);
result.push_str(visitor.buffer.to_string().trim());
result.push_str(&outer_indent_str);
}
if result.ends_with('{') && !context.config.empty_item_single_line() {
result.push_str(&sep);
}
result.push('}');
Some(result)
} else {
unreachable!();
}
}
fn is_impl_single_line(
context: &RewriteContext,
items: &[ast::ImplItem],
result: &str,
where_clause_str: &str,
item: &ast::Item,
) -> Option<bool> {
let snippet = context.snippet(item.span);
let open_pos = snippet.find_uncommented("{")? + 1;
Some(
context.config.empty_item_single_line()
&& items.is_empty()
&& !result.contains('\n')
&& result.len() + where_clause_str.len() <= context.config.max_width()
&& !contains_comment(&snippet[open_pos..]),
)
}
fn format_impl_ref_and_type(
context: &RewriteContext,
item: &ast::Item,
offset: Indent,
) -> Option<String> {
if let ast::ItemKind::Impl(
unsafety,
polarity,
defaultness,
ref generics,
ref trait_ref,
ref self_ty,
_,
) = item.node
{
let mut result = String::with_capacity(128);
result.push_str(&format_visibility(context, &item.vis));
result.push_str(format_defaultness(defaultness));
result.push_str(format_unsafety(unsafety));
let shape = generics_shape_from_config(
context.config,
Shape::indented(offset + last_line_width(&result), context.config),
0,
)?;
let generics_str = rewrite_generics(context, "impl", generics, shape)?;
result.push_str(&generics_str);
let polarity_str = if polarity == ast::ImplPolarity::Negative {
"!"
} else {
""
};
if let Some(ref trait_ref) = *trait_ref {
let result_len = last_line_width(&result);
result.push_str(&rewrite_trait_ref(
context,
trait_ref,
offset,
polarity_str,
result_len,
)?);
}
// Try to put the self type in a single line.
// ` for`
let trait_ref_overhead = if trait_ref.is_some() { 4 } else { 0 };
let curly_brace_overhead = if generics.where_clause.predicates.is_empty() {
// If there is no where clause, adapt the budget for type formatting
// to take the space and curly brace into account.
match context.config.brace_style() {
BraceStyle::AlwaysNextLine => 0,
_ => 2,
}
} else {
0
};
let used_space = last_line_width(&result) + trait_ref_overhead + curly_brace_overhead;
// 1 = space before the type.
let budget = context.budget(used_space + 1);
if let Some(self_ty_str) = self_ty.rewrite(context, Shape::legacy(budget, offset)) {
if !self_ty_str.contains('\n') {
if trait_ref.is_some() {
result.push_str(" for ");
} else {
result.push(' ');
}
result.push_str(&self_ty_str);
return Some(result);
}
}
// Couldn't fit the self type on a single line, put it on a new line.
result.push('\n');
// Add indentation of one additional tab.
let new_line_offset = offset.block_indent(context.config);
result.push_str(&new_line_offset.to_string(context.config));
if trait_ref.is_some() {
result.push_str("for ");
}
let budget = context.budget(last_line_width(&result));
let type_offset = match context.config.indent_style() {
IndentStyle::Visual => new_line_offset + trait_ref_overhead,
IndentStyle::Block => new_line_offset,
};
result.push_str(&*self_ty.rewrite(context, Shape::legacy(budget, type_offset))?);
Some(result)
} else {
unreachable!();
}
}
fn rewrite_trait_ref(
context: &RewriteContext,
trait_ref: &ast::TraitRef,
offset: Indent,
polarity_str: &str,
result_len: usize,
) -> Option<String> {
// 1 = space between generics and trait_ref
let used_space = 1 + polarity_str.len() + result_len;
let shape = Shape::indented(offset + used_space, context.config);
if let Some(trait_ref_str) = trait_ref.rewrite(context, shape) {
if !trait_ref_str.contains('\n') {
return Some(format!(" {}{}", polarity_str, &trait_ref_str));
}
}
// We could not make enough space for trait_ref, so put it on a new line.
let offset = offset.block_indent(context.config);
let shape = Shape::indented(offset, context.config);
let trait_ref_str = trait_ref.rewrite(context, shape)?;
Some(format!(
"{}{}{}",
&offset.to_string_with_newline(context.config),
polarity_str,
&trait_ref_str
))
}
pub struct StructParts<'a> {
prefix: &'a str,
ident: ast::Ident,
vis: &'a ast::Visibility,
def: &'a ast::VariantData,
generics: Option<&'a ast::Generics>,
span: Span,
}
impl<'a> StructParts<'a> {
fn format_header(&self, context: &RewriteContext) -> String {
format_header(context, self.prefix, self.ident, self.vis)
}
fn from_variant(variant: &'a ast::Variant) -> Self {
StructParts {
prefix: "",
ident: variant.node.ident,
vis: &DEFAULT_VISIBILITY,
def: &variant.node.data,
generics: None,
span: variant.span,
}
}
pub fn from_item(item: &'a ast::Item) -> Self {
let (prefix, def, generics) = match item.node {
ast::ItemKind::Struct(ref def, ref generics) => ("struct ", def, generics),
ast::ItemKind::Union(ref def, ref generics) => ("union ", def, generics),
_ => unreachable!(),
};
StructParts {
prefix,
ident: item.ident,
vis: &item.vis,
def,
generics: Some(generics),
span: item.span,
}
}
}
fn format_struct(
context: &RewriteContext,
struct_parts: &StructParts,
offset: Indent,
one_line_width: Option<usize>,
) -> Option<String> {
match *struct_parts.def {
ast::VariantData::Unit(..) => format_unit_struct(context, struct_parts, offset),
ast::VariantData::Tuple(ref fields, _) => {
format_tuple_struct(context, struct_parts, fields, offset)
}
ast::VariantData::Struct(ref fields, _) => {
format_struct_struct(context, struct_parts, fields, offset, one_line_width)
}
}
}
pub fn format_trait(context: &RewriteContext, item: &ast::Item, offset: Indent) -> Option<String> {
if let ast::ItemKind::Trait(
is_auto,
unsafety,
ref generics,
ref generic_bounds,
ref trait_items,
) = item.node
{
let mut result = String::with_capacity(128);
let header = format!(
"{}{}{}trait ",
format_visibility(context, &item.vis),
format_unsafety(unsafety),
format_auto(is_auto),
);
result.push_str(&header);
let body_lo = context.snippet_provider.span_after(item.span, "{");
let shape = Shape::indented(offset, context.config).offset_left(result.len())?;
let generics_str =
rewrite_generics(context, rewrite_ident(context, item.ident), generics, shape)?;
result.push_str(&generics_str);
// FIXME(#2055): rustfmt fails to format when there are comments between trait bounds.
if !generic_bounds.is_empty() {
let ident_hi = context
.snippet_provider
.span_after(item.span, &item.ident.as_str());
let bound_hi = generic_bounds.last().unwrap().span().hi();
let snippet = context.snippet(mk_sp(ident_hi, bound_hi));
if contains_comment(snippet) {
return None;
}
result = rewrite_assign_rhs_with(
context,
result + ":",
generic_bounds,
shape,
RhsTactics::ForceNextLineWithoutIndent,
)?;
}
// Rewrite where clause.
if !generics.where_clause.predicates.is_empty() {
let where_density = if context.config.indent_style() == IndentStyle::Block {
Density::Compressed
} else {
Density::Tall
};
let where_budget = context.budget(last_line_width(&result));
let pos_before_where = if generic_bounds.is_empty() {
generics.where_clause.span.lo()
} else {
generic_bounds[generic_bounds.len() - 1].span().hi()
};
let option = WhereClauseOption::snuggled(&generics_str);
let where_clause_str = rewrite_where_clause(
context,
&generics.where_clause,
context.config.brace_style(),
Shape::legacy(where_budget, offset.block_only()),
where_density,
"{",
None,
pos_before_where,
option,
false,
)?;
// If the where clause cannot fit on the same line,
// put the where clause on a new line
if !where_clause_str.contains('\n')
&& last_line_width(&result) + where_clause_str.len() + offset.width()
> context.config.comment_width()
{
let width = offset.block_indent + context.config.tab_spaces() - 1;
let where_indent = Indent::new(0, width);
result.push_str(&where_indent.to_string_with_newline(context.config));
}
result.push_str(&where_clause_str);
} else {
let item_snippet = context.snippet(item.span);
if let Some(lo) = item_snippet.find('/') {
// 1 = `{`
let comment_hi = body_lo - BytePos(1);
let comment_lo = item.span.lo() + BytePos(lo as u32);
if comment_lo < comment_hi {
match recover_missing_comment_in_span(
mk_sp(comment_lo, comment_hi),
Shape::indented(offset, context.config),
context,
last_line_width(&result),
) {
Some(ref missing_comment) if !missing_comment.is_empty() => {
result.push_str(missing_comment);
}
_ => (),
}
}
}
}
match context.config.brace_style() {
_ if last_line_contains_single_line_comment(&result)
|| last_line_width(&result) + 2 > context.budget(offset.width()) =>
{
result.push_str(&offset.to_string_with_newline(context.config));
}
BraceStyle::AlwaysNextLine => {
result.push_str(&offset.to_string_with_newline(context.config));
}
BraceStyle::PreferSameLine => result.push(' '),
BraceStyle::SameLineWhere => {
if result.contains('\n')
|| (!generics.where_clause.predicates.is_empty() && !trait_items.is_empty())
{
result.push_str(&offset.to_string_with_newline(context.config));
} else {
result.push(' ');
}
}
}
result.push('{');
let snippet = context.snippet(item.span);
let open_pos = snippet.find_uncommented("{")? + 1;
if !trait_items.is_empty() || contains_comment(&snippet[open_pos..]) {
let mut visitor = FmtVisitor::from_context(context);
visitor.block_indent = offset.block_only().block_indent(context.config);
visitor.last_pos = item.span.lo() + BytePos(open_pos as u32);
for item in trait_items {
visitor.visit_trait_item(item);
}
visitor.format_missing(item.span.hi() - BytePos(1));
let inner_indent_str = visitor.block_indent.to_string_with_newline(context.config);
let outer_indent_str = offset.block_only().to_string_with_newline(context.config);
result.push_str(&inner_indent_str);
result.push_str(visitor.buffer.to_string().trim());
result.push_str(&outer_indent_str);
} else if result.contains('\n') {
result.push('\n');
}
result.push('}');
Some(result)
} else {
unreachable!();
}
}
pub fn format_trait_alias(
context: &RewriteContext,
ident: ast::Ident,
generics: &ast::Generics,
generic_bounds: &ast::GenericBounds,
shape: Shape,
) -> Option<String> {
let alias = rewrite_ident(context, ident);
// 6 = "trait ", 2 = " ="
let g_shape = shape.offset_left(6)?.sub_width(2)?;
let generics_str = rewrite_generics(context, &alias, generics, g_shape)?;
let lhs = format!("trait {} =", generics_str);
// 1 = ";"
rewrite_assign_rhs(context, lhs, generic_bounds, shape.sub_width(1)?).map(|s| s + ";")
}
fn format_unit_struct(context: &RewriteContext, p: &StructParts, offset: Indent) -> Option<String> {
let header_str = format_header(context, p.prefix, p.ident, p.vis);
let generics_str = if let Some(generics) = p.generics {
let hi = if generics.where_clause.predicates.is_empty() {
generics.span.hi()
} else {
generics.where_clause.span.hi()
};
format_generics(
context,
generics,
context.config.brace_style(),
BracePos::None,
offset,
mk_sp(generics.span.lo(), hi),
last_line_width(&header_str),
)?
} else {
String::new()
};
Some(format!("{}{};", header_str, generics_str))
}
pub fn format_struct_struct(
context: &RewriteContext,
struct_parts: &StructParts,
fields: &[ast::StructField],
offset: Indent,
one_line_width: Option<usize>,
) -> Option<String> {
let mut result = String::with_capacity(1024);
let span = struct_parts.span;
let header_str = struct_parts.format_header(context);
result.push_str(&header_str);
let header_hi = span.lo() + BytePos(header_str.len() as u32);
let body_lo = context.snippet_provider.span_after(span, "{");
let generics_str = match struct_parts.generics {
Some(g) => format_generics(
context,
g,
context.config.brace_style(),
if fields.is_empty() {
BracePos::ForceSameLine
} else {
BracePos::Auto
},
offset,
mk_sp(header_hi, body_lo),
last_line_width(&result),
)?,
None => {
// 3 = ` {}`, 2 = ` {`.
let overhead = if fields.is_empty() { 3 } else { 2 };
if (context.config.brace_style() == BraceStyle::AlwaysNextLine && !fields.is_empty())
|| context.config.max_width() < overhead + result.len()
{
format!("\n{}{{", offset.block_only().to_string(context.config))
} else {
" {".to_owned()
}
}
};
// 1 = `}`
let overhead = if fields.is_empty() { 1 } else { 0 };
let total_width = result.len() + generics_str.len() + overhead;
if !generics_str.is_empty()
&& !generics_str.contains('\n')
&& total_width > context.config.max_width()
{
result.push('\n');
result.push_str(&offset.to_string(context.config));
result.push_str(generics_str.trim_left());
} else {
result.push_str(&generics_str);
}
if fields.is_empty() {
let inner_span = mk_sp(body_lo, span.hi() - BytePos(1));
format_empty_struct_or_tuple(context, inner_span, offset, &mut result, "", "}");
return Some(result);
}
// 3 = ` ` and ` }`
let one_line_budget = context.budget(result.len() + 3 + offset.width());
let one_line_budget =
one_line_width.map_or(0, |one_line_width| min(one_line_width, one_line_budget));
let items_str = rewrite_with_alignment(
fields,
context,
Shape::indented(offset, context.config).sub_width(1)?,
mk_sp(body_lo, span.hi()),
one_line_budget,
)?;
if !items_str.contains('\n')
&& !result.contains('\n')
&& items_str.len() <= one_line_budget
&& !last_line_contains_single_line_comment(&items_str)
{
Some(format!("{} {} }}", result, items_str))
} else {
Some(format!(
"{}\n{}{}\n{}}}",
result,
offset
.block_indent(context.config)
.to_string(context.config),
items_str,
offset.to_string(context.config)
))
}
}
fn get_bytepos_after_visibility(vis: &ast::Visibility, default_span: Span) -> BytePos {
match vis.node {
ast::VisibilityKind::Crate(..) | ast::VisibilityKind::Restricted { .. } => vis.span.hi(),
_ => default_span.lo(),
}
}
// Format tuple or struct without any fields. We need to make sure that the comments
// inside the delimiters are preserved.
fn format_empty_struct_or_tuple(
context: &RewriteContext,
span: Span,
offset: Indent,
result: &mut String,
opener: &str,
closer: &str,
) {
// 3 = " {}" or "();"
let used_width = last_line_used_width(&result, offset.width()) + 3;
if used_width > context.config.max_width() {
result.push_str(&offset.to_string_with_newline(context.config))
}
result.push_str(opener);
match rewrite_missing_comment(span, Shape::indented(offset, context.config), context) {
Some(ref s) if s.is_empty() => (),
Some(ref s) => {
if !is_single_line(s) || first_line_contains_single_line_comment(s) {
let nested_indent_str = offset
.block_indent(context.config)
.to_string_with_newline(context.config);
result.push_str(&nested_indent_str);
}
result.push_str(s);
if last_line_contains_single_line_comment(s) {
result.push_str(&offset.to_string_with_newline(context.config));
}
}
None => result.push_str(context.snippet(span)),
}
result.push_str(closer);
}
// Formats a tuple struct, e.g. `struct Foo(u32, u32);`.
fn format_tuple_struct(
context: &RewriteContext,
struct_parts: &StructParts,
fields: &[ast::StructField],
offset: Indent,
) -> Option<String> {
let mut result = String::with_capacity(1024);
let span = struct_parts.span;
let header_str = struct_parts.format_header(context);
result.push_str(&header_str);
let body_lo = if fields.is_empty() {
let lo = get_bytepos_after_visibility(struct_parts.vis, span);
context
.snippet_provider
.span_after(mk_sp(lo, span.hi()), "(")
} else {
fields[0].span.lo()
};
let body_hi = if fields.is_empty() {
context
.snippet_provider
.span_after(mk_sp(body_lo, span.hi()), ")")
} else {
// This is a dirty hack to work around a missing `)` from the span of the last field.
let last_arg_span = fields[fields.len() - 1].span;
context
.snippet_provider
.opt_span_after(mk_sp(last_arg_span.hi(), span.hi()), ")")
.unwrap_or(last_arg_span.hi())
};
let where_clause_str = match struct_parts.generics {
Some(generics) => {
let budget = context.budget(last_line_width(&header_str));
let shape = Shape::legacy(budget, offset);
let generics_str = rewrite_generics(context, "", generics, shape)?;
result.push_str(&generics_str);
let where_budget = context.budget(last_line_width(&result));
let option = WhereClauseOption::new(true, false);
rewrite_where_clause(
context,
&generics.where_clause,
context.config.brace_style(),
Shape::legacy(where_budget, offset.block_only()),
Density::Compressed,
";",
None,
body_hi,
option,
false,
)?
}
None => "".to_owned(),
};
if fields.is_empty() {
let body_hi = context
.snippet_provider
.span_before(mk_sp(body_lo, span.hi()), ")");
let inner_span = mk_sp(body_lo, body_hi);
format_empty_struct_or_tuple(context, inner_span, offset, &mut result, "(", ")");
} else {
let shape = Shape::indented(offset, context.config).sub_width(1)?;
let fields = &fields.iter().collect::<Vec<_>>();
result = overflow::rewrite_with_parens(
context,
&result,
fields,
shape,
span,
context.config.width_heuristics().fn_call_width,
None,
)?;
}
if !where_clause_str.is_empty()
&& !where_clause_str.contains('\n')
&& (result.contains('\n')
|| offset.block_indent + result.len() + where_clause_str.len() + 1
> context.config.max_width())
{
// We need to put the where clause on a new line, but we didn't
// know that earlier, so the where clause will not be indented properly.
result.push('\n');
result.push_str(
&(offset.block_only() + (context.config.tab_spaces() - 1)).to_string(context.config),
);
}
result.push_str(&where_clause_str);
Some(result)
}
fn rewrite_type_prefix(
context: &RewriteContext,
indent: Indent,
prefix: &str,
ident: ast::Ident,
generics: &ast::Generics,
) -> Option<String> {
let mut result = String::with_capacity(128);
result.push_str(prefix);
let ident_str = rewrite_ident(context, ident);
// 2 = `= `
if generics.params.is_empty() {
result.push_str(ident_str)
} else {
let g_shape = Shape::indented(indent, context.config)
.offset_left(result.len())?
.sub_width(2)?;
let generics_str = rewrite_generics(context, ident_str, generics, g_shape)?;
result.push_str(&generics_str);
}
let where_budget = context.budget(last_line_width(&result));
let option = WhereClauseOption::snuggled(&result);
let where_clause_str = rewrite_where_clause(
context,
&generics.where_clause,
context.config.brace_style(),
Shape::legacy(where_budget, indent),
Density::Vertical,
"=",
None,
generics.span.hi(),
option,
false,
)?;
result.push_str(&where_clause_str);
Some(result)
}
fn rewrite_type_item<R: Rewrite>(
context: &RewriteContext,
indent: Indent,
prefix: &str,
suffix: &str,
ident: ast::Ident,
rhs: &R,
generics: &ast::Generics,
vis: &ast::Visibility,
) -> Option<String> {
let mut result = String::with_capacity(128);
result.push_str(&rewrite_type_prefix(
context,
indent,
&format!("{}{} ", format_visibility(context, vis), prefix),
ident,
generics,
)?);
if generics.where_clause.predicates.is_empty() {
result.push_str(suffix);
} else {
result.push_str(&indent.to_string_with_newline(context.config));
result.push_str(suffix.trim_left());
}
// 1 = ";"
let rhs_shape = Shape::indented(indent, context.config).sub_width(1)?;
rewrite_assign_rhs(context, result, rhs, rhs_shape).map(|s| s + ";")
}
pub fn rewrite_type_alias(
context: &RewriteContext,
indent: Indent,
ident: ast::Ident,
ty: &ast::Ty,
generics: &ast::Generics,
vis: &ast::Visibility,
) -> Option<String> {
rewrite_type_item(context, indent, "type", " =", ident, ty, generics, vis)
}
pub fn rewrite_existential_type(
context: &RewriteContext,
indent: Indent,
ident: ast::Ident,
generic_bounds: &ast::GenericBounds,
generics: &ast::Generics,
vis: &ast::Visibility,
) -> Option<String> {
rewrite_type_item(
context,
indent,
"existential type",
":",
ident,
generic_bounds,
generics,
vis,
)
}
// Returns the spacing to insert before and after a type-annotation colon, as
// configured by `space_before_colon` and `space_after_colon`.
fn type_annotation_spacing(config: &Config) -> (&str, &str) {
(
if config.space_before_colon() { " " } else { "" },
if config.space_after_colon() { " " } else { "" },
)
}
pub fn rewrite_struct_field_prefix(
context: &RewriteContext,
field: &ast::StructField,
) -> Option<String> {
let vis = format_visibility(context, &field.vis);
let type_annotation_spacing = type_annotation_spacing(context.config);
Some(match field.ident {
Some(name) => format!(
"{}{}{}:",
vis,
rewrite_ident(context, name),
type_annotation_spacing.0
),
None => format!("{}", vis),
})
}
impl Rewrite for ast::StructField {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
rewrite_struct_field(context, self, shape, 0)
}
}
pub fn rewrite_struct_field(
context: &RewriteContext,
field: &ast::StructField,
shape: Shape,
lhs_max_width: usize,
) -> Option<String> {
if contains_skip(&field.attrs) {
return Some(context.snippet(field.span()).to_owned());
}
let type_annotation_spacing = type_annotation_spacing(context.config);
let prefix = rewrite_struct_field_prefix(context, field)?;
let attrs_str = field.attrs.rewrite(context, shape)?;
let attrs_extendable = field.ident.is_none() && is_attributes_extendable(&attrs_str);
let missing_span = if field.attrs.is_empty() {
mk_sp(field.span.lo(), field.span.lo())
} else {
mk_sp(field.attrs.last().unwrap().span.hi(), field.span.lo())
};
let mut spacing = String::from(if field.ident.is_some() {
type_annotation_spacing.1
} else {
""
});
// Try to put everything on a single line.
let attr_prefix = combine_strs_with_missing_comments(
context,
&attrs_str,
&prefix,
missing_span,
shape,
attrs_extendable,
)?;
let overhead = last_line_width(&attr_prefix);
let lhs_offset = lhs_max_width.saturating_sub(overhead);
for _ in 0..lhs_offset {
spacing.push(' ');
}
// In this extreme case we will be missing a space between an attribute and a field.
if prefix.is_empty() && !attrs_str.is_empty() && attrs_extendable && spacing.is_empty() {
spacing.push(' ');
}
let orig_ty = shape
.offset_left(overhead + spacing.len())
.and_then(|ty_shape| field.ty.rewrite(context, ty_shape));
if let Some(ref ty) = orig_ty {
if !ty.contains('\n') {
return Some(attr_prefix + &spacing + ty);
}
}
let is_prefix_empty = prefix.is_empty();
// We must use multiline. We are going to put attributes and a field on different lines.
let field_str = rewrite_assign_rhs(context, prefix, &*field.ty, shape)?;
// Remove a leading white-space from `rewrite_assign_rhs()` when rewriting a tuple struct.
let field_str = if is_prefix_empty {
field_str.trim_left()
} else {
&field_str
};
combine_strs_with_missing_comments(context, &attrs_str, field_str, missing_span, shape, false)
}
pub struct StaticParts<'a> {
prefix: &'a str,
vis: &'a ast::Visibility,
ident: ast::Ident,
ty: &'a ast::Ty,
mutability: ast::Mutability,
expr_opt: Option<&'a ptr::P<ast::Expr>>,
defaultness: Option<ast::Defaultness>,
span: Span,
}
impl<'a> StaticParts<'a> {
pub fn from_item(item: &'a ast::Item) -> Self {
let (prefix, ty, mutability, expr) = match item.node {
ast::ItemKind::Static(ref ty, mutability, ref expr) => ("static", ty, mutability, expr),
ast::ItemKind::Const(ref ty, ref expr) => {
("const", ty, ast::Mutability::Immutable, expr)
}
_ => unreachable!(),
};
StaticParts {
prefix,
vis: &item.vis,
ident: item.ident,
ty,
mutability,
expr_opt: Some(expr),
defaultness: None,
span: item.span,
}
}
pub fn from_trait_item(ti: &'a ast::TraitItem) -> Self {
let (ty, expr_opt) = match ti.node {
ast::TraitItemKind::Const(ref ty, ref expr_opt) => (ty, expr_opt),
_ => unreachable!(),
};
StaticParts {
prefix: "const",
vis: &DEFAULT_VISIBILITY,
ident: ti.ident,
ty,
mutability: ast::Mutability::Immutable,
expr_opt: expr_opt.as_ref(),
defaultness: None,
span: ti.span,
}
}
pub fn from_impl_item(ii: &'a ast::ImplItem) -> Self {
let (ty, expr) = match ii.node {
ast::ImplItemKind::Const(ref ty, ref expr) => (ty, expr),
_ => unreachable!(),
};
StaticParts {
prefix: "const",
vis: &ii.vis,
ident: ii.ident,
ty,
mutability: ast::Mutability::Immutable,
expr_opt: Some(expr),
defaultness: Some(ii.defaultness),
span: ii.span,
}
}
}
fn rewrite_static(
context: &RewriteContext,
static_parts: &StaticParts,
offset: Indent,
) -> Option<String> {
let colon = colon_spaces(
context.config.space_before_colon(),
context.config.space_after_colon(),
);
let mut prefix = format!(
"{}{}{} {}{}{}",
format_visibility(context, static_parts.vis),
static_parts.defaultness.map_or("", format_defaultness),
static_parts.prefix,
format_mutability(static_parts.mutability),
static_parts.ident,
colon,
);
// 2 = " =".len()
let ty_shape =
Shape::indented(offset.block_only(), context.config).offset_left(prefix.len() + 2)?;
let ty_str = match static_parts.ty.rewrite(context, ty_shape) {
Some(ty_str) => ty_str,
None => {
if prefix.ends_with(' ') {
prefix.pop();
}
let nested_indent = offset.block_indent(context.config);
let nested_shape = Shape::indented(nested_indent, context.config);
let ty_str = static_parts.ty.rewrite(context, nested_shape)?;
format!(
"{}{}",
nested_indent.to_string_with_newline(context.config),
ty_str
)
}
};
if let Some(expr) = static_parts.expr_opt {
let lhs = format!("{}{} =", prefix, ty_str);
// 1 = ;
let remaining_width = context.budget(offset.block_indent + 1);
rewrite_assign_rhs(
context,
lhs,
&**expr,
Shape::legacy(remaining_width, offset.block_only()),
).and_then(|res| recover_comment_removed(res, static_parts.span, context))
.map(|s| if s.ends_with(';') { s } else { s + ";" })
} else {
Some(format!("{}{};", prefix, ty_str))
}
}
pub fn rewrite_associated_type(
ident: ast::Ident,
ty_opt: Option<&ptr::P<ast::Ty>>,
generic_bounds_opt: Option<&ast::GenericBounds>,
context: &RewriteContext,
indent: Indent,
) -> Option<String> {
let prefix = format!("type {}", rewrite_ident(context, ident));
let type_bounds_str = if let Some(bounds) = generic_bounds_opt {
if bounds.is_empty() {
String::new()
} else {
// 2 = ": ".len()
let shape = Shape::indented(indent, context.config).offset_left(prefix.len() + 2)?;
bounds.rewrite(context, shape).map(|s| format!(": {}", s))?
}
} else {
String::new()
};
if let Some(ty) = ty_opt {
// 1 = `;`
let shape = Shape::indented(indent, context.config).sub_width(1)?;
let lhs = format!("{}{} =", prefix, type_bounds_str);
rewrite_assign_rhs(context, lhs, &**ty, shape).map(|s| s + ";")
} else {
Some(format!("{}{};", prefix, type_bounds_str))
}
}
pub fn rewrite_existential_impl_type(
context: &RewriteContext,
ident: ast::Ident,
generic_bounds: &ast::GenericBounds,
indent: Indent,
) -> Option<String> {
rewrite_associated_type(ident, None, Some(generic_bounds), context, indent)
.map(|s| format!("existential {}", s))
}
pub fn rewrite_associated_impl_type(
ident: ast::Ident,
defaultness: ast::Defaultness,
ty_opt: Option<&ptr::P<ast::Ty>>,
context: &RewriteContext,
indent: Indent,
) -> Option<String> {
let result = rewrite_associated_type(ident, ty_opt, None, context, indent)?;
match defaultness {
ast::Defaultness::Default => Some(format!("default {}", result)),
_ => Some(result),
}
}
impl Rewrite for ast::FunctionRetTy {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
match *self {
ast::FunctionRetTy::Default(_) => Some(String::new()),
ast::FunctionRetTy::Ty(ref ty) => {
let inner_width = shape.width.checked_sub(3)?;
ty.rewrite(context, Shape::legacy(inner_width, shape.indent + 3))
.map(|r| format!("-> {}", r))
}
}
}
}
// An `ast::TyKind::Infer` whose snippet is not a literal `_` means the type was
// elided entirely in the source, so there is nothing to print for it.
fn is_empty_infer(context: &RewriteContext, ty: &ast::Ty) -> bool {
match ty.node {
ast::TyKind::Infer => {
let original = context.snippet(ty.span);
original != "_"
}
_ => false,
}
}
impl Rewrite for ast::Arg {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
if is_named_arg(self) {
let mut result = self
.pat
.rewrite(context, Shape::legacy(shape.width, shape.indent))?;
if !is_empty_infer(context, &*self.ty) {
if context.config.space_before_colon() {
result.push(' ');
}
result.push(':');
if context.config.space_after_colon() {
result.push(' ');
}
let overhead = last_line_width(&result);
let max_width = shape.width.checked_sub(overhead)?;
let ty_str = self
.ty
.rewrite(context, Shape::legacy(max_width, shape.indent))?;
result.push_str(&ty_str);
}
Some(result)
} else {
self.ty.rewrite(context, shape)
}
}
}
fn rewrite_explicit_self(
explicit_self: &ast::ExplicitSelf,
args: &[ast::Arg],
context: &RewriteContext,
) -> Option<String> {
match explicit_self.node {
ast::SelfKind::Region(lt, m) => {
let mut_str = format_mutability(m);
match lt {
Some(ref l) => {
let lifetime_str = l.rewrite(
context,
Shape::legacy(context.config.max_width(), Indent::empty()),
)?;
Some(format!("&{} {}self", lifetime_str, mut_str))
}
None => Some(format!("&{}self", mut_str)),
}
}
ast::SelfKind::Explicit(ref ty, _) => {
assert!(!args.is_empty(), "&[ast::Arg] shouldn't be empty.");
let mutability = explicit_self_mutability(&args[0]);
let type_str = ty.rewrite(
context,
Shape::legacy(context.config.max_width(), Indent::empty()),
)?;
Some(format!(
"{}self: {}",
format_mutability(mutability),
type_str
))
}
ast::SelfKind::Value(_) => {
assert!(!args.is_empty(), "&[ast::Arg] shouldn't be empty.");
let mutability = explicit_self_mutability(&args[0]);
Some(format!("{}self", format_mutability(mutability)))
}
}
}
// Hacky solution caused by absence of `Mutability` in `SelfValue` and
// `SelfExplicit` variants of `ast::ExplicitSelf_`.
fn explicit_self_mutability(arg: &ast::Arg) -> ast::Mutability {
if let ast::PatKind::Ident(ast::BindingMode::ByValue(mutability), _, _) = arg.pat.node {
mutability
} else {
unreachable!()
}
}
pub fn span_lo_for_arg(arg: &ast::Arg) -> BytePos {
if is_named_arg(arg) {
arg.pat.span.lo()
} else {
arg.ty.span.lo()
}
}
pub fn span_hi_for_arg(context: &RewriteContext, arg: &ast::Arg) -> BytePos {
match arg.ty.node {
ast::TyKind::Infer if context.snippet(arg.ty.span) == "_" => arg.ty.span.hi(),
ast::TyKind::Infer if is_named_arg(arg) => arg.pat.span.hi(),
_ => arg.ty.span.hi(),
}
}
pub fn is_named_arg(arg: &ast::Arg) -> bool {
if let ast::PatKind::Ident(_, ident, _) = arg.pat.node {
ident != symbol::keywords::Invalid.ident()
} else {
true
}
}
// Return type is (result, force_new_line_for_brace)
fn rewrite_fn_base(
context: &RewriteContext,
indent: Indent,
ident: ast::Ident,
fn_sig: &FnSig,
span: Span,
newline_brace: bool,
has_body: bool,
) -> Option<(String, bool)> {
let mut force_new_line_for_brace = false;
let where_clause = &fn_sig.generics.where_clause;
let mut result = String::with_capacity(1024);
result.push_str(&fn_sig.to_str(context));
// The `fn ` keyword; the name is appended together with the generics below.
result.push_str("fn ");
// Generics.
let overhead = if has_body && !newline_brace {
// 4 = `() {`
4
} else {
// 2 = `()`
2
};
let used_width = last_line_used_width(&result, indent.width());
let one_line_budget = context.budget(used_width + overhead);
let shape = Shape {
width: one_line_budget,
indent,
offset: used_width,
};
let fd = fn_sig.decl;
let generics_str = rewrite_generics(
context,
rewrite_ident(context, ident),
fn_sig.generics,
shape,
)?;
result.push_str(&generics_str);
let snuggle_angle_bracket = generics_str
.lines()
.last()
.map_or(false, |l| l.trim_left().len() == 1);
// Note that the width and indent don't really matter, we'll re-layout the
// return type later anyway.
let ret_str = fd
.output
.rewrite(context, Shape::indented(indent, context.config))?;
let multi_line_ret_str = ret_str.contains('\n');
let ret_str_len = if multi_line_ret_str { 0 } else { ret_str.len() };
// Args.
let (one_line_budget, multi_line_budget, mut arg_indent) = compute_budgets_for_args(
context,
&result,
indent,
ret_str_len,
newline_brace,
has_body,
multi_line_ret_str,
)?;
debug!(
"rewrite_fn_base: one_line_budget: {}, multi_line_budget: {}, arg_indent: {:?}",
one_line_budget, multi_line_budget, arg_indent
);
// Check if vertical layout was forced.
if one_line_budget == 0 {
if snuggle_angle_bracket {
result.push('(');
} else {
result.push_str("(");
if context.config.indent_style() == IndentStyle::Visual {
result.push_str(&arg_indent.to_string_with_newline(context.config));
}
}
} else {
result.push('(');
}
// Skip `pub(crate)`.
let lo_after_visibility = get_bytepos_after_visibility(&fn_sig.visibility, span);
// A conservative estimate: the goal is to land past all the parens in the generics.
let args_start = fn_sig
.generics
.params
.iter()
.last()
.map_or(lo_after_visibility, |param| param.span().hi());
let args_end = if fd.inputs.is_empty() {
context
.snippet_provider
.span_after(mk_sp(args_start, span.hi()), ")")
} else {
let last_span = mk_sp(fd.inputs[fd.inputs.len() - 1].span().hi(), span.hi());
context.snippet_provider.span_after(last_span, ")")
};
let args_span = mk_sp(
context
.snippet_provider
.span_after(mk_sp(args_start, span.hi()), "("),
args_end,
);
let arg_str = rewrite_args(
context,
&fd.inputs,
fd.get_self().as_ref(),
one_line_budget,
multi_line_budget,
indent,
arg_indent,
args_span,
fd.variadic,
generics_str.contains('\n'),
)?;
let put_args_in_block = match context.config.indent_style() {
IndentStyle::Block => arg_str.contains('\n') || arg_str.len() > one_line_budget,
_ => false,
} && !fd.inputs.is_empty();
let mut args_last_line_contains_comment = false;
if put_args_in_block {
arg_indent = indent.block_indent(context.config);
result.push_str(&arg_indent.to_string_with_newline(context.config));
result.push_str(&arg_str);
result.push_str(&indent.to_string_with_newline(context.config));
result.push(')');
} else {
result.push_str(&arg_str);
let used_width = last_line_used_width(&result, indent.width()) + first_line_width(&ret_str);
// Put the closing paren on the next line if it overflows the max width.
// 1 = `)`
if fd.inputs.is_empty() && used_width + 1 > context.config.max_width() {
result.push('\n');
}
// If the last line of args contains comment, we cannot put the closing paren
// on the same line.
if arg_str
.lines()
.last()
.map_or(false, |last_line| last_line.contains("//"))
{
args_last_line_contains_comment = true;
result.push_str(&arg_indent.to_string_with_newline(context.config));
}
result.push(')');
}
// Return type.
if let ast::FunctionRetTy::Ty(..) = fd.output {
let ret_should_indent = match context.config.indent_style() {
// If our args are block layout then we surely must have space.
IndentStyle::Block if put_args_in_block || fd.inputs.is_empty() => false,
_ if args_last_line_contains_comment => false,
_ if result.contains('\n') || multi_line_ret_str => true,
_ => {
// If the return type would push over the max width, then put the return type on
// a new line. With the +1 for the signature length an additional space between
// the closing parenthesis of the argument and the arrow '->' is considered.
let mut sig_length = result.len() + indent.width() + ret_str_len + 1;
// If there is no where clause, take into account the space after the return type
// and the brace.
if where_clause.predicates.is_empty() {
sig_length += 2;
}
sig_length > context.config.max_width()
}
};
let ret_indent = if ret_should_indent {
let indent = if arg_str.is_empty() {
// Aligning with non-existent args looks silly.
force_new_line_for_brace = true;
indent + 4
} else {
// FIXME: we might want to check that using the arg indent
// doesn't blow our budget, and if it does, then fallback to
// the where clause indent.
arg_indent
};
result.push_str(&indent.to_string_with_newline(context.config));
indent
} else {
result.push(' ');
Indent::new(indent.block_indent, last_line_width(&result))
};
if multi_line_ret_str || ret_should_indent {
// Now that we know the proper indent and width, we need to
// re-layout the return type.
let ret_str = fd
.output
.rewrite(context, Shape::indented(ret_indent, context.config))?;
result.push_str(&ret_str);
} else {
result.push_str(&ret_str);
}
// Comment between return type and the end of the decl.
let snippet_lo = fd.output.span().hi();
if where_clause.predicates.is_empty() {
let snippet_hi = span.hi();
let snippet = context.snippet(mk_sp(snippet_lo, snippet_hi));
// Try to preserve the layout of the original snippet.
let original_starts_with_newline = snippet
.find(|c| c != ' ')
.map_or(false, |i| starts_with_newline(&snippet[i..]));
let original_ends_with_newline = snippet
.rfind(|c| c != ' ')
.map_or(false, |i| snippet[i..].ends_with('\n'));
let snippet = snippet.trim();
if !snippet.is_empty() {
result.push(if original_starts_with_newline {
'\n'
} else {
' '
});
result.push_str(snippet);
if original_ends_with_newline {
force_new_line_for_brace = true;
}
}
}
}
let pos_before_where = match fd.output {
ast::FunctionRetTy::Default(..) => args_span.hi(),
ast::FunctionRetTy::Ty(ref ty) => ty.span.hi(),
};
let is_args_multi_lined = arg_str.contains('\n');
let option = WhereClauseOption::new(!has_body, put_args_in_block && ret_str.is_empty());
let where_clause_str = rewrite_where_clause(
context,
where_clause,
context.config.brace_style(),
Shape::indented(indent, context.config),
Density::Tall,
"{",
Some(span.hi()),
pos_before_where,
option,
is_args_multi_lined,
)?;
// If there are neither where clause nor return type, we may be missing comments between
// args and `{`.
if where_clause_str.is_empty() {
if let ast::FunctionRetTy::Default(ret_span) = fd.output {
match recover_missing_comment_in_span(
mk_sp(args_span.hi(), ret_span.hi()),
shape,
context,
last_line_width(&result),
) {
Some(ref missing_comment) if !missing_comment.is_empty() => {
result.push_str(missing_comment);
force_new_line_for_brace = true;
}
_ => (),
}
}
}
result.push_str(&where_clause_str);
force_new_line_for_brace |= last_line_contains_single_line_comment(&result);
force_new_line_for_brace |= is_args_multi_lined && context.config.where_single_line();
Some((result, force_new_line_for_brace))
}
#[derive(Copy, Clone)]
struct WhereClauseOption {
suppress_comma: bool, // Force no trailing comma
snuggle: bool, // Do not insert newline before `where`
compress_where: bool, // Try single line where clause instead of vertical layout
}
impl WhereClauseOption {
pub fn new(suppress_comma: bool, snuggle: bool) -> WhereClauseOption {
WhereClauseOption {
suppress_comma,
snuggle,
compress_where: false,
}
}
pub fn snuggled(current: &str) -> WhereClauseOption {
WhereClauseOption {
suppress_comma: false,
snuggle: last_line_width(current) == 1,
compress_where: false,
}
}
pub fn suppress_comma(&mut self) {
self.suppress_comma = true
}
pub fn compress_where(&mut self) {
self.compress_where = true
}
pub fn snuggle(&mut self) {
self.snuggle = true
}
}
fn rewrite_args(
context: &RewriteContext,
args: &[ast::Arg],
explicit_self: Option<&ast::ExplicitSelf>,
one_line_budget: usize,
multi_line_budget: usize,
indent: Indent,
arg_indent: Indent,
span: Span,
variadic: bool,
generics_str_contains_newline: bool,
) -> Option<String> {
let mut arg_item_strs = args
.iter()
.map(|arg| arg.rewrite(context, Shape::legacy(multi_line_budget, arg_indent)))
.collect::<Option<Vec<_>>>()?;
// Account for sugary self.
// FIXME: the comment for the self argument is dropped. This is blocked
// on rust issue #27522.
let min_args = explicit_self
.and_then(|explicit_self| rewrite_explicit_self(explicit_self, args, context))
.map_or(1, |self_str| {
arg_item_strs[0] = self_str;
2
});
// Comments between args.
let mut arg_items = Vec::new();
if min_args == 2 {
arg_items.push(ListItem::from_str(""));
}
// FIXME(#21): if there are no args, there might still be a comment, but
// without spans for the comment or parens, there is no chance of
// getting it right. You also don't get to put a comment on self, unless
// it is explicit.
if args.len() >= min_args || variadic {
let comment_span_start = if min_args == 2 {
let second_arg_start = if arg_has_pattern(&args[1]) {
args[1].pat.span.lo()
} else {
args[1].ty.span.lo()
};
let reduced_span = mk_sp(span.lo(), second_arg_start);
context.snippet_provider.span_after_last(reduced_span, ",")
} else {
span.lo()
};
enum ArgumentKind<'a> {
Regular(&'a ast::Arg),
Variadic(BytePos),
}
let variadic_arg = if variadic {
let variadic_span = mk_sp(args.last().unwrap().ty.span.hi(), span.hi());
let variadic_start =
context.snippet_provider.span_after(variadic_span, "...") - BytePos(3);
Some(ArgumentKind::Variadic(variadic_start))
} else {
None
};
let more_items = itemize_list(
context.snippet_provider,
args[min_args - 1..]
.iter()
.map(ArgumentKind::Regular)
.chain(variadic_arg),
")",
",",
|arg| match *arg {
ArgumentKind::Regular(arg) => span_lo_for_arg(arg),
ArgumentKind::Variadic(start) => start,
},
|arg| match *arg {
ArgumentKind::Regular(arg) => arg.ty.span.hi(),
ArgumentKind::Variadic(start) => start + BytePos(3),
},
|arg| match *arg {
ArgumentKind::Regular(..) => None,
ArgumentKind::Variadic(..) => Some("...".to_owned()),
},
comment_span_start,
span.hi(),
false,
);
arg_items.extend(more_items);
}
let fits_in_one_line = !generics_str_contains_newline
&& (arg_items.is_empty()
|| arg_items.len() == 1 && arg_item_strs[0].len() <= one_line_budget);
for (item, arg) in arg_items.iter_mut().zip(arg_item_strs) {
item.item = Some(arg);
}
let last_line_ends_with_comment = arg_items
.iter()
.last()
.and_then(|item| item.post_comment.as_ref())
.map_or(false, |s| s.trim().starts_with("//"));
let (indent, trailing_comma) = match context.config.indent_style() {
IndentStyle::Block if fits_in_one_line => {
(indent.block_indent(context.config), SeparatorTactic::Never)
}
IndentStyle::Block => (
indent.block_indent(context.config),
context.config.trailing_comma(),
),
IndentStyle::Visual if last_line_ends_with_comment => {
(arg_indent, context.config.trailing_comma())
}
IndentStyle::Visual => (arg_indent, SeparatorTactic::Never),
};
let tactic = definitive_tactic(
&arg_items,
context.config.fn_args_density().to_list_tactic(),
Separator::Comma,
one_line_budget,
);
let budget = match tactic {
DefinitiveListTactic::Horizontal => one_line_budget,
_ => multi_line_budget,
};
debug!("rewrite_args: budget: {}, tactic: {:?}", budget, tactic);
let trailing_separator = if variadic {
SeparatorTactic::Never
} else {
trailing_comma
};
let fmt = ListFormatting::new(Shape::legacy(budget, indent), context.config)
.tactic(tactic)
.trailing_separator(trailing_separator)
.ends_with_newline(tactic.ends_with_newline(context.config.indent_style()))
.preserve_newline(true);
write_list(&arg_items, &fmt)
}
// An argument "has a pattern" exactly when it is a named argument.
fn arg_has_pattern(arg: &ast::Arg) -> bool {
is_named_arg(arg)
}
fn compute_budgets_for_args(
context: &RewriteContext,
result: &str,
indent: Indent,
ret_str_len: usize,
newline_brace: bool,
has_braces: bool,
force_vertical_layout: bool,
) -> Option<(usize, usize, Indent)> {
debug!(
"compute_budgets_for_args {} {:?}, {}, {}",
result.len(),
indent,
ret_str_len,
newline_brace
);
// Try keeping everything on the same line.
if !result.contains('\n') && !force_vertical_layout {
// 2 = `()`; 3 = `() `, where the trailing space precedes the return type.
let overhead = if ret_str_len == 0 { 2 } else { 3 };
let mut used_space = indent.width() + result.len() + ret_str_len + overhead;
if has_braces {
if !newline_brace {
// 2 = `{}`
used_space += 2;
}
} else {
// 1 = `;`
used_space += 1;
}
let one_line_budget = context.budget(used_space);
if one_line_budget > 0 {
// 4 = "() {".len()
let (indent, multi_line_budget) = match context.config.indent_style() {
IndentStyle::Block => {
let indent = indent.block_indent(context.config);
(indent, context.budget(indent.width() + 1))
}
IndentStyle::Visual => {
let indent = indent + result.len() + 1;
let multi_line_overhead = indent.width() + if newline_brace { 2 } else { 4 };
(indent, context.budget(multi_line_overhead))
}
};
return Some((one_line_budget, multi_line_budget, indent));
}
}
// Didn't work. We must force vertical layout and put args on a new line.
let new_indent = indent.block_indent(context.config);
let used_space = match context.config.indent_style() {
// 1 = `,`
IndentStyle::Block => new_indent.width() + 1,
// Account for `)` and possibly ` {`.
IndentStyle::Visual => new_indent.width() + if ret_str_len == 0 { 1 } else { 3 },
};
Some((0, context.budget(used_space), new_indent))
}
// Returns `true` if the opening brace should go on its own line, given the
// configured brace style and the presence of where-clause predicates.
fn newline_for_brace(config: &Config, where_clause: &ast::WhereClause) -> bool {
let predicate_count = where_clause.predicates.len();
if config.where_single_line() && predicate_count == 1 {
return false;
}
let brace_style = config.brace_style();
brace_style == BraceStyle::AlwaysNextLine
|| (brace_style == BraceStyle::SameLineWhere && predicate_count > 0)
}
fn rewrite_generics(
context: &RewriteContext,
ident: &str,
generics: &ast::Generics,
shape: Shape,
) -> Option<String> {
// FIXME: convert bounds to where clauses where they get too big or if
// there is a where clause at all.
if generics.params.is_empty() {
return Some(ident.to_owned());
}
let params = &generics.params.iter().map(|e| &*e).collect::<Vec<_>>();
overflow::rewrite_with_angle_brackets(context, ident, params, shape, generics.span)
}
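/// Computes the shape available for rewriting generic parameters, reserving
/// room for the surrounding angle brackets. Illustrative layouts (assuming
/// default settings):
///
/// ```ignore
/// // Visual:  fn foo<T: Clone,
/// //                 U: Debug>(..)
/// // Block:   fn foo<
/// //              T: Clone,
/// //              U: Debug,
/// //          >(..)
/// ```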
pub fn generics_shape_from_config(config: &Config, shape: Shape, offset: usize) -> Option<Shape> {
match config.indent_style() {
IndentStyle::Visual => shape.visual_indent(1 + offset).sub_width(offset + 2),
IndentStyle::Block => {
// 1 = ","
shape
.block()
.block_indent(config.tab_spaces())
.with_max_width(config)
.sub_width(1)
}
}
}
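/// Rewrites a `where` clause in block (RFC) style: the `where` keyword on its
/// own line and one predicate per line, unless `where_single_line` collapses a
/// single predicate. Illustrative output (assuming default settings):
///
/// ```ignore
/// fn foo<T, U>(t: T, u: U)
/// where
///     T: Clone,
///     U: Debug,
/// {
/// ```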
fn rewrite_where_clause_rfc_style(
context: &RewriteContext,
where_clause: &ast::WhereClause,
shape: Shape,
terminator: &str,
span_end: Option<BytePos>,
span_end_before_where: BytePos,
where_clause_option: WhereClauseOption,
is_args_multi_line: bool,
) -> Option<String> {
let block_shape = shape.block().with_max_width(context.config);
let (span_before, span_after) =
missing_span_before_after_where(span_end_before_where, where_clause);
let (comment_before, comment_after) =
rewrite_comments_before_after_where(context, span_before, span_after, shape)?;
let starting_newline = if where_clause_option.snuggle && comment_before.is_empty() {
Cow::from(" ")
} else {
block_shape.indent.to_string_with_newline(context.config)
};
let clause_shape = block_shape.block_left(context.config.tab_spaces())?;
// 1 = `,`
let clause_shape = clause_shape.sub_width(1)?;
// each clause on one line, trailing comma (except if suppress_comma)
let span_start = where_clause.predicates[0].span().lo();
// If we don't have the start of the next span, then use the end of the
// predicates, but that means we miss comments.
let len = where_clause.predicates.len();
let end_of_preds = where_clause.predicates[len - 1].span().hi();
let span_end = span_end.unwrap_or(end_of_preds);
let items = itemize_list(
context.snippet_provider,
where_clause.predicates.iter(),
terminator,
",",
|pred| pred.span().lo(),
|pred| pred.span().hi(),
|pred| pred.rewrite(context, clause_shape),
span_start,
span_end,
false,
);
let where_single_line = context.config.where_single_line() && len == 1 && !is_args_multi_line;
let comma_tactic = if where_clause_option.suppress_comma || where_single_line {
SeparatorTactic::Never
} else {
context.config.trailing_comma()
};
    // Use the horizontal tactic only when the `where_single_line` option is
    // enabled and the where clause contains exactly one predicate.
let shape_tactic = if where_single_line {
DefinitiveListTactic::Horizontal
} else {
DefinitiveListTactic::Vertical
};
let fmt = ListFormatting::new(clause_shape, context.config)
.tactic(shape_tactic)
.trailing_separator(comma_tactic)
.preserve_newline(true);
let preds_str = write_list(&items.collect::<Vec<_>>(), &fmt)?;
let comment_separator = |comment: &str, shape: Shape| {
if comment.is_empty() {
Cow::from("")
} else {
shape.indent.to_string_with_newline(context.config)
}
};
let newline_before_where = comment_separator(&comment_before, shape);
let newline_after_where = comment_separator(&comment_after, clause_shape);
// 6 = `where `
let clause_sep = if where_clause_option.compress_where
&& comment_before.is_empty()
&& comment_after.is_empty()
&& !preds_str.contains('\n')
&& 6 + preds_str.len() <= shape.width
|| where_single_line
{
Cow::from(" ")
} else {
clause_shape.indent.to_string_with_newline(context.config)
};
Some(format!(
"{}{}{}where{}{}{}{}",
starting_newline,
comment_before,
newline_before_where,
newline_after_where,
comment_after,
clause_sep,
preds_str
))
}
fn rewrite_where_clause(
context: &RewriteContext,
where_clause: &ast::WhereClause,
brace_style: BraceStyle,
shape: Shape,
density: Density,
terminator: &str,
span_end: Option<BytePos>,
span_end_before_where: BytePos,
where_clause_option: WhereClauseOption,
is_args_multi_line: bool,
) -> Option<String> {
if where_clause.predicates.is_empty() {
return Some(String::new());
}
if context.config.indent_style() == IndentStyle::Block {
return rewrite_where_clause_rfc_style(
context,
where_clause,
shape,
terminator,
span_end,
span_end_before_where,
where_clause_option,
is_args_multi_line,
);
}
let extra_indent = Indent::new(context.config.tab_spaces(), 0);
let offset = match context.config.indent_style() {
IndentStyle::Block => shape.indent + extra_indent.block_indent(context.config),
// 6 = "where ".len()
IndentStyle::Visual => shape.indent + extra_indent + 6,
};
// FIXME: if indent_style != Visual, then the budgets below might
// be out by a char or two.
let budget = context.config.max_width() - offset.width();
let span_start = where_clause.predicates[0].span().lo();
// If we don't have the start of the next span, then use the end of the
// predicates, but that means we miss comments.
let len = where_clause.predicates.len();
let end_of_preds = where_clause.predicates[len - 1].span().hi();
let span_end = span_end.unwrap_or(end_of_preds);
let items = itemize_list(
context.snippet_provider,
where_clause.predicates.iter(),
terminator,
",",
|pred| pred.span().lo(),
|pred| pred.span().hi(),
|pred| pred.rewrite(context, Shape::legacy(budget, offset)),
span_start,
span_end,
false,
);
let item_vec = items.collect::<Vec<_>>();
// FIXME: we don't need to collect here
let tactic = definitive_tactic(&item_vec, ListTactic::Vertical, Separator::Comma, budget);
let mut comma_tactic = context.config.trailing_comma();
// Kind of a hack because we don't usually have trailing commas in where clauses.
if comma_tactic == SeparatorTactic::Vertical || where_clause_option.suppress_comma {
comma_tactic = SeparatorTactic::Never;
}
let fmt = ListFormatting::new(Shape::legacy(budget, offset), context.config)
.tactic(tactic)
.trailing_separator(comma_tactic)
.ends_with_newline(tactic.ends_with_newline(context.config.indent_style()))
.preserve_newline(true);
let preds_str = write_list(&item_vec, &fmt)?;
let end_length = if terminator == "{" {
        // If the brace is on the next line, we don't need to count it;
        // otherwise it needs two characters: " {".
match brace_style {
BraceStyle::AlwaysNextLine | BraceStyle::SameLineWhere => 0,
BraceStyle::PreferSameLine => 2,
}
} else if terminator == "=" {
2
} else {
terminator.len()
};
if density == Density::Tall
|| preds_str.contains('\n')
|| shape.indent.width() + " where ".len() + preds_str.len() + end_length > shape.width
{
Some(format!(
"\n{}where {}",
(shape.indent + extra_indent).to_string(context.config),
preds_str
))
} else {
Some(format!(" where {}", preds_str))
}
}
fn missing_span_before_after_where(
before_item_span_end: BytePos,
where_clause: &ast::WhereClause,
) -> (Span, Span) {
let missing_span_before = mk_sp(before_item_span_end, where_clause.span.lo());
// 5 = `where`
let pos_after_where = where_clause.span.lo() + BytePos(5);
let missing_span_after = mk_sp(pos_after_where, where_clause.predicates[0].span().lo());
(missing_span_before, missing_span_after)
}
fn rewrite_comments_before_after_where(
context: &RewriteContext,
span_before_where: Span,
span_after_where: Span,
shape: Shape,
) -> Option<(String, String)> {
let before_comment = rewrite_missing_comment(span_before_where, shape, context)?;
let after_comment = rewrite_missing_comment(
span_after_where,
shape.block_indent(context.config.tab_spaces()),
context,
)?;
Some((before_comment, after_comment))
}
fn format_header(
context: &RewriteContext,
item_name: &str,
ident: ast::Ident,
vis: &ast::Visibility,
) -> String {
format!(
"{}{}{}",
format_visibility(context, vis),
item_name,
rewrite_ident(context, ident)
)
}
#[derive(PartialEq, Eq, Clone, Copy)]
enum BracePos {
None,
Auto,
ForceSameLine,
}
fn format_generics(
context: &RewriteContext,
generics: &ast::Generics,
brace_style: BraceStyle,
brace_pos: BracePos,
offset: Indent,
span: Span,
used_width: usize,
) -> Option<String> {
let shape = Shape::legacy(context.budget(used_width + offset.width()), offset);
let mut result = rewrite_generics(context, "", generics, shape)?;
let same_line_brace = if !generics.where_clause.predicates.is_empty() || result.contains('\n') {
let budget = context.budget(last_line_used_width(&result, offset.width()));
let mut option = WhereClauseOption::snuggled(&result);
if brace_pos == BracePos::None {
option.suppress_comma = true;
}
// If the generics are not parameterized then generics.span.hi() == 0,
// so we use span.lo(), which is the position after `struct Foo`.
let span_end_before_where = if !generics.params.is_empty() {
generics.span.hi()
} else {
span.lo()
};
let where_clause_str = rewrite_where_clause(
context,
&generics.where_clause,
brace_style,
Shape::legacy(budget, offset.block_only()),
Density::Tall,
"{",
Some(span.hi()),
span_end_before_where,
option,
false,
)?;
result.push_str(&where_clause_str);
brace_pos == BracePos::ForceSameLine
|| brace_style == BraceStyle::PreferSameLine
|| (generics.where_clause.predicates.is_empty()
&& trimmed_last_line_width(&result) == 1)
} else {
brace_pos == BracePos::ForceSameLine
|| trimmed_last_line_width(&result) == 1
|| brace_style != BraceStyle::AlwaysNextLine
};
if brace_pos == BracePos::None {
return Some(result);
}
let total_used_width = last_line_used_width(&result, used_width);
let remaining_budget = context.budget(total_used_width);
    // If the same-line brace is forced, we are rewriting an item with an empty
    // body, so the closer is taken into account as well for the one-line budget.
    // We assume that the closer has the same length as the opener.
let overhead = if brace_pos == BracePos::ForceSameLine {
// 3 = ` {}`
3
} else {
// 2 = ` {`
2
};
let forbid_same_line_brace = overhead > remaining_budget;
if !forbid_same_line_brace && same_line_brace {
result.push(' ');
} else {
result.push('\n');
result.push_str(&offset.block_only().to_string(context.config));
}
result.push('{');
Some(result)
}
impl Rewrite for ast::ForeignItem {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
let attrs_str = self.attrs.rewrite(context, shape)?;
// Drop semicolon or it will be interpreted as comment.
// FIXME: this may be a faulty span from libsyntax.
let span = mk_sp(self.span.lo(), self.span.hi() - BytePos(1));
let item_str = match self.node {
ast::ForeignItemKind::Fn(ref fn_decl, ref generics) => rewrite_fn_base(
context,
shape.indent,
self.ident,
&FnSig::new(fn_decl, generics, self.vis.clone()),
span,
false,
false,
).map(|(s, _)| format!("{};", s)),
ast::ForeignItemKind::Static(ref ty, is_mutable) => {
// FIXME(#21): we're dropping potential comments in between the
// function keywords here.
let vis = format_visibility(context, &self.vis);
let mut_str = if is_mutable { "mut " } else { "" };
let prefix = format!(
"{}static {}{}:",
vis,
mut_str,
rewrite_ident(context, self.ident)
);
// 1 = ;
rewrite_assign_rhs(context, prefix, &**ty, shape.sub_width(1)?).map(|s| s + ";")
}
ast::ForeignItemKind::Ty => {
let vis = format_visibility(context, &self.vis);
Some(format!(
"{}type {};",
vis,
rewrite_ident(context, self.ident)
))
}
ast::ForeignItemKind::Macro(ref mac) => {
rewrite_macro(mac, None, context, shape, MacroPosition::Item)
}
}?;
let missing_span = if self.attrs.is_empty() {
mk_sp(self.span.lo(), self.span.lo())
} else {
mk_sp(self.attrs[self.attrs.len() - 1].span.hi(), self.span.lo())
};
combine_strs_with_missing_comments(
context,
&attrs_str,
&item_str,
missing_span,
shape,
false,
)
}
}
/// Rewrite an inline mod.
pub fn rewrite_mod(context: &RewriteContext, item: &ast::Item) -> String {
let mut result = String::with_capacity(32);
result.push_str(&*format_visibility(context, &item.vis));
result.push_str("mod ");
result.push_str(rewrite_ident(context, item.ident));
result.push(';');
result
}
/// Rewrite `extern crate foo;` WITHOUT attributes.
pub fn rewrite_extern_crate(context: &RewriteContext, item: &ast::Item) -> Option<String> {
assert!(is_extern_crate(item));
let new_str = context.snippet(item.span);
Some(if contains_comment(new_str) {
new_str.to_owned()
} else {
let no_whitespace = &new_str.split_whitespace().collect::<Vec<&str>>().join(" ");
String::from(&*Regex::new(r"\s;").unwrap().replace(no_whitespace, ";"))
})
}
/// Returns true for `mod foo;`, false for `mod foo { .. }`.
pub fn is_mod_decl(item: &ast::Item) -> bool {
match item.node {
ast::ItemKind::Mod(ref m) => m.inner.hi() != item.span.hi(),
_ => false,
}
}
pub fn is_use_item(item: &ast::Item) -> bool {
match item.node {
ast::ItemKind::Use(_) => true,
_ => false,
}
}
pub fn is_extern_crate(item: &ast::Item) -> bool {
match item.node {
ast::ItemKind::ExternCrate(..) => true,
_ => false,
}
}
// File: solana-playground/wasm/rustfmt/rustfmt-nightly/src/expr.rs
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::borrow::Cow;
use std::cmp::min;
use config::lists::*;
use syntax::parse::token::DelimToken;
use syntax::source_map::{BytePos, SourceMap, Span};
use syntax::{ast, ptr};
use chains::rewrite_chain;
use closures;
use comment::{
combine_strs_with_missing_comments, contains_comment, recover_comment_removed, rewrite_comment,
rewrite_missing_comment, CharClasses, FindUncommented,
};
use config::{Config, ControlBraceStyle, IndentStyle};
use lists::{
definitive_tactic, itemize_list, shape_for_tactic, struct_lit_formatting, struct_lit_shape,
struct_lit_tactic, write_list, ListFormatting, ListItem, Separator,
};
use macros::{rewrite_macro, MacroArg, MacroPosition};
use matches::rewrite_match;
use overflow;
use pairs::{rewrite_all_pairs, rewrite_pair, PairParts};
use patterns::{can_be_overflowed_pat, is_short_pattern, TuplePatField};
use rewrite::{Rewrite, RewriteContext};
use shape::{Indent, Shape};
use source_map::{LineRangeUtils, SpanUtils};
use spanned::Spanned;
use string::{rewrite_string, StringFormat};
use types::{can_be_overflowed_type, rewrite_path, PathContext};
use utils::{
colon_spaces, contains_skip, count_newlines, first_line_ends_with, first_line_width,
inner_attributes, last_line_extendable, last_line_width, mk_sp, outer_attributes,
ptr_vec_to_ref_vec, semicolon_for_stmt, wrap_str,
};
use vertical::rewrite_with_alignment;
use visitor::FmtVisitor;
impl Rewrite for ast::Expr {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
format_expr(self, ExprType::SubExpression, context, shape)
}
}
#[derive(Copy, Clone, PartialEq)]
pub enum ExprType {
Statement,
SubExpression,
}
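/// Formats a single expression, dispatching on its `ast::ExprKind`; kinds that
/// are not formatted yet fall back to the original snippet. For example
/// (illustrative, assuming default settings), an over-wide binary expression
/// is broken before the operator:
///
/// ```ignore
/// let x = some_long_function_name(argument_one)
///     + another_long_function_name(argument_two);
/// ```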
pub fn format_expr(
expr: &ast::Expr,
expr_type: ExprType,
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
skip_out_of_file_lines_range!(context, expr.span);
if contains_skip(&*expr.attrs) {
return Some(context.snippet(expr.span()).to_owned());
}
let expr_rw = match expr.node {
ast::ExprKind::Array(ref expr_vec) => rewrite_array(
"",
&ptr_vec_to_ref_vec(expr_vec),
expr.span,
context,
shape,
choose_separator_tactic(context, expr.span),
None,
),
ast::ExprKind::Lit(ref l) => rewrite_literal(context, l, shape),
ast::ExprKind::Call(ref callee, ref args) => {
let inner_span = mk_sp(callee.span.hi(), expr.span.hi());
let callee_str = callee.rewrite(context, shape)?;
rewrite_call(context, &callee_str, args, inner_span, shape)
}
ast::ExprKind::Paren(ref subexpr) => rewrite_paren(context, subexpr, shape, expr.span),
ast::ExprKind::Binary(op, ref lhs, ref rhs) => {
// FIXME: format comments between operands and operator
rewrite_all_pairs(expr, shape, context).or_else(|| {
rewrite_pair(
&**lhs,
&**rhs,
PairParts::infix(&format!(" {} ", context.snippet(op.span))),
context,
shape,
context.config.binop_separator(),
)
})
}
ast::ExprKind::Unary(ref op, ref subexpr) => rewrite_unary_op(context, op, subexpr, shape),
ast::ExprKind::Struct(ref path, ref fields, ref base) => rewrite_struct_lit(
context,
path,
fields,
base.as_ref().map(|e| &**e),
expr.span,
shape,
),
ast::ExprKind::Tup(ref items) => {
rewrite_tuple(context, &ptr_vec_to_ref_vec(items), expr.span, shape)
}
ast::ExprKind::If(..)
| ast::ExprKind::IfLet(..)
| ast::ExprKind::ForLoop(..)
| ast::ExprKind::Loop(..)
| ast::ExprKind::While(..)
| ast::ExprKind::WhileLet(..) => to_control_flow(expr, expr_type)
.and_then(|control_flow| control_flow.rewrite(context, shape)),
ast::ExprKind::Block(ref block, opt_label) => {
match expr_type {
ExprType::Statement => {
if is_unsafe_block(block) {
rewrite_block(block, Some(&expr.attrs), opt_label, context, shape)
} else if let rw @ Some(_) =
rewrite_empty_block(context, block, Some(&expr.attrs), opt_label, "", shape)
{
// Rewrite block without trying to put it in a single line.
rw
} else {
let prefix = block_prefix(context, block, shape)?;
rewrite_block_with_visitor(
context,
&prefix,
block,
Some(&expr.attrs),
opt_label,
shape,
true,
)
}
}
ExprType::SubExpression => {
rewrite_block(block, Some(&expr.attrs), opt_label, context, shape)
}
}
}
ast::ExprKind::Match(ref cond, ref arms) => {
rewrite_match(context, cond, arms, shape, expr.span, &expr.attrs)
}
ast::ExprKind::Path(ref qself, ref path) => {
rewrite_path(context, PathContext::Expr, qself.as_ref(), path, shape)
}
ast::ExprKind::Assign(ref lhs, ref rhs) => {
rewrite_assignment(context, lhs, rhs, None, shape)
}
ast::ExprKind::AssignOp(ref op, ref lhs, ref rhs) => {
rewrite_assignment(context, lhs, rhs, Some(op), shape)
}
ast::ExprKind::Continue(ref opt_label) => {
let id_str = match *opt_label {
Some(label) => format!(" {}", label.ident),
None => String::new(),
};
Some(format!("continue{}", id_str))
}
ast::ExprKind::Break(ref opt_label, ref opt_expr) => {
let id_str = match *opt_label {
Some(label) => format!(" {}", label.ident),
None => String::new(),
};
if let Some(ref expr) = *opt_expr {
rewrite_unary_prefix(context, &format!("break{} ", id_str), &**expr, shape)
} else {
Some(format!("break{}", id_str))
}
}
ast::ExprKind::Yield(ref opt_expr) => if let Some(ref expr) = *opt_expr {
rewrite_unary_prefix(context, "yield ", &**expr, shape)
} else {
Some("yield".to_string())
},
ast::ExprKind::Closure(capture, asyncness, movability, ref fn_decl, ref body, _) => {
closures::rewrite_closure(
capture, asyncness, movability, fn_decl, body, expr.span, context, shape,
)
}
ast::ExprKind::Try(..) | ast::ExprKind::Field(..) | ast::ExprKind::MethodCall(..) => {
rewrite_chain(expr, context, shape)
}
ast::ExprKind::Mac(ref mac) => {
rewrite_macro(mac, None, context, shape, MacroPosition::Expression).or_else(|| {
wrap_str(
context.snippet(expr.span).to_owned(),
context.config.max_width(),
shape,
)
})
}
ast::ExprKind::Ret(None) => Some("return".to_owned()),
ast::ExprKind::Ret(Some(ref expr)) => {
rewrite_unary_prefix(context, "return ", &**expr, shape)
}
ast::ExprKind::Box(ref expr) => rewrite_unary_prefix(context, "box ", &**expr, shape),
ast::ExprKind::AddrOf(mutability, ref expr) => {
rewrite_expr_addrof(context, mutability, expr, shape)
}
ast::ExprKind::Cast(ref expr, ref ty) => rewrite_pair(
&**expr,
&**ty,
PairParts::infix(" as "),
context,
shape,
SeparatorPlace::Front,
),
ast::ExprKind::Type(ref expr, ref ty) => rewrite_pair(
&**expr,
&**ty,
PairParts::infix(": "),
context,
shape,
SeparatorPlace::Back,
),
ast::ExprKind::Index(ref expr, ref index) => {
rewrite_index(&**expr, &**index, context, shape)
}
ast::ExprKind::Repeat(ref expr, ref repeats) => rewrite_pair(
&**expr,
&*repeats.value,
PairParts::new("[", "; ", "]"),
context,
shape,
SeparatorPlace::Back,
),
ast::ExprKind::Range(ref lhs, ref rhs, limits) => {
let delim = match limits {
ast::RangeLimits::HalfOpen => "..",
ast::RangeLimits::Closed => "..=",
};
fn needs_space_before_range(context: &RewriteContext, lhs: &ast::Expr) -> bool {
match lhs.node {
ast::ExprKind::Lit(ref lit) => match lit.node {
ast::LitKind::FloatUnsuffixed(..) => {
context.snippet(lit.span).ends_with('.')
}
_ => false,
},
_ => false,
}
}
fn needs_space_after_range(rhs: &ast::Expr) -> bool {
match rhs.node {
// Don't format `.. ..` into `....`, which is invalid.
//
// This check is unnecessary for `lhs`, because a range
// starting from another range needs parentheses as `(x ..) ..`
// (`x .. ..` is a range from `x` to `..`).
ast::ExprKind::Range(None, _, _) => true,
_ => false,
}
}
let default_sp_delim = |lhs: Option<&ast::Expr>, rhs: Option<&ast::Expr>| {
let space_if = |b: bool| if b { " " } else { "" };
format!(
"{}{}{}",
lhs.map(|lhs| space_if(needs_space_before_range(context, lhs)))
.unwrap_or(""),
delim,
rhs.map(|rhs| space_if(needs_space_after_range(rhs)))
.unwrap_or(""),
)
};
match (lhs.as_ref().map(|x| &**x), rhs.as_ref().map(|x| &**x)) {
(Some(lhs), Some(rhs)) => {
let sp_delim = if context.config.spaces_around_ranges() {
format!(" {} ", delim)
} else {
default_sp_delim(Some(lhs), Some(rhs))
};
rewrite_pair(
&*lhs,
&*rhs,
PairParts::infix(&sp_delim),
context,
shape,
context.config.binop_separator(),
)
}
(None, Some(rhs)) => {
let sp_delim = if context.config.spaces_around_ranges() {
format!("{} ", delim)
} else {
default_sp_delim(None, Some(rhs))
};
rewrite_unary_prefix(context, &sp_delim, &*rhs, shape)
}
(Some(lhs), None) => {
let sp_delim = if context.config.spaces_around_ranges() {
format!(" {}", delim)
} else {
default_sp_delim(Some(lhs), None)
};
rewrite_unary_suffix(context, &sp_delim, &*lhs, shape)
}
(None, None) => Some(delim.to_owned()),
}
}
// We do not format these expressions yet, but they should still
// satisfy our width restrictions.
ast::ExprKind::InlineAsm(..) => Some(context.snippet(expr.span).to_owned()),
        // FIXME: libsyntax has no `Try` variant yet; `try` blocks still parse
        // as `Catch`.
ast::ExprKind::Catch(ref block) => {
if let rw @ Some(_) =
rewrite_single_line_block(context, "try ", block, Some(&expr.attrs), None, shape)
{
rw
} else {
                // 4 = `try `
                let budget = shape.width.saturating_sub(4);
Some(format!(
"{}{}",
"try ",
rewrite_block(
block,
Some(&expr.attrs),
None,
context,
Shape::legacy(budget, shape.indent)
)?
))
}
}
ast::ExprKind::ObsoleteInPlace(ref lhs, ref rhs) => lhs
.rewrite(context, shape)
.map(|s| s + " <-")
.and_then(|lhs| rewrite_assign_rhs(context, lhs, &**rhs, shape)),
ast::ExprKind::Async(capture_by, _node_id, ref block) => {
let mover = if capture_by == ast::CaptureBy::Value {
"move "
} else {
""
};
if let rw @ Some(_) = rewrite_single_line_block(
context,
format!("{}{}", "async ", mover).as_str(),
block,
Some(&expr.attrs),
None,
shape,
) {
rw
} else {
// 6 = `async `
let budget = shape.width.saturating_sub(6);
Some(format!(
"{}{}{}",
"async ",
mover,
rewrite_block(
block,
Some(&expr.attrs),
None,
context,
Shape::legacy(budget, shape.indent)
)?
))
}
}
};
expr_rw
.and_then(|expr_str| recover_comment_removed(expr_str, expr.span, context))
.and_then(|expr_str| {
let attrs = outer_attributes(&expr.attrs);
let attrs_str = attrs.rewrite(context, shape)?;
let span = mk_sp(
attrs.last().map_or(expr.span.lo(), |attr| attr.span.hi()),
expr.span.lo(),
);
combine_strs_with_missing_comments(context, &attrs_str, &expr_str, span, shape, false)
})
}
pub fn rewrite_array<T: Rewrite + Spanned + ToExpr>(
name: &str,
exprs: &[&T],
span: Span,
context: &RewriteContext,
shape: Shape,
force_separator_tactic: Option<SeparatorTactic>,
delim_token: Option<DelimToken>,
) -> Option<String> {
overflow::rewrite_with_square_brackets(
context,
name,
exprs,
shape,
span,
force_separator_tactic,
delim_token,
)
}
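/// Rewrites an empty block, or a block whose only content is a short block
/// comment, onto a single line when it fits. Illustrative results:
///
/// ```ignore
/// fn a() {}             // empty body
/// fn b() { /* TODO */ } // single short block comment
/// ```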
fn rewrite_empty_block(
context: &RewriteContext,
block: &ast::Block,
attrs: Option<&[ast::Attribute]>,
label: Option<ast::Label>,
prefix: &str,
shape: Shape,
) -> Option<String> {
let label_str = rewrite_label(label);
if attrs.map_or(false, |a| !inner_attributes(a).is_empty()) {
return None;
}
if block.stmts.is_empty()
&& !block_contains_comment(block, context.source_map)
&& shape.width >= 2
{
return Some(format!("{}{}{{}}", prefix, label_str));
}
// If a block contains only a single-line comment, then leave it on one line.
let user_str = context.snippet(block.span);
let user_str = user_str.trim();
if user_str.starts_with('{') && user_str.ends_with('}') {
let comment_str = user_str[1..user_str.len() - 1].trim();
if block.stmts.is_empty()
&& !comment_str.contains('\n')
&& !comment_str.starts_with("//")
&& comment_str.len() + 4 <= shape.width
{
return Some(format!("{}{}{{ {} }}", prefix, label_str, comment_str));
}
}
None
}
fn block_prefix(context: &RewriteContext, block: &ast::Block, shape: Shape) -> Option<String> {
Some(match block.rules {
ast::BlockCheckMode::Unsafe(..) => {
let snippet = context.snippet(block.span);
let open_pos = snippet.find_uncommented("{")?;
// Extract comment between unsafe and block start.
let trimmed = &snippet[6..open_pos].trim();
if !trimmed.is_empty() {
// 9 = "unsafe {".len(), 7 = "unsafe ".len()
let budget = shape.width.checked_sub(9)?;
format!(
"unsafe {} ",
rewrite_comment(
trimmed,
true,
Shape::legacy(budget, shape.indent + 7),
context.config,
)?
)
} else {
"unsafe ".to_owned()
}
}
ast::BlockCheckMode::Default => String::new(),
})
}
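/// Attempts to rewrite a simple single-statement block on one line; returns
/// `None` if the result would contain a newline or exceed `shape.width`.
/// Illustrative:
///
/// ```ignore
/// unsafe { ptr.read() }
/// ```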
fn rewrite_single_line_block(
context: &RewriteContext,
prefix: &str,
block: &ast::Block,
attrs: Option<&[ast::Attribute]>,
label: Option<ast::Label>,
shape: Shape,
) -> Option<String> {
if is_simple_block(block, attrs, context.source_map) {
let expr_shape = shape.offset_left(last_line_width(prefix))?;
let expr_str = block.stmts[0].rewrite(context, expr_shape)?;
let label_str = rewrite_label(label);
let result = format!("{}{}{{ {} }}", prefix, label_str, expr_str);
if result.len() <= shape.width && !result.contains('\n') {
return Some(result);
}
}
None
}
pub fn rewrite_block_with_visitor(
context: &RewriteContext,
prefix: &str,
block: &ast::Block,
attrs: Option<&[ast::Attribute]>,
label: Option<ast::Label>,
shape: Shape,
has_braces: bool,
) -> Option<String> {
if let rw @ Some(_) = rewrite_empty_block(context, block, attrs, label, prefix, shape) {
return rw;
}
let mut visitor = FmtVisitor::from_context(context);
visitor.block_indent = shape.indent;
visitor.is_if_else_block = context.is_if_else_block();
match block.rules {
ast::BlockCheckMode::Unsafe(..) => {
let snippet = context.snippet(block.span);
let open_pos = snippet.find_uncommented("{")?;
visitor.last_pos = block.span.lo() + BytePos(open_pos as u32)
}
ast::BlockCheckMode::Default => visitor.last_pos = block.span.lo(),
}
let inner_attrs = attrs.map(inner_attributes);
let label_str = rewrite_label(label);
visitor.visit_block(block, inner_attrs.as_ref().map(|a| &**a), has_braces);
Some(format!("{}{}{}", prefix, label_str, visitor.buffer))
}
impl Rewrite for ast::Block {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
rewrite_block(self, None, None, context, shape)
}
}
fn rewrite_block(
block: &ast::Block,
attrs: Option<&[ast::Attribute]>,
label: Option<ast::Label>,
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
let prefix = block_prefix(context, block, shape)?;
// shape.width is used only for the single line case: either the empty block `{}`,
// or an unsafe expression `unsafe { e }`.
if let rw @ Some(_) = rewrite_empty_block(context, block, attrs, label, &prefix, shape) {
return rw;
}
let result = rewrite_block_with_visitor(context, &prefix, block, attrs, label, shape, true);
if let Some(ref result_str) = result {
if result_str.lines().count() <= 3 {
if let rw @ Some(_) =
rewrite_single_line_block(context, &prefix, block, attrs, label, shape)
{
return rw;
}
}
}
result
}
impl Rewrite for ast::Stmt {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
skip_out_of_file_lines_range!(context, self.span());
let result = match self.node {
ast::StmtKind::Local(ref local) => local.rewrite(context, shape),
ast::StmtKind::Expr(ref ex) | ast::StmtKind::Semi(ref ex) => {
let suffix = if semicolon_for_stmt(context, self) {
";"
} else {
""
};
let shape = shape.sub_width(suffix.len())?;
format_expr(ex, ExprType::Statement, context, shape).map(|s| s + suffix)
}
ast::StmtKind::Mac(..) | ast::StmtKind::Item(..) => None,
};
result.and_then(|res| recover_comment_removed(res, self.span(), context))
}
}
// Rewrite condition if the given expression has one.
pub fn rewrite_cond(context: &RewriteContext, expr: &ast::Expr, shape: Shape) -> Option<String> {
match expr.node {
ast::ExprKind::Match(ref cond, _) => {
            // 6 = `match `.len(), 2 = ` {`.len()
let cond_shape = match context.config.indent_style() {
IndentStyle::Visual => shape.shrink_left(6).and_then(|s| s.sub_width(2))?,
IndentStyle::Block => shape.offset_left(8)?,
};
cond.rewrite(context, cond_shape)
}
_ => to_control_flow(expr, ExprType::SubExpression).and_then(|control_flow| {
let alt_block_sep =
String::from("\n") + &shape.indent.block_only().to_string(context.config);
control_flow
.rewrite_cond(context, shape, &alt_block_sep)
.and_then(|rw| Some(rw.0))
}),
}
}
// Abstraction over control flow expressions
#[derive(Debug)]
struct ControlFlow<'a> {
cond: Option<&'a ast::Expr>,
block: &'a ast::Block,
else_block: Option<&'a ast::Expr>,
label: Option<ast::Label>,
pats: Vec<&'a ast::Pat>,
keyword: &'a str,
matcher: &'a str,
connector: &'a str,
allow_single_line: bool,
    // True if this is an `if` expression nested inside an `else if` (hacky).
nested_if: bool,
span: Span,
}
fn to_control_flow(expr: &ast::Expr, expr_type: ExprType) -> Option<ControlFlow> {
match expr.node {
ast::ExprKind::If(ref cond, ref if_block, ref else_block) => Some(ControlFlow::new_if(
cond,
vec![],
if_block,
else_block.as_ref().map(|e| &**e),
expr_type == ExprType::SubExpression,
false,
expr.span,
)),
ast::ExprKind::IfLet(ref pat, ref cond, ref if_block, ref else_block) => {
Some(ControlFlow::new_if(
cond,
ptr_vec_to_ref_vec(pat),
if_block,
else_block.as_ref().map(|e| &**e),
expr_type == ExprType::SubExpression,
false,
expr.span,
))
}
ast::ExprKind::ForLoop(ref pat, ref cond, ref block, label) => {
Some(ControlFlow::new_for(pat, cond, block, label, expr.span))
}
ast::ExprKind::Loop(ref block, label) => {
Some(ControlFlow::new_loop(block, label, expr.span))
}
ast::ExprKind::While(ref cond, ref block, label) => Some(ControlFlow::new_while(
vec![],
cond,
block,
label,
expr.span,
)),
ast::ExprKind::WhileLet(ref pat, ref cond, ref block, label) => Some(
ControlFlow::new_while(ptr_vec_to_ref_vec(pat), cond, block, label, expr.span),
),
_ => None,
}
}
fn choose_matcher(pats: &[&ast::Pat]) -> &'static str {
if pats.is_empty() {
""
} else {
"let"
}
}
impl<'a> ControlFlow<'a> {
fn new_if(
cond: &'a ast::Expr,
pats: Vec<&'a ast::Pat>,
block: &'a ast::Block,
else_block: Option<&'a ast::Expr>,
allow_single_line: bool,
nested_if: bool,
span: Span,
) -> ControlFlow<'a> {
let matcher = choose_matcher(&pats);
ControlFlow {
cond: Some(cond),
block,
else_block,
label: None,
pats,
keyword: "if",
matcher,
connector: " =",
allow_single_line,
nested_if,
span,
}
}
fn new_loop(block: &'a ast::Block, label: Option<ast::Label>, span: Span) -> ControlFlow<'a> {
ControlFlow {
cond: None,
block,
else_block: None,
label,
pats: vec![],
keyword: "loop",
matcher: "",
connector: "",
allow_single_line: false,
nested_if: false,
span,
}
}
fn new_while(
pats: Vec<&'a ast::Pat>,
cond: &'a ast::Expr,
block: &'a ast::Block,
label: Option<ast::Label>,
span: Span,
) -> ControlFlow<'a> {
let matcher = choose_matcher(&pats);
ControlFlow {
cond: Some(cond),
block,
else_block: None,
label,
pats,
keyword: "while",
matcher,
connector: " =",
allow_single_line: false,
nested_if: false,
span,
}
}
fn new_for(
pat: &'a ast::Pat,
cond: &'a ast::Expr,
block: &'a ast::Block,
label: Option<ast::Label>,
span: Span,
) -> ControlFlow<'a> {
ControlFlow {
cond: Some(cond),
block,
else_block: None,
label,
pats: vec![pat],
keyword: "for",
matcher: "",
connector: " in",
allow_single_line: false,
nested_if: false,
span,
}
}
fn rewrite_single_line(
&self,
pat_expr_str: &str,
context: &RewriteContext,
width: usize,
) -> Option<String> {
assert!(self.allow_single_line);
let else_block = self.else_block?;
let fixed_cost = self.keyword.len() + " { } else { }".len();
if let ast::ExprKind::Block(ref else_node, _) = else_block.node {
if !is_simple_block(self.block, None, context.source_map)
|| !is_simple_block(else_node, None, context.source_map)
|| pat_expr_str.contains('\n')
{
return None;
}
let new_width = width.checked_sub(pat_expr_str.len() + fixed_cost)?;
let expr = &self.block.stmts[0];
let if_str = expr.rewrite(context, Shape::legacy(new_width, Indent::empty()))?;
let new_width = new_width.checked_sub(if_str.len())?;
let else_expr = &else_node.stmts[0];
let else_str = else_expr.rewrite(context, Shape::legacy(new_width, Indent::empty()))?;
if if_str.contains('\n') || else_str.contains('\n') {
return None;
}
let result = format!(
"{} {} {{ {} }} else {{ {} }}",
self.keyword, pat_expr_str, if_str, else_str
);
if result.len() <= width {
return Some(result);
}
}
None
}
}
impl<'a> ControlFlow<'a> {
fn rewrite_pat_expr(
&self,
context: &RewriteContext,
expr: &ast::Expr,
shape: Shape,
offset: usize,
) -> Option<String> {
debug!("rewrite_pat_expr {:?} {:?} {:?}", shape, self.pats, expr);
let cond_shape = shape.offset_left(offset)?;
if !self.pats.is_empty() {
let matcher = if self.matcher.is_empty() {
self.matcher.to_owned()
} else {
format!("{} ", self.matcher)
};
let pat_shape = cond_shape
.offset_left(matcher.len())?
.sub_width(self.connector.len())?;
let pat_string = rewrite_multiple_patterns(context, &self.pats, pat_shape)?;
let result = format!("{}{}{}", matcher, pat_string, self.connector);
return rewrite_assign_rhs(context, result, expr, cond_shape);
}
let expr_rw = expr.rewrite(context, cond_shape);
// The expression may (partially) fit on the current line.
// We do not allow splitting between `if` and condition.
if self.keyword == "if" || expr_rw.is_some() {
return expr_rw;
}
// The expression won't fit on the current line, jump to next.
let nested_shape = shape
.block_indent(context.config.tab_spaces())
.with_max_width(context.config);
let nested_indent_str = nested_shape.indent.to_string_with_newline(context.config);
expr.rewrite(context, nested_shape)
.map(|expr_rw| format!("{}{}", nested_indent_str, expr_rw))
}
fn rewrite_cond(
&self,
context: &RewriteContext,
shape: Shape,
alt_block_sep: &str,
) -> Option<(String, usize)> {
// Do not take the rhs overhead from the upper expressions into account
// when rewriting pattern.
let new_width = context.budget(shape.used_width());
let fresh_shape = Shape {
width: new_width,
..shape
};
let constr_shape = if self.nested_if {
// We are part of an if-elseif-else chain. Our constraints are tightened.
            // 7 = "} else ".len()
fresh_shape.offset_left(7)?
} else {
fresh_shape
};
let label_string = rewrite_label(self.label);
// 1 = space after keyword.
let offset = self.keyword.len() + label_string.len() + 1;
let pat_expr_string = match self.cond {
Some(cond) => self.rewrite_pat_expr(context, cond, constr_shape, offset)?,
None => String::new(),
};
let brace_overhead =
if context.config.control_brace_style() != ControlBraceStyle::AlwaysNextLine {
// 2 = ` {`
2
} else {
0
};
let one_line_budget = context
.config
.max_width()
.saturating_sub(constr_shape.used_width() + offset + brace_overhead);
let force_newline_brace = (pat_expr_string.contains('\n')
|| pat_expr_string.len() > one_line_budget)
&& !last_line_extendable(&pat_expr_string);
// Try to format if-else on single line.
if self.allow_single_line
&& context
.config
.width_heuristics()
.single_line_if_else_max_width
> 0
{
let trial = self.rewrite_single_line(&pat_expr_string, context, shape.width);
if let Some(cond_str) = trial {
if cond_str.len() <= context
.config
.width_heuristics()
.single_line_if_else_max_width
{
return Some((cond_str, 0));
}
}
}
let cond_span = if let Some(cond) = self.cond {
cond.span
} else {
mk_sp(self.block.span.lo(), self.block.span.lo())
};
// `for event in event`
// Do not include label in the span.
let lo = self
.label
.map_or(self.span.lo(), |label| label.ident.span.hi());
let between_kwd_cond = mk_sp(
context
.snippet_provider
.span_after(mk_sp(lo, self.span.hi()), self.keyword.trim()),
if self.pats.is_empty() {
cond_span.lo()
} else if self.matcher.is_empty() {
self.pats[0].span.lo()
} else {
context
.snippet_provider
.span_before(self.span, self.matcher.trim())
},
);
let between_kwd_cond_comment = extract_comment(between_kwd_cond, context, shape);
let after_cond_comment =
extract_comment(mk_sp(cond_span.hi(), self.block.span.lo()), context, shape);
let block_sep = if self.cond.is_none() && between_kwd_cond_comment.is_some() {
""
} else if context.config.control_brace_style() == ControlBraceStyle::AlwaysNextLine
|| force_newline_brace
{
alt_block_sep
} else {
" "
};
let used_width = if pat_expr_string.contains('\n') {
last_line_width(&pat_expr_string)
} else {
// 2 = spaces after keyword and condition.
label_string.len() + self.keyword.len() + pat_expr_string.len() + 2
};
Some((
format!(
"{}{}{}{}{}",
label_string,
self.keyword,
between_kwd_cond_comment.as_ref().map_or(
if pat_expr_string.is_empty() || pat_expr_string.starts_with('\n') {
""
} else {
" "
},
|s| &**s,
),
pat_expr_string,
after_cond_comment.as_ref().map_or(block_sep, |s| &**s)
),
used_width,
))
}
}
impl<'a> Rewrite for ControlFlow<'a> {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
debug!("ControlFlow::rewrite {:?} {:?}", self, shape);
let alt_block_sep = &shape.indent.to_string_with_newline(context.config);
let (cond_str, used_width) = self.rewrite_cond(context, shape, alt_block_sep)?;
        // If `used_width` is 0, the whole control flow expression was written on a single line.
if used_width == 0 {
return Some(cond_str);
}
let block_width = shape.width.saturating_sub(used_width);
// This is used only for the empty block case: `{}`. So, we use 1 if we know
// we should avoid the single line case.
let block_width = if self.else_block.is_some() || self.nested_if {
min(1, block_width)
} else {
block_width
};
let block_shape = Shape {
width: block_width,
..shape
};
let block_str = {
let old_val = context.is_if_else_block.replace(self.else_block.is_some());
let result =
rewrite_block_with_visitor(context, "", self.block, None, None, block_shape, true);
context.is_if_else_block.replace(old_val);
result?
};
let mut result = format!("{}{}", cond_str, block_str);
if let Some(else_block) = self.else_block {
let shape = Shape::indented(shape.indent, context.config);
let mut last_in_chain = false;
let rewrite = match else_block.node {
// If the else expression is another if-else expression, prevent it
// from being formatted on a single line.
// Note how we're passing the original shape, as the
// cost of "else" should not cascade.
ast::ExprKind::IfLet(ref pat, ref cond, ref if_block, ref next_else_block) => {
ControlFlow::new_if(
cond,
ptr_vec_to_ref_vec(pat),
if_block,
next_else_block.as_ref().map(|e| &**e),
false,
true,
mk_sp(else_block.span.lo(), self.span.hi()),
).rewrite(context, shape)
}
ast::ExprKind::If(ref cond, ref if_block, ref next_else_block) => {
ControlFlow::new_if(
cond,
vec![],
if_block,
next_else_block.as_ref().map(|e| &**e),
false,
true,
mk_sp(else_block.span.lo(), self.span.hi()),
).rewrite(context, shape)
}
_ => {
last_in_chain = true;
// When rewriting a block, the width is only used for single line
// blocks, passing 1 lets us avoid that.
let else_shape = Shape {
width: min(1, shape.width),
..shape
};
format_expr(else_block, ExprType::Statement, context, else_shape)
}
};
let between_kwd_else_block = mk_sp(
self.block.span.hi(),
context
.snippet_provider
.span_before(mk_sp(self.block.span.hi(), else_block.span.lo()), "else"),
);
let between_kwd_else_block_comment =
extract_comment(between_kwd_else_block, context, shape);
let after_else = mk_sp(
context
.snippet_provider
.span_after(mk_sp(self.block.span.hi(), else_block.span.lo()), "else"),
else_block.span.lo(),
);
let after_else_comment = extract_comment(after_else, context, shape);
let between_sep = match context.config.control_brace_style() {
ControlBraceStyle::AlwaysNextLine | ControlBraceStyle::ClosingNextLine => {
&*alt_block_sep
}
ControlBraceStyle::AlwaysSameLine => " ",
};
let after_sep = match context.config.control_brace_style() {
ControlBraceStyle::AlwaysNextLine if last_in_chain => &*alt_block_sep,
_ => " ",
};
result.push_str(&format!(
"{}else{}",
between_kwd_else_block_comment
.as_ref()
.map_or(between_sep, |s| &**s),
after_else_comment.as_ref().map_or(after_sep, |s| &**s),
));
result.push_str(&rewrite?);
}
Some(result)
}
}
fn rewrite_label(opt_label: Option<ast::Label>) -> Cow<'static, str> {
match opt_label {
Some(label) => Cow::from(format!("{}: ", label.ident)),
None => Cow::from(""),
}
}
fn extract_comment(span: Span, context: &RewriteContext, shape: Shape) -> Option<String> {
match rewrite_missing_comment(span, shape, context) {
Some(ref comment) if !comment.is_empty() => Some(format!(
"{indent}{}{indent}",
comment,
indent = shape.indent.to_string_with_newline(context.config)
)),
_ => None,
}
}
pub fn block_contains_comment(block: &ast::Block, source_map: &SourceMap) -> bool {
let snippet = source_map.span_to_snippet(block.span).unwrap();
contains_comment(&snippet)
}
// Checks that a block contains no statements, exactly one expression, and no
// comments or attributes.
// FIXME: incorrectly returns false when the comment is contained completely
// within the expression.
pub fn is_simple_block(
block: &ast::Block,
attrs: Option<&[ast::Attribute]>,
source_map: &SourceMap,
) -> bool {
(block.stmts.len() == 1
&& stmt_is_expr(&block.stmts[0])
&& !block_contains_comment(block, source_map)
&& attrs.map_or(true, |a| a.is_empty()))
}
/// Checks whether a block contains at most one statement or expression, and no
/// comments or attributes.
pub fn is_simple_block_stmt(
block: &ast::Block,
attrs: Option<&[ast::Attribute]>,
source_map: &SourceMap,
) -> bool {
block.stmts.len() <= 1
&& !block_contains_comment(block, source_map)
&& attrs.map_or(true, |a| a.is_empty())
}
/// Checks whether a block contains no statements, expressions, comments, or
/// inner attributes.
pub fn is_empty_block(
block: &ast::Block,
attrs: Option<&[ast::Attribute]>,
source_map: &SourceMap,
) -> bool {
block.stmts.is_empty()
&& !block_contains_comment(block, source_map)
&& attrs.map_or(true, |a| inner_attributes(a).is_empty())
}
pub fn stmt_is_expr(stmt: &ast::Stmt) -> bool {
match stmt.node {
ast::StmtKind::Expr(..) => true,
_ => false,
}
}
pub fn is_unsafe_block(block: &ast::Block) -> bool {
if let ast::BlockCheckMode::Unsafe(..) = block.rules {
true
} else {
false
}
}
pub fn rewrite_multiple_patterns(
context: &RewriteContext,
pats: &[&ast::Pat],
shape: Shape,
) -> Option<String> {
let pat_strs = pats
.iter()
.map(|p| p.rewrite(context, shape))
.collect::<Option<Vec<_>>>()?;
let use_mixed_layout = pats
.iter()
.zip(pat_strs.iter())
.all(|(pat, pat_str)| is_short_pattern(pat, pat_str));
let items: Vec<_> = pat_strs.into_iter().map(ListItem::from_str).collect();
let tactic = if use_mixed_layout {
DefinitiveListTactic::Mixed
} else {
definitive_tactic(
&items,
ListTactic::HorizontalVertical,
Separator::VerticalBar,
shape.width,
)
};
let fmt = ListFormatting::new(shape, context.config)
.tactic(tactic)
.separator(" |")
.separator_place(context.config.binop_separator())
.ends_with_newline(false);
write_list(&items, &fmt)
}
pub fn rewrite_literal(context: &RewriteContext, l: &ast::Lit, shape: Shape) -> Option<String> {
match l.node {
ast::LitKind::Str(_, ast::StrStyle::Cooked) => rewrite_string_lit(context, l.span, shape),
_ => wrap_str(
context.snippet(l.span).to_owned(),
context.config.max_width(),
shape,
),
}
}
fn rewrite_string_lit(context: &RewriteContext, span: Span, shape: Shape) -> Option<String> {
let string_lit = context.snippet(span);
if !context.config.format_strings() {
if string_lit
.lines()
.rev()
.skip(1)
.all(|line| line.ends_with('\\'))
{
let new_indent = shape.visual_indent(1).indent;
let indented_string_lit = String::from(
string_lit
.lines()
.map(|line| {
format!(
"{}{}",
new_indent.to_string(context.config),
line.trim_left()
)
}).collect::<Vec<_>>()
.join("\n")
.trim_left(),
);
return wrap_str(indented_string_lit, context.config.max_width(), shape);
} else {
return wrap_str(string_lit.to_owned(), context.config.max_width(), shape);
}
}
// Remove the quote characters.
let str_lit = &string_lit[1..string_lit.len() - 1];
rewrite_string(
str_lit,
&StringFormat::new(shape.visual_indent(0), context.config),
)
}
/// If the callee is a special-cased macro, returns whether all its arguments are simple and the
/// offset (the number of arguments before the format string) from which horizontal layout starts.
pub fn maybe_get_args_offset<T: ToExpr>(callee_str: &str, args: &[&T]) -> Option<(bool, usize)> {
if let Some(&(_, num_args_before)) = SPECIAL_MACRO_WHITELIST
.iter()
.find(|&&(s, _)| s == callee_str)
{
let all_simple = args.len() > num_args_before && is_every_expr_simple(args);
Some((all_simple, num_args_before))
} else {
None
}
}
/// A list of `format!`-like macros, that take a long format string and a list of arguments to
/// format.
///
/// Organized as a list of `(&str, usize)` tuples, giving the name of the macro and the number of
/// arguments before the format string (none for `format!("format", ...)`, one for `assert!(result,
/// "format", ...)`, two for `assert_eq!(left, right, "format", ...)`).
const SPECIAL_MACRO_WHITELIST: &[(&str, usize)] = &[
// format! like macros
// From the Rust Standard Library.
("eprint!", 0),
("eprintln!", 0),
("format!", 0),
("format_args!", 0),
("print!", 0),
("println!", 0),
("panic!", 0),
("unreachable!", 0),
// From the `log` crate.
("debug!", 0),
("error!", 0),
("info!", 0),
("warn!", 0),
// write! like macros
("assert!", 1),
("debug_assert!", 1),
("write!", 1),
("writeln!", 1),
// assert_eq! like macros
("assert_eq!", 2),
("assert_ne!", 2),
("debug_assert_eq!", 2),
("debug_assert_ne!", 2),
];
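As a self-contained sketch (the `WHITELIST` constant and function name below are hypothetical, not part of rustfmt), the lookup that `maybe_get_args_offset` performs over this table reduces to finding the macro's entry and returning its argument offset:

```rust
// Hypothetical standalone sketch of the whitelist lookup: given a macro
// name, return how many arguments precede its format string, or None if
// the macro is not special-cased.
const WHITELIST: &[(&str, usize)] = &[
    ("format!", 0),
    ("assert!", 1),
    ("assert_eq!", 2),
];

fn args_before_format_string(callee: &str) -> Option<usize> {
    WHITELIST
        .iter()
        .find(|&&(name, _)| name == callee)
        .map(|&(_, n)| n)
}
```

In the real function, a `Some` result is further combined with an `is_every_expr_simple` check before the special horizontal layout is used.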
fn choose_separator_tactic(context: &RewriteContext, span: Span) -> Option<SeparatorTactic> {
if context.inside_macro() {
if span_ends_with_comma(context, span) {
Some(SeparatorTactic::Always)
} else {
Some(SeparatorTactic::Never)
}
} else {
None
}
}
pub fn rewrite_call(
context: &RewriteContext,
callee: &str,
args: &[ptr::P<ast::Expr>],
span: Span,
shape: Shape,
) -> Option<String> {
overflow::rewrite_with_parens(
context,
callee,
&ptr_vec_to_ref_vec(args),
shape,
span,
context.config.width_heuristics().fn_call_width,
choose_separator_tactic(context, span),
)
}
fn is_simple_expr(expr: &ast::Expr) -> bool {
match expr.node {
ast::ExprKind::Lit(..) => true,
ast::ExprKind::Path(ref qself, ref path) => qself.is_none() && path.segments.len() <= 1,
ast::ExprKind::AddrOf(_, ref expr)
| ast::ExprKind::Box(ref expr)
| ast::ExprKind::Cast(ref expr, _)
| ast::ExprKind::Field(ref expr, _)
| ast::ExprKind::Try(ref expr)
| ast::ExprKind::Unary(_, ref expr) => is_simple_expr(expr),
ast::ExprKind::Index(ref lhs, ref rhs) => is_simple_expr(lhs) && is_simple_expr(rhs),
ast::ExprKind::Repeat(ref lhs, ref rhs) => {
is_simple_expr(lhs) && is_simple_expr(&*rhs.value)
}
_ => false,
}
}
pub fn is_every_expr_simple<T: ToExpr>(lists: &[&T]) -> bool {
lists
.iter()
.all(|arg| arg.to_expr().map_or(false, is_simple_expr))
}
pub fn can_be_overflowed_expr(context: &RewriteContext, expr: &ast::Expr, args_len: usize) -> bool {
match expr.node {
ast::ExprKind::Match(..) => {
(context.use_block_indent() && args_len == 1)
|| (context.config.indent_style() == IndentStyle::Visual && args_len > 1)
}
ast::ExprKind::If(..)
| ast::ExprKind::IfLet(..)
| ast::ExprKind::ForLoop(..)
| ast::ExprKind::Loop(..)
| ast::ExprKind::While(..)
| ast::ExprKind::WhileLet(..) => {
context.config.combine_control_expr() && context.use_block_indent() && args_len == 1
}
ast::ExprKind::Block(..) | ast::ExprKind::Closure(..) => {
context.use_block_indent()
|| context.config.indent_style() == IndentStyle::Visual && args_len > 1
}
ast::ExprKind::Array(..)
| ast::ExprKind::Call(..)
| ast::ExprKind::Mac(..)
| ast::ExprKind::MethodCall(..)
| ast::ExprKind::Struct(..)
| ast::ExprKind::Tup(..) => context.use_block_indent() && args_len == 1,
ast::ExprKind::AddrOf(_, ref expr)
| ast::ExprKind::Box(ref expr)
| ast::ExprKind::Try(ref expr)
| ast::ExprKind::Unary(_, ref expr)
| ast::ExprKind::Cast(ref expr, _) => can_be_overflowed_expr(context, expr, args_len),
_ => false,
}
}
pub fn is_nested_call(expr: &ast::Expr) -> bool {
match expr.node {
ast::ExprKind::Call(..) | ast::ExprKind::Mac(..) => true,
ast::ExprKind::AddrOf(_, ref expr)
| ast::ExprKind::Box(ref expr)
| ast::ExprKind::Try(ref expr)
| ast::ExprKind::Unary(_, ref expr)
| ast::ExprKind::Cast(ref expr, _) => is_nested_call(expr),
_ => false,
}
}
/// Returns true if a function call or a method call represented by the given span ends with a
/// trailing comma. This function is used when rewriting a macro, as adding or removing a trailing
/// comma from a macro can potentially break the code.
pub fn span_ends_with_comma(context: &RewriteContext, span: Span) -> bool {
let mut result: bool = Default::default();
let mut prev_char: char = Default::default();
let closing_delimiters = &[')', '}', ']'];
for (kind, c) in CharClasses::new(context.snippet(span).chars()) {
match c {
_ if kind.is_comment() || c.is_whitespace() => continue,
c if closing_delimiters.contains(&c) => {
result &= !closing_delimiters.contains(&prev_char);
}
',' => result = true,
_ => result = false,
}
prev_char = c;
}
result
}
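A simplified, self-contained sketch of this scan (comment skipping via `CharClasses` is omitted and the function name is hypothetical): a trailing comma counts only if the last significant character is a comma, and a closing delimiter immediately following another closing delimiter discards a comma that belonged to an inner expression:

```rust
// Simplified standalone sketch of the trailing-comma scan in
// `span_ends_with_comma` (comment handling omitted).
fn snippet_ends_with_comma(snippet: &str) -> bool {
    let closing = [')', '}', ']'];
    let mut result = false;
    let mut prev = '\0';
    for c in snippet.chars() {
        if c.is_whitespace() {
            continue;
        }
        if closing.contains(&c) {
            // Two closing delimiters in a row: the comma we saw was
            // inside a nested call, not a trailing comma of this one.
            result &= !closing.contains(&prev);
        } else if c == ',' {
            result = true;
        } else {
            result = false;
        }
        prev = c;
    }
    result
}
```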
fn rewrite_paren(
context: &RewriteContext,
mut subexpr: &ast::Expr,
shape: Shape,
mut span: Span,
) -> Option<String> {
debug!("rewrite_paren, shape: {:?}", shape);
// Extract comments within parens.
let mut pre_comment;
let mut post_comment;
let remove_nested_parens = context.config.remove_nested_parens();
loop {
// 1 = "(" or ")"
let pre_span = mk_sp(span.lo() + BytePos(1), subexpr.span.lo());
let post_span = mk_sp(subexpr.span.hi(), span.hi() - BytePos(1));
pre_comment = rewrite_missing_comment(pre_span, shape, context)?;
post_comment = rewrite_missing_comment(post_span, shape, context)?;
// Remove nested parens if there are no comments.
if let ast::ExprKind::Paren(ref subsubexpr) = subexpr.node {
if remove_nested_parens && pre_comment.is_empty() && post_comment.is_empty() {
span = subexpr.span;
subexpr = subsubexpr;
continue;
}
}
break;
}
    // 1 = `(`, 1 = `)`
let sub_shape = shape.offset_left(1).and_then(|s| s.sub_width(1))?;
let subexpr_str = subexpr.rewrite(context, sub_shape)?;
debug!("rewrite_paren, subexpr_str: `{:?}`", subexpr_str);
// 2 = `()`
if subexpr_str.contains('\n') || first_line_width(&subexpr_str) + 2 <= shape.width {
Some(format!("({}{}{})", pre_comment, &subexpr_str, post_comment))
} else {
None
}
}
fn rewrite_index(
expr: &ast::Expr,
index: &ast::Expr,
context: &RewriteContext,
shape: Shape,
) -> Option<String> {
let expr_str = expr.rewrite(context, shape)?;
let offset = last_line_width(&expr_str) + 1;
let rhs_overhead = shape.rhs_overhead(context.config);
let index_shape = if expr_str.contains('\n') {
Shape::legacy(context.config.max_width(), shape.indent)
.offset_left(offset)
.and_then(|shape| shape.sub_width(1 + rhs_overhead))
} else {
shape.visual_indent(offset).sub_width(offset + 1)
};
let orig_index_rw = index_shape.and_then(|s| index.rewrite(context, s));
// Return if index fits in a single line.
match orig_index_rw {
Some(ref index_str) if !index_str.contains('\n') => {
return Some(format!("{}[{}]", expr_str, index_str));
}
_ => (),
}
// Try putting index on the next line and see if it fits in a single line.
let indent = shape.indent.block_indent(context.config);
let index_shape = Shape::indented(indent, context.config).offset_left(1)?;
let index_shape = index_shape.sub_width(1 + rhs_overhead)?;
let new_index_rw = index.rewrite(context, index_shape);
match (orig_index_rw, new_index_rw) {
(_, Some(ref new_index_str)) if !new_index_str.contains('\n') => Some(format!(
"{}{}[{}]",
expr_str,
indent.to_string_with_newline(context.config),
new_index_str,
)),
(None, Some(ref new_index_str)) => Some(format!(
"{}{}[{}]",
expr_str,
indent.to_string_with_newline(context.config),
new_index_str,
)),
(Some(ref index_str), _) => Some(format!("{}[{}]", expr_str, index_str)),
_ => None,
}
}
fn struct_lit_can_be_aligned(fields: &[ast::Field], base: &Option<&ast::Expr>) -> bool {
if base.is_some() {
return false;
}
fields.iter().all(|field| !field.is_shorthand)
}
fn rewrite_struct_lit<'a>(
context: &RewriteContext,
path: &ast::Path,
fields: &'a [ast::Field],
base: Option<&'a ast::Expr>,
span: Span,
shape: Shape,
) -> Option<String> {
debug!("rewrite_struct_lit: shape {:?}", shape);
enum StructLitField<'a> {
Regular(&'a ast::Field),
Base(&'a ast::Expr),
}
// 2 = " {".len()
let path_shape = shape.sub_width(2)?;
let path_str = rewrite_path(context, PathContext::Expr, None, path, path_shape)?;
if fields.is_empty() && base.is_none() {
return Some(format!("{} {{}}", path_str));
}
// Foo { a: Foo } - indent is +3, width is -5.
let (h_shape, v_shape) = struct_lit_shape(shape, context, path_str.len() + 3, 2)?;
let one_line_width = h_shape.map_or(0, |shape| shape.width);
let body_lo = context.snippet_provider.span_after(span, "{");
let fields_str = if struct_lit_can_be_aligned(fields, &base)
&& context.config.struct_field_align_threshold() > 0
{
rewrite_with_alignment(
fields,
context,
shape,
mk_sp(body_lo, span.hi()),
one_line_width,
)?
} else {
let field_iter = fields
.into_iter()
.map(StructLitField::Regular)
.chain(base.into_iter().map(StructLitField::Base));
let span_lo = |item: &StructLitField| match *item {
StructLitField::Regular(field) => field.span().lo(),
StructLitField::Base(expr) => {
let last_field_hi = fields.last().map_or(span.lo(), |field| field.span.hi());
let snippet = context.snippet(mk_sp(last_field_hi, expr.span.lo()));
let pos = snippet.find_uncommented("..").unwrap();
last_field_hi + BytePos(pos as u32)
}
};
let span_hi = |item: &StructLitField| match *item {
StructLitField::Regular(field) => field.span().hi(),
StructLitField::Base(expr) => expr.span.hi(),
};
let rewrite = |item: &StructLitField| match *item {
StructLitField::Regular(field) => {
// The 1 taken from the v_budget is for the comma.
rewrite_field(context, field, v_shape.sub_width(1)?, 0)
}
StructLitField::Base(expr) => {
// 2 = ..
expr.rewrite(context, v_shape.offset_left(2)?)
.map(|s| format!("..{}", s))
}
};
let items = itemize_list(
context.snippet_provider,
field_iter,
"}",
",",
span_lo,
span_hi,
rewrite,
body_lo,
span.hi(),
false,
);
let item_vec = items.collect::<Vec<_>>();
let tactic = struct_lit_tactic(h_shape, context, &item_vec);
let nested_shape = shape_for_tactic(tactic, h_shape, v_shape);
let ends_with_comma = span_ends_with_comma(context, span);
let force_no_trailing_comma = context.inside_macro() && !ends_with_comma;
let fmt = struct_lit_formatting(
nested_shape,
tactic,
context,
force_no_trailing_comma || base.is_some(),
);
write_list(&item_vec, &fmt)?
};
let fields_str = wrap_struct_field(context, &fields_str, shape, v_shape, one_line_width);
Some(format!("{} {{{}}}", path_str, fields_str))
// FIXME if context.config.indent_style() == Visual, but we run out
// of space, we should fall back to BlockIndent.
}
pub fn wrap_struct_field(
context: &RewriteContext,
fields_str: &str,
shape: Shape,
nested_shape: Shape,
one_line_width: usize,
) -> String {
if context.config.indent_style() == IndentStyle::Block
&& (fields_str.contains('\n')
|| !context.config.struct_lit_single_line()
|| fields_str.len() > one_line_width)
{
format!(
"{}{}{}",
nested_shape.indent.to_string_with_newline(context.config),
fields_str,
shape.indent.to_string_with_newline(context.config)
)
} else {
// One liner or visual indent.
format!(" {} ", fields_str)
}
}
pub fn struct_lit_field_separator(config: &Config) -> &str {
colon_spaces(config.space_before_colon(), config.space_after_colon())
}
pub fn rewrite_field(
context: &RewriteContext,
field: &ast::Field,
shape: Shape,
prefix_max_width: usize,
) -> Option<String> {
if contains_skip(&field.attrs) {
return Some(context.snippet(field.span()).to_owned());
}
let mut attrs_str = field.attrs.rewrite(context, shape)?;
if !attrs_str.is_empty() {
attrs_str.push_str(&shape.indent.to_string_with_newline(context.config));
};
let name = context.snippet(field.ident.span);
if field.is_shorthand {
Some(attrs_str + &name)
} else {
let mut separator = String::from(struct_lit_field_separator(context.config));
for _ in 0..prefix_max_width.saturating_sub(name.len()) {
separator.push(' ');
}
let overhead = name.len() + separator.len();
let expr_shape = shape.offset_left(overhead)?;
let expr = field.expr.rewrite(context, expr_shape);
match expr {
Some(ref e) if e.as_str() == name && context.config.use_field_init_shorthand() => {
Some(attrs_str + &name)
}
Some(e) => Some(format!("{}{}{}{}", attrs_str, name, separator, e)),
None => {
let expr_offset = shape.indent.block_indent(context.config);
let expr = field
.expr
.rewrite(context, Shape::indented(expr_offset, context.config));
expr.map(|s| {
format!(
"{}{}:\n{}{}",
attrs_str,
name,
expr_offset.to_string(context.config),
s
)
})
}
}
}
}
fn rewrite_tuple_in_visual_indent_style<'a, T>(
context: &RewriteContext,
items: &[&T],
span: Span,
shape: Shape,
) -> Option<String>
where
T: Rewrite + Spanned + ToExpr + 'a,
{
let mut items = items.iter();
// In case of length 1, need a trailing comma
debug!("rewrite_tuple_in_visual_indent_style {:?}", shape);
if items.len() == 1 {
// 3 = "(" + ",)"
let nested_shape = shape.sub_width(3)?.visual_indent(1);
return items
.next()
.unwrap()
.rewrite(context, nested_shape)
.map(|s| format!("({},)", s));
}
let list_lo = context.snippet_provider.span_after(span, "(");
let nested_shape = shape.sub_width(2)?.visual_indent(1);
let items = itemize_list(
context.snippet_provider,
items,
")",
",",
|item| item.span().lo(),
|item| item.span().hi(),
|item| item.rewrite(context, nested_shape),
list_lo,
span.hi() - BytePos(1),
false,
);
let item_vec: Vec<_> = items.collect();
let tactic = definitive_tactic(
&item_vec,
ListTactic::HorizontalVertical,
Separator::Comma,
nested_shape.width,
);
let fmt = ListFormatting::new(shape, context.config)
.tactic(tactic)
.ends_with_newline(false);
let list_str = write_list(&item_vec, &fmt)?;
Some(format!("({})", list_str))
}
pub fn rewrite_tuple<'a, T>(
context: &RewriteContext,
items: &[&T],
span: Span,
shape: Shape,
) -> Option<String>
where
T: Rewrite + Spanned + ToExpr + 'a,
{
debug!("rewrite_tuple {:?}", shape);
if context.use_block_indent() {
// We use the same rule as function calls for rewriting tuples.
let force_tactic = if context.inside_macro() {
if span_ends_with_comma(context, span) {
Some(SeparatorTactic::Always)
} else {
Some(SeparatorTactic::Never)
}
} else if items.len() == 1 {
Some(SeparatorTactic::Always)
} else {
None
};
overflow::rewrite_with_parens(
context,
"",
items,
shape,
span,
context.config.width_heuristics().fn_call_width,
force_tactic,
)
} else {
rewrite_tuple_in_visual_indent_style(context, items, span, shape)
}
}
pub fn rewrite_unary_prefix<R: Rewrite>(
context: &RewriteContext,
prefix: &str,
rewrite: &R,
shape: Shape,
) -> Option<String> {
rewrite
.rewrite(context, shape.offset_left(prefix.len())?)
.map(|r| format!("{}{}", prefix, r))
}
// FIXME: this is probably not correct for multi-line Rewrites. we should
// subtract suffix.len() from the last line budget, not the first!
pub fn rewrite_unary_suffix<R: Rewrite>(
context: &RewriteContext,
suffix: &str,
rewrite: &R,
shape: Shape,
) -> Option<String> {
rewrite
.rewrite(context, shape.sub_width(suffix.len())?)
.map(|mut r| {
r.push_str(suffix);
r
})
}
fn rewrite_unary_op(
context: &RewriteContext,
op: &ast::UnOp,
expr: &ast::Expr,
shape: Shape,
) -> Option<String> {
// For some reason, an UnOp is not spanned like BinOp!
let operator_str = match *op {
ast::UnOp::Deref => "*",
ast::UnOp::Not => "!",
ast::UnOp::Neg => "-",
};
rewrite_unary_prefix(context, operator_str, expr, shape)
}
fn rewrite_assignment(
context: &RewriteContext,
lhs: &ast::Expr,
rhs: &ast::Expr,
op: Option<&ast::BinOp>,
shape: Shape,
) -> Option<String> {
let operator_str = match op {
Some(op) => context.snippet(op.span),
None => "=",
};
// 1 = space between lhs and operator.
let lhs_shape = shape.sub_width(operator_str.len() + 1)?;
let lhs_str = format!("{} {}", lhs.rewrite(context, lhs_shape)?, operator_str);
rewrite_assign_rhs(context, lhs_str, rhs, shape)
}
/// Controls where to put the rhs.
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub enum RhsTactics {
/// Use heuristics.
Default,
    /// Put the rhs on the next line if it uses multiple lines, without extra indentation.
ForceNextLineWithoutIndent,
}
// The left hand side must contain everything up to, and including, the
// assignment operator.
pub fn rewrite_assign_rhs<S: Into<String>, R: Rewrite>(
context: &RewriteContext,
lhs: S,
ex: &R,
shape: Shape,
) -> Option<String> {
rewrite_assign_rhs_with(context, lhs, ex, shape, RhsTactics::Default)
}
pub fn rewrite_assign_rhs_with<S: Into<String>, R: Rewrite>(
context: &RewriteContext,
lhs: S,
ex: &R,
shape: Shape,
rhs_tactics: RhsTactics,
) -> Option<String> {
let lhs = lhs.into();
let last_line_width = last_line_width(&lhs).saturating_sub(if lhs.contains('\n') {
shape.indent.width()
} else {
0
});
// 1 = space between operator and rhs.
let orig_shape = shape.offset_left(last_line_width + 1).unwrap_or(Shape {
width: 0,
offset: shape.offset + last_line_width + 1,
..shape
});
let rhs = choose_rhs(
context,
ex,
orig_shape,
ex.rewrite(context, orig_shape),
rhs_tactics,
)?;
Some(lhs + &rhs)
}
fn choose_rhs<R: Rewrite>(
context: &RewriteContext,
expr: &R,
shape: Shape,
orig_rhs: Option<String>,
rhs_tactics: RhsTactics,
) -> Option<String> {
match orig_rhs {
Some(ref new_str) if !new_str.contains('\n') && new_str.len() <= shape.width => {
Some(format!(" {}", new_str))
}
_ => {
// Expression did not fit on the same line as the identifier.
// Try splitting the line and see if that works better.
let new_shape = shape_from_rhs_tactic(context, shape, rhs_tactics)?;
let new_rhs = expr.rewrite(context, new_shape);
let new_indent_str = &shape
.indent
.block_indent(context.config)
.to_string_with_newline(context.config);
match (orig_rhs, new_rhs) {
(Some(ref orig_rhs), Some(ref new_rhs))
if wrap_str(new_rhs.clone(), context.config.max_width(), new_shape)
.is_none() =>
{
Some(format!(" {}", orig_rhs))
}
(Some(ref orig_rhs), Some(ref new_rhs))
if prefer_next_line(orig_rhs, new_rhs, rhs_tactics) =>
{
Some(format!("{}{}", new_indent_str, new_rhs))
}
(None, Some(ref new_rhs)) => Some(format!("{}{}", new_indent_str, new_rhs)),
(None, None) => None,
(Some(orig_rhs), _) => Some(format!(" {}", orig_rhs)),
}
}
}
}
fn shape_from_rhs_tactic(
context: &RewriteContext,
shape: Shape,
rhs_tactic: RhsTactics,
) -> Option<Shape> {
match rhs_tactic {
RhsTactics::ForceNextLineWithoutIndent => Some(shape.with_max_width(context.config)),
RhsTactics::Default => {
Shape::indented(shape.indent.block_indent(context.config), context.config)
.sub_width(shape.rhs_overhead(context.config))
}
}
}
pub fn prefer_next_line(orig_rhs: &str, next_line_rhs: &str, rhs_tactics: RhsTactics) -> bool {
rhs_tactics == RhsTactics::ForceNextLineWithoutIndent
|| !next_line_rhs.contains('\n')
|| count_newlines(orig_rhs) > count_newlines(next_line_rhs) + 1
|| first_line_ends_with(orig_rhs, '(') && !first_line_ends_with(next_line_rhs, '(')
|| first_line_ends_with(orig_rhs, '{') && !first_line_ends_with(next_line_rhs, '{')
|| first_line_ends_with(orig_rhs, '[') && !first_line_ends_with(next_line_rhs, '[')
}
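The heuristic can be sketched standalone (helpers inlined, `RhsTactics` reduced to a bool, and the function name hypothetical): prefer the next-line form when it is forced, when the original is multi-line but the next-line form is not, when it saves more than one line, or when the original's first line dangles an opening delimiter that the next-line form avoids:

```rust
// Standalone sketch of the `prefer_next_line` heuristic.
fn count_newlines(s: &str) -> usize {
    s.chars().filter(|&c| c == '\n').count()
}

fn first_line_ends_with(s: &str, c: char) -> bool {
    s.lines().next().map_or(false, |l| l.ends_with(c))
}

fn prefer_next_line_sketch(orig: &str, next: &str, force_next_line: bool) -> bool {
    force_next_line
        || !next.contains('\n')
        || count_newlines(orig) > count_newlines(next) + 1
        || (first_line_ends_with(orig, '(') && !first_line_ends_with(next, '('))
        || (first_line_ends_with(orig, '{') && !first_line_ends_with(next, '{'))
        || (first_line_ends_with(orig, '[') && !first_line_ends_with(next, '['))
}
```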
fn rewrite_expr_addrof(
context: &RewriteContext,
mutability: ast::Mutability,
expr: &ast::Expr,
shape: Shape,
) -> Option<String> {
let operator_str = match mutability {
ast::Mutability::Immutable => "&",
ast::Mutability::Mutable => "&mut ",
};
rewrite_unary_prefix(context, operator_str, expr, shape)
}
pub trait ToExpr {
fn to_expr(&self) -> Option<&ast::Expr>;
fn can_be_overflowed(&self, context: &RewriteContext, len: usize) -> bool;
}
impl ToExpr for ast::Expr {
fn to_expr(&self) -> Option<&ast::Expr> {
Some(self)
}
fn can_be_overflowed(&self, context: &RewriteContext, len: usize) -> bool {
can_be_overflowed_expr(context, self, len)
}
}
impl ToExpr for ast::Ty {
fn to_expr(&self) -> Option<&ast::Expr> {
None
}
fn can_be_overflowed(&self, context: &RewriteContext, len: usize) -> bool {
can_be_overflowed_type(context, self, len)
}
}
impl<'a> ToExpr for TuplePatField<'a> {
fn to_expr(&self) -> Option<&ast::Expr> {
None
}
fn can_be_overflowed(&self, context: &RewriteContext, len: usize) -> bool {
can_be_overflowed_pat(context, self, len)
}
}
impl<'a> ToExpr for ast::StructField {
fn to_expr(&self) -> Option<&ast::Expr> {
None
}
fn can_be_overflowed(&self, _: &RewriteContext, _: usize) -> bool {
false
}
}
impl<'a> ToExpr for MacroArg {
fn to_expr(&self) -> Option<&ast::Expr> {
match *self {
MacroArg::Expr(ref expr) => Some(expr),
_ => None,
}
}
fn can_be_overflowed(&self, context: &RewriteContext, len: usize) -> bool {
match *self {
MacroArg::Expr(ref expr) => can_be_overflowed_expr(context, expr, len),
MacroArg::Ty(ref ty) => can_be_overflowed_type(context, ty, len),
MacroArg::Pat(..) => false,
MacroArg::Item(..) => len == 1,
}
}
}
impl ToExpr for ast::GenericParam {
fn to_expr(&self) -> Option<&ast::Expr> {
None
}
fn can_be_overflowed(&self, _: &RewriteContext, _: usize) -> bool {
false
}
}
pub fn is_method_call(expr: &ast::Expr) -> bool {
match expr.node {
ast::ExprKind::MethodCall(..) => true,
ast::ExprKind::AddrOf(_, ref expr)
| ast::ExprKind::Box(ref expr)
| ast::ExprKind::Cast(ref expr, _)
| ast::ExprKind::Try(ref expr)
| ast::ExprKind::Unary(_, ref expr) => is_method_call(expr),
_ => false,
}
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/string.rs
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Format string literals.
use regex::Regex;
use unicode_segmentation::UnicodeSegmentation;
use config::Config;
use shape::Shape;
use utils::wrap_str;
const MIN_STRING: usize = 10;
pub struct StringFormat<'a> {
pub opener: &'a str,
pub closer: &'a str,
pub line_start: &'a str,
pub line_end: &'a str,
pub shape: Shape,
pub trim_end: bool,
pub config: &'a Config,
}
impl<'a> StringFormat<'a> {
pub fn new(shape: Shape, config: &'a Config) -> StringFormat<'a> {
StringFormat {
opener: "\"",
closer: "\"",
line_start: " ",
line_end: "\\",
shape,
trim_end: false,
config,
}
}
    /// Returns the maximum number of graphemes that can fit on a line while taking the
    /// indentation into account.
///
/// If we cannot put at least a single character per line, the rewrite won't succeed.
fn max_chars_with_indent(&self) -> Option<usize> {
Some(
self.shape
.width
.checked_sub(self.opener.len() + self.line_end.len() + 1)?
+ 1,
)
}
    /// Like `max_chars_with_indent`, but the indentation is not subtracted.
    /// This allows fitting more graphemes from the string on a line when the
    /// snippet ends in `SnippetState::Overflow`.
fn max_chars_without_indent(&self) -> Option<usize> {
Some(self.config.max_width().checked_sub(self.line_end.len())?)
}
}
pub fn rewrite_string<'a>(orig: &str, fmt: &StringFormat<'a>) -> Option<String> {
let max_chars_with_indent = fmt.max_chars_with_indent()?;
let max_chars_without_indent = fmt.max_chars_without_indent()?;
let indent = fmt.shape.indent.to_string_with_newline(fmt.config);
    // Strip line breaks.
    // With this regex applied, all remaining whitespace is significant.
let strip_line_breaks_re = Regex::new(r"([^\\](\\\\)*)\\[\n\r][[:space:]]*").unwrap();
let stripped_str = strip_line_breaks_re.replace_all(orig, "$1");
let graphemes = UnicodeSegmentation::graphemes(&*stripped_str, false).collect::<Vec<&str>>();
// `cur_start` is the position in `orig` of the start of the current line.
let mut cur_start = 0;
let mut result = String::with_capacity(
stripped_str
.len()
.checked_next_power_of_two()
.unwrap_or(usize::max_value()),
);
result.push_str(fmt.opener);
// Snip a line at a time from `stripped_str` until it is used up. Push the snippet
// onto result.
let mut cur_max_chars = max_chars_with_indent;
loop {
// All the input starting at cur_start fits on the current line
if graphemes.len() - cur_start <= cur_max_chars {
result.push_str(&graphemes[cur_start..].join(""));
break;
}
// The input starting at cur_start needs to be broken
match break_string(cur_max_chars, fmt.trim_end, &graphemes[cur_start..]) {
SnippetState::LineEnd(line, len) => {
result.push_str(&line);
result.push_str(fmt.line_end);
result.push_str(&indent);
result.push_str(fmt.line_start);
cur_max_chars = max_chars_with_indent;
cur_start += len;
}
SnippetState::Overflow(line, len) => {
result.push_str(&line);
cur_max_chars = max_chars_without_indent;
cur_start += len;
}
SnippetState::EndOfInput(line) => {
result.push_str(&line);
break;
}
}
}
result.push_str(fmt.closer);
wrap_str(result, fmt.config.max_width(), fmt.shape)
}
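The loop above snips one line at a time, breaking at the last whitespace that fits on the current line. The same greedy strategy can be sketched stand-alone, using `char`s instead of graphemes as a simplifying assumption (`wrap_greedy` is a hypothetical name, not part of rustfmt):

```rust
// Greedy line wrapping: break just after the last whitespace inside the
// current window, mirroring the whitespace branch of `break_string`.
fn wrap_greedy(input: &str, max_chars: usize) -> Vec<String> {
    let chars: Vec<char> = input.chars().collect();
    let mut lines = Vec::new();
    let mut start = 0;
    while chars.len() - start > max_chars {
        let window = &chars[start..start + max_chars];
        // Break just after the last whitespace, or force a hard break
        // when the window contains none.
        let cut = window
            .iter()
            .rposition(|c| c.is_whitespace())
            .map(|i| i + 1)
            .unwrap_or(max_chars);
        lines.push(chars[start..start + cut].iter().collect());
        start += cut;
    }
    lines.push(chars[start..].iter().collect());
    lines
}
```

Unlike the real `break_string`, this sketch does not look forward past `max_chars` for a boundary or treat punctuation as a break point; it only shows the snip-and-continue shape of the loop.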
/// Result of breaking a string so it fits in a line and the state it ended in.
/// The state informs about what to do with the snippet and how to continue the breaking process.
#[derive(Debug, PartialEq)]
enum SnippetState {
/// The input could not be broken and so rewriting the string is finished.
EndOfInput(String),
    /// The input could be broken and the returned snippet should be ended with
    /// [`StringFormat::line_end`]. The next snippet needs to be indented.
    LineEnd(String, usize),
    /// The input could be broken but the returned snippet should not be ended with
    /// [`StringFormat::line_end`] because the whitespace is significant. Therefore, the next
    /// snippet should not be indented.
Overflow(String, usize),
}
/// Break the input string at a boundary character around the offset `max_chars`. A boundary
/// character is either a punctuation or a whitespace.
fn break_string(max_chars: usize, trim_end: bool, input: &[&str]) -> SnippetState {
let break_at = |index /* grapheme at index is included */| {
        // Take in any whitespace to the left/right of `input[index]` and
        // check for a line feed, in which case the whitespace needs to be kept.
let mut index_minus_ws = index;
for (i, grapheme) in input[0..=index].iter().enumerate().rev() {
if !trim_end && is_line_feed(grapheme) {
return SnippetState::Overflow(input[0..=i].join("").to_string(), i + 1);
} else if !is_whitespace(grapheme) {
index_minus_ws = i;
break;
}
}
let mut index_plus_ws = index;
for (i, grapheme) in input[index + 1..].iter().enumerate() {
if !trim_end && is_line_feed(grapheme) {
return SnippetState::Overflow(
input[0..=index + 1 + i].join("").to_string(),
index + 2 + i,
);
} else if !is_whitespace(grapheme) {
index_plus_ws = index + i;
break;
}
}
if trim_end {
SnippetState::LineEnd(
input[0..=index_minus_ws].join("").to_string(),
index_plus_ws + 1,
)
} else {
SnippetState::LineEnd(
input[0..=index_plus_ws].join("").to_string(),
index_plus_ws + 1,
)
}
};
// Find the position in input for breaking the string
match input[0..max_chars]
.iter()
.rposition(|grapheme| is_whitespace(grapheme))
{
// Found a whitespace and what is on its left side is big enough.
Some(index) if index >= MIN_STRING => break_at(index),
// No whitespace found, try looking for a punctuation instead
_ => match input[0..max_chars]
.iter()
.rposition(|grapheme| is_punctuation(grapheme))
{
// Found a punctuation and what is on its left side is big enough.
Some(index) if index >= MIN_STRING => break_at(index),
// Either no boundary character was found to the left of `input[max_chars]`, or the line
// got too small. We try searching for a boundary character to the right.
_ => match input[max_chars..]
.iter()
.position(|grapheme| is_whitespace(grapheme) || is_punctuation(grapheme))
{
// A boundary was found after the line limit
Some(index) => break_at(max_chars + index),
// No boundary to the right, the input cannot be broken
None => SnippetState::EndOfInput(input.join("").to_string()),
},
},
}
}
fn is_line_feed(grapheme: &str) -> bool {
grapheme.as_bytes()[0] == b'\n'
}
fn is_whitespace(grapheme: &str) -> bool {
grapheme.chars().all(|c| c.is_whitespace())
}
fn is_punctuation(grapheme: &str) -> bool {
match grapheme.as_bytes()[0] {
b':' | b',' | b';' | b'.' => true,
_ => false,
}
}
#[cfg(test)]
mod test {
use super::{break_string, rewrite_string, SnippetState, StringFormat};
use shape::{Indent, Shape};
use unicode_segmentation::UnicodeSegmentation;
#[test]
fn issue343() {
let config = Default::default();
let fmt = StringFormat::new(Shape::legacy(2, Indent::empty()), &config);
rewrite_string("eq_", &fmt);
}
#[test]
fn should_break_on_whitespace() {
let string = "Placerat felis. Mauris porta ante sagittis purus.";
let graphemes = UnicodeSegmentation::graphemes(&*string, false).collect::<Vec<&str>>();
assert_eq!(
break_string(20, false, &graphemes[..]),
SnippetState::LineEnd("Placerat felis. ".to_string(), 16)
);
assert_eq!(
break_string(20, true, &graphemes[..]),
SnippetState::LineEnd("Placerat felis.".to_string(), 16)
);
}
#[test]
fn should_break_on_punctuation() {
let string = "Placerat_felis._Mauris_porta_ante_sagittis_purus.";
let graphemes = UnicodeSegmentation::graphemes(&*string, false).collect::<Vec<&str>>();
assert_eq!(
break_string(20, false, &graphemes[..]),
SnippetState::LineEnd("Placerat_felis.".to_string(), 15)
);
}
#[test]
fn should_break_forward() {
let string = "Venenatis_tellus_vel_tellus. Aliquam aliquam dolor at justo.";
let graphemes = UnicodeSegmentation::graphemes(&*string, false).collect::<Vec<&str>>();
assert_eq!(
break_string(20, false, &graphemes[..]),
SnippetState::LineEnd("Venenatis_tellus_vel_tellus. ".to_string(), 29)
);
assert_eq!(
break_string(20, true, &graphemes[..]),
SnippetState::LineEnd("Venenatis_tellus_vel_tellus.".to_string(), 29)
);
}
#[test]
fn nothing_to_break() {
let string = "Venenatis_tellus_vel_tellus";
let graphemes = UnicodeSegmentation::graphemes(&*string, false).collect::<Vec<&str>>();
assert_eq!(
break_string(20, false, &graphemes[..]),
SnippetState::EndOfInput("Venenatis_tellus_vel_tellus".to_string())
);
}
#[test]
fn significant_whitespaces() {
let string = "Neque in sem. \n Pellentesque tellus augue.";
let graphemes = UnicodeSegmentation::graphemes(&*string, false).collect::<Vec<&str>>();
assert_eq!(
break_string(15, false, &graphemes[..]),
SnippetState::Overflow("Neque in sem. \n".to_string(), 20)
);
assert_eq!(
break_string(25, false, &graphemes[..]),
SnippetState::Overflow("Neque in sem. \n".to_string(), 20)
);
        // if `trim_end` is true, then the line feed does not matter anymore
assert_eq!(
break_string(15, true, &graphemes[..]),
SnippetState::LineEnd("Neque in sem.".to_string(), 26)
);
assert_eq!(
break_string(25, true, &graphemes[..]),
SnippetState::LineEnd("Neque in sem.".to_string(), 26)
);
}
#[test]
fn big_whitespace() {
let string = "Neque in sem. Pellentesque tellus augue.";
let graphemes = UnicodeSegmentation::graphemes(&*string, false).collect::<Vec<&str>>();
assert_eq!(
break_string(20, false, &graphemes[..]),
SnippetState::LineEnd("Neque in sem. ".to_string(), 25)
);
assert_eq!(
break_string(20, true, &graphemes[..]),
SnippetState::LineEnd("Neque in sem.".to_string(), 25)
);
}
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/modules.rs
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::collections::BTreeMap;
use std::io;
use std::path::{Path, PathBuf};
use syntax::ast;
use syntax::parse::{parser, DirectoryOwnership};
use syntax::source_map;
use syntax_pos::symbol::Symbol;
use config::FileName;
use utils::contains_skip;
/// List all the files containing modules of a crate.
/// If a file is used twice in a crate, it appears only once.
pub fn list_files<'a>(
krate: &'a ast::Crate,
source_map: &source_map::SourceMap,
) -> Result<BTreeMap<FileName, &'a ast::Mod>, io::Error> {
let mut result = BTreeMap::new(); // Enforce file order determinism
let root_filename = source_map.span_to_filename(krate.span);
{
let parent = match root_filename {
source_map::FileName::Real(ref path) => path.parent().unwrap(),
_ => Path::new(""),
};
list_submodules(&krate.module, parent, None, source_map, &mut result)?;
}
result.insert(root_filename.into(), &krate.module);
Ok(result)
}
fn path_value(attr: &ast::Attribute) -> Option<Symbol> {
if attr.name() == "path" {
attr.value_str()
} else {
None
}
}
// N.B. Even when there are multiple `#[path = ...]` attributes, we just need to
// examine the first one, since rustc ignores the second and the subsequent ones
// as unused attributes.
fn find_path_value(attrs: &[ast::Attribute]) -> Option<Symbol> {
attrs.iter().flat_map(path_value).next()
}
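`find_path_value` leans on `flat_map(...).next()` yielding the first `Some` the iterator produces, so any `#[path = ...]` attribute after the first is ignored. A stand-alone sketch of the idiom (the `(name, value)` tuples and `first_path` are hypothetical stand-ins for `ast::Attribute`):

```rust
// First `Some` wins: later `path` entries are ignored, matching how
// rustc treats duplicate `#[path = ...]` attributes as unused.
fn first_path(attrs: &[(&str, &str)]) -> Option<String> {
    attrs
        .iter()
        .flat_map(|&(name, value)| {
            if name == "path" {
                Some(value.to_string())
            } else {
                None
            }
        })
        .next()
}
```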
/// Recursively list all external modules included in a module.
fn list_submodules<'a>(
module: &'a ast::Mod,
search_dir: &Path,
relative: Option<ast::Ident>,
source_map: &source_map::SourceMap,
result: &mut BTreeMap<FileName, &'a ast::Mod>,
) -> Result<(), io::Error> {
debug!("list_submodules: search_dir: {:?}", search_dir);
for item in &module.items {
if let ast::ItemKind::Mod(ref sub_mod) = item.node {
if !contains_skip(&item.attrs) {
let is_internal = source_map.span_to_filename(item.span)
== source_map.span_to_filename(sub_mod.inner);
let (dir_path, relative) = if is_internal {
if let Some(path) = find_path_value(&item.attrs) {
(search_dir.join(&path.as_str()), None)
} else {
(search_dir.join(&item.ident.to_string()), None)
}
} else {
let (mod_path, relative) =
module_file(item.ident, &item.attrs, search_dir, relative, source_map)?;
let dir_path = mod_path.parent().unwrap().to_owned();
result.insert(FileName::Real(mod_path), sub_mod);
(dir_path, relative)
};
list_submodules(sub_mod, &dir_path, relative, source_map, result)?;
}
}
}
Ok(())
}
/// Find the file corresponding to an external mod
fn module_file(
id: ast::Ident,
attrs: &[ast::Attribute],
dir_path: &Path,
relative: Option<ast::Ident>,
source_map: &source_map::SourceMap,
) -> Result<(PathBuf, Option<ast::Ident>), io::Error> {
if let Some(path) = parser::Parser::submod_path_from_attr(attrs, dir_path) {
return Ok((path, None));
}
match parser::Parser::default_submod_path(id, relative, dir_path, source_map).result {
Ok(parser::ModulePathSuccess {
path,
directory_ownership,
..
}) => {
let relative = if let DirectoryOwnership::Owned { relative } = directory_ownership {
relative
} else {
None
};
Ok((path, relative))
}
Err(_) => Err(io::Error::new(
io::ErrorKind::Other,
format!("Couldn't find module {}", id),
)),
}
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/rewrite.rs
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// A generic trait to abstract the rewriting of an element (of the AST).
use syntax::parse::ParseSess;
use syntax::source_map::{SourceMap, Span};
use config::{Config, IndentStyle};
use shape::Shape;
use visitor::SnippetProvider;
use FormatReport;
use std::cell::RefCell;
pub trait Rewrite {
/// Rewrite self into shape.
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String>;
}
#[derive(Clone)]
pub struct RewriteContext<'a> {
pub parse_session: &'a ParseSess,
pub source_map: &'a SourceMap,
pub config: &'a Config,
pub inside_macro: RefCell<bool>,
// Force block indent style even if we are using visual indent style.
pub use_block: RefCell<bool>,
// When `format_if_else_cond_comment` is true, unindent the comment on top
// of the `else` or `else if`.
pub is_if_else_block: RefCell<bool>,
    // When rewriting a chain, veto going multi-line except for the last element
pub force_one_line_chain: RefCell<bool>,
pub snippet_provider: &'a SnippetProvider<'a>,
// Used for `format_snippet`
pub(crate) macro_rewrite_failure: RefCell<bool>,
pub(crate) report: FormatReport,
}
impl<'a> RewriteContext<'a> {
pub fn snippet(&self, span: Span) -> &str {
self.snippet_provider.span_to_snippet(span).unwrap()
}
    /// Return true if we should use block indent style for rewriting a function call.
pub fn use_block_indent(&self) -> bool {
self.config.indent_style() == IndentStyle::Block || *self.use_block.borrow()
}
pub fn budget(&self, used_width: usize) -> usize {
self.config.max_width().saturating_sub(used_width)
}
pub fn inside_macro(&self) -> bool {
*self.inside_macro.borrow()
}
pub fn is_if_else_block(&self) -> bool {
*self.is_if_else_block.borrow()
}
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/missed_spans.rs
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::borrow::Cow;
use syntax::source_map::{BytePos, Pos, Span};
use comment::{rewrite_comment, CodeCharKind, CommentCodeSlices};
use config::{EmitMode, FileName};
use shape::{Indent, Shape};
use source_map::LineRangeUtils;
use utils::{count_newlines, last_line_width, mk_sp};
use visitor::FmtVisitor;
struct SnippetStatus {
/// An offset to the current line from the beginning of the original snippet.
line_start: usize,
/// A length of trailing whitespaces on the current line.
last_wspace: Option<usize>,
/// The current line number.
cur_line: usize,
}
impl SnippetStatus {
fn new(cur_line: usize) -> Self {
SnippetStatus {
line_start: 0,
last_wspace: None,
cur_line,
}
}
}
impl<'a> FmtVisitor<'a> {
fn output_at_start(&self) -> bool {
self.buffer.is_empty()
}
pub fn format_missing(&mut self, end: BytePos) {
// HACK(topecongiro)
// We use `format_missing()` to extract a missing comment between a macro
// (or alike) and a trailing semicolon. Here we just try to avoid calling
// `format_missing_inner` in the common case where there is no such comment.
// This is a hack, ideally we should fix a possible bug in `format_missing_inner`
// or refactor `visit_mac` and `rewrite_macro`, but this should suffice to fix the
// issue (#2727).
let missing_snippet = self.snippet(mk_sp(self.last_pos, end));
if missing_snippet.trim() == ";" {
self.push_str(";");
self.last_pos = end;
return;
}
self.format_missing_inner(end, |this, last_snippet, _| this.push_str(last_snippet))
}
pub fn format_missing_with_indent(&mut self, end: BytePos) {
let config = self.config;
self.format_missing_inner(end, |this, last_snippet, snippet| {
this.push_str(last_snippet.trim_right());
if last_snippet == snippet && !this.output_at_start() {
// No new lines in the snippet.
this.push_str("\n");
}
let indent = this.block_indent.to_string(config);
this.push_str(&indent);
})
}
pub fn format_missing_no_indent(&mut self, end: BytePos) {
self.format_missing_inner(end, |this, last_snippet, _| {
this.push_str(last_snippet.trim_right());
})
}
fn format_missing_inner<F: Fn(&mut FmtVisitor, &str, &str)>(
&mut self,
end: BytePos,
process_last_snippet: F,
) {
let start = self.last_pos;
if start == end {
// Do nothing if this is the beginning of the file.
if !self.output_at_start() {
process_last_snippet(self, "", "");
}
return;
}
assert!(
start < end,
"Request to format inverted span: {:?} to {:?}",
self.source_map.lookup_char_pos(start),
self.source_map.lookup_char_pos(end)
);
self.last_pos = end;
let span = mk_sp(start, end);
let snippet = self.snippet(span);
// Do nothing for spaces in the beginning of the file
if start == BytePos(0) && end.0 as usize == snippet.len() && snippet.trim().is_empty() {
return;
}
if snippet.trim().is_empty() && !out_of_file_lines_range!(self, span) {
// Keep vertical spaces within range.
self.push_vertical_spaces(count_newlines(snippet));
process_last_snippet(self, "", snippet);
} else {
self.write_snippet(span, &process_last_snippet);
}
}
fn push_vertical_spaces(&mut self, mut newline_count: usize) {
let offset = self.count_trailing_newlines();
let newline_upper_bound = self.config.blank_lines_upper_bound() + 1;
let newline_lower_bound = self.config.blank_lines_lower_bound() + 1;
if newline_count + offset > newline_upper_bound {
if offset >= newline_upper_bound {
newline_count = 0;
} else {
newline_count = newline_upper_bound - offset;
}
} else if newline_count + offset < newline_lower_bound {
if offset >= newline_lower_bound {
newline_count = 0;
} else {
newline_count = newline_lower_bound - offset;
}
}
let blank_lines = "\n".repeat(newline_count);
self.push_str(&blank_lines);
}
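The branches above clamp the requested blank-line count so that, together with the `offset` newlines already at the end of the buffer, the total lands inside the configured bounds. The arithmetic can be checked in isolation (the bounds in the tests are arbitrary values, not rustfmt defaults):

```rust
// Clamp a requested newline count so that `count + offset` lands inside
// `[lower, upper]`; the same arithmetic as `push_vertical_spaces`.
fn clamp_newlines(count: usize, offset: usize, lower: usize, upper: usize) -> usize {
    if count + offset > upper {
        if offset >= upper { 0 } else { upper - offset }
    } else if count + offset < lower {
        if offset >= lower { 0 } else { lower - offset }
    } else {
        count
    }
}
```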
fn count_trailing_newlines(&self) -> usize {
let mut buf = &*self.buffer;
let mut result = 0;
while buf.ends_with('\n') {
buf = &buf[..buf.len() - 1];
result += 1;
}
result
}
fn write_snippet<F>(&mut self, span: Span, process_last_snippet: F)
where
F: Fn(&mut FmtVisitor, &str, &str),
{
// Get a snippet from the file start to the span's hi without allocating.
// We need it to determine what precedes the current comment. If the comment
// follows code on the same line, we won't touch it.
let big_span_lo = self.source_map.lookup_char_pos(span.lo()).file.start_pos;
let local_begin = self.source_map.lookup_byte_offset(big_span_lo);
let local_end = self.source_map.lookup_byte_offset(span.hi());
let start_index = local_begin.pos.to_usize();
let end_index = local_end.pos.to_usize();
let big_snippet = &local_begin.fm.src.as_ref().unwrap()[start_index..end_index];
let big_diff = (span.lo() - big_span_lo).to_usize();
let snippet = self.snippet(span);
debug!("write_snippet `{}`", snippet);
self.write_snippet_inner(big_snippet, big_diff, snippet, span, process_last_snippet);
}
fn write_snippet_inner<F>(
&mut self,
big_snippet: &str,
big_diff: usize,
old_snippet: &str,
span: Span,
process_last_snippet: F,
) where
F: Fn(&mut FmtVisitor, &str, &str),
{
// Trim whitespace from the right hand side of each line.
// Annoyingly, the library functions for splitting by lines etc. are not
// quite right, so we must do it ourselves.
let char_pos = self.source_map.lookup_char_pos(span.lo());
let file_name = &char_pos.file.name.clone().into();
let mut status = SnippetStatus::new(char_pos.line);
let snippet = &*match self.config.emit_mode() {
EmitMode::Coverage => Cow::from(replace_chars(old_snippet)),
_ => Cow::from(old_snippet),
};
for (kind, offset, subslice) in CommentCodeSlices::new(snippet) {
debug!("{:?}: {:?}", kind, subslice);
let newline_count = count_newlines(subslice);
let within_file_lines_range = self.config.file_lines().contains_range(
file_name,
status.cur_line,
status.cur_line + newline_count,
);
if CodeCharKind::Comment == kind && within_file_lines_range {
// 1: comment.
self.process_comment(
&mut status,
snippet,
&big_snippet[..(offset + big_diff)],
offset,
subslice,
);
} else if subslice.trim().is_empty() && newline_count > 0 && within_file_lines_range {
// 2: blank lines.
self.push_vertical_spaces(newline_count);
status.cur_line += newline_count;
status.line_start = offset + newline_count;
} else {
// 3: code which we failed to format or which is not within file-lines range.
self.process_missing_code(&mut status, snippet, subslice, offset, file_name);
}
}
process_last_snippet(self, &snippet[status.line_start..], snippet);
}
fn process_comment(
&mut self,
status: &mut SnippetStatus,
snippet: &str,
big_snippet: &str,
offset: usize,
subslice: &str,
) {
let last_char = big_snippet
.chars()
.rev()
.skip_while(|rev_c| [' ', '\t'].contains(rev_c))
.next();
let fix_indent = last_char.map_or(true, |rev_c| ['{', '\n'].contains(&rev_c));
let comment_indent = if fix_indent {
if let Some('{') = last_char {
self.push_str("\n");
}
let indent_str = self.block_indent.to_string(self.config);
self.push_str(&indent_str);
self.block_indent
} else {
self.push_str(" ");
Indent::from_width(self.config, last_line_width(&self.buffer))
};
let comment_width = ::std::cmp::min(
self.config.comment_width(),
self.config.max_width() - self.block_indent.width(),
);
let comment_shape = Shape::legacy(comment_width, comment_indent);
let comment_str = rewrite_comment(subslice, false, comment_shape, self.config)
.unwrap_or_else(|| String::from(subslice));
self.push_str(&comment_str);
status.last_wspace = None;
status.line_start = offset + subslice.len();
if let Some('/') = subslice.chars().nth(1) {
// check that there are no contained block comments
if !subslice
.split('\n')
.map(|s| s.trim_left())
.any(|s| s.len() >= 2 && &s[0..2] == "/*")
{
// Add a newline after line comments
self.push_str("\n");
}
} else if status.line_start <= snippet.len() {
// For other comments add a newline if there isn't one at the end already
match snippet[status.line_start..].chars().next() {
Some('\n') | Some('\r') => (),
_ => self.push_str("\n"),
}
}
status.cur_line += count_newlines(subslice);
}
fn process_missing_code(
&mut self,
status: &mut SnippetStatus,
snippet: &str,
subslice: &str,
offset: usize,
file_name: &FileName,
) {
for (mut i, c) in subslice.char_indices() {
i += offset;
if c == '\n' {
let skip_this_line = !self
.config
.file_lines()
.contains_line(file_name, status.cur_line);
if skip_this_line {
status.last_wspace = None;
}
if let Some(lw) = status.last_wspace {
self.push_str(&snippet[status.line_start..lw]);
self.push_str("\n");
status.last_wspace = None;
} else {
self.push_str(&snippet[status.line_start..i + 1]);
}
status.cur_line += 1;
status.line_start = i + 1;
} else if c.is_whitespace() && status.last_wspace.is_none() {
status.last_wspace = Some(i);
} else if c == ';' && status.last_wspace.is_some() {
status.line_start = i;
status.last_wspace = None;
} else {
status.last_wspace = None;
}
}
let remaining = snippet[status.line_start..subslice.len() + offset].trim();
if !remaining.is_empty() {
self.push_str(remaining);
status.line_start = subslice.len() + offset;
}
}
}
fn replace_chars(string: &str) -> String {
string
.chars()
.map(|ch| if ch.is_whitespace() { ch } else { 'X' })
.collect()
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/macros.rs
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Format list-like macro invocations. These are invocations whose token trees
// can be interpreted as expressions and separated by commas.
// Note that these token trees do not actually have to be interpreted as
// expressions by the compiler. An example of an invocation we would reformat is
// foo!( x, y, z ). The token x may represent an identifier in the code, but we
// interpret it as an expression.
// Macro uses which are not list-like, such as bar!(key => val), will not be
// reformatted.
// List-like invocations with parentheses will be formatted as function calls,
// and those with brackets will be formatted as array literals.
use std::collections::HashMap;
use config::lists::*;
use syntax::parse::new_parser_from_tts;
use syntax::parse::parser::Parser;
use syntax::parse::token::{BinOpToken, DelimToken, Token};
use syntax::print::pprust;
use syntax::source_map::{BytePos, Span};
use syntax::symbol;
use syntax::tokenstream::{Cursor, ThinTokenStream, TokenStream, TokenTree};
use syntax::ThinVec;
use syntax::{ast, ptr};
use comment::{
contains_comment, remove_trailing_white_spaces, CharClasses, FindUncommented, FullCodeCharKind,
LineClasses,
};
use expr::rewrite_array;
use lists::{itemize_list, write_list, ListFormatting};
use overflow;
use rewrite::{Rewrite, RewriteContext};
use shape::{Indent, Shape};
use source_map::SpanUtils;
use spanned::Spanned;
use utils::{format_visibility, mk_sp, rewrite_ident, wrap_str};
const FORCED_BRACKET_MACROS: &[&str] = &["vec!"];
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum MacroPosition {
Item,
Statement,
Expression,
Pat,
}
#[derive(Debug)]
pub enum MacroArg {
Expr(ptr::P<ast::Expr>),
Ty(ptr::P<ast::Ty>),
Pat(ptr::P<ast::Pat>),
Item(ptr::P<ast::Item>),
}
impl Rewrite for ast::Item {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
let mut visitor = ::visitor::FmtVisitor::from_context(context);
visitor.block_indent = shape.indent;
visitor.last_pos = self.span().lo();
visitor.visit_item(self);
Some(visitor.buffer)
}
}
impl Rewrite for MacroArg {
fn rewrite(&self, context: &RewriteContext, shape: Shape) -> Option<String> {
match *self {
MacroArg::Expr(ref expr) => expr.rewrite(context, shape),
MacroArg::Ty(ref ty) => ty.rewrite(context, shape),
MacroArg::Pat(ref pat) => pat.rewrite(context, shape),
MacroArg::Item(ref item) => item.rewrite(context, shape),
}
}
}
fn parse_macro_arg(parser: &mut Parser) -> Option<MacroArg> {
macro_rules! parse_macro_arg {
($macro_arg:ident, $parser:ident, $f:expr) => {
let mut cloned_parser = (*parser).clone();
match cloned_parser.$parser() {
Ok(x) => {
if parser.sess.span_diagnostic.has_errors() {
parser.sess.span_diagnostic.reset_err_count();
} else {
// Parsing succeeded.
*parser = cloned_parser;
return Some(MacroArg::$macro_arg($f(x)?));
}
}
Err(mut e) => {
e.cancel();
parser.sess.span_diagnostic.reset_err_count();
}
}
};
}
parse_macro_arg!(Expr, parse_expr, |x: ptr::P<ast::Expr>| Some(x));
parse_macro_arg!(Ty, parse_ty, |x: ptr::P<ast::Ty>| Some(x));
parse_macro_arg!(Pat, parse_pat, |x: ptr::P<ast::Pat>| Some(x));
// `parse_item` returns `Option<ptr::P<ast::Item>>`.
parse_macro_arg!(Item, parse_item, |x: Option<ptr::P<ast::Item>>| x);
None
}
/// Rewrite macro name without using pretty-printer if possible.
fn rewrite_macro_name(
context: &RewriteContext,
path: &ast::Path,
extra_ident: Option<ast::Ident>,
) -> String {
let name = if path.segments.len() == 1 {
// Avoid using pretty-printer in the common case.
format!("{}!", rewrite_ident(context, path.segments[0].ident))
} else {
format!("{}!", path)
};
match extra_ident {
Some(ident) if ident != symbol::keywords::Invalid.ident() => format!("{} {}", name, ident),
_ => name,
}
}
// Use this when formatting the macro call fails.
fn return_original_snippet_with_failure_marked(
context: &RewriteContext,
span: Span,
) -> Option<String> {
context.macro_rewrite_failure.replace(true);
Some(context.snippet(span).to_owned())
}
struct InsideMacroGuard<'a> {
context: &'a RewriteContext<'a>,
is_nested: bool,
}
impl<'a> InsideMacroGuard<'a> {
fn inside_macro_context(context: &'a RewriteContext) -> InsideMacroGuard<'a> {
let is_nested = context.inside_macro.replace(true);
InsideMacroGuard { context, is_nested }
}
}
impl<'a> Drop for InsideMacroGuard<'a> {
fn drop(&mut self) {
self.context.inside_macro.replace(self.is_nested);
}
}
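`InsideMacroGuard` is a small RAII guard: it records the previous flag value on construction and restores it in `Drop`, so nested macro rewrites and early returns both unwind the flag correctly. The pattern in isolation, using `Cell` rather than the context's `RefCell` (`FlagGuard` is a hypothetical name):

```rust
use std::cell::Cell;

// RAII guard that sets a flag and restores the previous value on drop,
// the same shape as `InsideMacroGuard`.
struct FlagGuard<'a> {
    flag: &'a Cell<bool>,
    previous: bool,
}

impl<'a> FlagGuard<'a> {
    fn set(flag: &'a Cell<bool>) -> FlagGuard<'a> {
        // `replace` returns the old value, which we stash for restoration.
        let previous = flag.replace(true);
        FlagGuard { flag, previous }
    }
}

impl<'a> Drop for FlagGuard<'a> {
    fn drop(&mut self) {
        // Restore whatever value was in place before this guard.
        self.flag.set(self.previous);
    }
}
```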
pub fn rewrite_macro(
mac: &ast::Mac,
extra_ident: Option<ast::Ident>,
context: &RewriteContext,
shape: Shape,
position: MacroPosition,
) -> Option<String> {
let guard = InsideMacroGuard::inside_macro_context(context);
let result = rewrite_macro_inner(mac, extra_ident, context, shape, position, guard.is_nested);
if result.is_none() {
context.macro_rewrite_failure.replace(true);
}
result
}
pub fn rewrite_macro_inner(
mac: &ast::Mac,
extra_ident: Option<ast::Ident>,
context: &RewriteContext,
shape: Shape,
position: MacroPosition,
is_nested_macro: bool,
) -> Option<String> {
if context.config.use_try_shorthand() {
if let Some(expr) = convert_try_mac(mac, context) {
context.inside_macro.replace(false);
return expr.rewrite(context, shape);
}
}
let original_style = macro_style(mac, context);
let macro_name = rewrite_macro_name(context, &mac.node.path, extra_ident);
let style = if FORCED_BRACKET_MACROS.contains(&&macro_name[..]) && !is_nested_macro {
DelimToken::Bracket
} else {
original_style
};
let ts: TokenStream = mac.node.stream();
let has_comment = contains_comment(context.snippet(mac.span));
if ts.is_empty() && !has_comment {
return match style {
DelimToken::Paren if position == MacroPosition::Item => {
Some(format!("{}();", macro_name))
}
DelimToken::Paren => Some(format!("{}()", macro_name)),
DelimToken::Bracket => Some(format!("{}[]", macro_name)),
DelimToken::Brace => Some(format!("{}{{}}", macro_name)),
_ => unreachable!(),
};
}
// Format well-known macros which cannot be parsed as a valid AST.
if macro_name == "lazy_static!" && !has_comment {
if let success @ Some(..) = format_lazy_static(context, shape, &ts) {
return success;
}
}
let mut parser = new_parser_from_tts(context.parse_session, ts.trees().collect());
let mut arg_vec = Vec::new();
let mut vec_with_semi = false;
let mut trailing_comma = false;
if DelimToken::Brace != style {
loop {
match parse_macro_arg(&mut parser) {
Some(arg) => arg_vec.push(arg),
None => return return_original_snippet_with_failure_marked(context, mac.span),
}
match parser.token {
Token::Eof => break,
Token::Comma => (),
Token::Semi => {
// Try to parse `vec![expr; expr]`
if FORCED_BRACKET_MACROS.contains(&&macro_name[..]) {
parser.bump();
if parser.token != Token::Eof {
match parse_macro_arg(&mut parser) {
Some(arg) => {
arg_vec.push(arg);
parser.bump();
if parser.token == Token::Eof && arg_vec.len() == 2 {
vec_with_semi = true;
break;
}
}
None => {
return return_original_snippet_with_failure_marked(
context, mac.span,
)
}
}
}
}
return return_original_snippet_with_failure_marked(context, mac.span);
}
_ => return return_original_snippet_with_failure_marked(context, mac.span),
}
parser.bump();
if parser.token == Token::Eof {
trailing_comma = true;
break;
}
}
}
match style {
DelimToken::Paren => {
// Format macro invocation as function call, preserve the trailing
// comma because not all macros support them.
overflow::rewrite_with_parens(
context,
&macro_name,
&arg_vec.iter().map(|e| &*e).collect::<Vec<_>>(),
shape,
mac.span,
context.config.width_heuristics().fn_call_width,
if trailing_comma {
Some(SeparatorTactic::Always)
} else {
Some(SeparatorTactic::Never)
},
).map(|rw| match position {
MacroPosition::Item => format!("{};", rw),
_ => rw,
})
}
DelimToken::Bracket => {
// Handle special case: `vec![expr; expr]`
if vec_with_semi {
let mac_shape = shape.offset_left(macro_name.len())?;
// 8 = `vec![]` + `; `
let total_overhead = 8;
let nested_shape = mac_shape.block_indent(context.config.tab_spaces());
let lhs = arg_vec[0].rewrite(context, nested_shape)?;
let rhs = arg_vec[1].rewrite(context, nested_shape)?;
if !lhs.contains('\n')
&& !rhs.contains('\n')
&& lhs.len() + rhs.len() + total_overhead <= shape.width
{
Some(format!("{}[{}; {}]", macro_name, lhs, rhs))
} else {
Some(format!(
"{}[{}{};{}{}{}]",
macro_name,
nested_shape.indent.to_string_with_newline(context.config),
lhs,
nested_shape.indent.to_string_with_newline(context.config),
rhs,
shape.indent.to_string_with_newline(context.config),
))
}
} else {
// If we are rewriting the `vec!` macro or another special macro,
// then we can rewrite this as a usual array literal.
// Otherwise, we must preserve whether the original had a trailing comma.
let macro_name = &macro_name.as_str();
let mut force_trailing_comma = if trailing_comma {
Some(SeparatorTactic::Always)
} else {
Some(SeparatorTactic::Never)
};
if FORCED_BRACKET_MACROS.contains(macro_name) && !is_nested_macro {
context.inside_macro.replace(false);
if context.use_block_indent() {
force_trailing_comma = Some(SeparatorTactic::Vertical);
};
}
// Convert `MacroArg` into `ast::Expr`, as `rewrite_array` only accepts the latter.
let arg_vec = &arg_vec.iter().map(|e| &*e).collect::<Vec<_>>();
let rewrite = rewrite_array(
macro_name,
arg_vec,
mac.span,
context,
shape,
force_trailing_comma,
Some(original_style),
)?;
let comma = match position {
MacroPosition::Item => ";",
_ => "",
};
Some(format!("{}{}", rewrite, comma))
}
}
DelimToken::Brace => {
// Skip macro invocations with braces, for now.
indent_macro_snippet(context, context.snippet(mac.span), shape.indent)
}
_ => unreachable!(),
}
}
pub fn rewrite_macro_def(
context: &RewriteContext,
shape: Shape,
indent: Indent,
def: &ast::MacroDef,
ident: ast::Ident,
vis: &ast::Visibility,
span: Span,
) -> Option<String> {
let snippet = Some(remove_trailing_white_spaces(context.snippet(span)));
if snippet.as_ref().map_or(true, |s| s.ends_with(';')) {
return snippet;
}
let mut parser = MacroParser::new(def.stream().into_trees());
let parsed_def = match parser.parse() {
Some(def) => def,
None => return snippet,
};
let mut result = if def.legacy {
String::from("macro_rules!")
} else {
format!("{}macro", format_visibility(context, vis))
};
result += " ";
result += rewrite_ident(context, ident);
let multi_branch_style = def.legacy || parsed_def.branches.len() != 1;
let arm_shape = if multi_branch_style {
shape
.block_indent(context.config.tab_spaces())
.with_max_width(context.config)
} else {
shape
};
let branch_items = itemize_list(
context.snippet_provider,
parsed_def.branches.iter(),
"}",
";",
|branch| branch.span.lo(),
|branch| branch.span.hi(),
|branch| branch.rewrite(context, arm_shape, multi_branch_style),
context.snippet_provider.span_after(span, "{"),
span.hi(),
false,
).collect::<Vec<_>>();
let fmt = ListFormatting::new(arm_shape, context.config)
.separator(if def.legacy { ";" } else { "" })
.trailing_separator(SeparatorTactic::Always)
.preserve_newline(true);
if multi_branch_style {
result += " {";
result += &arm_shape.indent.to_string_with_newline(context.config);
}
match write_list(&branch_items, &fmt) {
Some(ref s) => result += s,
None => return snippet,
}
if multi_branch_style {
result += &indent.to_string_with_newline(context.config);
result += "}";
}
Some(result)
}
fn register_metavariable(
map: &mut HashMap<String, String>,
result: &mut String,
name: &str,
dollar_count: usize,
) {
let mut new_name = String::new();
let mut old_name = String::new();
old_name.push('$');
for _ in 0..(dollar_count - 1) {
new_name.push('$');
old_name.push('$');
}
new_name.push('z');
new_name.push_str(&name);
old_name.push_str(&name);
result.push_str(&new_name);
map.insert(old_name, new_name);
}
// Replaces `$foo` with `zfoo`. We must check for name overlap to ensure we
// aren't causing problems.
// This should also work for escaped `$` variables, where we leave earlier `$`s.
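// Illustrative example (not from the original comments): the body
// `$foo + $$bar` becomes `zfoo + $zbar`, with the returned map holding
// `"$foo" -> "zfoo"` and `"$$bar" -> "$zbar"`.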
fn replace_names(input: &str) -> Option<(String, HashMap<String, String>)> {
// Each substitution will require five or six extra bytes.
let mut result = String::with_capacity(input.len() + 64);
let mut substs = HashMap::new();
let mut dollar_count = 0;
let mut cur_name = String::new();
for (kind, c) in CharClasses::new(input.chars()) {
if kind != FullCodeCharKind::Normal {
result.push(c);
} else if c == '$' {
dollar_count += 1;
} else if dollar_count == 0 {
result.push(c);
} else if !c.is_alphanumeric() && !cur_name.is_empty() {
// Terminates a name following one or more dollars.
register_metavariable(&mut substs, &mut result, &cur_name, dollar_count);
result.push(c);
dollar_count = 0;
cur_name.clear();
} else if c == '(' && cur_name.is_empty() {
// FIXME: Support macro def with repeat.
return None;
} else if c.is_alphanumeric() || c == '_' {
cur_name.push(c);
}
}
if !cur_name.is_empty() {
register_metavariable(&mut substs, &mut result, &cur_name, dollar_count);
}
debug!("replace_names `{}` {:?}", result, substs);
Some((result, substs))
}
#[derive(Debug, Clone)]
enum MacroArgKind {
/// e.g. `$x: expr`.
MetaVariable(ast::Ident, String),
/// e.g. `$($foo: expr),*`
Repeat(
/// `()`, `[]` or `{}`.
DelimToken,
/// Inner arguments inside delimiters.
Vec<ParsedMacroArg>,
/// Something after the closing delimiter and the repeat token, if available.
Option<Box<ParsedMacroArg>>,
/// The repeat token. This could be one of `*`, `+` or `?`.
Token,
),
/// e.g. `[derive(Debug)]`
Delimited(DelimToken, Vec<ParsedMacroArg>),
/// A possible separator. e.g. `,` or `;`.
Separator(String, String),
/// Other random stuff that does not fit to other kinds.
/// e.g. `== foo` in `($x: expr == foo)`.
Other(String, String),
}
fn delim_token_to_str(
context: &RewriteContext,
delim_token: &DelimToken,
shape: Shape,
use_multiple_lines: bool,
inner_is_empty: bool,
) -> (String, String) {
let (lhs, rhs) = match *delim_token {
DelimToken::Paren => ("(", ")"),
DelimToken::Bracket => ("[", "]"),
DelimToken::Brace => {
if inner_is_empty || use_multiple_lines {
("{", "}")
} else {
("{ ", " }")
}
}
DelimToken::NoDelim => ("", ""),
};
if use_multiple_lines {
let indent_str = shape.indent.to_string_with_newline(context.config);
let nested_indent_str = shape
.indent
.block_indent(context.config)
.to_string_with_newline(context.config);
(
format!("{}{}", lhs, nested_indent_str),
format!("{}{}", indent_str, rhs),
)
} else {
(lhs.to_owned(), rhs.to_owned())
}
}
impl MacroArgKind {
fn starts_with_brace(&self) -> bool {
match *self {
MacroArgKind::Repeat(DelimToken::Brace, _, _, _)
| MacroArgKind::Delimited(DelimToken::Brace, _) => true,
_ => false,
}
}
fn starts_with_dollar(&self) -> bool {
match *self {
MacroArgKind::Repeat(..) | MacroArgKind::MetaVariable(..) => true,
_ => false,
}
}
fn ends_with_space(&self) -> bool {
match *self {
MacroArgKind::Separator(..) => true,
_ => false,
}
}
fn has_meta_var(&self) -> bool {
match *self {
MacroArgKind::MetaVariable(..) => true,
MacroArgKind::Repeat(_, ref args, _, _) => args.iter().any(|a| a.kind.has_meta_var()),
_ => false,
}
}
fn rewrite(
&self,
context: &RewriteContext,
shape: Shape,
use_multiple_lines: bool,
) -> Option<String> {
let rewrite_delimited_inner = |delim_tok, args| -> Option<(String, String, String)> {
let inner = wrap_macro_args(context, args, shape)?;
let (lhs, rhs) = delim_token_to_str(context, delim_tok, shape, false, inner.is_empty());
if lhs.len() + inner.len() + rhs.len() <= shape.width {
return Some((lhs, inner, rhs));
}
let (lhs, rhs) = delim_token_to_str(context, delim_tok, shape, true, false);
let nested_shape = shape
.block_indent(context.config.tab_spaces())
.with_max_width(context.config);
let inner = wrap_macro_args(context, args, nested_shape)?;
Some((lhs, inner, rhs))
};
match *self {
MacroArgKind::MetaVariable(ty, ref name) => {
Some(format!("${}:{}", name, ty.name.as_str()))
}
MacroArgKind::Repeat(ref delim_tok, ref args, ref another, ref tok) => {
let (lhs, inner, rhs) = rewrite_delimited_inner(delim_tok, args)?;
let another = another
.as_ref()
.and_then(|a| a.rewrite(context, shape, use_multiple_lines))
.unwrap_or_else(|| "".to_owned());
let repeat_tok = pprust::token_to_string(tok);
Some(format!("${}{}{}{}{}", lhs, inner, rhs, another, repeat_tok))
}
MacroArgKind::Delimited(ref delim_tok, ref args) => {
rewrite_delimited_inner(delim_tok, args)
.map(|(lhs, inner, rhs)| format!("{}{}{}", lhs, inner, rhs))
}
MacroArgKind::Separator(ref sep, ref prefix) => Some(format!("{}{} ", prefix, sep)),
MacroArgKind::Other(ref inner, ref prefix) => Some(format!("{}{}", prefix, inner)),
}
}
}
#[derive(Debug, Clone)]
struct ParsedMacroArg {
kind: MacroArgKind,
span: Span,
}
impl ParsedMacroArg {
pub fn rewrite(
&self,
context: &RewriteContext,
shape: Shape,
use_multiple_lines: bool,
) -> Option<String> {
self.kind.rewrite(context, shape, use_multiple_lines)
}
}
/// Parses the arguments of a macro def.
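/// E.g. the matcher `$x: expr, $($y: expr),*` is parsed into a
/// `MetaVariable`, a `Separator`, and a `Repeat` (an illustrative sketch
/// of the output shape, not an exhaustive description).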
struct MacroArgParser {
/// Holds either the name of the next metavariable, a separator, or junk.
buf: String,
/// The start position on the current buffer.
lo: BytePos,
/// The first token of the current buffer.
start_tok: Token,
/// Set to true if we are parsing a metavariable or a repeat.
is_meta_var: bool,
/// The position of the last token.
hi: BytePos,
/// The last token parsed.
last_tok: Token,
/// Holds the parsed arguments.
result: Vec<ParsedMacroArg>,
}
fn last_tok(tt: &TokenTree) -> Token {
match *tt {
TokenTree::Token(_, ref t) => t.clone(),
TokenTree::Delimited(_, ref d) => d.close_token(),
}
}
impl MacroArgParser {
pub fn new() -> MacroArgParser {
MacroArgParser {
lo: BytePos(0),
hi: BytePos(0),
buf: String::new(),
is_meta_var: false,
last_tok: Token::Eof,
start_tok: Token::Eof,
result: vec![],
}
}
fn set_last_tok(&mut self, tok: &TokenTree) {
self.hi = tok.span().hi();
self.last_tok = last_tok(tok);
}
fn add_separator(&mut self) {
let prefix = if self.need_space_prefix() {
" ".to_owned()
} else {
"".to_owned()
};
self.result.push(ParsedMacroArg {
kind: MacroArgKind::Separator(self.buf.clone(), prefix),
span: mk_sp(self.lo, self.hi),
});
self.buf.clear();
}
fn add_other(&mut self) {
let prefix = if self.need_space_prefix() {
" ".to_owned()
} else {
"".to_owned()
};
self.result.push(ParsedMacroArg {
kind: MacroArgKind::Other(self.buf.clone(), prefix),
span: mk_sp(self.lo, self.hi),
});
self.buf.clear();
}
fn add_meta_variable(&mut self, iter: &mut Cursor) -> Option<()> {
match iter.next() {
Some(TokenTree::Token(sp, Token::Ident(ref ident, _))) => {
self.result.push(ParsedMacroArg {
kind: MacroArgKind::MetaVariable(*ident, self.buf.clone()),
span: mk_sp(self.lo, sp.hi()),
});
self.buf.clear();
self.is_meta_var = false;
Some(())
}
_ => None,
}
}
fn add_delimited(&mut self, inner: Vec<ParsedMacroArg>, delim: DelimToken, span: Span) {
self.result.push(ParsedMacroArg {
kind: MacroArgKind::Delimited(delim, inner),
span,
});
}
// $($foo: expr),?
fn add_repeat(
&mut self,
inner: Vec<ParsedMacroArg>,
delim: DelimToken,
iter: &mut Cursor,
span: Span,
) -> Option<()> {
let mut buffer = String::new();
let mut first = false;
let mut lo = span.lo();
let mut hi = span.hi();
// Parse '*', '+' or '?'.
for ref tok in iter {
self.set_last_tok(tok);
if first {
first = false;
lo = tok.span().lo();
}
match tok {
TokenTree::Token(_, Token::BinOp(BinOpToken::Plus))
| TokenTree::Token(_, Token::Question)
| TokenTree::Token(_, Token::BinOp(BinOpToken::Star)) => {
break;
}
TokenTree::Token(sp, ref t) => {
buffer.push_str(&pprust::token_to_string(t));
hi = sp.hi();
}
_ => return None,
}
}
// There could be some random stuff between ')' and '*', '+' or '?'.
let another = if buffer.trim().is_empty() {
None
} else {
Some(Box::new(ParsedMacroArg {
kind: MacroArgKind::Other(buffer, "".to_owned()),
span: mk_sp(lo, hi),
}))
};
self.result.push(ParsedMacroArg {
kind: MacroArgKind::Repeat(delim, inner, another, self.last_tok.clone()),
span: mk_sp(self.lo, self.hi),
});
Some(())
}
fn update_buffer(&mut self, lo: BytePos, t: &Token) {
if self.buf.is_empty() {
self.lo = lo;
self.start_tok = t.clone();
} else {
let needs_space = match next_space(&self.last_tok) {
SpaceState::Ident => ident_like(t),
SpaceState::Punctuation => !ident_like(t),
SpaceState::Always => true,
SpaceState::Never => false,
};
if force_space_before(t) || needs_space {
self.buf.push(' ');
}
}
self.buf.push_str(&pprust::token_to_string(t));
}
fn need_space_prefix(&self) -> bool {
if self.result.is_empty() {
return false;
}
let last_arg = self.result.last().unwrap();
if let MacroArgKind::MetaVariable(..) = last_arg.kind {
if ident_like(&self.start_tok) {
return true;
}
if self.start_tok == Token::Colon {
return true;
}
}
if force_space_before(&self.start_tok) {
return true;
}
false
}
/// Returns a collection of parsed macro def's arguments.
pub fn parse(mut self, tokens: ThinTokenStream) -> Option<Vec<ParsedMacroArg>> {
let mut iter = (tokens.into(): TokenStream).trees();
while let Some(ref tok) = iter.next() {
match tok {
TokenTree::Token(sp, Token::Dollar) => {
// We always want to add a separator before meta variables.
if !self.buf.is_empty() {
self.add_separator();
}
// Start keeping the name of this metavariable in the buffer.
self.is_meta_var = true;
self.lo = sp.lo();
self.start_tok = Token::Dollar;
}
TokenTree::Token(_, Token::Colon) if self.is_meta_var => {
self.add_meta_variable(&mut iter)?;
}
TokenTree::Token(sp, ref t) => self.update_buffer(sp.lo(), t),
TokenTree::Delimited(sp, delimited) => {
if !self.buf.is_empty() {
if next_space(&self.last_tok) == SpaceState::Always {
self.add_separator();
} else {
self.add_other();
}
}
// Parse the stuff inside delimiters.
let mut parser = MacroArgParser::new();
parser.lo = sp.lo();
let delimited_arg = parser.parse(delimited.tts.clone())?;
if self.is_meta_var {
self.add_repeat(delimited_arg, delimited.delim, &mut iter, *sp)?;
self.is_meta_var = false;
} else {
self.add_delimited(delimited_arg, delimited.delim, *sp);
}
}
}
self.set_last_tok(tok);
}
// We are left with some stuff in the buffer. Since there is nothing
// left to separate, add this as `Other`.
if !self.buf.is_empty() {
self.add_other();
}
Some(self.result)
}
}
fn wrap_macro_args(
context: &RewriteContext,
args: &[ParsedMacroArg],
shape: Shape,
) -> Option<String> {
wrap_macro_args_inner(context, args, shape, false)
.or_else(|| wrap_macro_args_inner(context, args, shape, true))
}
fn wrap_macro_args_inner(
context: &RewriteContext,
args: &[ParsedMacroArg],
shape: Shape,
use_multiple_lines: bool,
) -> Option<String> {
let mut result = String::with_capacity(128);
let mut iter = args.iter().peekable();
let indent_str = shape.indent.to_string_with_newline(context.config);
while let Some(ref arg) = iter.next() {
result.push_str(&arg.rewrite(context, shape, use_multiple_lines)?);
if use_multiple_lines
&& (arg.kind.ends_with_space() || iter.peek().map_or(false, |a| a.kind.has_meta_var()))
{
if arg.kind.ends_with_space() {
result.pop();
}
result.push_str(&indent_str);
} else if let Some(ref next_arg) = iter.peek() {
let space_before_dollar =
!arg.kind.ends_with_space() && next_arg.kind.starts_with_dollar();
let space_before_brace = next_arg.kind.starts_with_brace();
if space_before_dollar || space_before_brace {
result.push(' ');
}
}
}
if !use_multiple_lines && result.len() >= shape.width {
None
} else {
Some(result)
}
}
// This is a bit sketchy. The token rules probably need tweaking, but it works
// for some common cases. I hope the basic logic is sufficient. Note that the
// meaning of some tokens is a bit different here from usual Rust, e.g., `*`
// and `(`/`)` have special meaning.
//
// We always try and format on one line.
// FIXME: Use multi-line when everything does not fit on one line.
fn format_macro_args(
context: &RewriteContext,
toks: ThinTokenStream,
shape: Shape,
) -> Option<String> {
if !context.config.format_macro_matchers() {
let token_stream: TokenStream = toks.into();
let span = span_for_token_stream(token_stream);
return Some(match span {
Some(span) => context.snippet(span).to_owned(),
None => String::new(),
});
}
let parsed_args = MacroArgParser::new().parse(toks)?;
wrap_macro_args(context, &parsed_args, shape)
}
fn span_for_token_stream(token_stream: TokenStream) -> Option<Span> {
token_stream.trees().next().map(|tt| tt.span())
}
// We should insert a space if the next token is a:
#[derive(Copy, Clone, PartialEq)]
enum SpaceState {
Never,
Punctuation,
Ident, // Or ident/literal-like thing.
Always,
}
fn force_space_before(tok: &Token) -> bool {
debug!("tok: force_space_before {:?}", tok);
match *tok {
Token::Eq
| Token::Lt
| Token::Le
| Token::EqEq
| Token::Ne
| Token::Ge
| Token::Gt
| Token::AndAnd
| Token::OrOr
| Token::Not
| Token::Tilde
| Token::BinOpEq(_)
| Token::At
| Token::RArrow
| Token::LArrow
| Token::FatArrow
| Token::BinOp(_)
| Token::Pound
| Token::Dollar => true,
_ => false,
}
}
fn ident_like(tok: &Token) -> bool {
match *tok {
Token::Ident(..) | Token::Literal(..) | Token::Lifetime(_) => true,
_ => false,
}
}
fn next_space(tok: &Token) -> SpaceState {
debug!("next_space: {:?}", tok);
match *tok {
Token::Not
| Token::BinOp(BinOpToken::And)
| Token::Tilde
| Token::At
| Token::Comma
| Token::Dot
| Token::DotDot
| Token::DotDotDot
| Token::DotDotEq
| Token::DotEq
| Token::Question => SpaceState::Punctuation,
Token::ModSep
| Token::Pound
| Token::Dollar
| Token::OpenDelim(_)
| Token::CloseDelim(_)
| Token::Whitespace => SpaceState::Never,
Token::Literal(..) | Token::Ident(..) | Token::Lifetime(_) => SpaceState::Ident,
_ => SpaceState::Always,
}
}
/// Tries to convert a macro use into a short hand try expression. Returns None
/// when the macro is not an instance of try! (or parsing the inner expression
/// failed).
pub fn convert_try_mac(mac: &ast::Mac, context: &RewriteContext) -> Option<ast::Expr> {
if &format!("{}", mac.node.path) == "try" {
let ts: TokenStream = mac.node.tts.clone().into();
let mut parser = new_parser_from_tts(context.parse_session, ts.trees().collect());
Some(ast::Expr {
id: ast::NodeId::new(0), // dummy value
node: ast::ExprKind::Try(parser.parse_expr().ok()?),
span: mac.span, // incorrect span, but shouldn't matter too much
attrs: ThinVec::new(),
})
} else {
None
}
}
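// Illustrative note (not part of the original comments): the delimiter is
// inferred from the source snippet, so `assert!(..)` yields
// `DelimToken::Paren`, `vec![..]` yields `DelimToken::Bracket`, and
// `lazy_static! { .. }` yields `DelimToken::Brace`.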
fn macro_style(mac: &ast::Mac, context: &RewriteContext) -> DelimToken {
let snippet = context.snippet(mac.span);
let paren_pos = snippet.find_uncommented("(").unwrap_or(usize::max_value());
let bracket_pos = snippet.find_uncommented("[").unwrap_or(usize::max_value());
let brace_pos = snippet.find_uncommented("{").unwrap_or(usize::max_value());
if paren_pos < bracket_pos && paren_pos < brace_pos {
DelimToken::Paren
} else if bracket_pos < brace_pos {
DelimToken::Bracket
} else {
DelimToken::Brace
}
}
/// Indent each line according to the specified `indent`.
/// e.g.
///
/// ```rust,ignore
/// foo!{
/// x,
/// y,
/// foo(
/// a,
/// b,
/// c,
/// ),
/// }
/// ```
///
/// will become
///
/// ```rust,ignore
/// foo!{
/// x,
/// y,
/// foo(
/// a,
/// b,
/// c,
/// ),
/// }
/// ```
fn indent_macro_snippet(
context: &RewriteContext,
macro_str: &str,
indent: Indent,
) -> Option<String> {
let mut lines = LineClasses::new(macro_str);
let first_line = lines.next().map(|(_, s)| s.trim_right().to_owned())?;
let mut trimmed_lines = Vec::with_capacity(16);
let mut veto_trim = false;
let min_prefix_space_width = lines
.filter_map(|(kind, line)| {
let mut trimmed = true;
let prefix_space_width = if is_empty_line(&line) {
None
} else {
Some(get_prefix_space_width(context, &line))
};
let line = if veto_trim || (kind.is_string() && !line.ends_with('\\')) {
veto_trim = kind.is_string() && !line.ends_with('\\');
trimmed = false;
line
} else {
line.trim().to_owned()
};
trimmed_lines.push((trimmed, line, prefix_space_width));
prefix_space_width
}).min()?;
Some(
first_line + "\n" + &trimmed_lines
.iter()
.map(
|&(trimmed, ref line, prefix_space_width)| match prefix_space_width {
_ if !trimmed => line.to_owned(),
Some(original_indent_width) => {
let new_indent_width = indent.width() + original_indent_width
.saturating_sub(min_prefix_space_width);
let new_indent = Indent::from_width(context.config, new_indent_width);
format!("{}{}", new_indent.to_string(context.config), line.trim())
}
None => String::new(),
},
).collect::<Vec<_>>()
.join("\n"),
)
}
fn get_prefix_space_width(context: &RewriteContext, s: &str) -> usize {
let mut width = 0;
for c in s.chars() {
match c {
' ' => width += 1,
'\t' => width += context.config.tab_spaces(),
_ => return width,
}
}
width
}
fn is_empty_line(s: &str) -> bool {
s.is_empty() || s.chars().all(char::is_whitespace)
}
// A very simple parser that just parses a macros 2.0 definition into its branches.
// Currently we do not attempt to parse any further than that.
#[derive(new)]
struct MacroParser {
toks: Cursor,
}
impl MacroParser {
// (`(` ... `)` `=>` `{` ... `}`)*
fn parse(&mut self) -> Option<Macro> {
let mut branches = vec![];
while self.toks.look_ahead(1).is_some() {
branches.push(self.parse_branch()?);
}
Some(Macro { branches })
}
// `(` ... `)` `=>` `{` ... `}`
fn parse_branch(&mut self) -> Option<MacroBranch> {
let tok = self.toks.next()?;
let (lo, args_paren_kind) = match tok {
TokenTree::Token(..) => return None,
TokenTree::Delimited(sp, ref d) => (sp.lo(), d.delim),
};
let args = tok.joint().into();
match self.toks.next()? {
TokenTree::Token(_, Token::FatArrow) => {}
_ => return None,
}
let (mut hi, body, whole_body) = match self.toks.next()? {
TokenTree::Token(..) => return None,
TokenTree::Delimited(sp, _) => {
let data = sp.data();
(
data.hi,
Span::new(data.lo + BytePos(1), data.hi - BytePos(1), data.ctxt),
sp,
)
}
};
if let Some(TokenTree::Token(sp, Token::Semi)) = self.toks.look_ahead(0) {
self.toks.next();
hi = sp.hi();
}
Some(MacroBranch {
span: mk_sp(lo, hi),
args_paren_kind,
args,
body,
whole_body,
})
}
}
// A parsed macros 2.0 macro definition.
struct Macro {
branches: Vec<MacroBranch>,
}
// FIXME: it would be more efficient to use references to the token streams
// rather than clone them, if we can make the borrowing work out.
struct MacroBranch {
span: Span,
args_paren_kind: DelimToken,
args: ThinTokenStream,
body: Span,
whole_body: Span,
}
impl MacroBranch {
fn rewrite(
&self,
context: &RewriteContext,
shape: Shape,
multi_branch_style: bool,
) -> Option<String> {
// Only attempt to format function-like macros.
if self.args_paren_kind != DelimToken::Paren {
// FIXME(#1539): implement for non-sugared macros.
return None;
}
// 5 = " => {"
let mut result = format_macro_args(context, self.args.clone(), shape.sub_width(5)?)?;
if multi_branch_style {
result += " =>";
}
if !context.config.format_macro_bodies() {
result += " ";
result += context.snippet(self.whole_body);
return Some(result);
}
// The macro body is the most interesting part. It might end up as various
// AST nodes, but also has special variables (e.g., `$foo`) which can't be
// parsed as regular Rust code (and note that these can be escaped using
// `$$`). We'll try and format like an AST node, but we'll substitute
// variables for new names with the same length first.
let old_body = context.snippet(self.body).trim();
let (body_str, substs) = replace_names(old_body)?;
let has_block_body = old_body.starts_with('{');
let mut config = context.config.clone();
config.set().hide_parse_errors(true);
result += " {";
let body_indent = if has_block_body {
shape.indent
} else {
shape.indent.block_indent(&config)
};
let new_width = config.max_width() - body_indent.width();
config.set().max_width(new_width);
// First try to format as items, then as statements.
let new_body = match ::format_snippet(&body_str, &config) {
Some(new_body) => new_body,
None => {
let new_width = new_width + config.tab_spaces();
config.set().max_width(new_width);
match ::format_code_block(&body_str, &config) {
Some(new_body) => new_body,
None => return None,
}
}
};
let new_body = wrap_str(new_body, config.max_width(), shape)?;
// Indent the body since it is in a block.
let indent_str = body_indent.to_string(&config);
let mut new_body = LineClasses::new(new_body.trim_right())
.fold(
(String::new(), true),
|(mut s, need_indent), (kind, ref l)| {
if !l.is_empty() && need_indent {
s += &indent_str;
}
(s + l + "\n", !kind.is_string() || l.ends_with('\\'))
},
).0;
// Undo our replacement of macro variables.
// FIXME: this could be *much* more efficient.
for (old, new) in &substs {
if old_body.find(new).is_some() {
debug!("rewrite_macro_def: bailing matching variable: `{}`", new);
return None;
}
new_body = new_body.replace(new, old);
}
if has_block_body {
result += new_body.trim();
} else if !new_body.is_empty() {
result += "\n";
result += &new_body;
result += &shape.indent.to_string(&config);
}
result += "}";
Some(result)
}
}
/// Format `lazy_static!` from https://crates.io/crates/lazy_static.
///
/// # Expected syntax
///
/// ```ignore
/// lazy_static! {
/// [pub] static ref NAME_1: TYPE_1 = EXPR_1;
/// [pub] static ref NAME_2: TYPE_2 = EXPR_2;
/// ...
/// [pub] static ref NAME_N: TYPE_N = EXPR_N;
/// }
/// ```
fn format_lazy_static(context: &RewriteContext, shape: Shape, ts: &TokenStream) -> Option<String> {
let mut result = String::with_capacity(1024);
let mut parser = new_parser_from_tts(context.parse_session, ts.trees().collect());
let nested_shape = shape
.block_indent(context.config.tab_spaces())
.with_max_width(context.config);
result.push_str("lazy_static! {");
result.push_str(&nested_shape.indent.to_string_with_newline(context.config));
macro parse_or($method:ident $(,)* $($arg:expr),* $(,)*) {
match parser.$method($($arg,)*) {
Ok(val) => {
if parser.sess.span_diagnostic.has_errors() {
parser.sess.span_diagnostic.reset_err_count();
return None;
} else {
val
}
}
Err(mut err) => {
err.cancel();
parser.sess.span_diagnostic.reset_err_count();
return None;
}
}
}
while parser.token != Token::Eof {
// Parse a `lazy_static!` item.
let vis = ::utils::format_visibility(context, &parse_or!(parse_visibility, false));
parser.eat_keyword(symbol::keywords::Static);
parser.eat_keyword(symbol::keywords::Ref);
let id = parse_or!(parse_ident);
parser.eat(&Token::Colon);
let ty = parse_or!(parse_ty);
parser.eat(&Token::Eq);
let expr = parse_or!(parse_expr);
parser.eat(&Token::Semi);
// Rewrite as a static item.
let mut stmt = String::with_capacity(128);
stmt.push_str(&format!(
"{}static ref {}: {} =",
vis,
id,
ty.rewrite(context, nested_shape)?
));
result.push_str(&::expr::rewrite_assign_rhs(
context,
stmt,
&*expr,
nested_shape.sub_width(1)?,
)?);
result.push(';');
if parser.token != Token::Eof {
result.push_str(&nested_shape.indent.to_string_with_newline(context.config));
}
}
result.push_str(&shape.indent.to_string_with_newline(context.config));
result.push('}');
Some(result)
}
// ===========================================================================
// wasm/rustfmt/rustfmt-nightly/src/shape.rs
// ===========================================================================
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::borrow::Cow;
use std::cmp::min;
use std::ops::{Add, Sub};
use Config;
#[derive(Copy, Clone, Debug)]
pub struct Indent {
// Width of the block indent, in characters. Must be a multiple of
// Config::tab_spaces.
pub block_indent: usize,
// Alignment in characters.
pub alignment: usize,
}
// INDENT_BUFFER.len() = 81
const INDENT_BUFFER_LEN: usize = 80;
const INDENT_BUFFER: &str =
"\n ";
impl Indent {
pub fn new(block_indent: usize, alignment: usize) -> Indent {
Indent {
block_indent,
alignment,
}
}
pub fn from_width(config: &Config, width: usize) -> Indent {
if config.hard_tabs() {
let tab_num = width / config.tab_spaces();
let alignment = width % config.tab_spaces();
Indent::new(config.tab_spaces() * tab_num, alignment)
} else {
Indent::new(width, 0)
}
}
pub fn empty() -> Indent {
Indent::new(0, 0)
}
pub fn block_only(&self) -> Indent {
Indent {
block_indent: self.block_indent,
alignment: 0,
}
}
pub fn block_indent(mut self, config: &Config) -> Indent {
self.block_indent += config.tab_spaces();
self
}
pub fn block_unindent(mut self, config: &Config) -> Indent {
if self.block_indent < config.tab_spaces() {
Indent::new(self.block_indent, 0)
} else {
self.block_indent -= config.tab_spaces();
self
}
}
pub fn width(&self) -> usize {
self.block_indent + self.alignment
}
pub fn to_string(&self, config: &Config) -> Cow<'static, str> {
self.to_string_inner(config, 1)
}
pub fn to_string_with_newline(&self, config: &Config) -> Cow<'static, str> {
self.to_string_inner(config, 0)
}
fn to_string_inner(&self, config: &Config, offset: usize) -> Cow<'static, str> {
let (num_tabs, num_spaces) = if config.hard_tabs() {
(self.block_indent / config.tab_spaces(), self.alignment)
} else {
(0, self.width())
};
let num_chars = num_tabs + num_spaces;
if num_tabs == 0 && num_chars + offset <= INDENT_BUFFER_LEN {
Cow::from(&INDENT_BUFFER[offset..num_chars + 1])
} else {
let mut indent = String::with_capacity(num_chars + if offset == 0 { 1 } else { 0 });
if offset == 0 {
indent.push('\n');
}
for _ in 0..num_tabs {
indent.push('\t')
}
for _ in 0..num_spaces {
indent.push(' ')
}
Cow::from(indent)
}
}
}
impl Add for Indent {
type Output = Indent;
fn add(self, rhs: Indent) -> Indent {
Indent {
block_indent: self.block_indent + rhs.block_indent,
alignment: self.alignment + rhs.alignment,
}
}
}
impl Sub for Indent {
type Output = Indent;
fn sub(self, rhs: Indent) -> Indent {
Indent::new(
self.block_indent - rhs.block_indent,
self.alignment - rhs.alignment,
)
}
}
impl Add<usize> for Indent {
type Output = Indent;
fn add(self, rhs: usize) -> Indent {
Indent::new(self.block_indent, self.alignment + rhs)
}
}
impl Sub<usize> for Indent {
type Output = Indent;
fn sub(self, rhs: usize) -> Indent {
Indent::new(self.block_indent, self.alignment - rhs)
}
}
#[derive(Copy, Clone, Debug)]
pub struct Shape {
pub width: usize,
// The current indentation of code.
pub indent: Indent,
// Indentation + any already emitted text on the first line of the current
// statement.
pub offset: usize,
}
impl Shape {
/// `indent` is the indentation of the first line. The next lines
/// should begin with at least `indent` spaces (except backwards
/// indentation). The first line should not begin with indentation.
/// `width` is the maximum number of characters on the last line
/// (excluding `indent`). The width of other lines is not limited by
/// `width`.
/// Note that in reality, we sometimes use width for lines other than the
/// last (i.e., we are conservative).
// .......*-------*
// | |
// | *-*
// *-----|
// |<------------>| max width
// |<---->| indent
// |<--->| width
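    // A worked example with hypothetical numbers: Shape::indented(
    // Indent::new(8, 4), &config) with max_width = 100 yields
    // width = 100 - 12 = 88, since the indent occupies 12 columns
    // (8 block indent + 4 alignment), leaving 88 for the last line.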
pub fn legacy(width: usize, indent: Indent) -> Shape {
Shape {
width,
indent,
offset: indent.alignment,
}
}
pub fn indented(indent: Indent, config: &Config) -> Shape {
Shape {
width: config.max_width().saturating_sub(indent.width()),
indent,
offset: indent.alignment,
}
}
pub fn with_max_width(&self, config: &Config) -> Shape {
Shape {
width: config.max_width().saturating_sub(self.indent.width()),
..*self
}
}
pub fn visual_indent(&self, extra_width: usize) -> Shape {
let alignment = self.offset + extra_width;
Shape {
width: self.width,
indent: Indent::new(self.indent.block_indent, alignment),
offset: alignment,
}
}
pub fn block_indent(&self, extra_width: usize) -> Shape {
if self.indent.alignment == 0 {
Shape {
width: self.width,
indent: Indent::new(self.indent.block_indent + extra_width, 0),
offset: 0,
}
} else {
Shape {
width: self.width,
indent: self.indent + extra_width,
offset: self.indent.alignment + extra_width,
}
}
}
pub fn block_left(&self, width: usize) -> Option<Shape> {
self.block_indent(width).sub_width(width)
}
pub fn add_offset(&self, extra_width: usize) -> Shape {
Shape {
offset: self.offset + extra_width,
..*self
}
}
pub fn block(&self) -> Shape {
Shape {
indent: self.indent.block_only(),
..*self
}
}
pub fn sub_width(&self, width: usize) -> Option<Shape> {
Some(Shape {
width: self.width.checked_sub(width)?,
..*self
})
}
pub fn shrink_left(&self, width: usize) -> Option<Shape> {
Some(Shape {
width: self.width.checked_sub(width)?,
indent: self.indent + width,
offset: self.offset + width,
})
}
pub fn offset_left(&self, width: usize) -> Option<Shape> {
self.add_offset(width).sub_width(width)
}
pub fn used_width(&self) -> usize {
self.indent.block_indent + self.offset
}
pub fn rhs_overhead(&self, config: &Config) -> usize {
config
.max_width()
.saturating_sub(self.used_width() + self.width)
}
pub fn comment(&self, config: &Config) -> Shape {
let width = min(
self.width,
config.comment_width().saturating_sub(self.indent.width()),
);
Shape { width, ..*self }
}
pub fn to_string_with_newline(&self, config: &Config) -> Cow<'static, str> {
let mut offset_indent = self.indent;
offset_indent.alignment = self.offset;
offset_indent.to_string_inner(config, 0)
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn indent_add_sub() {
let indent = Indent::new(4, 8) + Indent::new(8, 12);
assert_eq!(12, indent.block_indent);
assert_eq!(20, indent.alignment);
let indent = indent - Indent::new(4, 4);
assert_eq!(8, indent.block_indent);
assert_eq!(16, indent.alignment);
}
#[test]
fn indent_add_sub_alignment() {
let indent = Indent::new(4, 8) + 4;
assert_eq!(4, indent.block_indent);
assert_eq!(12, indent.alignment);
let indent = indent - 4;
assert_eq!(4, indent.block_indent);
assert_eq!(8, indent.alignment);
}
#[test]
fn indent_to_string_spaces() {
let config = Config::default();
let indent = Indent::new(4, 8);
// 12 spaces
assert_eq!(" ", indent.to_string(&config));
}
#[test]
fn indent_to_string_hard_tabs() {
let mut config = Config::default();
config.set().hard_tabs(true);
let indent = Indent::new(8, 4);
// 2 tabs + 4 spaces
assert_eq!("\t\t ", indent.to_string(&config));
}
#[test]
fn shape_visual_indent() {
let config = Config::default();
let indent = Indent::new(4, 8);
let shape = Shape::legacy(config.max_width(), indent);
let shape = shape.visual_indent(20);
assert_eq!(config.max_width(), shape.width);
assert_eq!(4, shape.indent.block_indent);
assert_eq!(28, shape.indent.alignment);
assert_eq!(28, shape.offset);
}
#[test]
fn shape_block_indent_without_alignment() {
let config = Config::default();
let indent = Indent::new(4, 0);
let shape = Shape::legacy(config.max_width(), indent);
let shape = shape.block_indent(20);
assert_eq!(config.max_width(), shape.width);
assert_eq!(24, shape.indent.block_indent);
assert_eq!(0, shape.indent.alignment);
assert_eq!(0, shape.offset);
}
#[test]
fn shape_block_indent_with_alignment() {
let config = Config::default();
let indent = Indent::new(4, 8);
let shape = Shape::legacy(config.max_width(), indent);
let shape = shape.block_indent(20);
assert_eq!(config.max_width(), shape.width);
assert_eq!(4, shape.indent.block_indent);
assert_eq!(28, shape.indent.alignment);
assert_eq!(28, shape.offset);
}
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/rustfmt_diff.rs
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use config::{Color, Config, Verbosity};
use diff;
use std::collections::VecDeque;
use std::io;
use std::io::Write;
#[derive(Debug, PartialEq)]
pub enum DiffLine {
Context(String),
Expected(String),
Resulting(String),
}
#[derive(Debug, PartialEq)]
pub struct Mismatch {
/// The line number in the formatted version.
pub line_number: u32,
/// The line number in the original version.
pub line_number_orig: u32,
/// The set of lines (context and old/new) in the mismatch.
pub lines: Vec<DiffLine>,
}
impl Mismatch {
fn new(line_number: u32, line_number_orig: u32) -> Mismatch {
Mismatch {
line_number,
line_number_orig,
lines: Vec::new(),
}
}
}
// This struct handles writing output to stdout and abstracts away the logic
// of printing in color, if it's possible in the executing environment.
pub struct OutputWriter {
terminal: Option<Box<term::Terminal<Output = io::Stdout>>>,
}
impl OutputWriter {
// Create a new OutputWriter instance based on the caller's preference
// for colorized output and the capabilities of the terminal.
pub fn new(color: Color) -> Self {
if let Some(t) = term::stdout() {
if color.use_colored_tty() && t.supports_color() {
return OutputWriter { terminal: Some(t) };
}
}
OutputWriter { terminal: None }
}
// Write output in the optionally specified color. The output is written
// in the specified color if this OutputWriter instance contains a
// Terminal in its `terminal` field.
pub fn writeln(&mut self, msg: &str, color: Option<term::color::Color>) {
match &mut self.terminal {
Some(ref mut t) => {
if let Some(color) = color {
t.fg(color).unwrap();
}
writeln!(t, "{}", msg).unwrap();
if color.is_some() {
t.reset().unwrap();
}
}
None => println!("{}", msg),
}
}
}
// Produces a diff between the expected output and actual output of rustfmt.
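// For example (see the `diff_simple` test below), diffing
// "one\ntwo\nthree\n" against "one\ntwo\ntrois\n" with one line of context
// yields a single Mismatch starting at line 2, containing
// Context("two"), Resulting("three"), Expected("trois").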
pub fn make_diff(expected: &str, actual: &str, context_size: usize) -> Vec<Mismatch> {
let mut line_number = 1;
let mut line_number_orig = 1;
let mut context_queue: VecDeque<&str> = VecDeque::with_capacity(context_size);
let mut lines_since_mismatch = context_size + 1;
let mut results = Vec::new();
let mut mismatch = Mismatch::new(0, 0);
for result in diff::lines(expected, actual) {
match result {
diff::Result::Left(str) => {
if lines_since_mismatch >= context_size && lines_since_mismatch > 0 {
results.push(mismatch);
mismatch = Mismatch::new(
line_number - context_queue.len() as u32,
line_number_orig - context_queue.len() as u32,
);
}
while let Some(line) = context_queue.pop_front() {
mismatch.lines.push(DiffLine::Context(line.to_owned()));
}
mismatch.lines.push(DiffLine::Resulting(str.to_owned()));
line_number_orig += 1;
lines_since_mismatch = 0;
}
diff::Result::Right(str) => {
if lines_since_mismatch >= context_size && lines_since_mismatch > 0 {
results.push(mismatch);
mismatch = Mismatch::new(
line_number - context_queue.len() as u32,
line_number_orig - context_queue.len() as u32,
);
}
while let Some(line) = context_queue.pop_front() {
mismatch.lines.push(DiffLine::Context(line.to_owned()));
}
mismatch.lines.push(DiffLine::Expected(str.to_owned()));
line_number += 1;
lines_since_mismatch = 0;
}
diff::Result::Both(str, _) => {
if context_queue.len() >= context_size {
let _ = context_queue.pop_front();
}
if lines_since_mismatch < context_size {
mismatch.lines.push(DiffLine::Context(str.to_owned()));
} else if context_size > 0 {
context_queue.push_back(str);
}
line_number += 1;
line_number_orig += 1;
lines_since_mismatch += 1;
}
}
}
results.push(mismatch);
results.remove(0);
results
}
pub fn print_diff<F>(diff: Vec<Mismatch>, get_section_title: F, config: &Config)
where
F: Fn(u32) -> String,
{
let color = config.color();
let line_terminator = if config.verbose() == Verbosity::Verbose {
"⏎"
} else {
""
};
let mut writer = OutputWriter::new(color);
for mismatch in diff {
let title = get_section_title(mismatch.line_number);
writer.writeln(&title, None);
for line in mismatch.lines {
match line {
DiffLine::Context(ref str) => {
writer.writeln(&format!(" {}{}", str, line_terminator), None)
}
DiffLine::Expected(ref str) => writer.writeln(
&format!("+{}{}", str, line_terminator),
Some(term::color::GREEN),
),
DiffLine::Resulting(ref str) => writer.writeln(
&format!("-{}{}", str, line_terminator),
Some(term::color::RED),
),
}
}
}
}
/// Convert a Mismatch into a serialised form which just includes
/// enough information to modify the original file.
/// Each section starts with a line with three integers, space separated:
/// lineno num_removed num_added
/// followed by (num_added) lines of added text. The line numbers are
/// relative to the original file.
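/// For example (illustrative), a mismatch at original line 3 that replaces
/// one line with the single line "trois" is emitted as:
///
///     3 1 1
///     trois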
pub fn output_modified<W>(mut out: W, diff: Vec<Mismatch>)
where
W: Write,
{
for mismatch in diff {
let (num_removed, num_added) =
mismatch
.lines
.iter()
.fold((0, 0), |(rem, add), line| match *line {
DiffLine::Context(_) => panic!("No Context expected"),
DiffLine::Expected(_) => (rem, add + 1),
DiffLine::Resulting(_) => (rem + 1, add),
});
// Write a header with enough information to separate the modified lines.
writeln!(
out,
"{} {} {}",
mismatch.line_number_orig, num_removed, num_added
).unwrap();
for line in mismatch.lines {
match line {
DiffLine::Context(_) | DiffLine::Resulting(_) => (),
DiffLine::Expected(ref str) => {
writeln!(out, "{}", str).unwrap();
}
}
}
}
}
#[cfg(test)]
mod test {
use super::DiffLine::*;
use super::{make_diff, Mismatch};
#[test]
fn diff_simple() {
let src = "one\ntwo\nthree\nfour\nfive\n";
let dest = "one\ntwo\ntrois\nfour\nfive\n";
let diff = make_diff(src, dest, 1);
assert_eq!(
diff,
vec![Mismatch {
line_number: 2,
line_number_orig: 2,
lines: vec![
Context("two".to_owned()),
Resulting("three".to_owned()),
Expected("trois".to_owned()),
Context("four".to_owned()),
],
}]
);
}
#[test]
fn diff_simple2() {
let src = "one\ntwo\nthree\nfour\nfive\nsix\nseven\n";
let dest = "one\ntwo\ntrois\nfour\ncinq\nsix\nseven\n";
let diff = make_diff(src, dest, 1);
assert_eq!(
diff,
vec![
Mismatch {
line_number: 2,
line_number_orig: 2,
lines: vec![
Context("two".to_owned()),
Resulting("three".to_owned()),
Expected("trois".to_owned()),
Context("four".to_owned()),
],
},
Mismatch {
line_number: 5,
line_number_orig: 5,
lines: vec![
Resulting("five".to_owned()),
Expected("cinq".to_owned()),
Context("six".to_owned()),
],
},
]
);
}
#[test]
fn diff_zerocontext() {
let src = "one\ntwo\nthree\nfour\nfive\n";
let dest = "one\ntwo\ntrois\nfour\nfive\n";
let diff = make_diff(src, dest, 0);
assert_eq!(
diff,
vec![Mismatch {
line_number: 3,
line_number_orig: 3,
lines: vec![Resulting("three".to_owned()), Expected("trois".to_owned())],
}]
);
}
#[test]
fn diff_trailing_newline() {
let src = "one\ntwo\nthree\nfour\nfive";
let dest = "one\ntwo\nthree\nfour\nfive\n";
let diff = make_diff(src, dest, 1);
assert_eq!(
diff,
vec![Mismatch {
line_number: 5,
line_number_orig: 5,
lines: vec![Context("five".to_owned()), Expected("".to_owned())],
}]
);
}
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/reorder.rs
|
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Reorder items.
//!
//! `mod`, `extern crate` and `use` declarations are reordered in alphabetical
//! order. Trait items are reordered in a pre-determined order (associated
//! types and constants come before methods).
// FIXME(#2455): Reorder trait items.
use config::Config;
use syntax::{ast, attr, source_map::Span};
use attr::filter_inline_attrs;
use comment::combine_strs_with_missing_comments;
use imports::{merge_use_trees, UseTree};
use items::{is_mod_decl, rewrite_extern_crate, rewrite_mod};
use lists::{itemize_list, write_list, ListFormatting, ListItem};
use rewrite::{Rewrite, RewriteContext};
use shape::Shape;
use source_map::LineRangeUtils;
use spanned::Spanned;
use utils::mk_sp;
use visitor::FmtVisitor;
use std::cmp::{Ord, Ordering};
/// Choose the ordering between the given two items.
fn compare_items(a: &ast::Item, b: &ast::Item) -> Ordering {
match (&a.node, &b.node) {
(&ast::ItemKind::Mod(..), &ast::ItemKind::Mod(..)) => {
a.ident.name.as_str().cmp(&b.ident.name.as_str())
}
(&ast::ItemKind::ExternCrate(ref a_name), &ast::ItemKind::ExternCrate(ref b_name)) => {
// `extern crate foo as bar;`
// ^^^ Comparing this.
let a_orig_name =
a_name.map_or_else(|| a.ident.name.as_str(), |symbol| symbol.as_str());
let b_orig_name =
b_name.map_or_else(|| b.ident.name.as_str(), |symbol| symbol.as_str());
let result = a_orig_name.cmp(&b_orig_name);
if result != Ordering::Equal {
return result;
}
// `extern crate foo as bar;`
// ^^^ Comparing this.
match (a_name, b_name) {
(Some(..), None) => Ordering::Greater,
(None, Some(..)) => Ordering::Less,
(None, None) => Ordering::Equal,
(Some(..), Some(..)) => a.ident.name.as_str().cmp(&b.ident.name.as_str()),
}
}
_ => unreachable!(),
}
}
fn wrap_reorderable_items(
context: &RewriteContext,
list_items: &[ListItem],
shape: Shape,
) -> Option<String> {
let fmt = ListFormatting::new(shape, context.config).separator("");
write_list(list_items, &fmt)
}
fn rewrite_reorderable_item(
context: &RewriteContext,
item: &ast::Item,
shape: Shape,
) -> Option<String> {
let attrs = filter_inline_attrs(&item.attrs, item.span());
let attrs_str = attrs.rewrite(context, shape)?;
let missed_span = if attrs.is_empty() {
mk_sp(item.span.lo(), item.span.lo())
} else {
mk_sp(attrs.last().unwrap().span.hi(), item.span.lo())
};
let item_str = match item.node {
ast::ItemKind::ExternCrate(..) => rewrite_extern_crate(context, item)?,
ast::ItemKind::Mod(..) => rewrite_mod(context, item),
_ => return None,
};
combine_strs_with_missing_comments(context, &attrs_str, &item_str, missed_span, shape, false)
}
/// Rewrite a list of items with reordering. Every item in `items` must have
/// the same `ast::ItemKind`.
fn rewrite_reorderable_items(
context: &RewriteContext,
reorderable_items: &[&ast::Item],
shape: Shape,
span: Span,
) -> Option<String> {
match reorderable_items[0].node {
// FIXME: Remove duplicated code.
ast::ItemKind::Use(..) => {
let mut normalized_items: Vec<_> = reorderable_items
.iter()
.filter_map(|item| UseTree::from_ast_with_normalization(context, item))
.collect();
let cloned = normalized_items.clone();
// Add comments before merging.
let list_items = itemize_list(
context.snippet_provider,
cloned.iter(),
"",
";",
|item| item.span().lo(),
|item| item.span().hi(),
|_item| Some("".to_owned()),
span.lo(),
span.hi(),
false,
);
for (item, list_item) in normalized_items.iter_mut().zip(list_items) {
item.list_item = Some(list_item.clone());
}
if context.config.merge_imports() {
normalized_items = merge_use_trees(normalized_items);
}
normalized_items.sort();
// 4 = "use ", 1 = ";"
let nested_shape = shape.offset_left(4)?.sub_width(1)?;
let item_vec: Vec<_> = normalized_items
.into_iter()
.map(|use_tree| ListItem {
item: use_tree.rewrite_top_level(context, nested_shape),
..use_tree.list_item.unwrap_or_else(ListItem::empty)
}).collect();
wrap_reorderable_items(context, &item_vec, nested_shape)
}
_ => {
let list_items = itemize_list(
context.snippet_provider,
reorderable_items.iter(),
"",
";",
|item| item.span().lo(),
|item| item.span().hi(),
|item| rewrite_reorderable_item(context, item, shape),
span.lo(),
span.hi(),
false,
);
let mut item_pair_vec: Vec<_> = list_items.zip(reorderable_items.iter()).collect();
item_pair_vec.sort_by(|a, b| compare_items(a.1, b.1));
let item_vec: Vec<_> = item_pair_vec.into_iter().map(|pair| pair.0).collect();
wrap_reorderable_items(context, &item_vec, shape)
}
}
}
fn contains_macro_use_attr(item: &ast::Item) -> bool {
attr::contains_name(&filter_inline_attrs(&item.attrs, item.span()), "macro_use")
}
/// A simplified version of `ast::ItemKind`.
#[derive(Debug, PartialEq, Eq, Copy, Clone)]
enum ReorderableItemKind {
ExternCrate,
Mod,
Use,
    /// An item that cannot be reordered. It either has an unreorderable item
    /// kind or a `macro_use` attribute.
Other,
}
impl ReorderableItemKind {
fn from(item: &ast::Item) -> Self {
match item.node {
_ if contains_macro_use_attr(item) => ReorderableItemKind::Other,
ast::ItemKind::ExternCrate(..) => ReorderableItemKind::ExternCrate,
ast::ItemKind::Mod(..) if is_mod_decl(item) => ReorderableItemKind::Mod,
ast::ItemKind::Use(..) => ReorderableItemKind::Use,
_ => ReorderableItemKind::Other,
}
}
fn is_same_item_kind(&self, item: &ast::Item) -> bool {
ReorderableItemKind::from(item) == *self
}
fn is_reorderable(&self, config: &Config) -> bool {
match *self {
ReorderableItemKind::ExternCrate => config.reorder_imports(),
ReorderableItemKind::Mod => config.reorder_modules(),
ReorderableItemKind::Use => config.reorder_imports(),
ReorderableItemKind::Other => false,
}
}
fn in_group(&self) -> bool {
match *self {
ReorderableItemKind::ExternCrate
| ReorderableItemKind::Mod
| ReorderableItemKind::Use => true,
ReorderableItemKind::Other => false,
}
}
}
impl<'b, 'a: 'b> FmtVisitor<'a> {
    /// Format items with the same item kind and reorder them. If `in_group` is
    /// `true`, items separated by an empty line are not reordered together.
fn walk_reorderable_items(
&mut self,
items: &[&ast::Item],
item_kind: ReorderableItemKind,
in_group: bool,
) -> usize {
let mut last = self.source_map.lookup_line_range(items[0].span());
let item_length = items
.iter()
.take_while(|ppi| {
item_kind.is_same_item_kind(&***ppi)
&& (!in_group || {
let current = self.source_map.lookup_line_range(ppi.span());
let in_same_group = current.lo < last.hi + 2;
last = current;
in_same_group
})
}).count();
let items = &items[..item_length];
let at_least_one_in_file_lines = items
.iter()
.any(|item| !out_of_file_lines_range!(self, item.span));
if at_least_one_in_file_lines && !items.is_empty() {
let lo = items.first().unwrap().span().lo();
let hi = items.last().unwrap().span().hi();
let span = mk_sp(lo, hi);
let rw = rewrite_reorderable_items(&self.get_context(), items, self.shape(), span);
self.push_rewrite(span, rw);
} else {
for item in items {
self.push_rewrite(item.span, None);
}
}
item_length
}
    /// Visit and format the given items. Items are reordered if they are
    /// consecutive and reorderable.
pub fn visit_items_with_reordering(&mut self, mut items: &[&ast::Item]) {
while !items.is_empty() {
// If the next item is a `use`, `extern crate` or `mod`, then extract it and any
// subsequent items that have the same item kind to be reordered within
// `walk_reorderable_items`. Otherwise, just format the next item for output.
let item_kind = ReorderableItemKind::from(items[0]);
if item_kind.is_reorderable(self.config) {
let visited_items_num =
self.walk_reorderable_items(items, item_kind, item_kind.in_group());
let (_, rest) = items.split_at(visited_items_num);
items = rest;
} else {
// Reaching here means items were not reordered. There must be at least
// one item left in `items`, so calling `unwrap()` here is safe.
let (item, rest) = items.split_first().unwrap();
self.visit_item(item);
items = rest;
}
}
}
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/source_file.rs
|
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::fs;
use std::io::{self, Write};
use checkstyle::output_checkstyle_file;
use config::{Config, EmitMode, FileName, Verbosity};
use rustfmt_diff::{make_diff, output_modified, print_diff};
#[cfg(test)]
use formatting::FileRecord;
// Append a newline to the end of each file.
pub fn append_newline(s: &mut String) {
s.push_str("\n");
}
#[cfg(test)]
pub(crate) fn write_all_files<T>(
source_file: &[FileRecord],
out: &mut T,
config: &Config,
) -> Result<(), io::Error>
where
T: Write,
{
if config.emit_mode() == EmitMode::Checkstyle {
write!(out, "{}", ::checkstyle::header())?;
}
for &(ref filename, ref text) in source_file {
write_file(text, filename, out, config)?;
}
if config.emit_mode() == EmitMode::Checkstyle {
write!(out, "{}", ::checkstyle::footer())?;
}
Ok(())
}
pub fn write_file<T>(
formatted_text: &str,
filename: &FileName,
out: &mut T,
config: &Config,
) -> Result<bool, io::Error>
where
T: Write,
{
let filename_to_path = || match *filename {
FileName::Real(ref path) => path,
_ => panic!("cannot format `{}` and emit to files", filename),
};
match config.emit_mode() {
EmitMode::Files if config.make_backup() => {
let filename = filename_to_path();
let ori = fs::read_to_string(filename)?;
if ori != formatted_text {
                // Do a little dance to make writing safer - write to a temp
                // file, rename the original to a .bk, then rename the temp
                // file to the original.
let tmp_name = filename.with_extension("tmp");
let bk_name = filename.with_extension("bk");
fs::write(&tmp_name, formatted_text)?;
fs::rename(filename, bk_name)?;
fs::rename(tmp_name, filename)?;
}
}
EmitMode::Files => {
// Write text directly over original file if there is a diff.
let filename = filename_to_path();
let ori = fs::read_to_string(filename)?;
if ori != formatted_text {
fs::write(filename, formatted_text)?;
}
}
EmitMode::Stdout | EmitMode::Coverage => {
if config.verbose() != Verbosity::Quiet {
println!("{}:\n", filename);
}
write!(out, "{}", formatted_text)?;
}
EmitMode::ModifiedLines => {
let filename = filename_to_path();
let ori = fs::read_to_string(filename)?;
let mismatch = make_diff(&ori, formatted_text, 0);
let has_diff = !mismatch.is_empty();
output_modified(out, mismatch);
return Ok(has_diff);
}
EmitMode::Checkstyle => {
let filename = filename_to_path();
let ori = fs::read_to_string(filename)?;
let diff = make_diff(&ori, formatted_text, 3);
output_checkstyle_file(out, filename, diff)?;
}
EmitMode::Diff => {
let filename = filename_to_path();
let ori = fs::read_to_string(filename)?;
let mismatch = make_diff(&ori, formatted_text, 3);
let has_diff = !mismatch.is_empty();
print_diff(
mismatch,
|line_num| format!("Diff in {} at line {}:", filename.display(), line_num),
config,
);
return Ok(has_diff);
}
}
// when we are not in diff mode, don't indicate differing files
Ok(false)
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/utils.rs
|
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::borrow::Cow;
use rustc_target::spec::abi;
use syntax::ast::{
self, Attribute, CrateSugar, MetaItem, MetaItemKind, NestedMetaItem, NestedMetaItemKind, Path,
Visibility, VisibilityKind,
};
use syntax::ptr;
use syntax::source_map::{BytePos, Span, NO_EXPANSION};
use comment::filter_normal_code;
use rewrite::RewriteContext;
use shape::Shape;
pub const DEPR_SKIP_ANNOTATION: &str = "rustfmt_skip";
pub const SKIP_ANNOTATION: &str = "rustfmt::skip";
pub fn rewrite_ident<'a>(context: &'a RewriteContext, ident: ast::Ident) -> &'a str {
context.snippet(ident.span)
}
// Computes the length of a string's last line, minus offset.
pub fn extra_offset(text: &str, shape: Shape) -> usize {
match text.rfind('\n') {
// 1 for newline character
Some(idx) => text.len().saturating_sub(idx + 1 + shape.used_width()),
None => text.len(),
}
}
pub fn is_same_visibility(a: &Visibility, b: &Visibility) -> bool {
match (&a.node, &b.node) {
(
VisibilityKind::Restricted { path: p, .. },
VisibilityKind::Restricted { path: q, .. },
) => format!("{}", p) == format!("{}", q),
(VisibilityKind::Public, VisibilityKind::Public)
| (VisibilityKind::Inherited, VisibilityKind::Inherited)
| (
VisibilityKind::Crate(CrateSugar::PubCrate),
VisibilityKind::Crate(CrateSugar::PubCrate),
)
| (
VisibilityKind::Crate(CrateSugar::JustCrate),
VisibilityKind::Crate(CrateSugar::JustCrate),
) => true,
_ => false,
}
}
// Uses Cow to avoid allocating in the common cases.
pub fn format_visibility(context: &RewriteContext, vis: &Visibility) -> Cow<'static, str> {
match vis.node {
VisibilityKind::Public => Cow::from("pub "),
VisibilityKind::Inherited => Cow::from(""),
VisibilityKind::Crate(CrateSugar::PubCrate) => Cow::from("pub(crate) "),
VisibilityKind::Crate(CrateSugar::JustCrate) => Cow::from("crate "),
VisibilityKind::Restricted { ref path, .. } => {
let Path { ref segments, .. } = **path;
let mut segments_iter = segments.iter().map(|seg| rewrite_ident(context, seg.ident));
if path.is_global() {
segments_iter
.next()
.expect("Non-global path in pub(restricted)?");
}
let is_keyword = |s: &str| s == "self" || s == "super";
let path = segments_iter.collect::<Vec<_>>().join("::");
let in_str = if is_keyword(&path) { "" } else { "in " };
Cow::from(format!("pub({}{}) ", in_str, path))
}
}
}
#[inline]
pub fn format_async(is_async: ast::IsAsync) -> &'static str {
match is_async {
ast::IsAsync::Async { .. } => "async ",
ast::IsAsync::NotAsync => "",
}
}
#[inline]
pub fn format_constness(constness: ast::Constness) -> &'static str {
match constness {
ast::Constness::Const => "const ",
ast::Constness::NotConst => "",
}
}
#[inline]
pub fn format_defaultness(defaultness: ast::Defaultness) -> &'static str {
match defaultness {
ast::Defaultness::Default => "default ",
ast::Defaultness::Final => "",
}
}
#[inline]
pub fn format_unsafety(unsafety: ast::Unsafety) -> &'static str {
match unsafety {
ast::Unsafety::Unsafe => "unsafe ",
ast::Unsafety::Normal => "",
}
}
#[inline]
pub fn format_auto(is_auto: ast::IsAuto) -> &'static str {
match is_auto {
ast::IsAuto::Yes => "auto ",
ast::IsAuto::No => "",
}
}
#[inline]
pub fn format_mutability(mutability: ast::Mutability) -> &'static str {
match mutability {
ast::Mutability::Mutable => "mut ",
ast::Mutability::Immutable => "",
}
}
#[inline]
pub fn format_abi(abi: abi::Abi, explicit_abi: bool, is_mod: bool) -> Cow<'static, str> {
if abi == abi::Abi::Rust && !is_mod {
Cow::from("")
} else if abi == abi::Abi::C && !explicit_abi {
Cow::from("extern ")
} else {
Cow::from(format!("extern {} ", abi))
}
}
#[inline]
// Transform `Vec<syntax::ptr::P<T>>` into `Vec<&T>`
pub fn ptr_vec_to_ref_vec<T>(vec: &[ptr::P<T>]) -> Vec<&T> {
vec.iter().map(|x| &**x).collect::<Vec<_>>()
}
#[inline]
pub fn filter_attributes(attrs: &[ast::Attribute], style: ast::AttrStyle) -> Vec<ast::Attribute> {
attrs
.iter()
.filter(|a| a.style == style)
.cloned()
.collect::<Vec<_>>()
}
#[inline]
pub fn inner_attributes(attrs: &[ast::Attribute]) -> Vec<ast::Attribute> {
filter_attributes(attrs, ast::AttrStyle::Inner)
}
#[inline]
pub fn outer_attributes(attrs: &[ast::Attribute]) -> Vec<ast::Attribute> {
filter_attributes(attrs, ast::AttrStyle::Outer)
}
#[inline]
pub fn is_single_line(s: &str) -> bool {
s.chars().find(|&c| c == '\n').is_none()
}
#[inline]
pub fn first_line_contains_single_line_comment(s: &str) -> bool {
s.lines().next().map_or(false, |l| l.contains("//"))
}
#[inline]
pub fn last_line_contains_single_line_comment(s: &str) -> bool {
s.lines().last().map_or(false, |l| l.contains("//"))
}
#[inline]
pub fn is_attributes_extendable(attrs_str: &str) -> bool {
!attrs_str.contains('\n') && !last_line_contains_single_line_comment(attrs_str)
}
// The width of the first line in s.
#[inline]
pub fn first_line_width(s: &str) -> usize {
match s.find('\n') {
Some(n) => n,
None => s.len(),
}
}
// The width of the last line in s.
#[inline]
pub fn last_line_width(s: &str) -> usize {
match s.rfind('\n') {
Some(n) => s.len() - n - 1,
None => s.len(),
}
}
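// For example (illustrative): first_line_width("foo\nbar") == 3,
// last_line_width("foo\nbar") == 3, and last_line_width("foo\nba") == 2.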
// The total used width of the last line.
#[inline]
pub fn last_line_used_width(s: &str, offset: usize) -> usize {
if s.contains('\n') {
last_line_width(s)
} else {
offset + s.len()
}
}
#[inline]
pub fn trimmed_last_line_width(s: &str) -> usize {
match s.rfind('\n') {
Some(n) => s[(n + 1)..].trim().len(),
None => s.trim().len(),
}
}
#[inline]
pub fn last_line_extendable(s: &str) -> bool {
if s.ends_with("\"#") {
return true;
}
for c in s.chars().rev() {
match c {
'(' | ')' | ']' | '}' | '?' | '>' => continue,
'\n' => break,
_ if c.is_whitespace() => continue,
_ => return false,
}
}
true
}
#[inline]
fn is_skip(meta_item: &MetaItem) -> bool {
match meta_item.node {
MetaItemKind::Word => {
let path_str = meta_item.ident.to_string();
path_str == SKIP_ANNOTATION || path_str == DEPR_SKIP_ANNOTATION
}
MetaItemKind::List(ref l) => {
meta_item.name() == "cfg_attr" && l.len() == 2 && is_skip_nested(&l[1])
}
_ => false,
}
}
#[inline]
fn is_skip_nested(meta_item: &NestedMetaItem) -> bool {
match meta_item.node {
NestedMetaItemKind::MetaItem(ref mi) => is_skip(mi),
NestedMetaItemKind::Literal(_) => false,
}
}
#[inline]
pub fn contains_skip(attrs: &[Attribute]) -> bool {
attrs
.iter()
.any(|a| a.meta().map_or(false, |a| is_skip(&a)))
}
#[inline]
pub fn semicolon_for_expr(context: &RewriteContext, expr: &ast::Expr) -> bool {
match expr.node {
ast::ExprKind::Ret(..) | ast::ExprKind::Continue(..) | ast::ExprKind::Break(..) => {
context.config.trailing_semicolon()
}
_ => false,
}
}
#[inline]
pub fn semicolon_for_stmt(context: &RewriteContext, stmt: &ast::Stmt) -> bool {
match stmt.node {
ast::StmtKind::Semi(ref expr) => match expr.node {
ast::ExprKind::While(..)
| ast::ExprKind::WhileLet(..)
| ast::ExprKind::Loop(..)
| ast::ExprKind::ForLoop(..) => false,
ast::ExprKind::Break(..) | ast::ExprKind::Continue(..) | ast::ExprKind::Ret(..) => {
context.config.trailing_semicolon()
}
_ => true,
},
ast::StmtKind::Expr(..) => false,
_ => true,
}
}
#[inline]
pub fn stmt_expr(stmt: &ast::Stmt) -> Option<&ast::Expr> {
match stmt.node {
ast::StmtKind::Expr(ref expr) => Some(expr),
_ => None,
}
}
#[inline]
pub fn count_newlines(input: &str) -> usize {
// Using `as_bytes` to omit UTF-8 decoding
input.as_bytes().iter().filter(|&&c| c == b'\n').count()
}
// For format_missing and last_pos, need to use the source callsite (if applicable).
// Required as generated code spans aren't guaranteed to follow on from the last span.
macro_rules! source {
($this:ident, $sp:expr) => {
$sp.source_callsite()
};
}
pub fn mk_sp(lo: BytePos, hi: BytePos) -> Span {
Span::new(lo, hi, NO_EXPANSION)
}
// Return true if the given span does not intersect with file lines.
macro_rules! out_of_file_lines_range {
($self:ident, $span:expr) => {
!$self.config.file_lines().is_all() && !$self
.config
.file_lines()
.intersects(&$self.source_map.lookup_line_range($span))
};
}
macro_rules! skip_out_of_file_lines_range {
($self:ident, $span:expr) => {
if out_of_file_lines_range!($self, $span) {
return None;
}
};
}
macro_rules! skip_out_of_file_lines_range_visitor {
($self:ident, $span:expr) => {
if out_of_file_lines_range!($self, $span) {
$self.push_rewrite($span, None);
return;
}
};
}
// Wraps String in an Option. Returns Some when the string adheres to the
// Rewrite constraints defined for the Rewrite trait and None otherwise.
pub fn wrap_str(s: String, max_width: usize, shape: Shape) -> Option<String> {
if is_valid_str(&filter_normal_code(&s), max_width, shape) {
Some(s)
} else {
None
}
}
fn is_valid_str(snippet: &str, max_width: usize, shape: Shape) -> bool {
if !snippet.is_empty() {
        // The first line must fit within `shape.width`.
if first_line_width(snippet) > shape.width {
return false;
}
// If the snippet does not include newline, we are done.
if first_line_width(snippet) == snippet.len() {
return true;
}
// The other lines must fit within the maximum width.
if snippet.lines().skip(1).any(|line| line.len() > max_width) {
return false;
}
// A special check for the last line, since the caller may
// place trailing characters on this line.
if last_line_width(snippet) > shape.used_width() + shape.width {
return false;
}
}
true
}
#[inline]
pub fn colon_spaces(before: bool, after: bool) -> &'static str {
match (before, after) {
(true, true) => " : ",
(true, false) => " :",
(false, true) => ": ",
(false, false) => ":",
}
}
#[inline]
pub fn left_most_sub_expr(e: &ast::Expr) -> &ast::Expr {
match e.node {
ast::ExprKind::Call(ref e, _)
| ast::ExprKind::Binary(_, ref e, _)
| ast::ExprKind::Cast(ref e, _)
| ast::ExprKind::Type(ref e, _)
| ast::ExprKind::Assign(ref e, _)
| ast::ExprKind::AssignOp(_, ref e, _)
| ast::ExprKind::Field(ref e, _)
| ast::ExprKind::Index(ref e, _)
| ast::ExprKind::Range(Some(ref e), _, _)
| ast::ExprKind::Try(ref e) => left_most_sub_expr(e),
_ => e,
}
}
#[inline]
pub fn starts_with_newline(s: &str) -> bool {
s.starts_with('\n') || s.starts_with("\r\n")
}
#[inline]
pub fn first_line_ends_with(s: &str, c: char) -> bool {
s.lines().next().map_or(false, |l| l.ends_with(c))
}
// ===== solana-playground/wasm/rustfmt/rustfmt-nightly/src/checkstyle.rs =====
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::io::{self, Write};
use std::path::Path;
use rustfmt_diff::{DiffLine, Mismatch};
/// The checkstyle header - should be emitted before the output of Rustfmt.
///
/// Note that emitting checkstyle output is not stable and may be removed in a
/// future version of Rustfmt.
pub fn header() -> String {
let mut xml_heading = String::new();
xml_heading.push_str("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
xml_heading.push_str("\n");
xml_heading.push_str("<checkstyle version=\"4.3\">");
xml_heading
}
/// The checkstyle footer - should be emitted after the output of Rustfmt.
///
/// Note that emitting checkstyle output is not stable and may be removed in a
/// future version of Rustfmt.
pub fn footer() -> String {
"</checkstyle>\n".to_owned()
}
pub fn output_checkstyle_file<T>(
mut writer: T,
filename: &Path,
diff: Vec<Mismatch>,
) -> Result<(), io::Error>
where
T: Write,
{
write!(writer, "<file name=\"{}\">", filename.display())?;
for mismatch in diff {
for line in mismatch.lines {
// Do nothing with `DiffLine::Context` and `DiffLine::Resulting`.
if let DiffLine::Expected(ref str) = line {
let message = xml_escape_str(str);
write!(
writer,
"<error line=\"{}\" severity=\"warning\" message=\"Should be `{}`\" \
/>",
mismatch.line_number, message
)?;
}
}
}
write!(writer, "</file>")?;
Ok(())
}
// Convert special characters into XML entities.
// This is needed for checkstyle output.
fn xml_escape_str(string: &str) -> String {
    let mut out = String::new();
    for c in string.chars() {
        match c {
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&#39;"),
            '&' => out.push_str("&amp;"),
            _ => out.push(c),
        }
    }
    out
}
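// The escaping above matters because diff lines are interpolated into XML
// attribute values. A standalone sketch mirroring the entity mapping (the
// function name here is illustrative, not part of this module's API):

```rust
// Standalone sketch of XML entity escaping as used for checkstyle messages.
fn xml_escape(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    for c in s.chars() {
        match c {
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&#39;"),
            '&' => out.push_str("&amp;"),
            _ => out.push(c),
        }
    }
    out
}
```

// All five characters that are unsafe in attribute values are replaced;
// everything else passes through unchanged.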
// ===== solana-playground/wasm/rustfmt/rustfmt-nightly/src/visitor.rs =====
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use syntax::attr::HasAttrs;
use syntax::parse::ParseSess;
use syntax::source_map::{self, BytePos, Pos, SourceMap, Span};
use syntax::{ast, visit};
use attr::*;
use comment::{CodeCharKind, CommentCodeSlices, FindUncommented};
use config::{BraceStyle, Config};
use items::{
format_impl, format_trait, format_trait_alias, is_mod_decl, is_use_item,
rewrite_associated_impl_type, rewrite_associated_type, rewrite_existential_impl_type,
rewrite_existential_type, rewrite_extern_crate, rewrite_type_alias, FnSig, StaticParts,
StructParts,
};
use macros::{rewrite_macro, rewrite_macro_def, MacroPosition};
use rewrite::{Rewrite, RewriteContext};
use shape::{Indent, Shape};
use source_map::{LineRangeUtils, SpanUtils};
use spanned::Spanned;
use utils::{
self, contains_skip, count_newlines, inner_attributes, mk_sp, ptr_vec_to_ref_vec,
rewrite_ident, DEPR_SKIP_ANNOTATION,
};
use {ErrorKind, FormatReport, FormattingError};
use std::cell::RefCell;
/// Creates a string slice corresponding to the specified span.
pub struct SnippetProvider<'a> {
/// A pointer to the content of the file we are formatting.
big_snippet: &'a str,
/// A position of the start of `big_snippet`, used as an offset.
start_pos: usize,
}
impl<'a> SnippetProvider<'a> {
pub fn span_to_snippet(&self, span: Span) -> Option<&str> {
let start_index = span.lo().to_usize().checked_sub(self.start_pos)?;
let end_index = span.hi().to_usize().checked_sub(self.start_pos)?;
Some(&self.big_snippet[start_index..end_index])
}
pub fn new(start_pos: BytePos, big_snippet: &'a str) -> Self {
let start_pos = start_pos.to_usize();
SnippetProvider {
big_snippet,
start_pos,
}
}
}
pub struct FmtVisitor<'a> {
pub parse_session: &'a ParseSess,
pub source_map: &'a SourceMap,
pub buffer: String,
pub last_pos: BytePos,
// FIXME: use an RAII util or closure for indenting
pub block_indent: Indent,
pub config: &'a Config,
pub is_if_else_block: bool,
pub snippet_provider: &'a SnippetProvider<'a>,
pub line_number: usize,
pub skipped_range: Vec<(usize, usize)>,
pub macro_rewrite_failure: bool,
pub(crate) report: FormatReport,
}
impl<'b, 'a: 'b> FmtVisitor<'a> {
pub fn shape(&self) -> Shape {
Shape::indented(self.block_indent, self.config)
}
fn visit_stmt(&mut self, stmt: &ast::Stmt) {
debug!(
"visit_stmt: {:?} {:?}",
self.source_map.lookup_char_pos(stmt.span.lo()),
self.source_map.lookup_char_pos(stmt.span.hi())
);
match stmt.node {
ast::StmtKind::Item(ref item) => {
self.visit_item(item);
// Handle potential `;` after the item.
self.format_missing(stmt.span.hi());
}
ast::StmtKind::Local(..) | ast::StmtKind::Expr(..) | ast::StmtKind::Semi(..) => {
if contains_skip(get_attrs_from_stmt(stmt)) {
self.push_skipped_with_span(stmt.span());
} else {
let rewrite = stmt.rewrite(&self.get_context(), self.shape());
self.push_rewrite(stmt.span(), rewrite)
}
}
ast::StmtKind::Mac(ref mac) => {
let (ref mac, _macro_style, ref attrs) = **mac;
if self.visit_attrs(attrs, ast::AttrStyle::Outer) {
self.push_skipped_with_span(stmt.span());
} else {
self.visit_mac(mac, None, MacroPosition::Statement);
}
self.format_missing(stmt.span.hi());
}
}
}
pub fn visit_block(
&mut self,
b: &ast::Block,
inner_attrs: Option<&[ast::Attribute]>,
has_braces: bool,
) {
debug!(
"visit_block: {:?} {:?}",
self.source_map.lookup_char_pos(b.span.lo()),
self.source_map.lookup_char_pos(b.span.hi())
);
// Check if this block has braces.
let brace_compensation = BytePos(if has_braces { 1 } else { 0 });
self.last_pos = self.last_pos + brace_compensation;
self.block_indent = self.block_indent.block_indent(self.config);
self.push_str("{");
if let Some(first_stmt) = b.stmts.first() {
let attr_lo = inner_attrs
.and_then(|attrs| inner_attributes(attrs).first().map(|attr| attr.span.lo()))
.or_else(|| {
// Attributes for an item in a statement position
// do not belong to the statement. (rust-lang/rust#34459)
if let ast::StmtKind::Item(ref item) = first_stmt.node {
item.attrs.first()
} else {
first_stmt.attrs().first()
}.and_then(|attr| {
// Some stmts can have embedded attributes.
// e.g. `match { #![attr] ... }`
let attr_lo = attr.span.lo();
if attr_lo < first_stmt.span.lo() {
Some(attr_lo)
} else {
None
}
})
});
let snippet = self.snippet(mk_sp(
self.last_pos,
attr_lo.unwrap_or_else(|| first_stmt.span.lo()),
));
let len = CommentCodeSlices::new(snippet)
.nth(0)
.and_then(|(kind, _, s)| {
if kind == CodeCharKind::Normal {
s.rfind('\n')
} else {
None
}
});
if let Some(len) = len {
self.last_pos = self.last_pos + BytePos::from_usize(len);
}
}
// Format inner attributes if available.
let skip_rewrite = if let Some(attrs) = inner_attrs {
self.visit_attrs(attrs, ast::AttrStyle::Inner)
} else {
false
};
if skip_rewrite {
self.push_rewrite(b.span, None);
self.close_block(false);
self.last_pos = source!(self, b.span).hi();
return;
}
self.walk_block_stmts(b);
if !b.stmts.is_empty() {
if let Some(expr) = utils::stmt_expr(&b.stmts[b.stmts.len() - 1]) {
if utils::semicolon_for_expr(&self.get_context(), expr) {
self.push_str(";");
}
}
}
let mut remove_len = BytePos(0);
if let Some(stmt) = b.stmts.last() {
let snippet = self.snippet(mk_sp(
stmt.span.hi(),
source!(self, b.span).hi() - brace_compensation,
));
let len = CommentCodeSlices::new(snippet)
.last()
.and_then(|(kind, _, s)| {
if kind == CodeCharKind::Normal && s.trim().is_empty() {
Some(s.len())
} else {
None
}
});
if let Some(len) = len {
remove_len = BytePos::from_usize(len);
}
}
let unindent_comment = (self.is_if_else_block && !b.stmts.is_empty()) && {
let end_pos = source!(self, b.span).hi() - brace_compensation - remove_len;
let snippet = self.snippet(mk_sp(self.last_pos, end_pos));
snippet.contains("//") || snippet.contains("/*")
};
// FIXME: we should compress any newlines here to just one
if unindent_comment {
self.block_indent = self.block_indent.block_unindent(self.config);
}
self.format_missing_with_indent(
source!(self, b.span).hi() - brace_compensation - remove_len,
);
if unindent_comment {
self.block_indent = self.block_indent.block_indent(self.config);
}
self.close_block(unindent_comment);
self.last_pos = source!(self, b.span).hi();
}
// FIXME: this is a terrible hack to indent the comments between the last
// item in the block and the closing brace to the block's level.
// The closing brace itself, however, should be indented at a shallower
// level.
fn close_block(&mut self, unindent_comment: bool) {
let total_len = self.buffer.len();
let chars_too_many = if unindent_comment {
0
} else if self.config.hard_tabs() {
1
} else {
self.config.tab_spaces()
};
self.buffer.truncate(total_len - chars_too_many);
self.push_str("}");
self.block_indent = self.block_indent.block_unindent(self.config);
}
// Note that this only gets called for function definitions. Required methods
// on traits do not get handled here.
fn visit_fn(
&mut self,
fk: visit::FnKind,
generics: &ast::Generics,
fd: &ast::FnDecl,
s: Span,
defaultness: ast::Defaultness,
inner_attrs: Option<&[ast::Attribute]>,
) {
let indent = self.block_indent;
let block;
let rewrite = match fk {
visit::FnKind::ItemFn(ident, _, _, b) | visit::FnKind::Method(ident, _, _, b) => {
block = b;
self.rewrite_fn(
indent,
ident,
&FnSig::from_fn_kind(&fk, generics, fd, defaultness),
mk_sp(s.lo(), b.span.lo()),
b,
inner_attrs,
)
}
visit::FnKind::Closure(_) => unreachable!(),
};
if let Some(fn_str) = rewrite {
self.format_missing_with_indent(source!(self, s).lo());
self.push_str(&fn_str);
if let Some(c) = fn_str.chars().last() {
if c == '}' {
self.last_pos = source!(self, block.span).hi();
return;
}
}
} else {
self.format_missing(source!(self, block.span).lo());
}
self.last_pos = source!(self, block.span).lo();
self.visit_block(block, inner_attrs, true)
}
pub fn visit_item(&mut self, item: &ast::Item) {
skip_out_of_file_lines_range_visitor!(self, item.span);
// This is where we bail out if there is a skip attribute. This is only
// complex in the module case. It is complex because the module could be
// in a separate file and there might be attributes in both files, but
// the AST lumps them all together.
let filtered_attrs;
let mut attrs = &item.attrs;
match item.node {
// For use items, skip rewriting attributes. Just check for a skip attribute.
ast::ItemKind::Use(..) => {
if contains_skip(attrs) {
self.push_skipped_with_span(item.span());
return;
}
}
// Module is inline, in this case we treat it like any other item.
_ if !is_mod_decl(item) => {
if self.visit_attrs(&item.attrs, ast::AttrStyle::Outer) {
self.push_skipped_with_span(item.span());
return;
}
}
// Module is not inline, but should be skipped.
ast::ItemKind::Mod(..) if contains_skip(&item.attrs) => {
return;
}
// Module is not inline and should not be skipped. We want
// to process only the attributes in the current file.
ast::ItemKind::Mod(..) => {
filtered_attrs = filter_inline_attrs(&item.attrs, item.span());
// Assert because if we should skip it should be caught by
// the above case.
assert!(!self.visit_attrs(&filtered_attrs, ast::AttrStyle::Outer));
attrs = &filtered_attrs;
}
_ => {
if self.visit_attrs(&item.attrs, ast::AttrStyle::Outer) {
self.push_skipped_with_span(item.span());
return;
}
}
}
match item.node {
ast::ItemKind::Use(ref tree) => self.format_import(item, tree),
ast::ItemKind::Impl(..) => {
let snippet = self.snippet(item.span);
let where_span_end = snippet
.find_uncommented("{")
.map(|x| BytePos(x as u32) + source!(self, item.span).lo());
let rw = format_impl(&self.get_context(), item, self.block_indent, where_span_end);
self.push_rewrite(item.span, rw);
}
ast::ItemKind::Trait(..) => {
let rw = format_trait(&self.get_context(), item, self.block_indent);
self.push_rewrite(item.span, rw);
}
ast::ItemKind::TraitAlias(ref generics, ref generic_bounds) => {
let shape = Shape::indented(self.block_indent, self.config);
let rw = format_trait_alias(
&self.get_context(),
item.ident,
generics,
generic_bounds,
shape,
);
self.push_rewrite(item.span, rw);
}
ast::ItemKind::ExternCrate(_) => {
let rw = rewrite_extern_crate(&self.get_context(), item);
self.push_rewrite(item.span, rw);
}
ast::ItemKind::Struct(..) | ast::ItemKind::Union(..) => {
self.visit_struct(&StructParts::from_item(item));
}
ast::ItemKind::Enum(ref def, ref generics) => {
self.format_missing_with_indent(source!(self, item.span).lo());
self.visit_enum(item.ident, &item.vis, def, generics, item.span);
self.last_pos = source!(self, item.span).hi();
}
ast::ItemKind::Mod(ref module) => {
let is_inline = !is_mod_decl(item);
self.format_missing_with_indent(source!(self, item.span).lo());
self.format_mod(module, &item.vis, item.span, item.ident, attrs, is_inline);
}
ast::ItemKind::Mac(ref mac) => {
self.visit_mac(mac, Some(item.ident), MacroPosition::Item);
}
ast::ItemKind::ForeignMod(ref foreign_mod) => {
self.format_missing_with_indent(source!(self, item.span).lo());
self.format_foreign_mod(foreign_mod, item.span);
}
ast::ItemKind::Static(..) | ast::ItemKind::Const(..) => {
self.visit_static(&StaticParts::from_item(item));
}
ast::ItemKind::Fn(ref decl, fn_header, ref generics, ref body) => {
let inner_attrs = inner_attributes(&item.attrs);
self.visit_fn(
visit::FnKind::ItemFn(item.ident, fn_header, &item.vis, body),
generics,
decl,
item.span,
ast::Defaultness::Final,
Some(&inner_attrs),
)
}
ast::ItemKind::Ty(ref ty, ref generics) => {
let rewrite = rewrite_type_alias(
&self.get_context(),
self.block_indent,
item.ident,
ty,
generics,
&item.vis,
);
self.push_rewrite(item.span, rewrite);
}
ast::ItemKind::Existential(ref generic_bounds, ref generics) => {
let rewrite = rewrite_existential_type(
&self.get_context(),
self.block_indent,
item.ident,
generic_bounds,
generics,
&item.vis,
);
self.push_rewrite(item.span, rewrite);
}
ast::ItemKind::GlobalAsm(..) => {
let snippet = Some(self.snippet(item.span).to_owned());
self.push_rewrite(item.span, snippet);
}
ast::ItemKind::MacroDef(ref def) => {
let rewrite = rewrite_macro_def(
&self.get_context(),
self.shape(),
self.block_indent,
def,
item.ident,
&item.vis,
item.span,
);
self.push_rewrite(item.span, rewrite);
}
}
}
pub fn visit_trait_item(&mut self, ti: &ast::TraitItem) {
skip_out_of_file_lines_range_visitor!(self, ti.span);
if self.visit_attrs(&ti.attrs, ast::AttrStyle::Outer) {
self.push_skipped_with_span(ti.span());
return;
}
match ti.node {
ast::TraitItemKind::Const(..) => self.visit_static(&StaticParts::from_trait_item(ti)),
ast::TraitItemKind::Method(ref sig, None) => {
let indent = self.block_indent;
let rewrite =
self.rewrite_required_fn(indent, ti.ident, sig, &ti.generics, ti.span);
self.push_rewrite(ti.span, rewrite);
}
ast::TraitItemKind::Method(ref sig, Some(ref body)) => {
let inner_attrs = inner_attributes(&ti.attrs);
self.visit_fn(
visit::FnKind::Method(ti.ident, sig, None, body),
&ti.generics,
&sig.decl,
ti.span,
ast::Defaultness::Final,
Some(&inner_attrs),
);
}
ast::TraitItemKind::Type(ref generic_bounds, ref type_default) => {
let rewrite = rewrite_associated_type(
ti.ident,
type_default.as_ref(),
Some(generic_bounds),
&self.get_context(),
self.block_indent,
);
self.push_rewrite(ti.span, rewrite);
}
ast::TraitItemKind::Macro(ref mac) => {
self.visit_mac(mac, Some(ti.ident), MacroPosition::Item);
}
}
}
pub fn visit_impl_item(&mut self, ii: &ast::ImplItem) {
skip_out_of_file_lines_range_visitor!(self, ii.span);
if self.visit_attrs(&ii.attrs, ast::AttrStyle::Outer) {
self.push_skipped_with_span(ii.span());
return;
}
match ii.node {
ast::ImplItemKind::Method(ref sig, ref body) => {
let inner_attrs = inner_attributes(&ii.attrs);
self.visit_fn(
visit::FnKind::Method(ii.ident, sig, Some(&ii.vis), body),
&ii.generics,
&sig.decl,
ii.span,
ii.defaultness,
Some(&inner_attrs),
);
}
ast::ImplItemKind::Const(..) => self.visit_static(&StaticParts::from_impl_item(ii)),
ast::ImplItemKind::Type(ref ty) => {
let rewrite = rewrite_associated_impl_type(
ii.ident,
ii.defaultness,
Some(ty),
&self.get_context(),
self.block_indent,
);
self.push_rewrite(ii.span, rewrite);
}
ast::ImplItemKind::Existential(ref generic_bounds) => {
let rewrite = rewrite_existential_impl_type(
&self.get_context(),
ii.ident,
generic_bounds,
self.block_indent,
);
self.push_rewrite(ii.span, rewrite);
}
ast::ImplItemKind::Macro(ref mac) => {
self.visit_mac(mac, Some(ii.ident), MacroPosition::Item);
}
}
}
fn visit_mac(&mut self, mac: &ast::Mac, ident: Option<ast::Ident>, pos: MacroPosition) {
skip_out_of_file_lines_range_visitor!(self, mac.span);
// 1 = ;
let shape = self.shape().sub_width(1).unwrap();
let rewrite = self.with_context(|ctx| rewrite_macro(mac, ident, ctx, shape, pos));
self.push_rewrite(mac.span, rewrite);
}
pub fn push_str(&mut self, s: &str) {
self.line_number += count_newlines(s);
self.buffer.push_str(s);
}
#[cfg_attr(feature = "cargo-clippy", allow(needless_pass_by_value))]
fn push_rewrite_inner(&mut self, span: Span, rewrite: Option<String>) {
if let Some(ref s) = rewrite {
self.push_str(s);
} else {
let snippet = self.snippet(span);
self.push_str(snippet);
}
self.last_pos = source!(self, span).hi();
}
pub fn push_rewrite(&mut self, span: Span, rewrite: Option<String>) {
self.format_missing_with_indent(source!(self, span).lo());
self.push_rewrite_inner(span, rewrite);
}
pub fn push_skipped_with_span(&mut self, span: Span) {
self.format_missing_with_indent(source!(self, span).lo());
let lo = self.line_number + 1;
self.push_rewrite_inner(span, None);
let hi = self.line_number + 1;
self.skipped_range.push((lo, hi));
}
pub fn from_context(ctx: &'a RewriteContext) -> FmtVisitor<'a> {
FmtVisitor::from_source_map(
ctx.parse_session,
ctx.config,
ctx.snippet_provider,
ctx.report.clone(),
)
}
pub(crate) fn from_source_map(
parse_session: &'a ParseSess,
config: &'a Config,
snippet_provider: &'a SnippetProvider,
report: FormatReport,
) -> FmtVisitor<'a> {
FmtVisitor {
parse_session,
source_map: parse_session.source_map(),
buffer: String::with_capacity(snippet_provider.big_snippet.len() * 2),
last_pos: BytePos(0),
block_indent: Indent::empty(),
config,
is_if_else_block: false,
snippet_provider,
line_number: 0,
skipped_range: vec![],
macro_rewrite_failure: false,
report,
}
}
pub fn opt_snippet(&'b self, span: Span) -> Option<&'a str> {
self.snippet_provider.span_to_snippet(span)
}
pub fn snippet(&'b self, span: Span) -> &'a str {
self.opt_snippet(span).unwrap()
}
// Returns true if we should skip the following item.
pub fn visit_attrs(&mut self, attrs: &[ast::Attribute], style: ast::AttrStyle) -> bool {
for attr in attrs {
if attr.name() == DEPR_SKIP_ANNOTATION {
let file_name = self.source_map.span_to_filename(attr.span).into();
self.report.append(
file_name,
vec![FormattingError::from_span(
&attr.span,
&self.source_map,
ErrorKind::DeprecatedAttr,
)],
);
} else if attr.path.segments[0].ident.to_string() == "rustfmt" {
if attr.path.segments.len() == 1
|| attr.path.segments[1].ident.to_string() != "skip"
{
let file_name = self.source_map.span_to_filename(attr.span).into();
self.report.append(
file_name,
vec![FormattingError::from_span(
&attr.span,
&self.source_map,
ErrorKind::BadAttr,
)],
);
}
}
}
if contains_skip(attrs) {
return true;
}
let attrs: Vec<_> = attrs.iter().filter(|a| a.style == style).cloned().collect();
if attrs.is_empty() {
return false;
}
let rewrite = attrs.rewrite(&self.get_context(), self.shape());
let span = mk_sp(attrs[0].span.lo(), attrs[attrs.len() - 1].span.hi());
self.push_rewrite(span, rewrite);
false
}
fn walk_mod_items(&mut self, m: &ast::Mod) {
self.visit_items_with_reordering(&ptr_vec_to_ref_vec(&m.items));
}
fn walk_stmts(&mut self, stmts: &[ast::Stmt]) {
fn to_stmt_item(stmt: &ast::Stmt) -> Option<&ast::Item> {
match stmt.node {
ast::StmtKind::Item(ref item) => Some(&**item),
_ => None,
}
}
if stmts.is_empty() {
return;
}
// Extract leading `use ...;`.
let items: Vec<_> = stmts
.iter()
.take_while(|stmt| to_stmt_item(stmt).map_or(false, is_use_item))
.filter_map(|stmt| to_stmt_item(stmt))
.collect();
if items.is_empty() {
self.visit_stmt(&stmts[0]);
self.walk_stmts(&stmts[1..]);
} else {
self.visit_items_with_reordering(&items);
self.walk_stmts(&stmts[items.len()..]);
}
}
fn walk_block_stmts(&mut self, b: &ast::Block) {
self.walk_stmts(&b.stmts)
}
fn format_mod(
&mut self,
m: &ast::Mod,
vis: &ast::Visibility,
s: Span,
ident: ast::Ident,
attrs: &[ast::Attribute],
is_internal: bool,
) {
let vis_str = utils::format_visibility(&self.get_context(), vis);
self.push_str(&*vis_str);
self.push_str("mod ");
// Calling `to_owned()` to work around borrow checker.
let ident_str = rewrite_ident(&self.get_context(), ident).to_owned();
self.push_str(&ident_str);
if is_internal {
match self.config.brace_style() {
BraceStyle::AlwaysNextLine => {
let indent_str = self.block_indent.to_string_with_newline(self.config);
self.push_str(&indent_str);
self.push_str("{");
}
_ => self.push_str(" {"),
}
// Hackery to account for the closing }.
let mod_lo = self.snippet_provider.span_after(source!(self, s), "{");
let body_snippet =
self.snippet(mk_sp(mod_lo, source!(self, m.inner).hi() - BytePos(1)));
let body_snippet = body_snippet.trim();
if body_snippet.is_empty() {
self.push_str("}");
} else {
self.last_pos = mod_lo;
self.block_indent = self.block_indent.block_indent(self.config);
self.visit_attrs(attrs, ast::AttrStyle::Inner);
self.walk_mod_items(m);
self.format_missing_with_indent(source!(self, m.inner).hi() - BytePos(1));
self.close_block(false);
}
self.last_pos = source!(self, m.inner).hi();
} else {
self.push_str(";");
self.last_pos = source!(self, s).hi();
}
}
pub fn format_separate_mod(&mut self, m: &ast::Mod, source_file: &source_map::SourceFile) {
self.block_indent = Indent::empty();
self.walk_mod_items(m);
self.format_missing_with_indent(source_file.end_pos);
}
pub fn skip_empty_lines(&mut self, end_pos: BytePos) {
while let Some(pos) = self
.snippet_provider
.opt_span_after(mk_sp(self.last_pos, end_pos), "\n")
{
if let Some(snippet) = self.opt_snippet(mk_sp(self.last_pos, pos)) {
if snippet.trim().is_empty() {
self.last_pos = pos;
} else {
return;
}
}
}
}
pub fn with_context<F>(&mut self, f: F) -> Option<String>
where
F: Fn(&RewriteContext) -> Option<String>,
{
let result;
let macro_rewrite_failure = {
let context = self.get_context();
result = f(&context);
unsafe { *context.macro_rewrite_failure.as_ptr() }
};
self.macro_rewrite_failure |= macro_rewrite_failure;
result
}
pub fn get_context(&self) -> RewriteContext {
RewriteContext {
parse_session: self.parse_session,
source_map: self.source_map,
config: self.config,
inside_macro: RefCell::new(false),
use_block: RefCell::new(false),
is_if_else_block: RefCell::new(false),
force_one_line_chain: RefCell::new(false),
snippet_provider: self.snippet_provider,
macro_rewrite_failure: RefCell::new(false),
report: self.report.clone(),
}
}
}
// ===== solana-playground/wasm/rustfmt/rustfmt-nightly/src/format-diff/main.rs =====
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Inspired by Clang's clang-format-diff:
//
// https://github.com/llvm-mirror/clang/blob/master/tools/clang-format/clang-format-diff.py
#![deny(warnings)]
extern crate env_logger;
#[macro_use]
extern crate failure;
extern crate getopts;
#[macro_use]
extern crate log;
extern crate regex;
#[macro_use]
extern crate serde_derive;
extern crate serde_json as json;
use std::collections::HashSet;
use std::io::{self, BufRead};
use std::{env, process};
use regex::Regex;
/// The default pattern of files to format.
///
/// We only want to format Rust files by default.
const DEFAULT_PATTERN: &str = r".*\.rs";
#[derive(Fail, Debug)]
enum FormatDiffError {
#[fail(display = "{}", _0)]
IncorrectOptions(#[cause] getopts::Fail),
#[fail(display = "{}", _0)]
IncorrectFilter(#[cause] regex::Error),
#[fail(display = "{}", _0)]
IoError(#[cause] io::Error),
}
impl From<getopts::Fail> for FormatDiffError {
fn from(fail: getopts::Fail) -> Self {
FormatDiffError::IncorrectOptions(fail)
}
}
impl From<regex::Error> for FormatDiffError {
fn from(err: regex::Error) -> Self {
FormatDiffError::IncorrectFilter(err)
}
}
impl From<io::Error> for FormatDiffError {
fn from(fail: io::Error) -> Self {
FormatDiffError::IoError(fail)
}
}
fn main() {
env_logger::init();
let mut opts = getopts::Options::new();
opts.optflag("h", "help", "show this message");
opts.optopt(
"p",
"skip-prefix",
"skip the smallest prefix containing NUMBER slashes",
"NUMBER",
);
opts.optopt(
"f",
"filter",
"custom pattern selecting file paths to reformat",
"PATTERN",
);
if let Err(e) = run(&opts) {
println!("{}", opts.usage(&format!("{}", e)));
process::exit(1);
}
}
#[derive(Debug, Eq, PartialEq, Serialize, Deserialize)]
struct Range {
file: String,
range: [u32; 2],
}
fn run(opts: &getopts::Options) -> Result<(), FormatDiffError> {
let matches = opts.parse(env::args().skip(1))?;
if matches.opt_present("h") {
println!("{}", opts.usage("usage: "));
return Ok(());
}
let filter = matches
.opt_str("f")
.unwrap_or_else(|| DEFAULT_PATTERN.to_owned());
let skip_prefix = matches
.opt_str("p")
.and_then(|p| p.parse::<u32>().ok())
.unwrap_or(0);
let (files, ranges) = scan_diff(io::stdin(), skip_prefix, &filter)?;
run_rustfmt(&files, &ranges)
}
fn run_rustfmt(files: &HashSet<String>, ranges: &[Range]) -> Result<(), FormatDiffError> {
if files.is_empty() || ranges.is_empty() {
debug!("No files to format found");
return Ok(());
}
let ranges_as_json = json::to_string(ranges).unwrap();
debug!("Files: {:?}", files);
debug!("Ranges: {:?}", ranges);
let exit_status = process::Command::new("rustfmt")
.args(files)
.arg("--file-lines")
.arg(ranges_as_json)
.status()?;
if !exit_status.success() {
return Err(FormatDiffError::IoError(io::Error::new(
io::ErrorKind::Other,
format!("rustfmt failed with {}", exit_status),
)));
}
Ok(())
}
/// Scans a diff from `from`, and returns the set of files found, and the ranges
/// in those files.
fn scan_diff<R>(
from: R,
skip_prefix: u32,
file_filter: &str,
) -> Result<(HashSet<String>, Vec<Range>), FormatDiffError>
where
R: io::Read,
{
let diff_pattern = format!(r"^\+\+\+\s(?:.*?/){{{}}}(\S*)", skip_prefix);
let diff_pattern = Regex::new(&diff_pattern).unwrap();
let lines_pattern = Regex::new(r"^@@.*\+(\d+)(,(\d+))?").unwrap();
let file_filter = Regex::new(&format!("^{}$", file_filter))?;
let mut current_file = None;
let mut files = HashSet::new();
let mut ranges = vec![];
for line in io::BufReader::new(from).lines() {
let line = line.unwrap();
if let Some(captures) = diff_pattern.captures(&line) {
current_file = Some(captures.get(1).unwrap().as_str().to_owned());
}
let file = match current_file {
Some(ref f) => &**f,
None => continue,
};
// FIXME(emilio): We could avoid this most of the time if needed, but
// it's not clear it's worth it.
if !file_filter.is_match(file) {
continue;
}
let lines_captures = match lines_pattern.captures(&line) {
Some(captures) => captures,
None => continue,
};
let start_line = lines_captures
.get(1)
.unwrap()
.as_str()
.parse::<u32>()
.unwrap();
let line_count = match lines_captures.get(3) {
Some(line_count) => line_count.as_str().parse::<u32>().unwrap(),
None => 1,
};
if line_count == 0 {
continue;
}
let end_line = start_line + line_count - 1;
files.insert(file.to_owned());
ranges.push(Range {
file: file.to_owned(),
range: [start_line, end_line],
});
}
Ok((files, ranges))
}
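The `lines_pattern` regex above (`^@@.*\+(\d+)(,(\d+))?`) pulls the start line and line count out of the "+" side of a unified-diff hunk header, defaulting the count to 1 when it is omitted. A regex-free sketch of the same extraction, using only the standard library (the function name `parse_hunk_header` is illustrative, not part of rustfmt):

```rust
// Parse the "+<start>[,<count>]" field of a unified-diff hunk header,
// e.g. "@@ -1,4 +148,11 @@" => Some((148, 11)). The count defaults to 1
// when omitted, mirroring the `lines_pattern` logic in `scan_diff`.
fn parse_hunk_header(line: &str) -> Option<(u32, u32)> {
    let rest = line.strip_prefix("@@")?;
    // The "+" side starts after the first '+'.
    let plus = rest.find('+')?;
    let field: String = rest[plus + 1..]
        .chars()
        .take_while(|c| c.is_ascii_digit() || *c == ',')
        .collect();
    let mut parts = field.splitn(2, ',');
    let start = parts.next()?.parse::<u32>().ok()?;
    let count = match parts.next() {
        Some(c) => c.parse::<u32>().ok()?,
        None => 1,
    };
    Some((start, count))
}

fn main() {
    assert_eq!(parse_hunk_header("@@ -1,4 +148,11 @@"), Some((148, 11)));
    assert_eq!(parse_hunk_header("@@ -3 +9 @@"), Some((9, 1)));
    assert_eq!(parse_hunk_header("+++ b/src/lib.rs"), None);
    println!("ok");
}
```

Note that `scan_diff` then converts `(start, count)` into an inclusive `[start, start + count - 1]` range, skipping pure-deletion hunks where the count is 0.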
#[test]
fn scan_simple_git_diff() {
const DIFF: &'static str = include_str!("test/bindgen.diff");
let (files, ranges) = scan_diff(DIFF.as_bytes(), 1, r".*\.rs").expect("scan_diff failed?");
assert!(
files.contains("src/ir/traversal.rs"),
"Should've matched the filter"
);
assert!(
!files.contains("tests/headers/anon_enum.hpp"),
"Shouldn't have matched the filter"
);
assert_eq!(
&ranges,
&[
Range {
file: "src/ir/item.rs".to_owned(),
range: [148, 158],
},
Range {
file: "src/ir/item.rs".to_owned(),
range: [160, 170],
},
Range {
file: "src/ir/traversal.rs".to_owned(),
range: [9, 16],
},
Range {
file: "src/ir/traversal.rs".to_owned(),
range: [35, 43],
},
]
);
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/test/mod.rs
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
extern crate assert_cli;
use std::collections::{HashMap, HashSet};
use std::env;
use std::fs;
use std::io::{self, BufRead, BufReader, Read, Write};
use std::iter::{Enumerate, Peekable};
use std::mem;
use std::path::{Path, PathBuf};
use std::str::Chars;
use config::{Color, Config, EmitMode, FileName, ReportTactic};
use formatting::{ModifiedChunk, SourceFile};
use rustfmt_diff::{make_diff, print_diff, DiffLine, Mismatch, OutputWriter};
use source_file;
use {FormatReport, Input, Session};
const DIFF_CONTEXT_SIZE: usize = 3;
const CONFIGURATIONS_FILE_NAME: &str = "Configurations.md";
// Returns a `Vec` containing `PathBuf`s of files with an `.rs` extension in
// the given path. The `recursive` argument controls whether files from
// subdirectories are also returned.
fn get_test_files(path: &Path, recursive: bool) -> Vec<PathBuf> {
let mut files = vec![];
if path.is_dir() {
for entry in fs::read_dir(path).expect(&format!(
"Couldn't read directory {}",
path.to_str().unwrap()
)) {
let entry = entry.expect("Couldn't get DirEntry");
let path = entry.path();
if path.is_dir() && recursive {
files.append(&mut get_test_files(&path, recursive));
} else if path.extension().map_or(false, |f| f == "rs") {
files.push(path);
}
}
}
files
}
fn verify_config_used(path: &Path, config_name: &str) {
for entry in fs::read_dir(path).expect(&format!(
"Couldn't read {} directory",
path.to_str().unwrap()
)) {
let entry = entry.expect("Couldn't get directory entry");
let path = entry.path();
if path.extension().map_or(false, |f| f == "rs") {
// check if "// rustfmt-<config_name>:" appears in the file.
let filebuf = BufReader::new(
fs::File::open(&path).expect(&format!("Couldn't read file {}", path.display())),
);
assert!(
filebuf
.lines()
.map(|l| l.unwrap())
.take_while(|l| l.starts_with("//"))
.any(|l| l.starts_with(&format!("// rustfmt-{}", config_name))),
format!(
"config option file {} does not contain expected config name",
path.display()
)
);
}
}
}
#[test]
fn verify_config_test_names() {
for path in &[
Path::new("tests/source/configs"),
Path::new("tests/target/configs"),
] {
for entry in fs::read_dir(path).expect("Couldn't read configs directory") {
let entry = entry.expect("Couldn't get directory entry");
let path = entry.path();
if path.is_dir() {
let config_name = path.file_name().unwrap().to_str().unwrap();
// Make sure that config name is used in the files in the directory.
verify_config_used(&path, config_name);
}
}
}
}
// This writes to the terminal using the same approach (via term::stdout or
// println!) that is used by `rustfmt::rustfmt_diff::print_diff`. Writing
// using only one or the other will cause the output order to differ when
// `print_diff` selects the approach not used.
fn write_message(msg: &str) {
let mut writer = OutputWriter::new(Color::Auto);
writer.writeln(msg, None);
}
// Integration tests. The files in the tests/source are formatted and compared
// to their equivalent in tests/target. The target file and config can be
// overridden by annotations in the source file. The input and output must match
// exactly.
#[test]
fn system_tests() {
// Get all files in the tests/source directory.
let files = get_test_files(Path::new("tests/source"), true);
let (_reports, count, fails) = check_files(files, None);
// Display results.
println!("Ran {} system tests.", count);
assert_eq!(fails, 0, "{} system tests failed", fails);
}
// Do the same for tests/coverage-source directory
// the only difference is the coverage mode
#[test]
fn coverage_tests() {
let files = get_test_files(Path::new("tests/coverage/source"), true);
let (_reports, count, fails) = check_files(files, None);
println!("Ran {} tests in coverage mode.", count);
assert_eq!(fails, 0, "{} tests failed", fails);
}
#[test]
fn checkstyle_test() {
let filename = "tests/writemode/source/fn-single-line.rs";
let expected_filename = "tests/writemode/target/checkstyle.xml";
assert_output(Path::new(filename), Path::new(expected_filename));
}
#[test]
fn modified_test() {
use std::io::BufRead;
// Test "modified" output
let filename = "tests/writemode/source/modified.rs";
let mut data = Vec::new();
let mut config = Config::default();
config.set().emit_mode(::config::EmitMode::ModifiedLines);
{
let mut session = Session::new(config, Some(&mut data));
session.format(Input::File(filename.into())).unwrap();
}
let mut lines = data.lines();
let mut chunks = Vec::new();
while let Some(Ok(header)) = lines.next() {
// Parse the header line
let values: Vec<_> = header
.split(' ')
.map(|s| s.parse::<u32>().unwrap())
.collect();
assert_eq!(values.len(), 3);
let line_number_orig = values[0];
let lines_removed = values[1];
let num_added = values[2];
let mut added_lines = Vec::new();
for _ in 0..num_added {
added_lines.push(lines.next().unwrap().unwrap());
}
chunks.push(ModifiedChunk {
line_number_orig,
lines_removed,
lines: added_lines,
});
}
assert_eq!(
chunks,
vec![
ModifiedChunk {
line_number_orig: 4,
lines_removed: 4,
lines: vec!["fn blah() {}".into()],
},
ModifiedChunk {
line_number_orig: 9,
lines_removed: 6,
lines: vec!["#[cfg(a, b)]".into(), "fn main() {}".into()],
},
],
);
}
// Helper function for comparing the results of rustfmt
// to a known output file generated by one of the write modes.
fn assert_output(source: &Path, expected_filename: &Path) {
let config = read_config(source);
let (_, source_file, _) = format_file(source, config.clone());
// Populate output by writing to a vec.
let mut out = vec![];
let _ = source_file::write_all_files(&source_file, &mut out, &config);
let output = String::from_utf8(out).unwrap();
let mut expected_file = fs::File::open(&expected_filename).expect("Couldn't open target");
let mut expected_text = String::new();
expected_file
.read_to_string(&mut expected_text)
.expect("Failed reading target");
let compare = make_diff(&expected_text, &output, DIFF_CONTEXT_SIZE);
if !compare.is_empty() {
let mut failures = HashMap::new();
failures.insert(source.to_owned(), compare);
print_mismatches_default_message(failures);
assert!(false, "Text does not match expected output");
}
}
// Idempotence tests. Files in tests/target are checked to be unaltered by
// rustfmt.
#[test]
fn idempotence_tests() {
match option_env!("CFG_RELEASE_CHANNEL") {
None | Some("nightly") => {}
_ => return, // these tests require nightly
}
// Get all files in the tests/target directory.
let files = get_test_files(Path::new("tests/target"), true);
let (_reports, count, fails) = check_files(files, None);
// Display results.
println!("Ran {} idempotent tests.", count);
assert_eq!(fails, 0, "{} idempotent tests failed", fails);
}
// Run rustfmt on itself. This operation must be idempotent. We also check that
// no warnings are emitted.
#[test]
fn self_tests() {
let mut files = get_test_files(Path::new("tests"), false);
let bin_directories = vec!["cargo-fmt", "git-rustfmt", "bin", "format-diff"];
for dir in bin_directories {
let mut path = PathBuf::from("src");
path.push(dir);
path.push("main.rs");
files.push(path);
}
files.push(PathBuf::from("src/lib.rs"));
let (reports, count, fails) = check_files(files, Some(PathBuf::from("rustfmt.toml")));
let mut warnings = 0;
// Display results.
println!("Ran {} self tests.", count);
assert_eq!(fails, 0, "{} self tests failed", fails);
for format_report in reports {
println!("{}", format_report);
warnings += format_report.warning_count();
}
assert_eq!(
warnings, 0,
"Rustfmt's code generated {} warnings",
warnings
);
}
#[test]
fn stdin_formatting_smoke_test() {
let input = Input::Text("fn main () {}".to_owned());
let mut config = Config::default();
config.set().emit_mode(EmitMode::Stdout);
let mut buf: Vec<u8> = vec![];
{
let mut session = Session::new(config, Some(&mut buf));
session.format(input).unwrap();
assert!(session.has_no_errors());
}
#[cfg(not(windows))]
assert_eq!(buf, "fn main() {}\n".as_bytes());
#[cfg(windows)]
assert_eq!(buf, "fn main() {}\r\n".as_bytes());
}
// FIXME(#1990) restore this test
// #[test]
// fn stdin_disable_all_formatting_test() {
// let input = String::from("fn main() { println!(\"This should not be formatted.\"); }");
// let mut child = Command::new("./target/debug/rustfmt")
// .stdin(Stdio::piped())
// .stdout(Stdio::piped())
// .arg("--config-path=./tests/config/disable_all_formatting.toml")
// .spawn()
// .expect("failed to execute child");
// {
// let stdin = child.stdin.as_mut().expect("failed to get stdin");
// stdin
// .write_all(input.as_bytes())
// .expect("failed to write stdin");
// }
// let output = child.wait_with_output().expect("failed to wait on child");
// assert!(output.status.success());
// assert!(output.stderr.is_empty());
// assert_eq!(input, String::from_utf8(output.stdout).unwrap());
// }
#[test]
fn format_lines_errors_are_reported() {
let long_identifier = String::from_utf8(vec![b'a'; 239]).unwrap();
let input = Input::Text(format!("fn {}() {{}}", long_identifier));
let mut config = Config::default();
config.set().error_on_line_overflow(true);
let mut session = Session::<io::Stdout>::new(config, None);
session.format(input).unwrap();
assert!(session.has_formatting_errors());
}
#[test]
fn format_lines_errors_are_reported_with_tabs() {
let long_identifier = String::from_utf8(vec![b'a'; 97]).unwrap();
let input = Input::Text(format!("fn a() {{\n\t{}\n}}", long_identifier));
let mut config = Config::default();
config.set().error_on_line_overflow(true);
config.set().hard_tabs(true);
let mut session = Session::<io::Stdout>::new(config, None);
session.format(input).unwrap();
assert!(session.has_formatting_errors());
}
// For each file, run rustfmt and collect the output.
// Returns the number of files checked and the number of failures.
fn check_files(files: Vec<PathBuf>, opt_config: Option<PathBuf>) -> (Vec<FormatReport>, u32, u32) {
let mut count = 0;
let mut fails = 0;
let mut reports = vec![];
for file_name in files {
debug!("Testing '{}'...", file_name.display());
match idempotent_check(&file_name, &opt_config) {
Ok(ref report) if report.has_warnings() => {
print!("{}", report);
fails += 1;
}
Ok(report) => reports.push(report),
Err(err) => {
if let IdempotentCheckError::Mismatch(msg) = err {
print_mismatches_default_message(msg);
}
fails += 1;
}
}
count += 1;
}
(reports, count, fails)
}
fn print_mismatches_default_message(result: HashMap<PathBuf, Vec<Mismatch>>) {
for (file_name, diff) in result {
let mismatch_msg_formatter =
|line_num| format!("\nMismatch at {}:{}:", file_name.display(), line_num);
print_diff(diff, &mismatch_msg_formatter, &Default::default());
}
if let Some(mut t) = term::stdout() {
t.reset().unwrap_or(());
}
}
fn print_mismatches<T: Fn(u32) -> String>(
result: HashMap<PathBuf, Vec<Mismatch>>,
mismatch_msg_formatter: T,
) {
for (_file_name, diff) in result {
print_diff(diff, &mismatch_msg_formatter, &Default::default());
}
if let Some(mut t) = term::stdout() {
t.reset().unwrap_or(());
}
}
fn read_config(filename: &Path) -> Config {
let sig_comments = read_significant_comments(filename);
// Look for a config file... If there is a 'config' property in the significant comments, use
// that. Otherwise, if there are no significant comments at all, look for a config file with
// the same name as the test file.
let mut config = if !sig_comments.is_empty() {
get_config(sig_comments.get("config").map(Path::new))
} else {
get_config(filename.with_extension("toml").file_name().map(Path::new))
};
for (key, val) in &sig_comments {
if key != "target" && key != "config" {
config.override_value(key, val);
if config.is_default(key) {
warn!("Default value {} used explicitly for {}", val, key);
}
}
}
// Don't generate warnings for to-do items.
config.set().report_todo(ReportTactic::Never);
config
}
fn format_file<P: Into<PathBuf>>(filepath: P, config: Config) -> (bool, SourceFile, FormatReport) {
let filepath = filepath.into();
let input = Input::File(filepath);
let mut session = Session::<io::Stdout>::new(config, None);
let result = session.format(input).unwrap();
let parsing_errors = session.has_parsing_errors();
let mut source_file = SourceFile::new();
mem::swap(&mut session.source_file, &mut source_file);
(parsing_errors, source_file, result)
}
enum IdempotentCheckError {
Mismatch(HashMap<PathBuf, Vec<Mismatch>>),
Parse,
}
fn idempotent_check(
filename: &PathBuf,
opt_config: &Option<PathBuf>,
) -> Result<FormatReport, IdempotentCheckError> {
let sig_comments = read_significant_comments(filename);
let config = if let Some(ref config_file_path) = opt_config {
Config::from_toml_path(config_file_path).expect("rustfmt.toml not found")
} else {
read_config(filename)
};
let (parsing_errors, source_file, format_report) = format_file(filename, config);
if parsing_errors {
return Err(IdempotentCheckError::Parse);
}
let mut write_result = HashMap::new();
for (filename, text) in source_file {
if let FileName::Real(ref filename) = filename {
write_result.insert(filename.to_owned(), text);
}
}
let target = sig_comments.get("target").map(|x| &(*x)[..]);
handle_result(write_result, target).map(|_| format_report)
}
// Reads test config file using the supplied (optional) file name. If there's no file name or the
// file doesn't exist, just return the default config. Otherwise, the file must be read
// successfully.
fn get_config(config_file: Option<&Path>) -> Config {
let config_file_name = match config_file {
None => return Default::default(),
Some(file_name) => {
let mut full_path = PathBuf::from("tests/config/");
full_path.push(file_name);
if !full_path.exists() {
return Default::default();
};
full_path
}
};
let mut def_config_file = fs::File::open(config_file_name).expect("Couldn't open config");
let mut def_config = String::new();
def_config_file
.read_to_string(&mut def_config)
.expect("Couldn't read config");
Config::from_toml(&def_config, Path::new("tests/config/")).expect("Invalid toml")
}
// Reads significant comments of the form: // rustfmt-key: value
// into a hash map.
fn read_significant_comments(file_name: &Path) -> HashMap<String, String> {
let file =
fs::File::open(file_name).expect(&format!("Couldn't read file {}", file_name.display()));
let reader = BufReader::new(file);
let pattern = r"^\s*//\s*rustfmt-([^:]+):\s*(\S+)";
let regex = regex::Regex::new(pattern).expect("Failed creating pattern 1");
// Matches lines containing significant comments or whitespace.
let line_regex = regex::Regex::new(r"(^\s*$)|(^\s*//\s*rustfmt-[^:]+:\s*\S+)")
.expect("Failed creating pattern 2");
reader
.lines()
.map(|line| line.expect("Failed getting line"))
.take_while(|line| line_regex.is_match(line))
.filter_map(|line| {
regex.captures_iter(&line).next().map(|capture| {
(
capture
.get(1)
.expect("Couldn't unwrap capture")
.as_str()
.to_owned(),
capture
.get(2)
.expect("Couldn't unwrap capture")
.as_str()
.to_owned(),
)
})
}).collect()
}
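`read_significant_comments` recognizes header lines of the form `// rustfmt-key: value` and stops scanning at the first line that is neither blank nor such a comment. A minimal stdlib-only sketch of the per-line recognition (the helper name `significant_comment` is illustrative, not from rustfmt):

```rust
// Extract ("key", "value") from a "// rustfmt-key: value" line, the same
// shape matched by the regex in `read_significant_comments`.
fn significant_comment(line: &str) -> Option<(String, String)> {
    let rest = line.trim_start().strip_prefix("//")?;
    let rest = rest.trim_start().strip_prefix("rustfmt-")?;
    let colon = rest.find(':')?;
    let key = rest[..colon].trim().to_owned();
    let value = rest[colon + 1..].trim().to_owned();
    if key.is_empty() || value.is_empty() {
        return None;
    }
    Some((key, value))
}

fn main() {
    assert_eq!(
        significant_comment("// rustfmt-max_width: 120"),
        Some(("max_width".to_owned(), "120".to_owned()))
    );
    assert_eq!(significant_comment("// plain comment"), None);
    println!("ok");
}
```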
// Compare output to input.
// TODO: needs a better name, more explanation.
fn handle_result(
result: HashMap<PathBuf, String>,
target: Option<&str>,
) -> Result<(), IdempotentCheckError> {
let mut failures = HashMap::new();
for (file_name, fmt_text) in result {
// If file is in tests/source, compare to file with same name in tests/target.
let target = get_target(&file_name, target);
let open_error = format!("Couldn't open target {:?}", &target);
let mut f = fs::File::open(&target).expect(&open_error);
let mut text = String::new();
let read_error = format!("Failed reading target {:?}", &target);
f.read_to_string(&mut text).expect(&read_error);
// Ignore LF and CRLF difference for Windows.
if !string_eq_ignore_newline_repr(&fmt_text, &text) {
let diff = make_diff(&text, &fmt_text, DIFF_CONTEXT_SIZE);
assert!(
!diff.is_empty(),
"Empty diff? Maybe due to a missing newline at the end of a file?"
);
failures.insert(file_name, diff);
}
}
if failures.is_empty() {
Ok(())
} else {
Err(IdempotentCheckError::Mismatch(failures))
}
}
// Map source file paths to their target paths.
fn get_target(file_name: &Path, target: Option<&str>) -> PathBuf {
if let Some(n) = file_name
.components()
.position(|c| c.as_os_str() == "source")
{
let mut target_file_name = PathBuf::new();
for (i, c) in file_name.components().enumerate() {
if i == n {
target_file_name.push("target");
} else {
target_file_name.push(c.as_os_str());
}
}
if let Some(replace_name) = target {
target_file_name.with_file_name(replace_name)
} else {
target_file_name
}
} else {
// This is either an idempotence check or a self-check.
file_name.to_owned()
}
}
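`get_target` swaps the `source` path component for `target` so that `tests/source/foo.rs` is compared against `tests/target/foo.rs`, and leaves paths without a `source` component untouched. A simplified stdlib sketch of that mapping, without the optional file-name override (the function name `source_to_target` is illustrative):

```rust
use std::path::{Path, PathBuf};

// Replace the first "source" component with "target"; paths without a
// "source" component are returned unchanged (the self-check/idempotence case).
fn source_to_target(file: &Path) -> PathBuf {
    let mut out = PathBuf::new();
    let mut swapped = false;
    for c in file.components() {
        if !swapped && c.as_os_str() == "source" {
            out.push("target");
            swapped = true;
        } else {
            out.push(c.as_os_str());
        }
    }
    out
}

fn main() {
    assert_eq!(
        source_to_target(Path::new("tests/source/configs/tab_spaces.rs")),
        PathBuf::from("tests/target/configs/tab_spaces.rs")
    );
    assert_eq!(
        source_to_target(Path::new("src/lib.rs")),
        PathBuf::from("src/lib.rs")
    );
    println!("ok");
}
```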
#[test]
fn rustfmt_diff_make_diff_tests() {
let diff = make_diff("a\nb\nc\nd", "a\ne\nc\nd", 3);
assert_eq!(
diff,
vec![Mismatch {
line_number: 1,
line_number_orig: 1,
lines: vec![
DiffLine::Context("a".into()),
DiffLine::Resulting("b".into()),
DiffLine::Expected("e".into()),
DiffLine::Context("c".into()),
DiffLine::Context("d".into()),
],
}]
);
}
#[test]
fn rustfmt_diff_no_diff_test() {
let diff = make_diff("a\nb\nc\nd", "a\nb\nc\nd", 3);
assert_eq!(diff, vec![]);
}
// Compare strings without distinguishing between CRLF and LF
fn string_eq_ignore_newline_repr(left: &str, right: &str) -> bool {
let left = CharsIgnoreNewlineRepr(left.chars().peekable());
let right = CharsIgnoreNewlineRepr(right.chars().peekable());
left.eq(right)
}
struct CharsIgnoreNewlineRepr<'a>(Peekable<Chars<'a>>);
impl<'a> Iterator for CharsIgnoreNewlineRepr<'a> {
type Item = char;
fn next(&mut self) -> Option<char> {
self.0.next().map(|c| {
if c == '\r' {
if *self.0.peek().unwrap_or(&'\0') == '\n' {
self.0.next();
'\n'
} else {
'\r'
}
} else {
c
}
})
}
}
#[test]
fn string_eq_ignore_newline_repr_test() {
assert!(string_eq_ignore_newline_repr("", ""));
assert!(!string_eq_ignore_newline_repr("", "abc"));
assert!(!string_eq_ignore_newline_repr("abc", ""));
assert!(string_eq_ignore_newline_repr("a\nb\nc\rd", "a\nb\r\nc\rd"));
assert!(string_eq_ignore_newline_repr("a\r\n\r\n\r\nb", "a\n\n\nb"));
assert!(!string_eq_ignore_newline_repr("a\r\nbcd", "a\nbcdefghijk"));
}
// This enum is used to represent one of three text features in Configurations.md: a block of code
// with its starting line number, the name of a rustfmt configuration option, or the value of a
// rustfmt configuration option.
enum ConfigurationSection {
CodeBlock((String, u32)), // (String: block of code, u32: line number of code block start)
ConfigName(String),
ConfigValue(String),
}
impl ConfigurationSection {
fn get_section<I: Iterator<Item = String>>(
file: &mut Enumerate<I>,
) -> Option<ConfigurationSection> {
lazy_static! {
static ref CONFIG_NAME_REGEX: regex::Regex =
regex::Regex::new(r"^## `([^`]+)`").expect("Failed creating configuration pattern");
static ref CONFIG_VALUE_REGEX: regex::Regex =
regex::Regex::new(r#"^#### `"?([^`"]+)"?`"#)
.expect("Failed creating configuration value pattern");
}
loop {
match file.next() {
Some((i, line)) => {
if line.starts_with("```rust") {
// Get the lines of the code block.
let lines: Vec<String> = file
.map(|(_i, l)| l)
.take_while(|l| !l.starts_with("```"))
.collect();
let block = format!("{}\n", lines.join("\n"));
// +1 to translate to one-based indexing
// +1 to get to first line of code (line after "```")
let start_line = (i + 2) as u32;
return Some(ConfigurationSection::CodeBlock((block, start_line)));
} else if let Some(c) = CONFIG_NAME_REGEX.captures(&line) {
return Some(ConfigurationSection::ConfigName(String::from(&c[1])));
} else if let Some(c) = CONFIG_VALUE_REGEX.captures(&line) {
return Some(ConfigurationSection::ConfigValue(String::from(&c[1])));
}
}
None => return None, // reached the end of the file
}
}
}
}
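`get_section` classifies each Configurations.md line three ways: "## `name`" headings introduce an option, "#### `value`" headings give a value, and a line beginning with "```rust" opens a code block. A simplified stdlib-only sketch of that per-line classification (it skips the optional quoting that `CONFIG_VALUE_REGEX` also accepts; the `Line` enum and `classify` function are illustrative, not from rustfmt):

```rust
// Classify a Configurations.md line the way `get_section` does.
#[derive(Debug, PartialEq)]
enum Line<'a> {
    ConfigName(&'a str),
    ConfigValue(&'a str),
    CodeBlockStart,
    Other,
}

fn classify(line: &str) -> Line<'_> {
    // Pull the text between the first pair of backticks, if any.
    fn backticked(rest: &str) -> Option<&str> {
        let rest = rest.strip_prefix('`')?;
        let end = rest.find('`')?;
        Some(&rest[..end])
    }
    if line.starts_with("```rust") {
        Line::CodeBlockStart
    } else if let Some(name) = line.strip_prefix("## ").and_then(backticked) {
        Line::ConfigName(name)
    } else if let Some(value) = line.strip_prefix("#### ").and_then(backticked) {
        Line::ConfigValue(value)
    } else {
        Line::Other
    }
}

fn main() {
    assert_eq!(classify("## `max_width`"), Line::ConfigName("max_width"));
    assert_eq!(classify("#### `100`"), Line::ConfigValue("100"));
    assert_eq!(classify("```rust"), Line::CodeBlockStart);
    assert_eq!(classify("Some prose."), Line::Other);
    println!("ok");
}
```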
// This struct stores the information about code blocks in the configurations
// file, formats the code blocks, and prints formatting errors.
struct ConfigCodeBlock {
config_name: Option<String>,
config_value: Option<String>,
code_block: Option<String>,
code_block_start: Option<u32>,
}
impl ConfigCodeBlock {
fn new() -> ConfigCodeBlock {
ConfigCodeBlock {
config_name: None,
config_value: None,
code_block: None,
code_block_start: None,
}
}
fn set_config_name(&mut self, name: Option<String>) {
self.config_name = name;
self.config_value = None;
}
fn set_config_value(&mut self, value: Option<String>) {
self.config_value = value;
}
fn set_code_block(&mut self, code_block: String, code_block_start: u32) {
self.code_block = Some(code_block);
self.code_block_start = Some(code_block_start);
}
fn get_block_config(&self) -> Config {
let mut config = Config::default();
if self.config_name.is_some() && self.config_value.is_some() {
config.override_value(
self.config_name.as_ref().unwrap(),
self.config_value.as_ref().unwrap(),
);
}
config
}
fn code_block_valid(&self) -> bool {
// A code block is always expected at this point.
assert!(self.code_block.is_some() && self.code_block_start.is_some());
// See if code block begins with #![rustfmt::skip].
let fmt_skip = self
.code_block
.as_ref()
.unwrap()
.split('\n')
.nth(0)
.unwrap_or("")
== "#![rustfmt::skip]";
if self.config_name.is_none() && !fmt_skip {
write_message(&format!(
"No configuration name for {}:{}",
CONFIGURATIONS_FILE_NAME,
self.code_block_start.unwrap()
));
return false;
}
if self.config_value.is_none() && !fmt_skip {
write_message(&format!(
"No configuration value for {}:{}",
CONFIGURATIONS_FILE_NAME,
self.code_block_start.unwrap()
));
return false;
}
true
}
fn has_parsing_errors<T: Write>(&self, session: &Session<T>) -> bool {
if session.has_parsing_errors() {
write_message(&format!(
"\u{261d}\u{1f3fd} Cannot format {}:{}",
CONFIGURATIONS_FILE_NAME,
self.code_block_start.unwrap()
));
return true;
}
false
}
fn print_diff(&self, compare: Vec<Mismatch>) {
let mut mismatches = HashMap::new();
mismatches.insert(PathBuf::from(CONFIGURATIONS_FILE_NAME), compare);
print_mismatches(mismatches, |line_num| {
format!(
"\nMismatch at {}:{}:",
CONFIGURATIONS_FILE_NAME,
line_num + self.code_block_start.unwrap() - 1
)
});
}
fn formatted_has_diff(&self, text: &str) -> bool {
let compare = make_diff(self.code_block.as_ref().unwrap(), text, DIFF_CONTEXT_SIZE);
if !compare.is_empty() {
self.print_diff(compare);
return true;
}
false
}
// Return a bool indicating if formatting this code block is an idempotent
// operation. This function also triggers printing any formatting failure
// messages.
fn formatted_is_idempotent(&self) -> bool {
// Verify that we have all of the expected information.
if !self.code_block_valid() {
return false;
}
let input = Input::Text(self.code_block.as_ref().unwrap().to_owned());
let mut config = self.get_block_config();
config.set().emit_mode(EmitMode::Stdout);
let mut buf: Vec<u8> = vec![];
{
let mut session = Session::new(config, Some(&mut buf));
session.format(input).unwrap();
if self.has_parsing_errors(&session) {
return false;
}
}
!self.formatted_has_diff(&String::from_utf8(buf).unwrap())
}
// Extract a code block from the iterator. Behavior:
// - Rust code blocks are identified by lines beginning with "```rust".
// - One explicit configuration setting is supported per code block.
// - Rust code blocks with no configuration setting are illegal and cause an
// assertion failure, unless the snippet begins with #![rustfmt::skip].
// - Configuration names in Configurations.md must be in the form of
// "## `NAME`".
// - Configuration values in Configurations.md must be in the form of
// "#### `VALUE`".
fn extract<I: Iterator<Item = String>>(
file: &mut Enumerate<I>,
prev: Option<&ConfigCodeBlock>,
hash_set: &mut HashSet<String>,
) -> Option<ConfigCodeBlock> {
let mut code_block = ConfigCodeBlock::new();
code_block.config_name = prev.and_then(|cb| cb.config_name.clone());
loop {
match ConfigurationSection::get_section(file) {
Some(ConfigurationSection::CodeBlock((block, start_line))) => {
code_block.set_code_block(block, start_line);
break;
}
Some(ConfigurationSection::ConfigName(name)) => {
assert!(
Config::is_valid_name(&name),
"an unknown configuration option was found: {}",
name
);
assert!(
hash_set.remove(&name),
"multiple configuration guides found for option {}",
name
);
code_block.set_config_name(Some(name));
}
Some(ConfigurationSection::ConfigValue(value)) => {
code_block.set_config_value(Some(value));
}
None => return None, // end of file was reached
}
}
Some(code_block)
}
}
#[test]
fn configuration_snippet_tests() {
// Read Configurations.md and build a `Vec` of `ConfigCodeBlock` structs with one
// entry for each Rust code block found.
fn get_code_blocks() -> Vec<ConfigCodeBlock> {
let mut file_iter = BufReader::new(
fs::File::open(Path::new(CONFIGURATIONS_FILE_NAME))
.expect(&format!("Couldn't read file {}", CONFIGURATIONS_FILE_NAME)),
).lines()
.map(|l| l.unwrap())
.enumerate();
let mut code_blocks: Vec<ConfigCodeBlock> = Vec::new();
let mut hash_set = Config::hash_set();
while let Some(cb) =
ConfigCodeBlock::extract(&mut file_iter, code_blocks.last(), &mut hash_set)
{
code_blocks.push(cb);
}
for name in hash_set {
if !Config::is_hidden_option(&name) {
panic!("{} does not have a configuration guide", name);
}
}
code_blocks
}
let blocks = get_code_blocks();
let failures = blocks
.iter()
.map(|b| b.formatted_is_idempotent())
.fold(0, |acc, r| acc + (!r as u32));
// Display results.
println!("Ran {} configurations tests.", blocks.len());
assert_eq!(failures, 0, "{} configurations tests failed", failures);
}
struct TempFile {
path: PathBuf,
}
fn make_temp_file(file_name: &'static str) -> TempFile {
use std::env::var;
use std::fs::File;
// Used in the Rust build system.
let target_dir = var("RUSTFMT_TEST_DIR").unwrap_or_else(|_| ".".to_owned());
let path = Path::new(&target_dir).join(file_name);
let mut file = File::create(&path).expect("Couldn't create temp file");
let content = "fn main() {}\n";
file.write_all(content.as_bytes())
.expect("Couldn't write temp file");
TempFile { path }
}
impl Drop for TempFile {
fn drop(&mut self) {
use std::fs::remove_file;
remove_file(&self.path).expect("Couldn't delete temp file");
}
}
fn rustfmt() -> PathBuf {
let mut me = env::current_exe().expect("failed to get current executable");
me.pop(); // chop off the test name
me.pop(); // chop off `deps`
me.push("rustfmt");
assert!(
me.is_file() || me.with_extension("exe").is_file(),
"no rustfmt bin, try running `cargo build` before testing"
);
me
}
#[test]
fn verify_check_works() {
let temp_file = make_temp_file("temp_check.rs");
assert_cli::Assert::command(&[
rustfmt().to_str().unwrap(),
"--check",
temp_file.path.to_str().unwrap(),
]).succeeds()
.unwrap();
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/config/lists.rs
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Configuration options related to rewriting a list.
use config::config_type::ConfigType;
use config::IndentStyle;
/// The definitive formatting tactic for lists.
#[derive(Eq, PartialEq, Debug, Copy, Clone)]
pub enum DefinitiveListTactic {
Vertical,
Horizontal,
Mixed,
/// Special case tactic for `format!()`, `write!()` style macros.
SpecialMacro(usize),
}
impl DefinitiveListTactic {
pub fn ends_with_newline(&self, indent_style: IndentStyle) -> bool {
match indent_style {
IndentStyle::Block => *self != DefinitiveListTactic::Horizontal,
IndentStyle::Visual => false,
}
}
}
/// Formatting tactic for lists. This will be cast down to a
/// `DefinitiveListTactic` depending on the number and length of the items and
/// their comments.
#[derive(Eq, PartialEq, Debug, Copy, Clone)]
pub enum ListTactic {
// One item per row.
Vertical,
// All items on one row.
Horizontal,
// Try Horizontal layout, if that fails then vertical.
HorizontalVertical,
// HorizontalVertical with a soft limit of n characters.
LimitedHorizontalVertical(usize),
// Pack as many items as possible per row over (possibly) many rows.
Mixed,
}
impl_enum_serialize_and_deserialize!(ListTactic, Vertical, Horizontal, HorizontalVertical, Mixed);
#[derive(Eq, PartialEq, Debug, Copy, Clone)]
pub enum SeparatorTactic {
Always,
Never,
Vertical,
}
impl_enum_serialize_and_deserialize!(SeparatorTactic, Always, Never, Vertical);
impl SeparatorTactic {
pub fn from_bool(b: bool) -> SeparatorTactic {
if b {
SeparatorTactic::Always
} else {
SeparatorTactic::Never
}
}
}
/// Where to put separator.
#[derive(Eq, PartialEq, Debug, Copy, Clone)]
pub enum SeparatorPlace {
Front,
Back,
}
impl_enum_serialize_and_deserialize!(SeparatorPlace, Front, Back);
impl SeparatorPlace {
pub fn is_front(&self) -> bool {
*self == SeparatorPlace::Front
}
pub fn is_back(&self) -> bool {
*self == SeparatorPlace::Back
}
pub fn from_tactic(
default: SeparatorPlace,
tactic: DefinitiveListTactic,
sep: &str,
) -> SeparatorPlace {
match tactic {
DefinitiveListTactic::Vertical => default,
_ => if sep == "," {
SeparatorPlace::Back
} else {
default
},
}
}
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/config/options.rs
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use config::config_type::ConfigType;
use config::lists::*;
use config::{Config, FileName};
// use isatty::stdout_isatty;
use std::collections::HashSet;
use std::path::{Path, PathBuf};
/// Macro for deriving implementations of Serialize/Deserialize for enums
#[macro_export]
macro_rules! impl_enum_serialize_and_deserialize {
( $e:ident, $( $x:ident ),* ) => {
impl ::serde::ser::Serialize for $e {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where S: ::serde::ser::Serializer
{
use serde::ser::Error;
// We don't know whether the user of the macro has given us all options.
#[allow(unreachable_patterns)]
match *self {
$(
$e::$x => serializer.serialize_str(stringify!($x)),
)*
_ => {
Err(S::Error::custom(format!("Cannot serialize {:?}", self)))
}
}
}
}
impl<'de> ::serde::de::Deserialize<'de> for $e {
fn deserialize<D>(d: D) -> Result<Self, D::Error>
where D: ::serde::Deserializer<'de> {
use serde::de::{Error, Visitor};
use std::marker::PhantomData;
use std::fmt;
struct StringOnly<T>(PhantomData<T>);
impl<'de, T> Visitor<'de> for StringOnly<T>
where T: ::serde::Deserializer<'de> {
type Value = String;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
formatter.write_str("string")
}
fn visit_str<E>(self, value: &str) -> Result<String, E> {
Ok(String::from(value))
}
}
let s = d.deserialize_string(StringOnly::<D>(PhantomData))?;
$(
if stringify!($x).eq_ignore_ascii_case(&s) {
return Ok($e::$x);
}
)*
                    static ALLOWED: &'static [&str] = &[$(stringify!($x),)*];
Err(D::Error::unknown_variant(&s, ALLOWED))
}
}
impl ::std::str::FromStr for $e {
type Err = &'static str;
fn from_str(s: &str) -> Result<Self, Self::Err> {
$(
if stringify!($x).eq_ignore_ascii_case(s) {
return Ok($e::$x);
}
)*
Err("Bad variant")
}
}
impl ConfigType for $e {
fn doc_hint() -> String {
let mut variants = Vec::new();
$(
variants.push(stringify!($x));
)*
format!("[{}]", variants.join("|"))
}
}
};
}
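A standalone sketch of the case-insensitive matching that both the `Deserialize` and `FromStr` expansions of the macro above perform over the stringified variant names (`parse_variant` is a hypothetical helper, not part of the macro):

```rust
// Mirrors the generated lookup: compare the input against each
// stringified variant name, ignoring ASCII case, and report which
// variant (by position) matched.
fn parse_variant(s: &str, variants: &[&str]) -> Option<usize> {
    variants.iter().position(|v| v.eq_ignore_ascii_case(s))
}
```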
macro_rules! configuration_option_enum {
($e:ident: $( $x:ident ),+ $(,)*) => {
#[derive(Copy, Clone, Eq, PartialEq, Debug)]
pub enum $e {
$( $x ),+
}
impl_enum_serialize_and_deserialize!($e, $( $x ),+);
}
}
configuration_option_enum! { NewlineStyle:
Auto, // Auto-detect based on the raw source input
Windows, // \r\n
Unix, // \n
Native, // \r\n in Windows, \n on other platforms
}
impl NewlineStyle {
    fn auto_detect(raw_input_text: &str) -> NewlineStyle {
        if let Some(pos) = raw_input_text.find('\n') {
            // `pos` is a byte index, so inspect the preceding byte directly
            // rather than using it to index into `chars()`.
            if pos > 0 && raw_input_text.as_bytes()[pos - 1] == b'\r' {
                NewlineStyle::Windows
            } else {
                NewlineStyle::Unix
            }
        } else {
            NewlineStyle::Native
        }
    }
fn native() -> NewlineStyle {
if cfg!(windows) {
NewlineStyle::Windows
} else {
NewlineStyle::Unix
}
}
/// Apply this newline style to the formatted text. When the style is set
/// to `Auto`, the `raw_input_text` is used to detect the existing line
/// endings.
///
/// If the style is set to `Auto` and `raw_input_text` contains no
/// newlines, the `Native` style will be used.
pub(crate) fn apply(self, formatted_text: &mut String, raw_input_text: &str) {
use NewlineStyle::*;
let mut style = self;
if style == Auto {
style = Self::auto_detect(raw_input_text);
}
if style == Native {
style = Self::native();
}
match style {
Windows => {
let mut transformed = String::with_capacity(2 * formatted_text.capacity());
for c in formatted_text.chars() {
match c {
'\n' => transformed.push_str("\r\n"),
'\r' => continue,
c => transformed.push(c),
}
}
*formatted_text = transformed;
}
Unix => return,
Native => unreachable!("NewlineStyle::Native"),
Auto => unreachable!("NewlineStyle::Auto"),
}
}
}
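A standalone sketch of the `Windows` branch of `apply` above: every `'\n'` is rewritten as `"\r\n"`, and any `'\r'` already present is dropped so existing CRLF pairs are not doubled.

```rust
// Convert LF (or mixed) line endings to CRLF, as NewlineStyle::apply
// does for the Windows style.
fn to_crlf(text: &str) -> String {
    let mut out = String::with_capacity(2 * text.len());
    for c in text.chars() {
        match c {
            '\n' => out.push_str("\r\n"),
            '\r' => continue, // dropped; the following '\n' re-emits "\r\n"
            c => out.push(c),
        }
    }
    out
}
```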
configuration_option_enum! { BraceStyle:
AlwaysNextLine,
PreferSameLine,
// Prefer same line except where there is a where clause, in which case force
// the brace to the next line.
SameLineWhere,
}
configuration_option_enum! { ControlBraceStyle:
// K&R style, Rust community default
AlwaysSameLine,
// Stroustrup style
ClosingNextLine,
// Allman style
AlwaysNextLine,
}
configuration_option_enum! { IndentStyle:
// First line on the same line as the opening brace, all lines aligned with
// the first line.
Visual,
// First line is on a new line and all lines align with block indent.
Block,
}
configuration_option_enum! { Density:
// Fit as much on one line as possible.
Compressed,
// Use more lines.
Tall,
// Place every item on a separate line.
Vertical,
}
configuration_option_enum! { TypeDensity:
// No spaces around "=" and "+"
Compressed,
// Spaces around " = " and " + "
Wide,
}
configuration_option_enum! { Heuristics:
// Turn off any heuristics
Off,
// Turn on max heuristics
Max,
// Use Rustfmt's defaults
Default,
}
impl Density {
pub fn to_list_tactic(self) -> ListTactic {
match self {
Density::Compressed => ListTactic::Mixed,
Density::Tall => ListTactic::HorizontalVertical,
Density::Vertical => ListTactic::Vertical,
}
}
}
configuration_option_enum! { ReportTactic:
Always,
Unnumbered,
Never,
}
// What Rustfmt should emit. Mostly corresponds to the `--emit` command line
// option.
configuration_option_enum! { EmitMode:
// Emits to files.
Files,
// Writes the output to stdout.
Stdout,
// Displays how much of the input file was processed
Coverage,
// Unfancy stdout
Checkstyle,
// Output the changed lines (for internal value only)
ModifiedLines,
// Checks if a diff can be generated. If so, rustfmt outputs a diff and quits with exit code 1.
// This option is designed to be run in CI where a non-zero exit signifies non-standard code
// formatting. Used for `--check`.
Diff,
}
// Client-preference for coloured output.
configuration_option_enum! { Color:
// Always use color, whether it is a piped or terminal output
Always,
// Never use color
Never,
// Automatically use color, if supported by terminal
Auto,
}
impl Color {
/// Whether we should use a coloured terminal.
pub fn use_colored_tty(&self) -> bool {
match self {
Color::Always => true,
Color::Never => false,
// Color::Auto => stdout_isatty(),
Color::Auto => false,
}
}
}
// How chatty should Rustfmt be?
configuration_option_enum! { Verbosity:
// Emit more.
Verbose,
Normal,
// Emit as little as possible.
Quiet,
}
#[derive(Deserialize, Serialize, Clone, Debug, PartialEq)]
pub struct WidthHeuristics {
// Maximum width of the args of a function call before falling back
// to vertical formatting.
pub fn_call_width: usize,
// Maximum width in the body of a struct lit before falling back to
// vertical formatting.
pub struct_lit_width: usize,
// Maximum width in the body of a struct variant before falling back
// to vertical formatting.
pub struct_variant_width: usize,
// Maximum width of an array literal before falling back to vertical
// formatting.
pub array_width: usize,
// Maximum length of a chain to fit on a single line.
pub chain_width: usize,
// Maximum line length for single line if-else expressions. A value
// of zero means always break if-else expressions.
pub single_line_if_else_max_width: usize,
}
impl WidthHeuristics {
// Using this WidthHeuristics means we ignore heuristics.
pub fn null() -> WidthHeuristics {
WidthHeuristics {
fn_call_width: usize::max_value(),
struct_lit_width: 0,
struct_variant_width: 0,
array_width: usize::max_value(),
chain_width: usize::max_value(),
single_line_if_else_max_width: 0,
}
}
pub fn set(max_width: usize) -> WidthHeuristics {
WidthHeuristics {
fn_call_width: max_width,
struct_lit_width: max_width,
struct_variant_width: max_width,
array_width: max_width,
chain_width: max_width,
single_line_if_else_max_width: max_width,
}
}
// scale the default WidthHeuristics according to max_width
pub fn scaled(max_width: usize) -> WidthHeuristics {
const DEFAULT_MAX_WIDTH: usize = 100;
let max_width_ratio = if max_width > DEFAULT_MAX_WIDTH {
let ratio = max_width as f32 / DEFAULT_MAX_WIDTH as f32;
// round to the closest 0.1
(ratio * 10.0).round() / 10.0
} else {
1.0
};
WidthHeuristics {
fn_call_width: (60.0 * max_width_ratio).round() as usize,
struct_lit_width: (18.0 * max_width_ratio).round() as usize,
struct_variant_width: (35.0 * max_width_ratio).round() as usize,
array_width: (60.0 * max_width_ratio).round() as usize,
chain_width: (60.0 * max_width_ratio).round() as usize,
single_line_if_else_max_width: (50.0 * max_width_ratio).round() as usize,
}
}
}
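A standalone sketch of the scaling rule in `WidthHeuristics::scaled` above: each heuristic width grows proportionally once `max_width` exceeds the 100-column default, with the ratio rounded to the nearest 0.1.

```rust
// Scale one base heuristic width by max_width, as scaled() does for
// each field of WidthHeuristics.
fn scaled_width(base: f32, max_width: usize) -> usize {
    const DEFAULT_MAX_WIDTH: usize = 100;
    let ratio = if max_width > DEFAULT_MAX_WIDTH {
        let r = max_width as f32 / DEFAULT_MAX_WIDTH as f32;
        // Round to the closest 0.1, matching the implementation above.
        (r * 10.0).round() / 10.0
    } else {
        1.0
    };
    (base * ratio).round() as usize
}
```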
impl ::std::str::FromStr for WidthHeuristics {
type Err = &'static str;
fn from_str(_: &str) -> Result<Self, Self::Err> {
Err("WidthHeuristics is not parsable")
}
}
impl Default for EmitMode {
fn default() -> EmitMode {
EmitMode::Files
}
}
/// A set of directories, files and modules that rustfmt should ignore.
#[derive(Default, Deserialize, Serialize, Clone, Debug, PartialEq)]
pub struct IgnoreList(HashSet<PathBuf>);
impl IgnoreList {
pub fn add_prefix(&mut self, dir: &Path) {
self.0 = self
.0
.iter()
.map(|s| {
if s.has_root() {
s.clone()
} else {
let mut path = PathBuf::from(dir);
path.push(s);
path
}
}).collect();
}
fn skip_file_inner(&self, file: &Path) -> bool {
self.0.iter().any(|path| file.starts_with(path))
}
pub fn skip_file(&self, file: &FileName) -> bool {
if let FileName::Real(ref path) = file {
self.skip_file_inner(path)
} else {
false
}
}
}
impl ::std::str::FromStr for IgnoreList {
type Err = &'static str;
fn from_str(_: &str) -> Result<Self, Self::Err> {
Err("IgnoreList is not parsable")
}
}
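A standalone sketch of the matching performed by `IgnoreList` above: a file is skipped when any ignored path is a component-wise prefix of it, via `Path::starts_with`.

```rust
use std::path::{Path, PathBuf};

// Mirrors IgnoreList::skip_file_inner over a plain slice of paths.
fn is_ignored(ignored: &[PathBuf], file: &Path) -> bool {
    ignored.iter().any(|prefix| file.starts_with(prefix))
}
```

Note that `starts_with` compares whole path components, so `src/bin` matches `src/bin/main.rs` but not `src/binary.rs`.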
/// Maps client-supplied options to Rustfmt's internals, mostly overriding
/// values in a config with values from the command line.
pub trait CliOptions {
fn apply_to(self, config: &mut Config);
fn config_path(&self) -> Option<&Path>;
}
/// The edition of the compiler (RFC 2052)
configuration_option_enum! { Edition:
Edition2015,
Edition2018,
}
impl Edition {
pub(crate) fn to_libsyntax_pos_edition(&self) -> syntax_pos::edition::Edition {
match self {
Edition::Edition2015 => syntax_pos::edition::Edition::Edition2015,
Edition::Edition2018 => syntax_pos::edition::Edition::Edition2018,
}
}
}
#[test]
fn test_newline_style_auto_detect() {
let lf = "One\nTwo\nThree";
let crlf = "One\r\nTwo\r\nThree";
let none = "One Two Three";
assert_eq!(NewlineStyle::Unix, NewlineStyle::auto_detect(lf));
assert_eq!(NewlineStyle::Windows, NewlineStyle::auto_detect(crlf));
assert_eq!(NewlineStyle::Native, NewlineStyle::auto_detect(none));
}
#[test]
fn test_newline_style_auto_apply() {
let auto = NewlineStyle::Auto;
let formatted_text = "One\nTwo\nThree";
let raw_input_text = "One\nTwo\nThree";
let mut out = String::from(formatted_text);
auto.apply(&mut out, raw_input_text);
assert_eq!("One\nTwo\nThree", &out, "auto should detect 'lf'");
let formatted_text = "One\nTwo\nThree";
let raw_input_text = "One\r\nTwo\r\nThree";
let mut out = String::from(formatted_text);
auto.apply(&mut out, raw_input_text);
assert_eq!("One\r\nTwo\r\nThree", &out, "auto should detect 'crlf'");
#[cfg(not(windows))]
{
let formatted_text = "One\nTwo\nThree";
let raw_input_text = "One Two Three";
let mut out = String::from(formatted_text);
auto.apply(&mut out, raw_input_text);
assert_eq!(
"One\nTwo\nThree", &out,
"auto-native-unix should detect 'lf'"
);
}
#[cfg(windows)]
{
let formatted_text = "One\nTwo\nThree";
let raw_input_text = "One Two Three";
let mut out = String::from(formatted_text);
auto.apply(&mut out, raw_input_text);
assert_eq!(
"One\r\nTwo\r\nThree", &out,
"auto-native-windows should detect 'crlf'"
);
}
}
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/config/config_type.rs
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use config::file_lines::FileLines;
use config::options::{IgnoreList, WidthHeuristics};
/// Trait for types that can be used in `Config`.
pub trait ConfigType: Sized {
/// Returns hint text for use in `Config::print_docs()`. For enum types, this is a
/// pipe-separated list of variants; for other types it returns "<type>".
fn doc_hint() -> String;
}
impl ConfigType for bool {
fn doc_hint() -> String {
String::from("<boolean>")
}
}
impl ConfigType for usize {
fn doc_hint() -> String {
String::from("<unsigned integer>")
}
}
impl ConfigType for isize {
fn doc_hint() -> String {
String::from("<signed integer>")
}
}
impl ConfigType for String {
fn doc_hint() -> String {
String::from("<string>")
}
}
impl ConfigType for FileLines {
fn doc_hint() -> String {
String::from("<json>")
}
}
impl ConfigType for WidthHeuristics {
fn doc_hint() -> String {
String::new()
}
}
impl ConfigType for IgnoreList {
fn doc_hint() -> String {
String::from("[<string>,..]")
}
}
/// Check if we're in a nightly build.
///
/// The environment variable `CFG_RELEASE_CHANNEL` is set during the rustc bootstrap
/// to "stable", "beta", or "nightly" depending on what toolchain is being built.
/// If we are being built as part of the stable or beta toolchains, we want
/// to disable unstable configuration options.
///
/// If we're being built by cargo (e.g. `cargo +nightly install rustfmt-nightly`),
/// `CFG_RELEASE_CHANNEL` is not set. As we only support being built against the
/// nightly compiler when installed from crates.io, default to nightly mode.
macro_rules! is_nightly_channel {
() => {
option_env!("CFG_RELEASE_CHANNEL")
.map(|c| c == "nightly" || c == "dev")
.unwrap_or(true)
};
}
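A standalone sketch of the channel test inside `is_nightly_channel!` above: an unset `CFG_RELEASE_CHANNEL` (the `cargo install` case described in the doc comment) counts as nightly.

```rust
// Mirrors the option_env! logic: None defaults to nightly mode.
fn is_nightly(channel: Option<&str>) -> bool {
    channel.map(|c| c == "nightly" || c == "dev").unwrap_or(true)
}
```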
macro_rules! create_config {
($($i:ident: $ty:ty, $def:expr, $stb:expr, $( $dstring:expr ),+ );+ $(;)*) => (
#[cfg(test)]
use std::collections::HashSet;
use std::io::Write;
#[derive(Clone)]
pub struct Config {
// if a license_template_path has been specified, successfully read, parsed and compiled
// into a regex, it will be stored here
pub license_template: Option<Regex>,
// For each config item, we store a bool indicating whether it has
// been accessed and the value, and a bool whether the option was
// manually initialised, or taken from the default,
$($i: (Cell<bool>, bool, $ty, bool)),+
}
// Just like the Config struct but with each property wrapped
// as Option<T>. This is used to parse a rustfmt.toml that doesn't
// specify all properties of `Config`.
// We first parse into `PartialConfig`, then create a default `Config`
// and overwrite the properties with corresponding values from `PartialConfig`.
#[derive(Deserialize, Serialize, Clone)]
pub struct PartialConfig {
$(pub $i: Option<$ty>),+
}
impl PartialConfig {
pub fn to_toml(&self) -> Result<String, String> {
// Non-user-facing options can't be specified in TOML
let mut cloned = self.clone();
cloned.file_lines = None;
cloned.verbose = None;
cloned.width_heuristics = None;
::toml::to_string(&cloned)
.map_err(|e| format!("Could not output config: {}", e.to_string()))
}
}
// Macro hygiene won't allow us to make `set_$i()` methods on Config
// for each item, so this struct is used to give the API to set values:
// `config.set().option(false)`. It's pretty ugly. Consider replacing
// with `config.set_option(false)` if we ever get a stable/usable
// `concat_idents!()`.
pub struct ConfigSetter<'a>(&'a mut Config);
impl<'a> ConfigSetter<'a> {
$(
pub fn $i(&mut self, value: $ty) {
(self.0).$i.2 = value;
match stringify!($i) {
"max_width" | "use_small_heuristics" => self.0.set_heuristics(),
"license_template_path" => self.0.set_license_template(),
&_ => (),
}
}
)+
}
// Query each option, returns true if the user set the option, false if
// a default was used.
pub struct ConfigWasSet<'a>(&'a Config);
impl<'a> ConfigWasSet<'a> {
$(
pub fn $i(&self) -> bool {
(self.0).$i.1
}
)+
}
impl Config {
pub(crate) fn version_meets_requirement(&self) -> bool {
if self.was_set().required_version() {
let version = env!("CARGO_PKG_VERSION");
let required_version = self.required_version();
if version != required_version {
println!(
"Error: rustfmt version ({}) doesn't match the required version ({})",
version,
required_version,
);
return false;
}
}
true
}
$(
pub fn $i(&self) -> $ty {
self.$i.0.set(true);
self.$i.2.clone()
}
)+
pub fn set<'a>(&'a mut self) -> ConfigSetter<'a> {
ConfigSetter(self)
}
pub fn was_set<'a>(&'a self) -> ConfigWasSet<'a> {
ConfigWasSet(self)
}
fn fill_from_parsed_config(mut self, parsed: PartialConfig, dir: &Path) -> Config {
$(
if let Some(val) = parsed.$i {
if self.$i.3 {
self.$i.1 = true;
self.$i.2 = val;
} else {
if is_nightly_channel!() {
self.$i.1 = true;
self.$i.2 = val;
} else {
eprintln!("Warning: can't set `{} = {:?}`, unstable features are only \
available in nightly channel.", stringify!($i), val);
}
}
}
)+
self.set_heuristics();
self.set_license_template();
self.set_ignore(dir);
self
}
/// Returns a hash set initialized with every user-facing config option name.
#[cfg(test)]
pub(crate) fn hash_set() -> HashSet<String> {
let mut hash_set = HashSet::new();
$(
hash_set.insert(stringify!($i).to_owned());
)+
hash_set
}
pub(crate) fn is_valid_name(name: &str) -> bool {
match name {
$(
stringify!($i) => true,
)+
_ => false,
}
}
pub(crate) fn from_toml(toml: &str, dir: &Path) -> Result<Config, String> {
let parsed: ::toml::Value =
toml.parse().map_err(|e| format!("Could not parse TOML: {}", e))?;
let mut err: String = String::new();
{
let table = parsed
.as_table()
.ok_or(String::from("Parsed config was not table"))?;
for key in table.keys() {
if !Config::is_valid_name(key) {
let msg = &format!("Warning: Unknown configuration option `{}`\n", key);
err.push_str(msg)
}
}
}
match parsed.try_into() {
Ok(parsed_config) => {
if !err.is_empty() {
eprint!("{}", err);
}
                        Ok(Config::default().fill_from_parsed_config(parsed_config, dir))
}
Err(e) => {
err.push_str("Error: Decoding config file failed:\n");
err.push_str(format!("{}\n", e).as_str());
err.push_str("Please check your config file.");
Err(err)
}
}
}
pub fn used_options(&self) -> PartialConfig {
PartialConfig {
$(
$i: if self.$i.0.get() {
Some(self.$i.2.clone())
} else {
None
},
)+
}
}
pub fn all_options(&self) -> PartialConfig {
PartialConfig {
$(
$i: Some(self.$i.2.clone()),
)+
}
}
pub fn override_value(&mut self, key: &str, val: &str)
{
match key {
$(
stringify!($i) => {
self.$i.1 = true;
self.$i.2 = val.parse::<$ty>()
.expect(&format!("Failed to parse override for {} (\"{}\") as a {}",
stringify!($i),
val,
stringify!($ty)));
}
)+
_ => panic!("Unknown config key in override: {}", key)
}
match key {
"max_width" | "use_small_heuristics" => self.set_heuristics(),
"license_template_path" => self.set_license_template(),
&_ => (),
}
}
/// Construct a `Config` from the toml file specified at `file_path`.
///
/// This method only looks at the provided path, for a method that
/// searches parents for a `rustfmt.toml` see `from_resolved_toml_path`.
///
/// Return a `Config` if the config could be read and parsed from
/// the file, Error otherwise.
pub(super) fn from_toml_path(file_path: &Path) -> Result<Config, Error> {
let mut file = File::open(&file_path)?;
let mut toml = String::new();
file.read_to_string(&mut toml)?;
Config::from_toml(&toml, file_path.parent().unwrap())
.map_err(|err| Error::new(ErrorKind::InvalidData, err))
}
/// Resolve the config for input in `dir`.
///
/// Searches for `rustfmt.toml` beginning with `dir`, and
/// recursively checking parents of `dir` if no config file is found.
/// If no config file exists in `dir` or in any parent, a
/// default `Config` will be returned (and the returned path will be empty).
///
/// Returns the `Config` to use, and the path of the project file if there was
/// one.
pub(super) fn from_resolved_toml_path(
dir: &Path,
) -> Result<(Config, Option<PathBuf>), Error> {
/// Try to find a project file in the given directory and its parents.
                /// Returns the path of the nearest project file if one exists,
/// or `None` if no project file was found.
fn resolve_project_file(dir: &Path) -> Result<Option<PathBuf>, Error> {
let mut current = if dir.is_relative() {
env::current_dir()?.join(dir)
} else {
dir.to_path_buf()
};
current = fs::canonicalize(current)?;
loop {
                        match get_toml_path(&current) {
Ok(Some(path)) => return Ok(Some(path)),
Err(e) => return Err(e),
_ => ()
}
// If the current directory has no parent, we're done searching.
if !current.pop() {
return Ok(None);
}
}
}
match resolve_project_file(dir)? {
None => Ok((Config::default(), None)),
Some(path) => Config::from_toml_path(&path).map(|config| (config, Some(path))),
}
}
pub fn is_hidden_option(name: &str) -> bool {
const HIDE_OPTIONS: [&str; 4] =
["verbose", "verbose_diff", "file_lines", "width_heuristics"];
HIDE_OPTIONS.contains(&name)
}
pub fn print_docs(out: &mut Write, include_unstable: bool) {
use std::cmp;
let max = 0;
$( let max = cmp::max(max, stringify!($i).len()+1); )+
let mut space_str = String::with_capacity(max);
for _ in 0..max {
space_str.push(' ');
}
writeln!(out, "Configuration Options:").unwrap();
$(
if $stb || include_unstable {
let name_raw = stringify!($i);
if !Config::is_hidden_option(name_raw) {
let mut name_out = String::with_capacity(max);
for _ in name_raw.len()..max-1 {
name_out.push(' ')
}
name_out.push_str(name_raw);
name_out.push(' ');
writeln!(out,
"{}{} Default: {:?}{}",
name_out,
<$ty>::doc_hint(),
$def,
if !$stb { " (unstable)" } else { "" }).unwrap();
$(
writeln!(out, "{}{}", space_str, $dstring).unwrap();
)+
writeln!(out).unwrap();
}
}
)+
}
fn set_heuristics(&mut self) {
if self.use_small_heuristics.2 == Heuristics::Default {
let max_width = self.max_width.2;
self.set().width_heuristics(WidthHeuristics::scaled(max_width));
} else if self.use_small_heuristics.2 == Heuristics::Max {
let max_width = self.max_width.2;
self.set().width_heuristics(WidthHeuristics::set(max_width));
} else {
self.set().width_heuristics(WidthHeuristics::null());
}
}
fn set_license_template(&mut self) {
if self.was_set().license_template_path() {
let lt_path = self.license_template_path();
                    match license::load_and_compile_template(&lt_path) {
Ok(re) => self.license_template = Some(re),
Err(msg) => eprintln!("Warning for license template file {:?}: {}",
lt_path, msg),
}
}
}
fn set_ignore(&mut self, dir: &Path) {
self.ignore.2.add_prefix(dir);
}
            /// Returns true if the config key was explicitly set and is the default value.
pub fn is_default(&self, key: &str) -> bool {
$(
if let stringify!($i) = key {
return self.$i.1 && self.$i.2 == $def;
}
)+
false
}
}
// Template for the default configuration
impl Default for Config {
fn default() -> Config {
Config {
license_template: None,
$(
$i: (Cell::new(false), false, $def, $stb),
)+
}
}
}
)
}
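A standalone sketch of the parent-directory walk in `from_resolved_toml_path` above: starting from `dir`, each ancestor is probed until the root is reached. The caller-supplied `exists` closure is a stand-in for `get_toml_path`'s filesystem lookup, so the logic is testable without touching the disk.

```rust
use std::path::{Path, PathBuf};

// Walk from `dir` up through its ancestors, returning the first
// directory for which `exists` reports a config file.
fn resolve_upward(dir: &Path, exists: &dyn Fn(&Path) -> bool) -> Option<PathBuf> {
    let mut current = dir.to_path_buf();
    loop {
        if exists(&current) {
            return Some(current);
        }
        // Path::pop returns false once there is no parent left.
        if !current.pop() {
            return None;
        }
    }
}
```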
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/config/mod.rs
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use regex::Regex;
use std::cell::Cell;
use std::default::Default;
use std::fs::File;
use std::io::{Error, ErrorKind, Read};
use std::path::{Path, PathBuf};
use std::{env, fs};
use config::config_type::ConfigType;
pub use config::file_lines::{FileLines, FileName, Range};
pub use config::lists::*;
pub use config::options::*;
#[macro_use]
pub mod config_type;
#[macro_use]
pub mod options;
pub mod file_lines;
pub mod license;
pub mod lists;
/// This macro defines configuration options used in rustfmt. Each option
/// is defined as follows:
///
/// `name: value type, default value, is stable, description;`
create_config! {
// Fundamental stuff
max_width: usize, 100, true, "Maximum width of each line";
hard_tabs: bool, false, true, "Use tab characters for indentation, spaces for alignment";
tab_spaces: usize, 4, true, "Number of spaces per tab";
newline_style: NewlineStyle, NewlineStyle::Auto, true, "Unix or Windows line endings";
use_small_heuristics: Heuristics, Heuristics::Default, true, "Whether to use different \
formatting for items and expressions if they satisfy a heuristic notion of 'small'";
    indent_style: IndentStyle, IndentStyle::Block, false, "How to indent expressions and items";
    // Comments, macros, and strings
wrap_comments: bool, false, false, "Break comments to fit on the line";
comment_width: usize, 80, false,
"Maximum length of comments. No effect unless wrap_comments = true";
normalize_comments: bool, false, false, "Convert /* */ comments to // comments where possible";
license_template_path: String, String::default(), false,
"Beginning of file must match license template";
format_strings: bool, false, false, "Format string literals where necessary";
format_macro_matchers: bool, false, false,
"Format the metavariable matching patterns in macros";
format_macro_bodies: bool, true, false, "Format the bodies of macros";
// Single line expressions and items
empty_item_single_line: bool, true, false,
"Put empty-body functions and impls on a single line";
struct_lit_single_line: bool, true, false,
"Put small struct literals on a single line";
fn_single_line: bool, false, false, "Put single-expression functions on a single line";
where_single_line: bool, false, false, "Force where clauses to be on a single line";
// Imports
imports_indent: IndentStyle, IndentStyle::Block, false, "Indent of imports";
    imports_layout: ListTactic, ListTactic::Mixed, false, "Item layout inside an import block";
merge_imports: bool, false, false, "Merge imports";
// Ordering
reorder_imports: bool, true, true, "Reorder import and extern crate statements alphabetically";
reorder_modules: bool, true, true, "Reorder module statements alphabetically in group";
reorder_impl_items: bool, false, false, "Reorder impl items";
// Spaces around punctuation
type_punctuation_density: TypeDensity, TypeDensity::Wide, false,
"Determines if '+' or '=' are wrapped in spaces in the punctuation of types";
space_before_colon: bool, false, false, "Leave a space before the colon";
space_after_colon: bool, true, false, "Leave a space after the colon";
spaces_around_ranges: bool, false, false, "Put spaces around the .. and ..= range operators";
binop_separator: SeparatorPlace, SeparatorPlace::Front, false,
"Where to put a binary operator when a binary expression goes multiline";
// Misc.
remove_nested_parens: bool, true, true, "Remove nested parens";
combine_control_expr: bool, true, false, "Combine control expressions with function calls";
    struct_field_align_threshold: usize, 0, false, "Align struct fields if their diffs fit within \
        threshold";
match_arm_blocks: bool, true, false, "Wrap the body of arms in blocks when it does not fit on \
the same line with the pattern of arms";
force_multiline_blocks: bool, false, false,
"Force multiline closure bodies and match arms to be wrapped in a block";
fn_args_density: Density, Density::Tall, false, "Argument density in functions";
brace_style: BraceStyle, BraceStyle::SameLineWhere, false, "Brace style for items";
control_brace_style: ControlBraceStyle, ControlBraceStyle::AlwaysSameLine, false,
"Brace style for control flow constructs";
trailing_semicolon: bool, true, false,
"Add trailing semicolon after break, continue and return";
trailing_comma: SeparatorTactic, SeparatorTactic::Vertical, false,
"How to handle trailing commas for lists";
match_block_trailing_comma: bool, false, false,
"Put a trailing comma after a block based match arm (non-block arms are not affected)";
blank_lines_upper_bound: usize, 1, false,
"Maximum number of blank lines which can be put between items";
blank_lines_lower_bound: usize, 0, false,
"Minimum number of blank lines which must be put between items";
edition: Edition, Edition::Edition2015, false, "The edition of the parser (RFC 2052)";
// Options that can change the source code beyond whitespace/blocks (somewhat linty things)
merge_derives: bool, true, true, "Merge multiple `#[derive(...)]` into a single one";
use_try_shorthand: bool, false, true, "Replace uses of the try! macro by the ? shorthand";
use_field_init_shorthand: bool, false, true, "Use field initialization shorthand if possible";
force_explicit_abi: bool, true, true, "Always print the abi for extern items";
condense_wildcard_suffixes: bool, false, false, "Replace strings of _ wildcards by a single .. \
in tuple patterns";
// Control options (changes the operation of rustfmt, rather than the formatting)
color: Color, Color::Auto, false,
"What Color option to use when none is supplied: Always, Never, Auto";
required_version: String, env!("CARGO_PKG_VERSION").to_owned(), false,
"Require a specific version of rustfmt";
unstable_features: bool, false, false,
"Enables unstable features. Only available on nightly channel";
disable_all_formatting: bool, false, false, "Don't reformat anything";
skip_children: bool, false, false, "Don't reformat out of line modules";
hide_parse_errors: bool, false, false, "Hide errors from the parser";
error_on_line_overflow: bool, false, false, "Error if unable to get all lines within max_width";
error_on_unformatted: bool, false, false,
"Error if unable to get comments or string literals within max_width, \
or they are left with trailing whitespaces";
report_todo: ReportTactic, ReportTactic::Never, false,
"Report all, none or unnumbered occurrences of TODO in source file comments";
report_fixme: ReportTactic, ReportTactic::Never, false,
"Report all, none or unnumbered occurrences of FIXME in source file comments";
ignore: IgnoreList, IgnoreList::default(), false,
"Skip formatting the specified files and directories";
// Not user-facing
    verbose: Verbosity, Verbosity::Normal, false, "How much information to emit to the user";
file_lines: FileLines, FileLines::all(), false,
"Lines to format; this is not supported in rustfmt.toml, and can only be specified \
via the --file-lines option";
width_heuristics: WidthHeuristics, WidthHeuristics::scaled(100), false,
"'small' heuristic values";
emit_mode: EmitMode, EmitMode::Files, false,
"What emit Mode to use when none is supplied";
make_backup: bool, false, false, "Backup changed files";
}
/// Load a config by checking the client-supplied options and if appropriate, the
/// file system (including searching the file system for overrides).
pub fn load_config<O: CliOptions>(
file_path: Option<&Path>,
options: Option<O>,
) -> Result<(Config, Option<PathBuf>), Error> {
let over_ride = match options {
Some(ref opts) => config_path(opts)?,
None => None,
};
let result = if let Some(over_ride) = over_ride {
Config::from_toml_path(over_ride.as_ref()).map(|p| (p, Some(over_ride.to_owned())))
} else if let Some(file_path) = file_path {
Config::from_resolved_toml_path(file_path)
} else {
Ok((Config::default(), None))
};
result.map(|(mut c, p)| {
if let Some(options) = options {
options.apply_to(&mut c);
}
(c, p)
})
}
// Check for the presence of known config file names (`rustfmt.toml`, `.rustfmt.toml`) in `dir`.
//
// Returns the path if a config file exists, `None` if no file exists, and an `Error` for IO errors.
fn get_toml_path(dir: &Path) -> Result<Option<PathBuf>, Error> {
const CONFIG_FILE_NAMES: [&str; 2] = [".rustfmt.toml", "rustfmt.toml"];
for config_file_name in &CONFIG_FILE_NAMES {
let config_file = dir.join(config_file_name);
match fs::metadata(&config_file) {
// Only return if it's a file to handle the unlikely situation of a directory named
// `rustfmt.toml`.
Ok(ref md) if md.is_file() => return Ok(Some(config_file)),
// Return the error if it's something other than `NotFound`; otherwise we didn't
// find the project file yet, and continue searching.
Err(e) => {
if e.kind() != ErrorKind::NotFound {
return Err(e);
}
}
_ => {}
}
}
Ok(None)
}
fn config_path(options: &CliOptions) -> Result<Option<PathBuf>, Error> {
let config_path_not_found = |path: &str| -> Result<Option<PathBuf>, Error> {
Err(Error::new(
ErrorKind::NotFound,
format!(
"Error: unable to find a config file for the given path: `{}`",
path
),
))
};
// Read `config_path`: if a directory is given, search it for a config file.
// If a config file cannot be found from the given path, return an error.
match options.config_path() {
Some(path) if !path.exists() => config_path_not_found(path.to_str().unwrap()),
Some(path) if path.is_dir() => {
let config_file_path = get_toml_path(path)?;
if config_file_path.is_some() {
Ok(config_file_path)
} else {
config_path_not_found(path.to_str().unwrap())
}
}
path => Ok(path.map(|p| p.to_owned())),
}
}
#[cfg(test)]
mod test {
use super::*;
use std::str;
#[allow(dead_code)]
mod mock {
use super::super::*;
create_config! {
// Options that are used by the generated functions
max_width: usize, 100, true, "Maximum width of each line";
use_small_heuristics: Heuristics, Heuristics::Default, true,
"Whether to use different formatting for items and \
expressions if they satisfy a heuristic notion of 'small'.";
license_template_path: String, String::default(), false,
"Beginning of file must match license template";
required_version: String, env!("CARGO_PKG_VERSION").to_owned(), false,
"Require a specific version of rustfmt.";
ignore: IgnoreList, IgnoreList::default(), false,
"Skip formatting the specified files and directories.";
verbose: Verbosity, Verbosity::Normal, false,
"How much to information to emit to the user";
file_lines: FileLines, FileLines::all(), false,
"Lines to format; this is not supported in rustfmt.toml, and can only be specified \
via the --file-lines option";
width_heuristics: WidthHeuristics, WidthHeuristics::scaled(100), false,
"'small' heuristic values";
// Options that are used by the tests
stable_option: bool, false, true, "A stable option";
unstable_option: bool, false, false, "An unstable option";
}
}
#[test]
fn test_config_set() {
let mut config = Config::default();
config.set().verbose(Verbosity::Quiet);
assert_eq!(config.verbose(), Verbosity::Quiet);
config.set().verbose(Verbosity::Normal);
assert_eq!(config.verbose(), Verbosity::Normal);
}
#[test]
fn test_config_used_to_toml() {
let config = Config::default();
let merge_derives = config.merge_derives();
let skip_children = config.skip_children();
let used_options = config.used_options();
let toml = used_options.to_toml().unwrap();
assert_eq!(
toml,
format!(
"merge_derives = {}\nskip_children = {}\n",
merge_derives, skip_children,
)
);
}
#[test]
fn test_was_set() {
use std::path::Path;
let config = Config::from_toml("hard_tabs = true", Path::new("")).unwrap();
assert_eq!(config.was_set().hard_tabs(), true);
assert_eq!(config.was_set().verbose(), false);
}
#[test]
fn test_print_docs_exclude_unstable() {
use self::mock::Config;
let mut output = Vec::new();
Config::print_docs(&mut output, false);
let s = str::from_utf8(&output).unwrap();
assert_eq!(s.contains("stable_option"), true);
assert_eq!(s.contains("unstable_option"), false);
assert_eq!(s.contains("(unstable)"), false);
}
#[test]
fn test_print_docs_include_unstable() {
use self::mock::Config;
let mut output = Vec::new();
Config::print_docs(&mut output, true);
let s = str::from_utf8(&output).unwrap();
assert_eq!(s.contains("stable_option"), true);
assert_eq!(s.contains("unstable_option"), true);
assert_eq!(s.contains("(unstable)"), true);
}
// FIXME(#2183) these tests cannot be run in parallel because they use env vars
// #[test]
// fn test_as_not_nightly_channel() {
// let mut config = Config::default();
// assert_eq!(config.was_set().unstable_features(), false);
// config.set().unstable_features(true);
// assert_eq!(config.was_set().unstable_features(), false);
// }
// #[test]
// fn test_as_nightly_channel() {
// let v = ::std::env::var("CFG_RELEASE_CHANNEL").unwrap_or(String::from(""));
// ::std::env::set_var("CFG_RELEASE_CHANNEL", "nightly");
// let mut config = Config::default();
// config.set().unstable_features(true);
// assert_eq!(config.was_set().unstable_features(), false);
// config.set().unstable_features(true);
// assert_eq!(config.unstable_features(), true);
// ::std::env::set_var("CFG_RELEASE_CHANNEL", v);
// }
// #[test]
// fn test_unstable_from_toml() {
// let mut config = Config::from_toml("unstable_features = true").unwrap();
// assert_eq!(config.was_set().unstable_features(), false);
// let v = ::std::env::var("CFG_RELEASE_CHANNEL").unwrap_or(String::from(""));
// ::std::env::set_var("CFG_RELEASE_CHANNEL", "nightly");
// config = Config::from_toml("unstable_features = true").unwrap();
// assert_eq!(config.was_set().unstable_features(), true);
// assert_eq!(config.unstable_features(), true);
// ::std::env::set_var("CFG_RELEASE_CHANNEL", v);
// }
}
| 0
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/config/file_lines.rs
|
// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! This module contains types and functions to support formatting specific line ranges.
use std::collections::HashMap;
use std::path::PathBuf;
use std::rc::Rc;
use std::{cmp, fmt, iter, str};
use serde::de::{Deserialize, Deserializer};
use serde::ser::{self, Serialize, Serializer};
use serde_json as json;
use syntax::source_map::{self, SourceFile};
/// A range of lines in a file, inclusive of both ends.
pub struct LineRange {
pub file: Rc<SourceFile>,
pub lo: usize,
pub hi: usize,
}
/// Defines the name of an input - either a file or stdin.
#[derive(Clone, Debug, Eq, PartialEq, Hash, Ord, PartialOrd)]
pub enum FileName {
Real(PathBuf),
Stdin,
}
impl From<source_map::FileName> for FileName {
fn from(name: source_map::FileName) -> FileName {
match name {
source_map::FileName::Real(p) => FileName::Real(p),
source_map::FileName::Custom(ref f) if f == "stdin" => FileName::Stdin,
_ => unreachable!(),
}
}
}
impl fmt::Display for FileName {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
FileName::Real(p) => write!(f, "{}", p.to_str().unwrap()),
FileName::Stdin => write!(f, "stdin"),
}
}
}
impl<'de> Deserialize<'de> for FileName {
fn deserialize<D>(deserializer: D) -> Result<FileName, D::Error>
where
D: Deserializer<'de>,
{
let s = String::deserialize(deserializer)?;
if s == "stdin" {
Ok(FileName::Stdin)
} else {
Ok(FileName::Real(s.into()))
}
}
}
impl Serialize for FileName {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let s = match self {
FileName::Stdin => Ok("stdin"),
FileName::Real(path) => path
.to_str()
.ok_or_else(|| ser::Error::custom("path can't be serialized as UTF-8 string")),
};
s.and_then(|s| serializer.serialize_str(s))
}
}
impl LineRange {
pub fn file_name(&self) -> FileName {
self.file.name.clone().into()
}
}
/// A range that is inclusive of both ends.
#[derive(Clone, Copy, Debug, Eq, PartialEq, PartialOrd, Ord, Deserialize)]
pub struct Range {
lo: usize,
hi: usize,
}
impl<'a> From<&'a LineRange> for Range {
fn from(range: &'a LineRange) -> Range {
Range::new(range.lo, range.hi)
}
}
impl Range {
pub fn new(lo: usize, hi: usize) -> Range {
Range { lo, hi }
}
fn is_empty(self) -> bool {
self.lo > self.hi
}
#[allow(dead_code)]
fn contains(self, other: Range) -> bool {
if other.is_empty() {
true
} else {
!self.is_empty() && self.lo <= other.lo && self.hi >= other.hi
}
}
fn intersects(self, other: Range) -> bool {
if self.is_empty() || other.is_empty() {
false
} else {
(self.lo <= other.hi && other.hi <= self.hi)
|| (other.lo <= self.hi && self.hi <= other.hi)
}
}
fn adjacent_to(self, other: Range) -> bool {
if self.is_empty() || other.is_empty() {
false
} else {
self.hi + 1 == other.lo || other.hi + 1 == self.lo
}
}
/// Returns a new `Range` with lines from `self` and `other` if they were adjacent or
/// intersect; returns `None` otherwise.
fn merge(self, other: Range) -> Option<Range> {
if self.adjacent_to(other) || self.intersects(other) {
Some(Range::new(
cmp::min(self.lo, other.lo),
cmp::max(self.hi, other.hi),
))
} else {
None
}
}
}
/// A set of lines in files.
///
/// It is represented as a multimap keyed on file names, whose values are collections of
/// non-overlapping ranges sorted by their start point. An inner `None` is interpreted to mean all
/// lines in all files.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct FileLines(Option<HashMap<FileName, Vec<Range>>>);
/// Normalizes the ranges so that the invariants for `FileLines` hold: ranges are non-overlapping,
/// and ordered by their start point.
fn normalize_ranges(ranges: &mut HashMap<FileName, Vec<Range>>) {
for ranges in ranges.values_mut() {
ranges.sort();
let mut result = vec![];
{
let mut iter = ranges.into_iter().peekable();
while let Some(next) = iter.next() {
let mut next = *next;
while let Some(&&mut peek) = iter.peek() {
if let Some(merged) = next.merge(peek) {
iter.next().unwrap();
next = merged;
} else {
break;
}
}
result.push(next)
}
}
*ranges = result;
}
}
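The merge loop in `normalize_ranges` above is obscured by the nested peekable iterator. A minimal standalone sketch of the same invariant (sorted, with overlapping or adjacent ranges merged), using plain `(lo, hi)` tuples so it runs on its own; note the real code also handles empty ranges via `Range::is_empty`, which this sketch omits:

```rust
// Sketch of normalize_ranges: sort, then fold adjacent/overlapping ranges together.
fn normalize(mut ranges: Vec<(usize, usize)>) -> Vec<(usize, usize)> {
    ranges.sort();
    let mut result: Vec<(usize, usize)> = vec![];
    for (lo, hi) in ranges {
        match result.last_mut() {
            // Merge when the new range overlaps or is adjacent to the previous one.
            Some(last) if lo <= last.1 + 1 => last.1 = last.1.max(hi),
            _ => result.push((lo, hi)),
        }
    }
    result
}

fn main() {
    // (1..=3) and (4..=7) are adjacent, so they collapse into a single range.
    assert_eq!(normalize(vec![(4, 7), (1, 3)]), vec![(1, 7)]);
    // Disjoint ranges stay separate and come out sorted by start point.
    assert_eq!(normalize(vec![(9, 10), (1, 2)]), vec![(1, 2), (9, 10)]);
}
```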
impl FileLines {
/// Creates a `FileLines` that contains all lines in all files.
pub(crate) fn all() -> FileLines {
FileLines(None)
}
/// Returns true if this `FileLines` contains all lines in all files.
pub(crate) fn is_all(&self) -> bool {
self.0.is_none()
}
pub fn from_ranges(mut ranges: HashMap<FileName, Vec<Range>>) -> FileLines {
normalize_ranges(&mut ranges);
FileLines(Some(ranges))
}
/// Returns an iterator over the files contained in `self`.
pub fn files(&self) -> Files {
Files(self.0.as_ref().map(|m| m.keys()))
}
/// Returns JSON representation as accepted by the `--file-lines JSON` arg.
pub fn to_json_spans(&self) -> Vec<JsonSpan> {
match &self.0 {
None => vec![],
Some(file_ranges) => file_ranges
.iter()
.flat_map(|(file, ranges)| ranges.iter().map(move |r| (file, r)))
.map(|(file, range)| JsonSpan {
file: file.to_owned(),
range: (range.lo, range.hi),
}).collect(),
}
}
/// Returns true if `self` includes all lines in all files. Otherwise runs `f` on all ranges in
/// the designated file (if any) and returns true if `f` returns true for any of them.
fn file_range_matches<F>(&self, file_name: &FileName, f: F) -> bool
where
F: FnMut(&Range) -> bool,
{
let map = match self.0 {
// `None` means "all lines in all files".
None => return true,
Some(ref map) => map,
};
match canonicalize_path_string(file_name).and_then(|file| map.get(&file)) {
Some(ranges) => ranges.iter().any(f),
None => false,
}
}
/// Returns true if `range` is fully contained in `self`.
#[allow(dead_code)]
pub(crate) fn contains(&self, range: &LineRange) -> bool {
self.file_range_matches(&range.file_name(), |r| r.contains(Range::from(range)))
}
/// Returns true if any lines in `range` are in `self`.
pub(crate) fn intersects(&self, range: &LineRange) -> bool {
self.file_range_matches(&range.file_name(), |r| r.intersects(Range::from(range)))
}
/// Returns true if `line` from `file_name` is in `self`.
pub(crate) fn contains_line(&self, file_name: &FileName, line: usize) -> bool {
self.file_range_matches(file_name, |r| r.lo <= line && r.hi >= line)
}
/// Returns true if all the lines between `lo` and `hi` from `file_name` are in `self`.
pub(crate) fn contains_range(&self, file_name: &FileName, lo: usize, hi: usize) -> bool {
self.file_range_matches(file_name, |r| r.contains(Range::new(lo, hi)))
}
}
/// `FileLines` files iterator.
pub struct Files<'a>(Option<::std::collections::hash_map::Keys<'a, FileName, Vec<Range>>>);
impl<'a> iter::Iterator for Files<'a> {
type Item = &'a FileName;
fn next(&mut self) -> Option<&'a FileName> {
self.0.as_mut().and_then(Iterator::next)
}
}
fn canonicalize_path_string(file: &FileName) -> Option<FileName> {
match *file {
FileName::Real(ref path) => path.canonicalize().ok().map(FileName::Real),
_ => Some(file.clone()),
}
}
// This impl is needed for `Config::override_value` to work for use in tests.
impl str::FromStr for FileLines {
type Err = String;
fn from_str(s: &str) -> Result<FileLines, String> {
let v: Vec<JsonSpan> = json::from_str(s).map_err(|e| e.to_string())?;
let mut m = HashMap::new();
for js in v {
let (s, r) = JsonSpan::into_tuple(js)?;
m.entry(s).or_insert_with(|| vec![]).push(r);
}
Ok(FileLines::from_ranges(m))
}
}
// For JSON decoding.
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Debug, Deserialize, Serialize)]
pub struct JsonSpan {
file: FileName,
range: (usize, usize),
}
impl JsonSpan {
fn into_tuple(self) -> Result<(FileName, Range), String> {
let (lo, hi) = self.range;
let canonical = canonicalize_path_string(&self.file)
.ok_or_else(|| format!("Can't canonicalize {}", &self.file))?;
Ok((canonical, Range::new(lo, hi)))
}
}
// This impl is needed for inclusion in the `Config` struct. We don't have a toml representation
// for `FileLines`, so it will just panic instead.
impl<'de> ::serde::de::Deserialize<'de> for FileLines {
fn deserialize<D>(_: D) -> Result<Self, D::Error>
where
D: ::serde::de::Deserializer<'de>,
{
panic!(
"FileLines cannot be deserialized from a project rustfmt.toml file: please \
specify it via the `--file-lines` option instead"
);
}
}
// We also want to avoid attempting to serialize a FileLines to toml. The
// `Config` struct should ensure this impl is never reached.
impl ::serde::ser::Serialize for FileLines {
fn serialize<S>(&self, _: S) -> Result<S::Ok, S::Error>
where
S: ::serde::ser::Serializer,
{
unreachable!("FileLines cannot be serialized. This is a rustfmt bug.");
}
}
#[cfg(test)]
mod test {
use super::Range;
#[test]
fn test_range_intersects() {
assert!(Range::new(1, 2).intersects(Range::new(1, 1)));
assert!(Range::new(1, 2).intersects(Range::new(2, 2)));
assert!(!Range::new(1, 2).intersects(Range::new(0, 0)));
assert!(!Range::new(1, 2).intersects(Range::new(3, 10)));
assert!(!Range::new(1, 3).intersects(Range::new(5, 5)));
}
#[test]
fn test_range_adjacent_to() {
assert!(!Range::new(1, 2).adjacent_to(Range::new(1, 1)));
assert!(!Range::new(1, 2).adjacent_to(Range::new(2, 2)));
assert!(Range::new(1, 2).adjacent_to(Range::new(0, 0)));
assert!(Range::new(1, 2).adjacent_to(Range::new(3, 10)));
assert!(!Range::new(1, 3).adjacent_to(Range::new(5, 5)));
}
#[test]
fn test_range_contains() {
assert!(Range::new(1, 2).contains(Range::new(1, 1)));
assert!(Range::new(1, 2).contains(Range::new(2, 2)));
assert!(!Range::new(1, 2).contains(Range::new(0, 0)));
assert!(!Range::new(1, 2).contains(Range::new(3, 10)));
}
#[test]
fn test_range_merge() {
assert_eq!(None, Range::new(1, 3).merge(Range::new(5, 5)));
assert_eq!(None, Range::new(4, 7).merge(Range::new(0, 1)));
assert_eq!(
Some(Range::new(3, 7)),
Range::new(3, 5).merge(Range::new(4, 7))
);
assert_eq!(
Some(Range::new(3, 7)),
Range::new(3, 5).merge(Range::new(5, 7))
);
assert_eq!(
Some(Range::new(3, 7)),
Range::new(3, 5).merge(Range::new(6, 7))
);
assert_eq!(
Some(Range::new(3, 7)),
Range::new(3, 7).merge(Range::new(4, 5))
);
}
use super::json::{self, json};
use super::{FileLines, FileName};
use std::{collections::HashMap, path::PathBuf};
#[test]
fn file_lines_to_json() {
let ranges: HashMap<FileName, Vec<Range>> = [
(
FileName::Real(PathBuf::from("src/main.rs")),
vec![Range::new(1, 3), Range::new(5, 7)],
),
(
FileName::Real(PathBuf::from("src/lib.rs")),
vec![Range::new(1, 7)],
),
]
.iter()
.cloned()
.collect();
let file_lines = FileLines::from_ranges(ranges);
let mut spans = file_lines.to_json_spans();
spans.sort();
let json = json::to_value(&spans).unwrap();
assert_eq!(
json,
json! {[
{"file": "src/lib.rs", "range": [1, 7]},
{"file": "src/main.rs", "range": [1, 3]},
{"file": "src/main.rs", "range": [5, 7]},
]}
);
}
}
| 0
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/config/license.rs
|
use std::fmt;
use std::fs::File;
use std::io;
use std::io::Read;
use regex;
use regex::Regex;
#[derive(Debug)]
pub enum LicenseError {
IO(io::Error),
Regex(regex::Error),
Parse(String),
}
impl fmt::Display for LicenseError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
LicenseError::IO(ref err) => err.fmt(f),
LicenseError::Regex(ref err) => err.fmt(f),
LicenseError::Parse(ref err) => write!(f, "parsing failed, {}", err),
}
}
}
impl From<io::Error> for LicenseError {
fn from(err: io::Error) -> LicenseError {
LicenseError::IO(err)
}
}
impl From<regex::Error> for LicenseError {
fn from(err: regex::Error) -> LicenseError {
LicenseError::Regex(err)
}
}
// the template is parsed using a state machine
enum ParsingState {
Lit,
LitEsc,
// the u32 keeps track of brace nesting
Re(u32),
ReEsc(u32),
Abort(String),
}
use self::ParsingState::*;
pub struct TemplateParser {
parsed: String,
buffer: String,
state: ParsingState,
linum: u32,
open_brace_line: u32,
}
impl TemplateParser {
fn new() -> Self {
Self {
parsed: "^".to_owned(),
buffer: String::new(),
state: Lit,
linum: 1,
// keeps track of last line on which a regex placeholder was started
open_brace_line: 0,
}
}
/// Convert a license template into a string which can be turned into a regex.
///
/// The license template could use regex syntax directly, but that would require a lot of manual
/// escaping, which is inconvenient. It is therefore literal by default, with optional regex
/// subparts delimited by `{` and `}`. Additionally:
///
/// - to insert literal `{`, `}` or `\`, escape it with `\`
/// - an empty regex placeholder (`{}`) is shorthand for `{.*?}`
///
/// This function parses this input format and builds a properly escaped *string* representation
/// of the equivalent regular expression. It **does not** however guarantee that the returned
/// string is a syntactically valid regular expression.
///
/// # Examples
///
/// ```ignore
/// assert_eq!(
/// TemplateParser::parse(
/// r"
/// // Copyright {\d+} The \} Rust \\ Project \{ Developers. See the {([A-Z]+)}
/// // file at the top-level directory of this distribution and at
/// // {}.
/// //
/// // Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
/// // http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
/// // <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
/// // option. This file may not be copied, modified, or distributed
/// // except according to those terms.
/// "
/// ).unwrap(),
/// r"^
/// // Copyright \d+ The \} Rust \\ Project \{ Developers\. See the ([A-Z]+)
/// // file at the top\-level directory of this distribution and at
/// // .*?\.
/// //
/// // Licensed under the Apache License, Version 2\.0 <LICENSE\-APACHE or
/// // http://www\.apache\.org/licenses/LICENSE\-2\.0> or the MIT license
/// // <LICENSE\-MIT or http://opensource\.org/licenses/MIT>, at your
/// // option\. This file may not be copied, modified, or distributed
/// // except according to those terms\.
/// "
/// );
/// ```
pub fn parse(template: &str) -> Result<String, LicenseError> {
let mut parser = Self::new();
for chr in template.chars() {
if chr == '\n' {
parser.linum += 1;
}
parser.state = match parser.state {
Lit => parser.trans_from_lit(chr),
LitEsc => parser.trans_from_litesc(chr),
Re(brace_nesting) => parser.trans_from_re(chr, brace_nesting),
ReEsc(brace_nesting) => parser.trans_from_reesc(chr, brace_nesting),
Abort(msg) => return Err(LicenseError::Parse(msg)),
};
}
// check if we've ended parsing in a valid state
match parser.state {
Abort(msg) => return Err(LicenseError::Parse(msg)),
Re(_) | ReEsc(_) => {
return Err(LicenseError::Parse(format!(
"escape or balance opening brace on l. {}",
parser.open_brace_line
)));
}
LitEsc => {
return Err(LicenseError::Parse(format!(
"incomplete escape sequence on l. {}",
parser.linum
)))
}
_ => (),
}
parser.parsed.push_str(&regex::escape(&parser.buffer));
Ok(parser.parsed)
}
fn trans_from_lit(&mut self, chr: char) -> ParsingState {
match chr {
'{' => {
self.parsed.push_str(&regex::escape(&self.buffer));
self.buffer.clear();
self.open_brace_line = self.linum;
Re(1)
}
'}' => Abort(format!(
"escape or balance closing brace on l. {}",
self.linum
)),
'\\' => LitEsc,
_ => {
self.buffer.push(chr);
Lit
}
}
}
fn trans_from_litesc(&mut self, chr: char) -> ParsingState {
self.buffer.push(chr);
Lit
}
fn trans_from_re(&mut self, chr: char, brace_nesting: u32) -> ParsingState {
match chr {
'{' => {
self.buffer.push(chr);
Re(brace_nesting + 1)
}
'}' => {
match brace_nesting {
1 => {
// default regex for empty placeholder {}
if self.buffer.is_empty() {
self.parsed.push_str(".*?");
} else {
self.parsed.push_str(&self.buffer);
}
self.buffer.clear();
Lit
}
_ => {
self.buffer.push(chr);
Re(brace_nesting - 1)
}
}
}
'\\' => {
self.buffer.push(chr);
ReEsc(brace_nesting)
}
_ => {
self.buffer.push(chr);
Re(brace_nesting)
}
}
}
fn trans_from_reesc(&mut self, chr: char, brace_nesting: u32) -> ParsingState {
self.buffer.push(chr);
Re(brace_nesting)
}
}
pub fn load_and_compile_template(path: &str) -> Result<Regex, LicenseError> {
let mut lt_file = File::open(&path)?;
let mut lt_str = String::new();
lt_file.read_to_string(&mut lt_str)?;
let lt_parsed = TemplateParser::parse(&lt_str)?;
Ok(Regex::new(&lt_parsed)?)
}
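The parser above keeps literal template text self-matching by running it through `regex::escape`. A stdlib-only sketch of that effect (the metacharacter set below is an assumption mirroring what the `regex` crate escapes; the real code just calls `regex::escape`):

```rust
// Sketch: backslash-escape regex metacharacters so literal text matches itself.
fn escape_literal(s: &str) -> String {
    let mut out = String::new();
    for c in s.chars() {
        // Assumed metacharacter set, modeled on the `regex` crate's escaping.
        if "\\.+*?()|[]{}^$#&-~".contains(c) {
            out.push('\\');
        }
        out.push(c);
    }
    out
}

fn main() {
    // Matches the expectation in `test_parse_license_template` below,
    // minus the `^` anchor that `TemplateParser::new` prepends.
    assert_eq!(escape_literal("literal (.*)"), r"literal \(\.\*\)");
}
```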
#[cfg(test)]
mod test {
use super::TemplateParser;
#[test]
fn test_parse_license_template() {
assert_eq!(
TemplateParser::parse("literal (.*)").unwrap(),
r"^literal \(\.\*\)"
);
assert_eq!(
TemplateParser::parse(r"escaping \}").unwrap(),
r"^escaping \}"
);
assert!(TemplateParser::parse("unbalanced } without escape").is_err());
assert_eq!(
TemplateParser::parse(r"{\d+} place{-?}holder{s?}").unwrap(),
r"^\d+ place-?holders?"
);
assert_eq!(TemplateParser::parse("default {}").unwrap(), "^default .*?");
assert_eq!(
TemplateParser::parse(r"unbalanced nested braces {\{{3}}").unwrap(),
r"^unbalanced nested braces \{{3}"
);
assert_eq!(
&TemplateParser::parse("parsing error }")
.unwrap_err()
.to_string(),
"parsing failed, escape or balance closing brace on l. 1"
);
assert_eq!(
&TemplateParser::parse("parsing error {\nsecond line")
.unwrap_err()
.to_string(),
"parsing failed, escape or balance opening brace on l. 1"
);
assert_eq!(
&TemplateParser::parse(r"parsing error \")
.unwrap_err()
.to_string(),
"parsing failed, incomplete escape sequence on l. 1"
);
}
}
| 0
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/git-rustfmt/main.rs
|
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
extern crate env_logger;
extern crate getopts;
#[macro_use]
extern crate log;
extern crate rustfmt_nightly as rustfmt;
use std::env;
use std::io::stdout;
use std::path::{Path, PathBuf};
use std::process::Command;
use std::str::FromStr;
use getopts::{Matches, Options};
use rustfmt::{load_config, CliOptions, Input, Session};
fn prune_files(files: Vec<&str>) -> Vec<&str> {
let prefixes: Vec<_> = files
.iter()
.filter(|f| f.ends_with("mod.rs") || f.ends_with("lib.rs"))
.map(|f| &f[..f.len() - 6])
.collect();
let mut pruned_prefixes = vec![];
for p1 in prefixes {
if p1.starts_with("src/bin/") || pruned_prefixes.iter().all(|p2| !p1.starts_with(p2)) {
pruned_prefixes.push(p1);
}
}
debug!("prefixes: {:?}", pruned_prefixes);
files
.into_iter()
.filter(|f| {
if f.ends_with("mod.rs") || f.ends_with("lib.rs") || f.starts_with("src/bin/") {
return true;
}
pruned_prefixes.iter().all(|pp| !f.starts_with(pp))
}).collect()
}
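The pruning rule above in isolation: when a `mod.rs` or `lib.rs` is in the diff, every sibling file under the same directory prefix is dropped, because formatting the module root recursively formats its children. The same function reproduced standalone (minus the `debug!` logging) with a worked example:

```rust
fn prune_files(files: Vec<&str>) -> Vec<&str> {
    // Directory prefixes of every module root (`mod.rs` / `lib.rs`) in the diff.
    let prefixes: Vec<_> = files
        .iter()
        .filter(|f| f.ends_with("mod.rs") || f.ends_with("lib.rs"))
        .map(|f| &f[..f.len() - 6])
        .collect();
    let mut pruned_prefixes = vec![];
    for p1 in prefixes {
        if p1.starts_with("src/bin/") || pruned_prefixes.iter().all(|p2| !p1.starts_with(p2)) {
            pruned_prefixes.push(p1);
        }
    }
    files
        .into_iter()
        .filter(|f| {
            if f.ends_with("mod.rs") || f.ends_with("lib.rs") || f.starts_with("src/bin/") {
                return true;
            }
            // Drop files already covered by a formatted module root.
            pruned_prefixes.iter().all(|pp| !f.starts_with(pp))
        })
        .collect()
}

fn main() {
    // `src/config/helper.rs` is covered by `src/config/mod.rs`, so it is pruned;
    // `src/other.rs` shares no formatted module root and survives.
    let files = vec!["src/config/mod.rs", "src/config/helper.rs", "src/other.rs"];
    assert_eq!(prune_files(files), vec!["src/config/mod.rs", "src/other.rs"]);
}
```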
fn git_diff(commits: &str) -> String {
let mut cmd = Command::new("git");
cmd.arg("diff");
if commits != "0" {
cmd.arg(format!("HEAD~{}", commits));
}
let output = cmd.output().expect("Couldn't execute `git diff`");
String::from_utf8_lossy(&output.stdout).into_owned()
}
fn get_files(input: &str) -> Vec<&str> {
input
.lines()
.filter(|line| line.starts_with("+++ b/") && line.ends_with(".rs"))
.map(|line| &line[6..])
.collect()
}
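`get_files` scans unified-diff headers: each changed file appears on a `+++ b/<path>` line, and only `.rs` paths are kept. Reproduced standalone with a sample diff to make the filtering concrete:

```rust
fn get_files(input: &str) -> Vec<&str> {
    input
        .lines()
        .filter(|line| line.starts_with("+++ b/") && line.ends_with(".rs"))
        .map(|line| &line[6..]) // strip the "+++ b/" prefix
        .collect()
}

fn main() {
    let diff = "\
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -1 +1 @@
+++ b/README.md
--- a/src/main.rs
+++ b/src/main.rs";
    // Only `.rs` header lines survive; the `.md` file is filtered out.
    assert_eq!(get_files(diff), vec!["src/lib.rs", "src/main.rs"]);
}
```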
fn fmt_files(files: &[&str]) -> i32 {
let (config, _) =
load_config::<NullOptions>(Some(Path::new(".")), None).expect("couldn't load config");
let mut exit_code = 0;
let mut out = stdout();
let mut session = Session::new(config, Some(&mut out));
for file in files {
let report = session.format(Input::File(PathBuf::from(file))).unwrap();
if report.has_warnings() {
eprintln!("{}", report);
}
if !session.has_no_errors() {
exit_code = 1;
}
}
exit_code
}
struct NullOptions;
impl CliOptions for NullOptions {
fn apply_to(self, _: &mut rustfmt::Config) {
unreachable!();
}
fn config_path(&self) -> Option<&Path> {
unreachable!();
}
}
fn uncommitted_files() -> Vec<String> {
let mut cmd = Command::new("git");
cmd.arg("ls-files");
cmd.arg("--others");
cmd.arg("--modified");
cmd.arg("--exclude-standard");
let output = cmd.output().expect("Couldn't execute Git");
let stdout = String::from_utf8_lossy(&output.stdout);
stdout
.lines()
.filter(|s| s.ends_with(".rs"))
.map(|s| s.to_owned())
.collect()
}
fn check_uncommitted() {
let uncommitted = uncommitted_files();
debug!("uncommitted files: {:?}", uncommitted);
if !uncommitted.is_empty() {
println!("Found untracked changes:");
for f in &uncommitted {
println!(" {}", f);
}
println!("Commit your work, or run with `-u`.");
println!("Exiting.");
std::process::exit(1);
}
}
fn make_opts() -> Options {
let mut opts = Options::new();
opts.optflag("h", "help", "show this message");
opts.optflag("c", "check", "check only, don't format (unimplemented)");
opts.optflag("u", "uncommitted", "format uncommitted files");
opts
}
struct Config {
commits: String,
uncommitted: bool,
check: bool,
}
impl Config {
fn from_args(matches: &Matches, opts: &Options) -> Config {
// `--help`: display the help message and quit
if matches.opt_present("h") {
let message = format!(
"\nusage: {} <commits> [options]\n\n\
commits: number of commits to format, default: 1",
env::args_os().next().unwrap().to_string_lossy()
);
println!("{}", opts.usage(&message));
std::process::exit(0);
}
let mut config = Config {
commits: "1".to_owned(),
uncommitted: false,
check: false,
};
if matches.opt_present("c") {
config.check = true;
unimplemented!();
}
if matches.opt_present("u") {
config.uncommitted = true;
}
if matches.free.len() > 1 {
panic!("unknown arguments, use `-h` for usage");
}
if matches.free.len() == 1 {
let commits = matches.free[0].trim();
if u32::from_str(commits).is_err() {
panic!("Couldn't parse number of commits");
}
config.commits = commits.to_owned();
}
config
}
}
fn main() {
env_logger::init();
let opts = make_opts();
let matches = opts
.parse(env::args().skip(1))
.expect("Couldn't parse command line");
let config = Config::from_args(&matches, &opts);
if !config.uncommitted {
check_uncommitted();
}
let stdout = git_diff(&config.commits);
let files = get_files(&stdout);
debug!("files: {:?}", files);
let files = prune_files(files);
debug!("pruned files: {:?}", files);
let exit_code = fmt_files(&files);
std::process::exit(exit_code);
}
| 0
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-nightly/src/cargo-fmt/main.rs
|
// Copyright 2015-2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Inspired by Paul Woolcock's cargo-fmt (https://github.com/pwoolcoc/cargo-fmt/)
#![cfg(not(test))]
#![deny(warnings)]
extern crate cargo_metadata;
extern crate getopts;
extern crate serde_json as json;
use std::collections::HashSet;
use std::env;
use std::fs;
use std::hash::{Hash, Hasher};
use std::io::{self, Write};
use std::iter::FromIterator;
use std::path::{Path, PathBuf};
use std::process::{Command, ExitStatus};
use std::str;
use getopts::{Matches, Options};
fn main() {
let exit_status = execute();
std::io::stdout().flush().unwrap();
std::process::exit(exit_status);
}
const SUCCESS: i32 = 0;
const FAILURE: i32 = 1;
fn execute() -> i32 {
let mut opts = getopts::Options::new();
opts.optflag("h", "help", "show this message");
opts.optflag("q", "quiet", "no output printed to stdout");
opts.optflag("v", "verbose", "use verbose output");
opts.optmulti(
"p",
"package",
"specify package to format (only usable in workspaces)",
"<package>",
);
opts.optflag("", "version", "print rustfmt version and exit");
opts.optflag("", "all", "format all packages (only usable in workspaces)");
// If there is any invalid argument passed to `cargo fmt`, return without formatting.
let mut is_package_arg = false;
for arg in env::args().skip(2).take_while(|a| a != "--") {
if arg.starts_with('-') {
is_package_arg = arg.starts_with("--package");
} else if !is_package_arg {
print_usage_to_stderr(&opts, &format!("Invalid argument: `{}`.", arg));
return FAILURE;
} else {
is_package_arg = false;
}
}
let matches = match opts.parse(env::args().skip(1).take_while(|a| a != "--")) {
Ok(m) => m,
Err(e) => {
print_usage_to_stderr(&opts, &e.to_string());
return FAILURE;
}
};
let verbosity = match (matches.opt_present("v"), matches.opt_present("q")) {
(false, false) => Verbosity::Normal,
(false, true) => Verbosity::Quiet,
(true, false) => Verbosity::Verbose,
(true, true) => {
print_usage_to_stderr(&opts, "quiet mode and verbose mode are not compatible");
return FAILURE;
}
};
if matches.opt_present("h") {
print_usage_to_stdout(&opts, "");
return SUCCESS;
}
if matches.opt_present("version") {
return handle_command_status(get_version(verbosity), &opts);
}
let strategy = CargoFmtStrategy::from_matches(&matches);
handle_command_status(format_crate(verbosity, &strategy), &opts)
}
macro_rules! print_usage {
($print:ident, $opts:ident, $reason:expr) => {{
let msg = format!("{}\nusage: cargo fmt [options]", $reason);
$print!(
"{}\nThis utility formats all bin and lib files of the current crate using rustfmt. \
Arguments after `--` are passed to rustfmt.",
$opts.usage(&msg)
);
}};
}
fn print_usage_to_stdout(opts: &Options, reason: &str) {
print_usage!(println, opts, reason);
}
fn print_usage_to_stderr(opts: &Options, reason: &str) {
print_usage!(eprintln, opts, reason);
}
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Verbosity {
Verbose,
Normal,
Quiet,
}
fn handle_command_status(status: Result<ExitStatus, io::Error>, opts: &getopts::Options) -> i32 {
match status {
Err(e) => {
print_usage_to_stderr(opts, &e.to_string());
FAILURE
}
Ok(status) => {
if status.success() {
SUCCESS
} else {
status.code().unwrap_or(FAILURE)
}
}
}
}
fn get_version(verbosity: Verbosity) -> Result<ExitStatus, io::Error> {
run_rustfmt(&[], &[String::from("--version")], verbosity)
}
fn format_crate(
verbosity: Verbosity,
strategy: &CargoFmtStrategy,
) -> Result<ExitStatus, io::Error> {
let rustfmt_args = get_fmt_args();
let targets = if rustfmt_args
.iter()
.any(|s| ["--print-config", "-h", "--help", "-V", "--verison"].contains(&s.as_str()))
{
HashSet::new()
} else {
get_targets(strategy)?
};
// Currently only bin and lib files get formatted
let files: Vec<_> = targets
.into_iter()
.inspect(|t| {
if verbosity == Verbosity::Verbose {
println!("[{}] {:?}", t.kind, t.path)
}
}).map(|t| t.path)
.collect();
run_rustfmt(&files, &rustfmt_args, verbosity)
}
fn get_fmt_args() -> Vec<String> {
// All arguments after -- are passed to rustfmt
env::args().skip_while(|a| a != "--").skip(1).collect()
}
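The `--` convention used by `get_fmt_args` above, shown standalone: everything before `--` belongs to cargo-fmt, everything after is forwarded to rustfmt untouched. The `split_args` name and slice parameter are illustrative; the real function reads `env::args()` directly.

```rust
// Sketch: forward only the arguments that follow the first `--` separator.
fn split_args(args: &[&str]) -> Vec<String> {
    args.iter()
        .skip_while(|a| **a != "--") // everything up to `--` is for cargo-fmt
        .skip(1)                     // drop the `--` itself
        .map(|a| a.to_string())
        .collect()
}

fn main() {
    let args = ["cargo-fmt", "--all", "--", "--emit", "files"];
    assert_eq!(split_args(&args), vec!["--emit", "files"]);
    // No separator means nothing is forwarded to rustfmt.
    assert!(split_args(&["cargo-fmt", "--all"]).is_empty());
}
```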
/// Target uses a `path` field for equality and hashing.
#[derive(Debug)]
pub struct Target {
/// A path to the main source file of the target.
path: PathBuf,
/// A kind of target (e.g. lib, bin, example, ...).
kind: String,
}
impl Target {
pub fn from_target(target: &cargo_metadata::Target) -> Self {
let path = PathBuf::from(&target.src_path);
let canonicalized = fs::canonicalize(&path).unwrap_or(path);
Target {
path: canonicalized,
kind: target.kind[0].clone(),
}
}
}
impl PartialEq for Target {
fn eq(&self, other: &Target) -> bool {
self.path == other.path
}
}
impl Eq for Target {}
impl Hash for Target {
fn hash<H: Hasher>(&self, state: &mut H) {
self.path.hash(state);
}
}
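The `PartialEq`/`Hash` impls above deliberately ignore `kind`, so a `HashSet<Target>` deduplicates by source path. A minimal self-contained sketch (with the struct restated locally for illustration) of why that matters — each file reaches rustfmt at most once even if it backs several targets:

```rust
use std::collections::HashSet;
use std::hash::{Hash, Hasher};
use std::path::PathBuf;

// Local restatement of `Target` with the same path-only equality semantics.
#[derive(Debug)]
struct Target {
    path: PathBuf,
    kind: String,
}

impl PartialEq for Target {
    fn eq(&self, other: &Self) -> bool {
        self.path == other.path
    }
}
impl Eq for Target {}
impl Hash for Target {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.path.hash(state)
    }
}

fn main() {
    let mut targets = HashSet::new();
    targets.insert(Target { path: PathBuf::from("src/main.rs"), kind: "bin".to_string() });
    // Same source path, different `kind`: treated as a duplicate and ignored.
    targets.insert(Target { path: PathBuf::from("src/main.rs"), kind: "example".to_string() });
    targets.insert(Target { path: PathBuf::from("src/lib.rs"), kind: "lib".to_string() });
    assert_eq!(targets.len(), 2);
}
```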
#[derive(Debug, PartialEq, Eq)]
pub enum CargoFmtStrategy {
    /// Format every package and its local dependencies.
    All,
    /// Format the packages specified on the command line.
    Some(Vec<String>),
    /// Format only the current package (or the whole workspace when run from its root).
Root,
}
impl CargoFmtStrategy {
pub fn from_matches(matches: &Matches) -> CargoFmtStrategy {
match (matches.opt_present("all"), matches.opt_present("p")) {
(false, false) => CargoFmtStrategy::Root,
(true, _) => CargoFmtStrategy::All,
(false, true) => CargoFmtStrategy::Some(matches.opt_strs("p")),
}
}
}
/// Based on the specified `CargoFmtStrategy`, returns a set of main source files.
fn get_targets(strategy: &CargoFmtStrategy) -> Result<HashSet<Target>, io::Error> {
let mut targets = HashSet::new();
match *strategy {
CargoFmtStrategy::Root => get_targets_root_only(&mut targets)?,
CargoFmtStrategy::All => get_targets_recursive(None, &mut targets, &mut HashSet::new())?,
CargoFmtStrategy::Some(ref hitlist) => get_targets_with_hitlist(hitlist, &mut targets)?,
}
if targets.is_empty() {
Err(io::Error::new(
io::ErrorKind::Other,
"Failed to find targets".to_owned(),
))
} else {
Ok(targets)
}
}
fn get_targets_root_only(targets: &mut HashSet<Target>) -> Result<(), io::Error> {
let metadata = get_cargo_metadata(None)?;
let current_dir = env::current_dir()?.canonicalize()?;
let current_dir_manifest = current_dir.join("Cargo.toml");
let workspace_root_path = PathBuf::from(&metadata.workspace_root).canonicalize()?;
let in_workspace_root = workspace_root_path == current_dir;
for package in metadata.packages {
if in_workspace_root || PathBuf::from(&package.manifest_path) == current_dir_manifest {
for target in package.targets {
targets.insert(Target::from_target(&target));
}
}
}
Ok(())
}
fn get_targets_recursive(
    manifest_path: Option<&Path>,
    targets: &mut HashSet<Target>,
    visited: &mut HashSet<String>,
) -> Result<(), io::Error> {
    let metadata = get_cargo_metadata(manifest_path)?;
    for package in metadata.packages {
        add_targets(&package.targets, targets);
        // Look for local dependencies.
        for dependency in package.dependencies {
            if dependency.source.is_some() || visited.contains(&dependency.name) {
                continue;
            }
            let mut manifest_path = PathBuf::from(&package.manifest_path);
            manifest_path.pop();
            manifest_path.push(&dependency.name);
            manifest_path.push("Cargo.toml");
            if manifest_path.exists() {
                visited.insert(dependency.name);
                get_targets_recursive(Some(&manifest_path), targets, visited)?;
            }
        }
    }
    Ok(())
}
fn get_targets_with_hitlist(
hitlist: &[String],
targets: &mut HashSet<Target>,
) -> Result<(), io::Error> {
let metadata = get_cargo_metadata(None)?;
let mut workspace_hitlist: HashSet<&String> = HashSet::from_iter(hitlist);
for package in metadata.packages {
if workspace_hitlist.remove(&package.name) {
for target in package.targets {
targets.insert(Target::from_target(&target));
}
}
}
if workspace_hitlist.is_empty() {
Ok(())
} else {
let package = workspace_hitlist.iter().next().unwrap();
Err(io::Error::new(
io::ErrorKind::InvalidInput,
format!("package `{}` is not a member of the workspace", package),
))
}
}
fn add_targets(target_paths: &[cargo_metadata::Target], targets: &mut HashSet<Target>) {
for target in target_paths {
targets.insert(Target::from_target(target));
}
}
fn run_rustfmt(
files: &[PathBuf],
fmt_args: &[String],
verbosity: Verbosity,
) -> Result<ExitStatus, io::Error> {
let stdout = if verbosity == Verbosity::Quiet {
std::process::Stdio::null()
} else {
std::process::Stdio::inherit()
};
if verbosity == Verbosity::Verbose {
print!("rustfmt");
for a in fmt_args {
print!(" {}", a);
}
for f in files {
print!(" {}", f.display());
}
println!();
}
let mut command = Command::new("rustfmt")
.stdout(stdout)
.args(files)
.args(fmt_args)
.spawn()
.map_err(|e| match e.kind() {
io::ErrorKind::NotFound => io::Error::new(
io::ErrorKind::Other,
                "Could not run rustfmt; please make sure it is in your PATH.",
),
_ => e,
})?;
command.wait()
}
fn get_cargo_metadata(manifest_path: Option<&Path>) -> Result<cargo_metadata::Metadata, io::Error> {
match cargo_metadata::metadata(manifest_path) {
Ok(metadata) => Ok(metadata),
Err(..) => Err(io::Error::new(
io::ErrorKind::Other,
            "`cargo metadata` failed.",
)),
}
}
| 0
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-wasm/Cargo.toml
|
[package]
name = "rustfmt-wasm"
version = "0.99.4"
description = "Rust formatter for WASM"
authors = ["Acheron <acheroncrypto@gmail.com>"]
repository = "https://github.com/solana-playground/solana-playground"
license = "Apache-2.0"
homepage = "https://beta.solpg.io"
keywords = ["rustfmt", "wasm", "rust", "format", "rustwasm"]
[lib]
crate-type = ["cdylib"]
[dependencies]
console_error_panic_hook = "0.1"
rustfmt-nightly = { path = "../rustfmt-nightly" }
wasm-bindgen = "=0.2.30"
| 0
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-wasm
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/rustfmt-wasm/src/lib.rs
|
// Fork of https://github.com/alexcrichton/rustfmt-wasm
//
// The patches in the repo (including the crates.io patches) did not work, so the
// rustc deps had to be pinned to the versions from the time of the repo's creation.
//
// Though this repository is from 2018, it still does a good job of formatting Rust code.
//
// One would need to compile `rustc-dev` to WASM in order to make recent versions of
// rustfmt compile to WASM. See: https://github.com/rust-lang/rustfmt/issues/4845
extern crate console_error_panic_hook;
extern crate rustfmt_nightly;
extern crate wasm_bindgen;
use rustfmt_nightly::{Config, ErrorKind, FormatReport, Input, Session};
use wasm_bindgen::prelude::*;
#[wasm_bindgen]
pub fn rustfmt(input: &str) -> RustfmtResult {
console_error_panic_hook::set_once();
let mut config = Config::default();
config.override_value("emit_mode", "stdout");
let mut dst = Vec::new();
    let report = {
        let mut session = Session::new(config, Some(&mut dst));
        match session.format(Input::Text(input.to_string())) {
            Ok(report) => report,
            Err(err) => {
                return RustfmtResult {
                    content: String::new(),
                    state: Err(err),
                }
            }
        }
    };
RustfmtResult {
content: String::from_utf8(dst).unwrap(),
state: Ok(report),
}
}
#[wasm_bindgen]
pub struct RustfmtResult {
content: String,
state: Result<FormatReport, ErrorKind>,
}
#[wasm_bindgen]
impl RustfmtResult {
pub fn code(self) -> String {
self.content
}
pub fn error(&self) -> Option<String> {
self.state.as_ref().err().map(|s| s.to_string())
}
}
| 0
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/deps
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/deps/libserialize/Cargo.toml
|
[package]
authors = ["The Rust Project Developers"]
name = "serialize"
version = "0.0.0"
[lib]
name = "serialize"
path = "lib.rs"
[dependencies]
smallvec = { version = "0.6.5", features = ["union"] }
| 0
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/deps
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/deps/libserialize/opaque.rs
|
// Copyright 2012-2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use leb128::{self, read_signed_leb128, write_signed_leb128};
use std::borrow::Cow;
use serialize;
// -----------------------------------------------------------------------------
// Encoder
// -----------------------------------------------------------------------------
pub type EncodeResult = Result<(), !>;
pub struct Encoder {
pub data: Vec<u8>,
}
impl Encoder {
pub fn new(data: Vec<u8>) -> Encoder {
Encoder { data }
}
pub fn into_inner(self) -> Vec<u8> {
self.data
}
#[inline]
pub fn emit_raw_bytes(&mut self, s: &[u8]) {
self.data.extend_from_slice(s);
}
}
macro_rules! write_uleb128 {
($enc:expr, $value:expr, $fun:ident) => {{
leb128::$fun(&mut $enc.data, $value);
Ok(())
}}
}
macro_rules! write_sleb128 {
($enc:expr, $value:expr) => {{
write_signed_leb128(&mut $enc.data, $value as i128);
Ok(())
}}
}
impl serialize::Encoder for Encoder {
type Error = !;
#[inline]
fn emit_nil(&mut self) -> EncodeResult {
Ok(())
}
#[inline]
fn emit_usize(&mut self, v: usize) -> EncodeResult {
write_uleb128!(self, v, write_usize_leb128)
}
#[inline]
fn emit_u128(&mut self, v: u128) -> EncodeResult {
write_uleb128!(self, v, write_u128_leb128)
}
#[inline]
fn emit_u64(&mut self, v: u64) -> EncodeResult {
write_uleb128!(self, v, write_u64_leb128)
}
#[inline]
fn emit_u32(&mut self, v: u32) -> EncodeResult {
write_uleb128!(self, v, write_u32_leb128)
}
#[inline]
fn emit_u16(&mut self, v: u16) -> EncodeResult {
write_uleb128!(self, v, write_u16_leb128)
}
#[inline]
fn emit_u8(&mut self, v: u8) -> EncodeResult {
self.data.push(v);
Ok(())
}
#[inline]
fn emit_isize(&mut self, v: isize) -> EncodeResult {
write_sleb128!(self, v)
}
#[inline]
fn emit_i128(&mut self, v: i128) -> EncodeResult {
write_sleb128!(self, v)
}
#[inline]
fn emit_i64(&mut self, v: i64) -> EncodeResult {
write_sleb128!(self, v)
}
#[inline]
fn emit_i32(&mut self, v: i32) -> EncodeResult {
write_sleb128!(self, v)
}
#[inline]
fn emit_i16(&mut self, v: i16) -> EncodeResult {
write_sleb128!(self, v)
}
#[inline]
fn emit_i8(&mut self, v: i8) -> EncodeResult {
let as_u8: u8 = unsafe { ::std::mem::transmute(v) };
self.emit_u8(as_u8)
}
#[inline]
fn emit_bool(&mut self, v: bool) -> EncodeResult {
self.emit_u8(if v {
1
} else {
0
})
}
#[inline]
fn emit_f64(&mut self, v: f64) -> EncodeResult {
let as_u64: u64 = unsafe { ::std::mem::transmute(v) };
self.emit_u64(as_u64)
}
#[inline]
fn emit_f32(&mut self, v: f32) -> EncodeResult {
let as_u32: u32 = unsafe { ::std::mem::transmute(v) };
self.emit_u32(as_u32)
}
#[inline]
fn emit_char(&mut self, v: char) -> EncodeResult {
self.emit_u32(v as u32)
}
#[inline]
fn emit_str(&mut self, v: &str) -> EncodeResult {
self.emit_usize(v.len())?;
self.emit_raw_bytes(v.as_bytes());
Ok(())
}
}
impl Encoder {
#[inline]
pub fn position(&self) -> usize {
self.data.len()
}
}
// -----------------------------------------------------------------------------
// Decoder
// -----------------------------------------------------------------------------
pub struct Decoder<'a> {
pub data: &'a [u8],
position: usize,
}
impl<'a> Decoder<'a> {
pub fn new(data: &'a [u8], position: usize) -> Decoder<'a> {
Decoder {
data,
position,
}
}
#[inline]
pub fn position(&self) -> usize {
self.position
}
#[inline]
pub fn set_position(&mut self, pos: usize) {
self.position = pos
}
#[inline]
pub fn advance(&mut self, bytes: usize) {
self.position += bytes;
}
#[inline]
pub fn read_raw_bytes(&mut self, s: &mut [u8]) -> Result<(), String> {
let start = self.position;
let end = start + s.len();
s.copy_from_slice(&self.data[start..end]);
self.position = end;
Ok(())
}
}
macro_rules! read_uleb128 {
($dec:expr, $t:ty, $fun:ident) => ({
let (value, bytes_read) = leb128::$fun(&$dec.data[$dec.position ..]);
$dec.position += bytes_read;
Ok(value)
})
}
macro_rules! read_sleb128 {
($dec:expr, $t:ty) => ({
let (value, bytes_read) = read_signed_leb128($dec.data, $dec.position);
$dec.position += bytes_read;
Ok(value as $t)
})
}
impl<'a> serialize::Decoder for Decoder<'a> {
type Error = String;
#[inline]
fn read_nil(&mut self) -> Result<(), Self::Error> {
Ok(())
}
#[inline]
fn read_u128(&mut self) -> Result<u128, Self::Error> {
read_uleb128!(self, u128, read_u128_leb128)
}
#[inline]
fn read_u64(&mut self) -> Result<u64, Self::Error> {
read_uleb128!(self, u64, read_u64_leb128)
}
#[inline]
fn read_u32(&mut self) -> Result<u32, Self::Error> {
read_uleb128!(self, u32, read_u32_leb128)
}
#[inline]
fn read_u16(&mut self) -> Result<u16, Self::Error> {
read_uleb128!(self, u16, read_u16_leb128)
}
#[inline]
fn read_u8(&mut self) -> Result<u8, Self::Error> {
let value = self.data[self.position];
self.position += 1;
Ok(value)
}
#[inline]
fn read_usize(&mut self) -> Result<usize, Self::Error> {
read_uleb128!(self, usize, read_usize_leb128)
}
#[inline]
fn read_i128(&mut self) -> Result<i128, Self::Error> {
read_sleb128!(self, i128)
}
#[inline]
fn read_i64(&mut self) -> Result<i64, Self::Error> {
read_sleb128!(self, i64)
}
#[inline]
fn read_i32(&mut self) -> Result<i32, Self::Error> {
read_sleb128!(self, i32)
}
#[inline]
fn read_i16(&mut self) -> Result<i16, Self::Error> {
read_sleb128!(self, i16)
}
#[inline]
fn read_i8(&mut self) -> Result<i8, Self::Error> {
let as_u8 = self.data[self.position];
self.position += 1;
unsafe { Ok(::std::mem::transmute(as_u8)) }
}
#[inline]
fn read_isize(&mut self) -> Result<isize, Self::Error> {
read_sleb128!(self, isize)
}
#[inline]
fn read_bool(&mut self) -> Result<bool, Self::Error> {
let value = self.read_u8()?;
Ok(value != 0)
}
#[inline]
fn read_f64(&mut self) -> Result<f64, Self::Error> {
let bits = self.read_u64()?;
Ok(unsafe { ::std::mem::transmute(bits) })
}
#[inline]
fn read_f32(&mut self) -> Result<f32, Self::Error> {
let bits = self.read_u32()?;
Ok(unsafe { ::std::mem::transmute(bits) })
}
#[inline]
fn read_char(&mut self) -> Result<char, Self::Error> {
let bits = self.read_u32()?;
Ok(::std::char::from_u32(bits).unwrap())
}
#[inline]
fn read_str(&mut self) -> Result<Cow<str>, Self::Error> {
let len = self.read_usize()?;
let s = ::std::str::from_utf8(&self.data[self.position..self.position + len]).unwrap();
self.position += len;
Ok(Cow::Borrowed(s))
}
#[inline]
fn error(&mut self, err: &str) -> Self::Error {
err.to_string()
}
}
#[cfg(test)]
mod tests {
use serialize::{Encodable, Decodable};
use std::fmt::Debug;
use super::{Encoder, Decoder};
#[derive(PartialEq, Clone, Debug, RustcEncodable, RustcDecodable)]
struct Struct {
a: (),
b: u8,
c: u16,
d: u32,
e: u64,
f: usize,
g: i8,
h: i16,
i: i32,
j: i64,
k: isize,
l: char,
m: String,
n: f32,
o: f64,
p: bool,
q: Option<u32>,
}
fn check_round_trip<T: Encodable + Decodable + PartialEq + Debug>(values: Vec<T>) {
let mut encoder = Encoder::new(Vec::new());
for value in &values {
Encodable::encode(&value, &mut encoder).unwrap();
}
let data = encoder.into_inner();
let mut decoder = Decoder::new(&data[..], 0);
for value in values {
let decoded = Decodable::decode(&mut decoder).unwrap();
assert_eq!(value, decoded);
}
}
#[test]
fn test_unit() {
check_round_trip(vec![(), (), (), ()]);
}
#[test]
fn test_u8() {
let mut vec = vec![];
for i in ::std::u8::MIN..::std::u8::MAX {
vec.push(i);
}
check_round_trip(vec);
}
#[test]
fn test_u16() {
for i in ::std::u16::MIN..::std::u16::MAX {
check_round_trip(vec![1, 2, 3, i, i, i]);
}
}
#[test]
fn test_u32() {
check_round_trip(vec![1, 2, 3, ::std::u32::MIN, 0, 1, ::std::u32::MAX, 2, 1]);
}
#[test]
fn test_u64() {
check_round_trip(vec![1, 2, 3, ::std::u64::MIN, 0, 1, ::std::u64::MAX, 2, 1]);
}
#[test]
fn test_usize() {
check_round_trip(vec![1, 2, 3, ::std::usize::MIN, 0, 1, ::std::usize::MAX, 2, 1]);
}
#[test]
fn test_i8() {
let mut vec = vec![];
for i in ::std::i8::MIN..::std::i8::MAX {
vec.push(i);
}
check_round_trip(vec);
}
#[test]
fn test_i16() {
for i in ::std::i16::MIN..::std::i16::MAX {
check_round_trip(vec![-1, 2, -3, i, i, i, 2]);
}
}
#[test]
fn test_i32() {
check_round_trip(vec![-1, 2, -3, ::std::i32::MIN, 0, 1, ::std::i32::MAX, 2, 1]);
}
#[test]
fn test_i64() {
check_round_trip(vec![-1, 2, -3, ::std::i64::MIN, 0, 1, ::std::i64::MAX, 2, 1]);
}
#[test]
fn test_isize() {
check_round_trip(vec![-1, 2, -3, ::std::isize::MIN, 0, 1, ::std::isize::MAX, 2, 1]);
}
#[test]
fn test_bool() {
check_round_trip(vec![false, true, true, false, false]);
}
#[test]
fn test_f32() {
let mut vec = vec![];
for i in -100..100 {
vec.push((i as f32) / 3.0);
}
check_round_trip(vec);
}
#[test]
fn test_f64() {
let mut vec = vec![];
for i in -100..100 {
vec.push((i as f64) / 3.0);
}
check_round_trip(vec);
}
#[test]
fn test_char() {
let vec = vec!['a', 'b', 'c', 'd', 'A', 'X', ' ', '#', 'Ö', 'Ä', 'µ', '€'];
check_round_trip(vec);
}
#[test]
fn test_string() {
let vec = vec!["abcbuÖeiovÄnameÜavmpßvmea€µsbpnvapeapmaebn".to_string(),
"abcbuÖganeiovÄnameÜavmpßvmea€µsbpnvapeapmaebn".to_string(),
"abcbuÖganeiovÄnameÜavmpßvmea€µsbpapmaebn".to_string(),
"abcbuÖganeiovÄnameÜavmpßvmeabpnvapeapmaebn".to_string(),
"abcbuÖganeiÄnameÜavmpßvmea€µsbpnvapeapmaebn".to_string(),
"abcbuÖganeiovÄnameÜavmpßvmea€µsbpmaebn".to_string(),
"abcbuÖganeiovÄnameÜavmpßvmea€µnvapeapmaebn".to_string()];
check_round_trip(vec);
}
#[test]
fn test_option() {
check_round_trip(vec![Some(-1i8)]);
check_round_trip(vec![Some(-2i16)]);
check_round_trip(vec![Some(-3i32)]);
check_round_trip(vec![Some(-4i64)]);
check_round_trip(vec![Some(-5isize)]);
let none_i8: Option<i8> = None;
check_round_trip(vec![none_i8]);
let none_i16: Option<i16> = None;
check_round_trip(vec![none_i16]);
let none_i32: Option<i32> = None;
check_round_trip(vec![none_i32]);
let none_i64: Option<i64> = None;
check_round_trip(vec![none_i64]);
let none_isize: Option<isize> = None;
check_round_trip(vec![none_isize]);
}
#[test]
fn test_struct() {
check_round_trip(vec![Struct {
a: (),
b: 10,
c: 11,
d: 12,
e: 13,
f: 14,
g: 15,
h: 16,
i: 17,
j: 18,
k: 19,
l: 'x',
m: "abc".to_string(),
n: 20.5,
o: 21.5,
p: false,
q: None,
}]);
check_round_trip(vec![Struct {
a: (),
b: 101,
c: 111,
d: 121,
e: 131,
f: 141,
g: -15,
h: -16,
i: -17,
j: -18,
k: -19,
l: 'y',
m: "def".to_string(),
n: -20.5,
o: -21.5,
p: true,
q: Some(1234567),
}]);
}
#[derive(PartialEq, Clone, Debug, RustcEncodable, RustcDecodable)]
enum Enum {
Variant1,
Variant2(usize, f32),
Variant3 {
a: i32,
b: char,
c: bool,
},
}
#[test]
fn test_enum() {
check_round_trip(vec![Enum::Variant1,
Enum::Variant2(1, 2.5),
Enum::Variant3 {
a: 3,
b: 'b',
c: false,
},
Enum::Variant3 {
a: -4,
b: 'f',
c: true,
}]);
}
#[test]
fn test_sequence() {
let mut vec = vec![];
for i in -100i64..100i64 {
vec.push(i * 100000);
}
check_round_trip(vec![vec]);
}
#[test]
fn test_hash_map() {
use std::collections::HashMap;
let mut map = HashMap::new();
for i in -100i64..100i64 {
map.insert(i * 100000, i * 10000);
}
check_round_trip(vec![map]);
}
#[test]
fn test_tuples() {
check_round_trip(vec![('x', (), false, 0.5f32)]);
check_round_trip(vec![(9i8, 10u16, 1.5f64)]);
check_round_trip(vec![(-12i16, 11u8, 12usize)]);
check_round_trip(vec![(1234567isize, 100000000000000u64, 99999999999999i64)]);
check_round_trip(vec![(String::new(), "some string".to_string())]);
}
}
| 0
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/deps
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/deps/libserialize/leb128.rs
|
// Copyright 2012-2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#[inline]
pub fn write_to_vec(vec: &mut Vec<u8>, byte: u8) {
vec.push(byte);
}
#[cfg(target_pointer_width = "32")]
const USIZE_LEB128_SIZE: usize = 5;
#[cfg(target_pointer_width = "64")]
const USIZE_LEB128_SIZE: usize = 10;
macro_rules! leb128_size {
(u16) => (3);
(u32) => (5);
(u64) => (10);
(u128) => (19);
(usize) => (USIZE_LEB128_SIZE);
}
macro_rules! impl_write_unsigned_leb128 {
($fn_name:ident, $int_ty:ident) => (
#[inline]
pub fn $fn_name(out: &mut Vec<u8>, mut value: $int_ty) {
for _ in 0 .. leb128_size!($int_ty) {
let mut byte = (value & 0x7F) as u8;
value >>= 7;
if value != 0 {
byte |= 0x80;
}
write_to_vec(out, byte);
if value == 0 {
break;
}
}
}
)
}
impl_write_unsigned_leb128!(write_u16_leb128, u16);
impl_write_unsigned_leb128!(write_u32_leb128, u32);
impl_write_unsigned_leb128!(write_u64_leb128, u64);
impl_write_unsigned_leb128!(write_u128_leb128, u128);
impl_write_unsigned_leb128!(write_usize_leb128, usize);
macro_rules! impl_read_unsigned_leb128 {
($fn_name:ident, $int_ty:ident) => (
#[inline]
pub fn $fn_name(slice: &[u8]) -> ($int_ty, usize) {
let mut result: $int_ty = 0;
let mut shift = 0;
let mut position = 0;
for _ in 0 .. leb128_size!($int_ty) {
let byte = unsafe {
*slice.get_unchecked(position)
};
position += 1;
result |= ((byte & 0x7F) as $int_ty) << shift;
if (byte & 0x80) == 0 {
break;
}
shift += 7;
}
// Do a single bounds check at the end instead of for every byte.
assert!(position <= slice.len());
(result, position)
}
)
}
impl_read_unsigned_leb128!(read_u16_leb128, u16);
impl_read_unsigned_leb128!(read_u32_leb128, u32);
impl_read_unsigned_leb128!(read_u64_leb128, u64);
impl_read_unsigned_leb128!(read_u128_leb128, u128);
impl_read_unsigned_leb128!(read_usize_leb128, usize);
/// Encodes an integer using signed LEB128 encoding and stores
/// the result using a callback function.
///
/// The callback `write` is called once for each position
/// that is to be written to, with the byte to be encoded
/// at that position.
#[inline]
pub fn write_signed_leb128_to<W>(mut value: i128, mut write: W)
where W: FnMut(u8)
{
loop {
let mut byte = (value as u8) & 0x7f;
value >>= 7;
let more = !(((value == 0) && ((byte & 0x40) == 0)) ||
((value == -1) && ((byte & 0x40) != 0)));
if more {
byte |= 0x80; // Mark this byte to show that more bytes will follow.
}
write(byte);
if !more {
break;
}
}
}
#[inline]
pub fn write_signed_leb128(out: &mut Vec<u8>, value: i128) {
write_signed_leb128_to(value, |v| write_to_vec(out, v))
}
#[inline]
pub fn read_signed_leb128(data: &[u8], start_position: usize) -> (i128, usize) {
let mut result = 0;
let mut shift = 0;
let mut position = start_position;
let mut byte;
loop {
byte = data[position];
position += 1;
result |= ((byte & 0x7F) as i128) << shift;
shift += 7;
if (byte & 0x80) == 0 {
break;
}
}
if (shift < 64) && ((byte & 0x40) != 0) {
// sign extend
result |= -(1 << shift);
}
(result, position - start_position)
}
macro_rules! impl_test_unsigned_leb128 {
($test_name:ident, $write_fn_name:ident, $read_fn_name:ident, $int_ty:ident) => (
#[test]
fn $test_name() {
let mut stream = Vec::new();
for x in 0..62 {
$write_fn_name(&mut stream, (3u64 << x) as $int_ty);
}
let mut position = 0;
for x in 0..62 {
let expected = (3u64 << x) as $int_ty;
let (actual, bytes_read) = $read_fn_name(&stream[position ..]);
assert_eq!(expected, actual);
position += bytes_read;
}
assert_eq!(stream.len(), position);
}
)
}
impl_test_unsigned_leb128!(test_u16_leb128, write_u16_leb128, read_u16_leb128, u16);
impl_test_unsigned_leb128!(test_u32_leb128, write_u32_leb128, read_u32_leb128, u32);
impl_test_unsigned_leb128!(test_u64_leb128, write_u64_leb128, read_u64_leb128, u64);
impl_test_unsigned_leb128!(test_u128_leb128, write_u128_leb128, read_u128_leb128, u128);
impl_test_unsigned_leb128!(test_usize_leb128, write_usize_leb128, read_usize_leb128, usize);
#[test]
fn test_signed_leb128() {
let values: Vec<_> = (-500..500).map(|i| i * 0x12345789ABCDEF).collect();
let mut stream = Vec::new();
for &x in &values {
write_signed_leb128(&mut stream, x);
}
let mut pos = 0;
for &x in &values {
let (value, bytes_read) = read_signed_leb128(&mut stream, pos);
pos += bytes_read;
assert_eq!(x, value);
}
assert_eq!(pos, stream.len());
}
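A byte-level round-trip sketch of the encodings in this file. The two helpers below restate the unsigned and signed LEB128 loops above in self-contained form (fixed-width `u64`/`i64` instead of the macro-generated variants) so the concrete byte layout — 7 payload bits per byte, high bit as continuation, `0x40` as the sign bit of the final byte — is visible on small values.

```rust
// Minimal restatement of the unsigned LEB128 writer above, for illustration.
fn uleb128(mut v: u64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let mut byte = (v & 0x7F) as u8;
        v >>= 7;
        if v != 0 {
            byte |= 0x80; // continuation bit: more bytes follow
        }
        out.push(byte);
        if v == 0 {
            break;
        }
    }
    out
}

// Minimal restatement of `write_signed_leb128_to`: stop once the remaining
// value is all sign bits and the 0x40 bit of the last byte agrees with the sign.
fn sleb128(mut v: i64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let mut byte = (v as u8) & 0x7F;
        v >>= 7; // arithmetic shift preserves the sign
        let done = (v == 0 && byte & 0x40 == 0) || (v == -1 && byte & 0x40 != 0);
        if !done {
            byte |= 0x80;
        }
        out.push(byte);
        if done {
            break;
        }
    }
    out
}

fn main() {
    assert_eq!(uleb128(127), vec![0x7F]); // fits in one byte
    assert_eq!(uleb128(300), vec![0xAC, 0x02]); // 300 = 0b10_0101100
    assert_eq!(sleb128(-1), vec![0x7F]); // sign bit 0x40 set, no continuation
    assert_eq!(sleb128(-128), vec![0x80, 0x7F]);
}
```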
| 0
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/deps
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/deps/libserialize/collection_impls.rs
|
// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Implementations of serialization for structures found in liballoc
use std::hash::{Hash, BuildHasher};
use {Decodable, Encodable, Decoder, Encoder};
use std::collections::{LinkedList, VecDeque, BTreeMap, BTreeSet, HashMap, HashSet};
use std::rc::Rc;
use std::sync::Arc;
use smallvec::{Array, SmallVec};
impl<A> Encodable for SmallVec<A>
where A: Array,
A::Item: Encodable
{
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_seq(self.len(), |s| {
for (i, e) in self.iter().enumerate() {
s.emit_seq_elt(i, |s| e.encode(s))?;
}
Ok(())
})
}
}
impl<A> Decodable for SmallVec<A>
where A: Array,
A::Item: Decodable
{
fn decode<D: Decoder>(d: &mut D) -> Result<SmallVec<A>, D::Error> {
d.read_seq(|d, len| {
let mut vec = SmallVec::with_capacity(len);
// FIXME(#48994) - could just be collected into a Result<SmallVec, D::Error>
for i in 0..len {
vec.push(d.read_seq_elt(i, |d| Decodable::decode(d))?);
}
Ok(vec)
})
}
}
impl<T: Encodable> Encodable for LinkedList<T> {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_seq(self.len(), |s| {
for (i, e) in self.iter().enumerate() {
s.emit_seq_elt(i, |s| e.encode(s))?;
}
Ok(())
})
}
}
impl<T:Decodable> Decodable for LinkedList<T> {
fn decode<D: Decoder>(d: &mut D) -> Result<LinkedList<T>, D::Error> {
d.read_seq(|d, len| {
let mut list = LinkedList::new();
for i in 0..len {
list.push_back(d.read_seq_elt(i, |d| Decodable::decode(d))?);
}
Ok(list)
})
}
}
impl<T: Encodable> Encodable for VecDeque<T> {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_seq(self.len(), |s| {
for (i, e) in self.iter().enumerate() {
s.emit_seq_elt(i, |s| e.encode(s))?;
}
Ok(())
})
}
}
impl<T:Decodable> Decodable for VecDeque<T> {
fn decode<D: Decoder>(d: &mut D) -> Result<VecDeque<T>, D::Error> {
d.read_seq(|d, len| {
let mut deque: VecDeque<T> = VecDeque::new();
for i in 0..len {
deque.push_back(d.read_seq_elt(i, |d| Decodable::decode(d))?);
}
Ok(deque)
})
}
}
impl<K, V> Encodable for BTreeMap<K, V>
where K: Encodable + PartialEq + Ord,
V: Encodable
{
fn encode<S: Encoder>(&self, e: &mut S) -> Result<(), S::Error> {
e.emit_map(self.len(), |e| {
let mut i = 0;
for (key, val) in self {
e.emit_map_elt_key(i, |e| key.encode(e))?;
e.emit_map_elt_val(i, |e| val.encode(e))?;
i += 1;
}
Ok(())
})
}
}
impl<K, V> Decodable for BTreeMap<K, V>
where K: Decodable + PartialEq + Ord,
V: Decodable
{
fn decode<D: Decoder>(d: &mut D) -> Result<BTreeMap<K, V>, D::Error> {
d.read_map(|d, len| {
let mut map = BTreeMap::new();
for i in 0..len {
let key = d.read_map_elt_key(i, |d| Decodable::decode(d))?;
let val = d.read_map_elt_val(i, |d| Decodable::decode(d))?;
map.insert(key, val);
}
Ok(map)
})
}
}
impl<T> Encodable for BTreeSet<T>
where T: Encodable + PartialEq + Ord
{
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_seq(self.len(), |s| {
let mut i = 0;
for e in self {
s.emit_seq_elt(i, |s| e.encode(s))?;
i += 1;
}
Ok(())
})
}
}
impl<T> Decodable for BTreeSet<T>
where T: Decodable + PartialEq + Ord
{
fn decode<D: Decoder>(d: &mut D) -> Result<BTreeSet<T>, D::Error> {
d.read_seq(|d, len| {
let mut set = BTreeSet::new();
for i in 0..len {
set.insert(d.read_seq_elt(i, |d| Decodable::decode(d))?);
}
Ok(set)
})
}
}
impl<K, V, S> Encodable for HashMap<K, V, S>
where K: Encodable + Hash + Eq,
V: Encodable,
S: BuildHasher,
{
fn encode<E: Encoder>(&self, e: &mut E) -> Result<(), E::Error> {
e.emit_map(self.len(), |e| {
let mut i = 0;
for (key, val) in self {
e.emit_map_elt_key(i, |e| key.encode(e))?;
e.emit_map_elt_val(i, |e| val.encode(e))?;
i += 1;
}
Ok(())
})
}
}
impl<K, V, S> Decodable for HashMap<K, V, S>
where K: Decodable + Hash + Eq,
V: Decodable,
S: BuildHasher + Default,
{
fn decode<D: Decoder>(d: &mut D) -> Result<HashMap<K, V, S>, D::Error> {
d.read_map(|d, len| {
let state = Default::default();
let mut map = HashMap::with_capacity_and_hasher(len, state);
for i in 0..len {
let key = d.read_map_elt_key(i, |d| Decodable::decode(d))?;
let val = d.read_map_elt_val(i, |d| Decodable::decode(d))?;
map.insert(key, val);
}
Ok(map)
})
}
}
impl<T, S> Encodable for HashSet<T, S>
where T: Encodable + Hash + Eq,
S: BuildHasher,
{
fn encode<E: Encoder>(&self, s: &mut E) -> Result<(), E::Error> {
s.emit_seq(self.len(), |s| {
let mut i = 0;
for e in self {
s.emit_seq_elt(i, |s| e.encode(s))?;
i += 1;
}
Ok(())
})
}
}
impl<T, S> Decodable for HashSet<T, S>
where T: Decodable + Hash + Eq,
S: BuildHasher + Default,
{
fn decode<D: Decoder>(d: &mut D) -> Result<HashSet<T, S>, D::Error> {
d.read_seq(|d, len| {
let state = Default::default();
let mut set = HashSet::with_capacity_and_hasher(len, state);
for i in 0..len {
set.insert(d.read_seq_elt(i, |d| Decodable::decode(d))?);
}
Ok(set)
})
}
}
impl<T: Encodable> Encodable for Rc<[T]> {
fn encode<E: Encoder>(&self, s: &mut E) -> Result<(), E::Error> {
s.emit_seq(self.len(), |s| {
for (index, e) in self.iter().enumerate() {
s.emit_seq_elt(index, |s| e.encode(s))?;
}
Ok(())
})
}
}
impl<T: Decodable> Decodable for Rc<[T]> {
fn decode<D: Decoder>(d: &mut D) -> Result<Rc<[T]>, D::Error> {
d.read_seq(|d, len| {
let mut vec = Vec::with_capacity(len);
for index in 0..len {
vec.push(d.read_seq_elt(index, |d| Decodable::decode(d))?);
}
Ok(vec.into())
})
}
}
impl<T: Encodable> Encodable for Arc<[T]> {
fn encode<E: Encoder>(&self, s: &mut E) -> Result<(), E::Error> {
s.emit_seq(self.len(), |s| {
for (index, e) in self.iter().enumerate() {
s.emit_seq_elt(index, |s| e.encode(s))?;
}
Ok(())
})
}
}
impl<T: Decodable> Decodable for Arc<[T]> {
fn decode<D: Decoder>(d: &mut D) -> Result<Arc<[T]>, D::Error> {
d.read_seq(|d, len| {
let mut vec = Vec::with_capacity(len);
for index in 0..len {
vec.push(d.read_seq_elt(index, |d| Decodable::decode(d))?);
}
Ok(vec.into())
})
}
}
| 0
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/deps
|
solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/deps/libserialize/lib.rs
|
// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Support code for encoding and decoding types.
/*
Core encoding and decoding interfaces.
*/
#![doc(html_logo_url = "https://www.rust-lang.org/logos/rust-logo-128x128-blk-v2.png",
html_favicon_url = "https://doc.rust-lang.org/favicon.ico",
html_root_url = "https://doc.rust-lang.org/nightly/",
html_playground_url = "https://play.rust-lang.org/",
test(attr(allow(unused_variables), deny(warnings))))]
#![feature(box_syntax)]
#![feature(core_intrinsics)]
#![feature(specialization)]
#![feature(never_type)]
#![cfg_attr(not(stage0), feature(nll))]
#![cfg_attr(test, feature(test))]
pub use self::serialize::{Decoder, Encoder, Decodable, Encodable};
pub use self::serialize::{SpecializationError, SpecializedEncoder, SpecializedDecoder};
pub use self::serialize::{UseSpecializedEncodable, UseSpecializedDecodable};
extern crate smallvec;
mod serialize;
mod collection_impls;
pub mod hex;
pub mod json;
pub mod opaque;
pub mod leb128;
mod rustc_serialize {
pub use serialize::*;
}
// File: solana_public_repos/solana-playground/solana-playground/wasm/rustfmt/deps/libserialize/serialize.rs
// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Support code for encoding and decoding types.
/*
Core encoding and decoding interfaces.
*/
use std::borrow::Cow;
use std::intrinsics;
use std::path;
use std::rc::Rc;
use std::cell::{Cell, RefCell};
use std::sync::Arc;
pub trait Encoder {
type Error;
// Primitive types:
fn emit_nil(&mut self) -> Result<(), Self::Error>;
fn emit_usize(&mut self, v: usize) -> Result<(), Self::Error>;
fn emit_u128(&mut self, v: u128) -> Result<(), Self::Error>;
fn emit_u64(&mut self, v: u64) -> Result<(), Self::Error>;
fn emit_u32(&mut self, v: u32) -> Result<(), Self::Error>;
fn emit_u16(&mut self, v: u16) -> Result<(), Self::Error>;
fn emit_u8(&mut self, v: u8) -> Result<(), Self::Error>;
fn emit_isize(&mut self, v: isize) -> Result<(), Self::Error>;
fn emit_i128(&mut self, v: i128) -> Result<(), Self::Error>;
fn emit_i64(&mut self, v: i64) -> Result<(), Self::Error>;
fn emit_i32(&mut self, v: i32) -> Result<(), Self::Error>;
fn emit_i16(&mut self, v: i16) -> Result<(), Self::Error>;
fn emit_i8(&mut self, v: i8) -> Result<(), Self::Error>;
fn emit_bool(&mut self, v: bool) -> Result<(), Self::Error>;
fn emit_f64(&mut self, v: f64) -> Result<(), Self::Error>;
fn emit_f32(&mut self, v: f32) -> Result<(), Self::Error>;
fn emit_char(&mut self, v: char) -> Result<(), Self::Error>;
fn emit_str(&mut self, v: &str) -> Result<(), Self::Error>;
// Compound types:
fn emit_enum<F>(&mut self, _name: &str, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
f(self)
}
fn emit_enum_variant<F>(&mut self, _v_name: &str, v_id: usize, _len: usize, f: F)
-> Result<(), Self::Error> where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
self.emit_usize(v_id)?;
f(self)
}
fn emit_enum_variant_arg<F>(&mut self, _a_idx: usize, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
f(self)
}
fn emit_enum_struct_variant<F>(&mut self, v_name: &str, v_id: usize, len: usize, f: F)
-> Result<(), Self::Error> where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
self.emit_enum_variant(v_name, v_id, len, f)
}
fn emit_enum_struct_variant_field<F>(&mut self, _f_name: &str, f_idx: usize, f: F)
-> Result<(), Self::Error> where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
self.emit_enum_variant_arg(f_idx, f)
}
fn emit_struct<F>(&mut self, _name: &str, _len: usize, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
f(self)
}
fn emit_struct_field<F>(&mut self, _f_name: &str, _f_idx: usize, f: F)
-> Result<(), Self::Error> where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
f(self)
}
fn emit_tuple<F>(&mut self, _len: usize, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
f(self)
}
fn emit_tuple_arg<F>(&mut self, _idx: usize, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
f(self)
}
fn emit_tuple_struct<F>(&mut self, _name: &str, len: usize, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
self.emit_tuple(len, f)
}
fn emit_tuple_struct_arg<F>(&mut self, f_idx: usize, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
self.emit_tuple_arg(f_idx, f)
}
// Specialized types:
fn emit_option<F>(&mut self, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
self.emit_enum("Option", f)
}
#[inline]
fn emit_option_none(&mut self) -> Result<(), Self::Error> {
self.emit_enum_variant("None", 0, 0, |_| Ok(()))
}
fn emit_option_some<F>(&mut self, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
self.emit_enum_variant("Some", 1, 1, f)
}
fn emit_seq<F>(&mut self, len: usize, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
self.emit_usize(len)?;
f(self)
}
fn emit_seq_elt<F>(&mut self, _idx: usize, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
f(self)
}
fn emit_map<F>(&mut self, len: usize, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
self.emit_usize(len)?;
f(self)
}
fn emit_map_elt_key<F>(&mut self, _idx: usize, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
f(self)
}
fn emit_map_elt_val<F>(&mut self, _idx: usize, f: F) -> Result<(), Self::Error>
where F: FnOnce(&mut Self) -> Result<(), Self::Error>
{
f(self)
}
}
pub trait Decoder {
type Error;
// Primitive types:
fn read_nil(&mut self) -> Result<(), Self::Error>;
fn read_usize(&mut self) -> Result<usize, Self::Error>;
fn read_u128(&mut self) -> Result<u128, Self::Error>;
fn read_u64(&mut self) -> Result<u64, Self::Error>;
fn read_u32(&mut self) -> Result<u32, Self::Error>;
fn read_u16(&mut self) -> Result<u16, Self::Error>;
fn read_u8(&mut self) -> Result<u8, Self::Error>;
fn read_isize(&mut self) -> Result<isize, Self::Error>;
fn read_i128(&mut self) -> Result<i128, Self::Error>;
fn read_i64(&mut self) -> Result<i64, Self::Error>;
fn read_i32(&mut self) -> Result<i32, Self::Error>;
fn read_i16(&mut self) -> Result<i16, Self::Error>;
fn read_i8(&mut self) -> Result<i8, Self::Error>;
fn read_bool(&mut self) -> Result<bool, Self::Error>;
fn read_f64(&mut self) -> Result<f64, Self::Error>;
fn read_f32(&mut self) -> Result<f32, Self::Error>;
fn read_char(&mut self) -> Result<char, Self::Error>;
fn read_str(&mut self) -> Result<Cow<str>, Self::Error>;
// Compound types:
fn read_enum<T, F>(&mut self, _name: &str, f: F) -> Result<T, Self::Error>
where F: FnOnce(&mut Self) -> Result<T, Self::Error>
{
f(self)
}
fn read_enum_variant<T, F>(&mut self, _names: &[&str], mut f: F) -> Result<T, Self::Error>
where F: FnMut(&mut Self, usize) -> Result<T, Self::Error>
{
let disr = self.read_usize()?;
f(self, disr)
}
fn read_enum_variant_arg<T, F>(&mut self, _a_idx: usize, f: F) -> Result<T, Self::Error>
where F: FnOnce(&mut Self) -> Result<T, Self::Error>
{
f(self)
}
fn read_enum_struct_variant<T, F>(&mut self, names: &[&str], f: F) -> Result<T, Self::Error>
where F: FnMut(&mut Self, usize) -> Result<T, Self::Error>
{
self.read_enum_variant(names, f)
}
fn read_enum_struct_variant_field<T, F>(&mut self, _f_name: &str, f_idx: usize, f: F)
-> Result<T, Self::Error> where F: FnOnce(&mut Self) -> Result<T, Self::Error>
{
self.read_enum_variant_arg(f_idx, f)
}
fn read_struct<T, F>(&mut self, _s_name: &str, _len: usize, f: F) -> Result<T, Self::Error>
where F: FnOnce(&mut Self) -> Result<T, Self::Error>
{
f(self)
}
fn read_struct_field<T, F>(&mut self, _f_name: &str, _f_idx: usize, f: F)
-> Result<T, Self::Error> where F: FnOnce(&mut Self) -> Result<T, Self::Error>
{
f(self)
}
fn read_tuple<T, F>(&mut self, _len: usize, f: F) -> Result<T, Self::Error>
where F: FnOnce(&mut Self) -> Result<T, Self::Error>
{
f(self)
}
fn read_tuple_arg<T, F>(&mut self, _a_idx: usize, f: F) -> Result<T, Self::Error>
where F: FnOnce(&mut Self) -> Result<T, Self::Error>
{
f(self)
}
fn read_tuple_struct<T, F>(&mut self, _s_name: &str, len: usize, f: F) -> Result<T, Self::Error>
where F: FnOnce(&mut Self) -> Result<T, Self::Error>
{
self.read_tuple(len, f)
}
fn read_tuple_struct_arg<T, F>(&mut self, a_idx: usize, f: F) -> Result<T, Self::Error>
where F: FnOnce(&mut Self) -> Result<T, Self::Error>
{
self.read_tuple_arg(a_idx, f)
}
// Specialized types:
fn read_option<T, F>(&mut self, mut f: F) -> Result<T, Self::Error>
where F: FnMut(&mut Self, bool) -> Result<T, Self::Error>
{
self.read_enum("Option", move |this| {
this.read_enum_variant(&["None", "Some"], move |this, idx| {
match idx {
0 => f(this, false),
1 => f(this, true),
_ => Err(this.error("read_option: expected 0 for None or 1 for Some")),
}
})
})
}
fn read_seq<T, F>(&mut self, f: F) -> Result<T, Self::Error>
where F: FnOnce(&mut Self, usize) -> Result<T, Self::Error>
{
let len = self.read_usize()?;
f(self, len)
}
fn read_seq_elt<T, F>(&mut self, _idx: usize, f: F) -> Result<T, Self::Error>
where F: FnOnce(&mut Self) -> Result<T, Self::Error>
{
f(self)
}
fn read_map<T, F>(&mut self, f: F) -> Result<T, Self::Error>
where F: FnOnce(&mut Self, usize) -> Result<T, Self::Error>
{
let len = self.read_usize()?;
f(self, len)
}
fn read_map_elt_key<T, F>(&mut self, _idx: usize, f: F) -> Result<T, Self::Error>
where F: FnOnce(&mut Self) -> Result<T, Self::Error>
{
f(self)
}
fn read_map_elt_val<T, F>(&mut self, _idx: usize, f: F) -> Result<T, Self::Error>
where F: FnOnce(&mut Self) -> Result<T, Self::Error>
{
f(self)
}
// Failure
fn error(&mut self, err: &str) -> Self::Error;
}
pub trait Encodable {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error>;
}
pub trait Decodable: Sized {
fn decode<D: Decoder>(d: &mut D) -> Result<Self, D::Error>;
}
impl Encodable for usize {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_usize(*self)
}
}
impl Decodable for usize {
fn decode<D: Decoder>(d: &mut D) -> Result<usize, D::Error> {
d.read_usize()
}
}
impl Encodable for u8 {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_u8(*self)
}
}
impl Decodable for u8 {
fn decode<D: Decoder>(d: &mut D) -> Result<u8, D::Error> {
d.read_u8()
}
}
impl Encodable for u16 {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_u16(*self)
}
}
impl Decodable for u16 {
fn decode<D: Decoder>(d: &mut D) -> Result<u16, D::Error> {
d.read_u16()
}
}
impl Encodable for u32 {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_u32(*self)
}
}
impl Decodable for u32 {
fn decode<D: Decoder>(d: &mut D) -> Result<u32, D::Error> {
d.read_u32()
}
}
impl Encodable for u64 {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_u64(*self)
}
}
impl Decodable for u64 {
fn decode<D: Decoder>(d: &mut D) -> Result<u64, D::Error> {
d.read_u64()
}
}
impl Encodable for u128 {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_u128(*self)
}
}
impl Decodable for u128 {
fn decode<D: Decoder>(d: &mut D) -> Result<u128, D::Error> {
d.read_u128()
}
}
impl Encodable for isize {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_isize(*self)
}
}
impl Decodable for isize {
fn decode<D: Decoder>(d: &mut D) -> Result<isize, D::Error> {
d.read_isize()
}
}
impl Encodable for i8 {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_i8(*self)
}
}
impl Decodable for i8 {
fn decode<D: Decoder>(d: &mut D) -> Result<i8, D::Error> {
d.read_i8()
}
}
impl Encodable for i16 {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_i16(*self)
}
}
impl Decodable for i16 {
fn decode<D: Decoder>(d: &mut D) -> Result<i16, D::Error> {
d.read_i16()
}
}
impl Encodable for i32 {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_i32(*self)
}
}
impl Decodable for i32 {
fn decode<D: Decoder>(d: &mut D) -> Result<i32, D::Error> {
d.read_i32()
}
}
impl Encodable for i64 {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_i64(*self)
}
}
impl Decodable for i64 {
fn decode<D: Decoder>(d: &mut D) -> Result<i64, D::Error> {
d.read_i64()
}
}
impl Encodable for i128 {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_i128(*self)
}
}
impl Decodable for i128 {
fn decode<D: Decoder>(d: &mut D) -> Result<i128, D::Error> {
d.read_i128()
}
}
impl Encodable for str {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_str(self)
}
}
impl Encodable for String {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_str(&self[..])
}
}
impl Decodable for String {
fn decode<D: Decoder>(d: &mut D) -> Result<String, D::Error> {
Ok(d.read_str()?.into_owned())
}
}
impl Encodable for f32 {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_f32(*self)
}
}
impl Decodable for f32 {
fn decode<D: Decoder>(d: &mut D) -> Result<f32, D::Error> {
d.read_f32()
}
}
impl Encodable for f64 {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_f64(*self)
}
}
impl Decodable for f64 {
fn decode<D: Decoder>(d: &mut D) -> Result<f64, D::Error> {
d.read_f64()
}
}
impl Encodable for bool {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_bool(*self)
}
}
impl Decodable for bool {
fn decode<D: Decoder>(d: &mut D) -> Result<bool, D::Error> {
d.read_bool()
}
}
impl Encodable for char {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_char(*self)
}
}
impl Decodable for char {
fn decode<D: Decoder>(d: &mut D) -> Result<char, D::Error> {
d.read_char()
}
}
impl Encodable for () {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_nil()
}
}
impl Decodable for () {
fn decode<D: Decoder>(d: &mut D) -> Result<(), D::Error> {
d.read_nil()
}
}
impl<'a, T: ?Sized + Encodable> Encodable for &'a T {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
(**self).encode(s)
}
}
impl<T: ?Sized + Encodable> Encodable for Box<T> {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
(**self).encode(s)
}
}
impl<T: Decodable> Decodable for Box<T> {
fn decode<D: Decoder>(d: &mut D) -> Result<Box<T>, D::Error> {
Ok(box Decodable::decode(d)?)
}
}
impl<T: Decodable> Decodable for Box<[T]> {
fn decode<D: Decoder>(d: &mut D) -> Result<Box<[T]>, D::Error> {
let v: Vec<T> = Decodable::decode(d)?;
Ok(v.into_boxed_slice())
}
}
impl<T:Encodable> Encodable for Rc<T> {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
(**self).encode(s)
}
}
impl<T:Decodable> Decodable for Rc<T> {
fn decode<D: Decoder>(d: &mut D) -> Result<Rc<T>, D::Error> {
Ok(Rc::new(Decodable::decode(d)?))
}
}
impl<T:Encodable> Encodable for [T] {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_seq(self.len(), |s| {
for (i, e) in self.iter().enumerate() {
s.emit_seq_elt(i, |s| e.encode(s))?
}
Ok(())
})
}
}
impl<T:Encodable> Encodable for Vec<T> {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_seq(self.len(), |s| {
for (i, e) in self.iter().enumerate() {
s.emit_seq_elt(i, |s| e.encode(s))?
}
Ok(())
})
}
}
impl<T:Decodable> Decodable for Vec<T> {
fn decode<D: Decoder>(d: &mut D) -> Result<Vec<T>, D::Error> {
d.read_seq(|d, len| {
let mut v = Vec::with_capacity(len);
for i in 0..len {
v.push(d.read_seq_elt(i, |d| Decodable::decode(d))?);
}
Ok(v)
})
}
}
impl<'a, T:Encodable> Encodable for Cow<'a, [T]> where [T]: ToOwned<Owned = Vec<T>> {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_seq(self.len(), |s| {
for (i, e) in self.iter().enumerate() {
s.emit_seq_elt(i, |s| e.encode(s))?
}
Ok(())
})
}
}
impl<T:Decodable+ToOwned> Decodable for Cow<'static, [T]>
where [T]: ToOwned<Owned = Vec<T>>
{
fn decode<D: Decoder>(d: &mut D) -> Result<Cow<'static, [T]>, D::Error> {
d.read_seq(|d, len| {
let mut v = Vec::with_capacity(len);
for i in 0..len {
v.push(d.read_seq_elt(i, |d| Decodable::decode(d))?);
}
Ok(Cow::Owned(v))
})
}
}
impl<T:Encodable> Encodable for Option<T> {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_option(|s| {
match *self {
None => s.emit_option_none(),
Some(ref v) => s.emit_option_some(|s| v.encode(s)),
}
})
}
}
impl<T:Decodable> Decodable for Option<T> {
fn decode<D: Decoder>(d: &mut D) -> Result<Option<T>, D::Error> {
d.read_option(|d, b| {
if b {
Ok(Some(Decodable::decode(d)?))
} else {
Ok(None)
}
})
}
}
impl<T1: Encodable, T2: Encodable> Encodable for Result<T1, T2> {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
s.emit_enum("Result", |s| {
match *self {
Ok(ref v) => {
s.emit_enum_variant("Ok", 0, 1, |s| {
s.emit_enum_variant_arg(0, |s| {
v.encode(s)
})
})
}
Err(ref v) => {
s.emit_enum_variant("Err", 1, 1, |s| {
s.emit_enum_variant_arg(0, |s| {
v.encode(s)
})
})
}
}
})
}
}
impl<T1:Decodable, T2:Decodable> Decodable for Result<T1, T2> {
fn decode<D: Decoder>(d: &mut D) -> Result<Result<T1, T2>, D::Error> {
d.read_enum("Result", |d| {
d.read_enum_variant(&["Ok", "Err"], |d, disr| {
match disr {
0 => {
Ok(Ok(d.read_enum_variant_arg(0, |d| {
T1::decode(d)
})?))
}
1 => {
Ok(Err(d.read_enum_variant_arg(0, |d| {
T2::decode(d)
})?))
}
_ => {
panic!("Encountered invalid discriminant while \
decoding `Result`.");
}
}
})
})
}
}
macro_rules! peel {
($name:ident, $($other:ident,)*) => (tuple! { $($other,)* })
}
/// Evaluates to the number of identifiers passed to it, for example: `count_idents!(a, b, c) == 3`.
macro_rules! count_idents {
() => { 0 };
($_i:ident, $($rest:ident,)*) => { 1 + count_idents!($($rest,)*) }
}
macro_rules! tuple {
() => ();
( $($name:ident,)+ ) => (
impl<$($name:Decodable),*> Decodable for ($($name,)*) {
#[allow(non_snake_case)]
fn decode<D: Decoder>(d: &mut D) -> Result<($($name,)*), D::Error> {
let len: usize = count_idents!($($name,)*);
d.read_tuple(len, |d| {
let mut i = 0;
let ret = ($(d.read_tuple_arg({ i+=1; i-1 }, |d| -> Result<$name, D::Error> {
Decodable::decode(d)
})?,)*);
Ok(ret)
})
}
}
impl<$($name:Encodable),*> Encodable for ($($name,)*) {
#[allow(non_snake_case)]
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
let ($(ref $name,)*) = *self;
let mut n = 0;
$(let $name = $name; n += 1;)*
s.emit_tuple(n, |s| {
let mut i = 0;
$(s.emit_tuple_arg({ i+=1; i-1 }, |s| $name.encode(s))?;)*
Ok(())
})
}
}
peel! { $($name,)* }
)
}
tuple! { T0, T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, }
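// A minimal, self-contained sketch of the `count_idents!`/`peel!` recursion
// pattern used by the `tuple!` macro above: each expansion peels one
// identifier off the front of the list and recurses on the remainder, so the
// macro evaluates to the arity at compile time. (Standalone illustration,
// not part of the original crate.)

```rust
macro_rules! count_idents {
    // Base case: no identifiers left.
    () => { 0usize };
    // Recursive case: consume one identifier, count the rest.
    ($_head:ident, $($rest:ident,)*) => { 1usize + count_idents!($($rest,)*) };
}

fn main() {
    // Note the trailing commas: the pattern above requires one per identifier.
    assert_eq!(count_idents!(), 0);
    assert_eq!(count_idents!(a, b, c,), 3);
    println!("{}", count_idents!(a, b, c,));
}
```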
impl Encodable for path::PathBuf {
fn encode<S: Encoder>(&self, e: &mut S) -> Result<(), S::Error> {
self.to_str().unwrap().encode(e)
}
}
impl Decodable for path::PathBuf {
fn decode<D: Decoder>(d: &mut D) -> Result<path::PathBuf, D::Error> {
let bytes: String = Decodable::decode(d)?;
Ok(path::PathBuf::from(bytes))
}
}
impl<T: Encodable + Copy> Encodable for Cell<T> {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
self.get().encode(s)
}
}
impl<T: Decodable + Copy> Decodable for Cell<T> {
fn decode<D: Decoder>(d: &mut D) -> Result<Cell<T>, D::Error> {
Ok(Cell::new(Decodable::decode(d)?))
}
}
// FIXME: #15036
// Should use `try_borrow`, returning a
// `encoder.error("attempting to Encode borrowed RefCell")`
// from `encode` when `try_borrow` returns `None`.
impl<T: Encodable> Encodable for RefCell<T> {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
self.borrow().encode(s)
}
}
impl<T: Decodable> Decodable for RefCell<T> {
fn decode<D: Decoder>(d: &mut D) -> Result<RefCell<T>, D::Error> {
Ok(RefCell::new(Decodable::decode(d)?))
}
}
impl<T:Encodable> Encodable for Arc<T> {
fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
(**self).encode(s)
}
}
impl<T:Decodable> Decodable for Arc<T> {
fn decode<D: Decoder>(d: &mut D) -> Result<Arc<T>, D::Error> {
Ok(Arc::new(Decodable::decode(d)?))
}
}
// ___________________________________________________________________________
// Specialization-based interface for multi-dispatch Encodable/Decodable.
/// Implement this trait on your `{Encodable,Decodable}::Error` types
/// to override the default panic behavior for missing specializations.
pub trait SpecializationError {
/// Create an error for a missing method specialization.
/// Defaults to panicking with type, trait & method names.
/// `S` is the encoder/decoder state type,
/// `T` is the type being encoded/decoded, and
/// the arguments are the names of the trait
/// and method that should've been overridden.
fn not_found<S, T: ?Sized>(trait_name: &'static str, method_name: &'static str) -> Self;
}
impl<E> SpecializationError for E {
default fn not_found<S, T: ?Sized>(trait_name: &'static str, method_name: &'static str) -> E {
panic!("missing specialization: `<{} as {}<{}>>::{}` not overridden",
unsafe { intrinsics::type_name::<S>() },
trait_name,
unsafe { intrinsics::type_name::<T>() },
method_name);
}
}
/// Implement this trait on encoders, with `T` being the type
/// you want to encode (employing `UseSpecializedEncodable`),
/// using a strategy specific to the encoder.
pub trait SpecializedEncoder<T: ?Sized + UseSpecializedEncodable>: Encoder {
/// Encode the value in a manner specific to this encoder state.
fn specialized_encode(&mut self, value: &T) -> Result<(), Self::Error>;
}
impl<E: Encoder, T: ?Sized + UseSpecializedEncodable> SpecializedEncoder<T> for E {
default fn specialized_encode(&mut self, value: &T) -> Result<(), E::Error> {
value.default_encode(self)
}
}
/// Implement this trait on decoders, with `T` being the type
/// you want to decode (employing `UseSpecializedDecodable`),
/// using a strategy specific to the decoder.
pub trait SpecializedDecoder<T: UseSpecializedDecodable>: Decoder {
/// Decode a value in a manner specific to this decoder state.
fn specialized_decode(&mut self) -> Result<T, Self::Error>;
}
impl<D: Decoder, T: UseSpecializedDecodable> SpecializedDecoder<T> for D {
default fn specialized_decode(&mut self) -> Result<T, D::Error> {
T::default_decode(self)
}
}
/// Implement this trait on your type to get an `Encodable`
/// implementation which goes through `SpecializedEncoder`.
pub trait UseSpecializedEncodable {
/// Defaults to returning an error (see `SpecializationError`).
fn default_encode<E: Encoder>(&self, _: &mut E) -> Result<(), E::Error> {
Err(E::Error::not_found::<E, Self>("SpecializedEncoder", "specialized_encode"))
}
}
impl<T: ?Sized + UseSpecializedEncodable> Encodable for T {
default fn encode<E: Encoder>(&self, e: &mut E) -> Result<(), E::Error> {
E::specialized_encode(e, self)
}
}
/// Implement this trait on your type to get an `Decodable`
/// implementation which goes through `SpecializedDecoder`.
pub trait UseSpecializedDecodable: Sized {
/// Defaults to returning an error (see `SpecializationError`).
fn default_decode<D: Decoder>(_: &mut D) -> Result<Self, D::Error> {
Err(D::Error::not_found::<D, Self>("SpecializedDecoder", "specialized_decode"))
}
}
impl<T: UseSpecializedDecodable> Decodable for T {
default fn decode<D: Decoder>(d: &mut D) -> Result<T, D::Error> {
D::specialized_decode(d)
}
}
// Can't avoid specialization for &T and Box<T> impls,
// as proxy impls on them are blankets that conflict
// with the Encodable and Decodable impls above,
// which only have `default` on their methods
// for this exact reason.
// May be fixable in a simpler fashion via the
// more complex lattice model for specialization.
impl<'a, T: ?Sized + Encodable> UseSpecializedEncodable for &'a T {}
impl<T: ?Sized + Encodable> UseSpecializedEncodable for Box<T> {}
impl<T: Decodable> UseSpecializedDecodable for Box<T> {}
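// A minimal, self-contained sketch of the closure-driven pattern the
// `Encoder` trait above is built around: compound emitters such as
// `emit_seq` write framing (here, a length prefix), then hand control back
// to the caller through a closure that emits each element via `emit_seq_elt`
// and the primitive emitters. `ByteEncoder` is a hypothetical stand-in with
// a reduced method set, not an encoder from the original crate.

```rust
struct ByteEncoder {
    bytes: Vec<u8>,
}

impl ByteEncoder {
    fn new() -> ByteEncoder {
        ByteEncoder { bytes: Vec::new() }
    }

    // Primitive emitter, mirroring `emit_u64`: little-endian bytes.
    fn emit_u64(&mut self, v: u64) -> Result<(), String> {
        self.bytes.extend_from_slice(&v.to_le_bytes());
        Ok(())
    }

    // Compound emitter, mirroring `emit_seq`: length prefix first,
    // then the caller's closure emits the elements.
    fn emit_seq<F>(&mut self, len: usize, f: F) -> Result<(), String>
    where
        F: FnOnce(&mut Self) -> Result<(), String>,
    {
        self.emit_u64(len as u64)?;
        f(self)
    }
}

fn main() {
    let values = [1u64, 2, 3];
    let mut e = ByteEncoder::new();
    e.emit_seq(values.len(), |e| {
        for v in &values {
            e.emit_u64(*v)?;
        }
        Ok(())
    })
    .unwrap();
    // Four u64s total: the length prefix plus three elements.
    assert_eq!(e.bytes.len(), 4 * 8);
    assert_eq!(&e.bytes[0..8], &3u64.to_le_bytes());
    println!("{}", e.bytes.len());
}
```

The same inversion of control appears on the decoding side: `read_seq` reads the length, then calls back into the caller with it, which is exactly how the `Vec<T>` and `Arc<[T]>` `Decodable` impls above are written.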